Functional programming isn't just an alternative coding style—it's a paradigm that fundamentally changes how you think about computation. In this course, you're being tested on your ability to recognize why certain design choices lead to safer, more maintainable code. The principles here—immutability, pure functions, higher-order functions, and referential transparency—form the foundation for understanding concepts like concurrency, program correctness, type systems, and algorithmic efficiency.
These principles interconnect in powerful ways: immutability enables referential transparency, which makes reasoning about code straightforward. Pure functions combined with higher-order functions allow for elegant composition. When you encounter exam questions about debugging, optimization, or code design, you'll need to identify which principle applies and why. Don't just memorize definitions—know what problem each principle solves and how they work together to create robust programs.
These principles eliminate entire categories of bugs by ensuring that code behaves consistently and predictably. The core mechanism is removing hidden dependencies and side effects so that functions become reliable building blocks.
If $$f(x) = 5$$ for a given input, you can substitute 5 anywhere $$f(x)$$ appears without changing the program's behavior.

Compare: Pure functions vs. referential transparency. Both eliminate side effects, but pure functions describe function behavior while referential transparency describes expression substitution. If an FRQ asks about optimization or compiler transformations, referential transparency is your key concept.
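A minimal sketch of the idea in Scala (chosen here only for illustration; `square`, `impureSquare`, and the surrounding values are hypothetical names, not from the course):

```scala
// Pure: the result depends only on the argument; no I/O, no mutation.
def square(x: Int): Int = x * x

// Referential transparency: any call can be replaced by its value
// without changing the program's behavior.
val a = square(4) + square(4) // 32
val b = 16 + 16               // the substituted form; provably identical

// Counterexample: reading or writing external state breaks substitution.
var counter = 0
def impureSquare(x: Int): Int = { counter += 1; x * x + counter } // varies per call
```

Because `impureSquare(4)` returns a different value on each call, replacing it with any single literal would change the program, which is exactly why impure code blocks this kind of optimization.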
These principles let you build complex behavior from simple, reusable pieces. The mechanism is treating functions as data that can be combined, passed around, and transformed like any other value.
map, filter, and reduce abstract common patterns, eliminating repetitive loop code. Composition is written $$f \circ g$$ in math notation, or f(g(x)) in code.

Compare: First-class functions vs. function composition. First-class functions make composition possible, while composition is the technique of chaining them. Exam questions about code reuse often want you to identify both concepts working together.
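A short Scala sketch of both ideas together; `numbers`, `isEven`, `double`, and `incr` are illustrative names, not from the source:

```scala
val numbers = List(1, 2, 3, 4, 5)

// First-class: functions stored in values like any other data.
val isEven: Int => Boolean = _ % 2 == 0
val double: Int => Int     = _ * 2
val incr:   Int => Int     = _ + 1

// Higher-order functions replace hand-written loops:
// filter selects, map transforms, foldLeft (a reduce) combines.
val sumOfDoubledEvens = numbers.filter(isEven).map(double).foldLeft(0)(_ + _) // 12

// Composition: (f compose g)(x) == f(g(x)).
val doubleThenIncr = incr.compose(double) // x => incr(double(x))
assert(doubleThenIncr(3) == 7)            // double(3) = 6, then 6 + 1 = 7
```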
These principles provide alternatives to imperative loops and conditionals. The mechanism is expressing computation as transformations and pattern-based dispatch rather than step-by-step instructions.
Compare: Recursion vs. lazy evaluation. Recursion processes data by breaking it down, while lazy evaluation delays processing entirely. Both can handle infinite structures, but laziness does so by deferring computation until a value is demanded, while recursion does so by computing one piece at a time.
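A brief Scala sketch contrasting the two strategies (LazyList requires Scala 2.13+; `sumList` and `naturals` are hypothetical names):

```scala
// Recursion: consume a finite structure by breaking it into cases.
def sumList(xs: List[Int]): Int = xs match {
  case Nil          => 0
  case head :: tail => head + sumList(tail) // compute incrementally
}

// Lazy evaluation: describe an infinite stream; elements are only
// computed when demanded, so the definition itself terminates.
val naturals: LazyList[Int] = LazyList.from(0)
val firstEvens = naturals.filter(_ % 2 == 0).take(5).toList // List(0, 2, 4, 6, 8)
```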
These principles leverage the type system to make invalid states unrepresentable. The mechanism is encoding domain knowledge into types so the compiler catches logic errors before runtime.
Option is either Some(value) or None, nothing else (a sum type). Point has both an $$x$$ and a $$y$$ coordinate (a product type). filter(isEven, numbers) expresses intent without loop mechanics.

Compare: Algebraic data types vs. pattern matching. ADTs define what shapes data can take, while pattern matching provides syntax to work with those shapes. They're designed to work together; exam questions about type safety often involve both.
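A small Scala sketch of a sum-of-products ADT consumed by pattern matching; `Shape`, `Circle`, `Rect`, `area`, and `describe` are hypothetical names:

```scala
// Sum type: a Shape is a Circle or a Rect, nothing else.
sealed trait Shape
case class Circle(radius: Double) extends Shape              // product: one field
case class Rect(width: Double, height: Double) extends Shape // product: two fields

// Exhaustive pattern matching: the compiler warns if a case is missing,
// so logic errors surface before runtime.
def area(s: Shape): Double = s match {
  case Circle(r)  => math.Pi * r * r
  case Rect(w, h) => w * h
}

// The built-in Option works the same way: Some(value) or None.
def describe(o: Option[Int]): String = o match {
  case Some(v) => s"got $v"
  case None    => "empty"
}
```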
| Goal | Key principles |
|---|---|
| Eliminating side effects | Immutability, Pure functions, Referential transparency |
| Code reuse and abstraction | First-class functions, Higher-order functions, Function composition |
| Recursive data processing | Recursion, Pattern matching |
| Performance optimization | Lazy evaluation, Referential transparency |
| Type safety | Algebraic data types, Pattern matching |
| Readability and maintenance | Declarative style, Function composition, Pattern matching |
| Concurrency support | Immutability, Pure functions |
| Testability | Pure functions, Referential transparency |
1. Which two principles work together to guarantee that a function can be safely called from multiple threads without locks?
2. A compiler replaces all calls to $$square(4)$$ with the literal 16. Which principle makes this optimization valid, and what property must the square function have?
3. Compare and contrast recursion and lazy evaluation as strategies for processing a potentially infinite stream of data. What are the trade-offs?
4. You're debugging code where a function returns different results on successive calls with identical arguments. Which principle has been violated, and what's the likely cause?
5. An FRQ asks you to refactor imperative loop code into functional style. Which three principles would guide your approach, and how would each contribute to the solution?