Fiveable

🧠Thinking Like a Mathematician Unit 1 Review


1.6 Formal mathematical language


Written by the Fiveable Content Team • Last updated August 2025

Formal mathematical language gives you a precise toolkit for expressing ideas without ambiguity. Every proof you write, every definition you read, and every argument you construct depends on this shared vocabulary of symbols, connectives, and quantifiers. This topic covers the building blocks of that language and how they fit together.

Foundations of formal language

Formal language in mathematics exists to remove the vagueness of everyday speech. When you say "some numbers are even" in English, it's unclear exactly what you mean. Formal language forces you to specify which numbers, how many, and what property you're talking about. That precision is what makes rigorous proof possible.

Elements of mathematical statements

A mathematical statement has several components working together:

  • Subjects are the mathematical objects you're talking about (numbers, sets, functions).
  • Predicates describe properties or relationships of those subjects. For example, "is prime" or "is greater than" are predicates.
  • Constants denote specific, fixed values: 2, π, e.
  • Variables represent unknown or changeable quantities: x, y, z. On its own, a variable paired with a predicate (like "x is even") isn't a full statement yet because you haven't said which x.
  • Quantifiers pin down the scope of variables. ∀ means "for all" and ∃ means "there exists." Adding a quantifier turns an open formula into a complete statement: ∀x P(x).

Logical connectives and operators

Connectives let you build compound statements from simpler ones:

  • Conjunction (∧): "p and q." True only when both p and q are true.
  • Disjunction (∨): "p or q." True when at least one is true (this is inclusive or).
  • Negation (¬): Flips the truth value. If p is true, ¬p is false.
  • Implication (→): "If p, then q." False only when p is true and q is false. This trips people up: an implication with a false hypothesis is always true.
  • Biconditional (↔): "p if and only if q." True when both sides share the same truth value.
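These truth conditions can be encoded directly as functions on booleans. A minimal Python sketch (illustrative only; the function names are my own):

```python
# Each logical connective as a function on Python booleans.
def conj(p, q): return p and q          # p ∧ q
def disj(p, q): return p or q           # p ∨ q (inclusive or)
def neg(p): return not p                # ¬p
def implies(p, q): return (not p) or q  # p → q: false only when p is true and q is false
def iff(p, q): return p == q            # p ↔ q: true when truth values match

# A false hypothesis makes the implication true ("vacuously true").
print(implies(False, False))  # True
print(implies(True, False))   # False
```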

Quantifiers in mathematics

  • Universal quantifier (∀): Asserts a property holds for every element in the domain. ∀x P(x) means no exceptions.
  • Existential quantifier (∃): Asserts at least one element has the property. ∃x P(x) means you need just one witness.
  • Uniqueness quantifier (∃!): Asserts exactly one element has the property.
  • Quantifiers can be combined, and order matters. ∀x ∃y (y > x) says "for every x, there's some y bigger than it" (true for real numbers). Swap the quantifiers to get ∃y ∀x (y > x), which says "there's a single y bigger than everything" (false for real numbers). Same symbols, completely different meaning.

Propositional logic

Propositional logic deals with statements that are simply true or false. You don't look inside the statements; you just care about how their truth values combine through connectives. It's the simplest logical system, and everything more advanced builds on it.

Truth tables and validity

A truth table lists every possible combination of truth values for the variables in an expression and shows the resulting truth value of the whole expression.

  • A tautology is true in every row of its truth table (e.g., p ∨ ¬p).
  • A contradiction is false in every row (e.g., p ∧ ¬p).
  • A contingency is true in some rows and false in others.

An argument is valid if there's no row where all the premises are true but the conclusion is false. You check this by building a truth table for the entire argument.
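Building a truth table mechanically is a good exercise. A small Python sketch (my own illustration, assuming two propositional variables) classifies the examples above:

```python
from itertools import product

def truth_table_values(expr):
    """Evaluate a two-variable boolean expression over every row of its truth table."""
    return [expr(p, q) for p, q in product([True, False], repeat=2)]

tautology = truth_table_values(lambda p, q: p or not p)        # p ∨ ¬p
contradiction = truth_table_values(lambda p, q: p and not p)   # p ∧ ¬p

print(all(tautology))          # True: true in every row
print(not any(contradiction))  # True: false in every row
```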

Logical equivalence

Two statements are logically equivalent (written ≡) if they produce identical truth values in every possible scenario. Key equivalences to know:

De Morgan's Laws:

  • ¬(p ∧ q) ≡ (¬p ∨ ¬q)
  • ¬(p ∨ q) ≡ (¬p ∧ ¬q)

Think of it this way: negation "flips" the connective and negates each part.
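Because these are equivalences, they can be verified exhaustively over all four truth assignments. A quick Python check (illustrative, not from the guide):

```python
from itertools import product

# Exhaustively verify both De Morgan laws over every truth assignment.
for p, q in product([True, False], repeat=2):
    assert (not (p and q)) == ((not p) or (not q))
    assert (not (p or q)) == ((not p) and (not q))
print("De Morgan's laws hold in every row")
```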

Distributive Laws:

  • p ∧ (q ∨ r) ≡ (p ∧ q) ∨ (p ∧ r)
  • p ∨ (q ∧ r) ≡ (p ∨ q) ∧ (p ∨ r)

These work just like distributing multiplication over addition, but with logical connectives.

Conditional statements

Given a conditional p → q, there are three related statements you should know:

  • Converse: q → p (not equivalent to the original)
  • Inverse: ¬p → ¬q (not equivalent to the original)
  • Contrapositive: ¬q → ¬p (logically equivalent to the original)

The contrapositive equivalence is especially useful in proofs. If you're struggling to prove "if p then q" directly, try proving "if not q then not p" instead.

A biconditional p ↔ q is equivalent to (p → q) ∧ (q → p). To prove a biconditional, you need to prove both directions.
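Comparing truth tables makes the converse/inverse/contrapositive relationships concrete. A Python sketch (my own illustration):

```python
from itertools import product

implies = lambda a, b: (not a) or b  # a → b

rows = list(product([True, False], repeat=2))
original       = [implies(p, q) for p, q in rows]
converse       = [implies(q, p) for p, q in rows]
inverse        = [implies(not p, not q) for p, q in rows]
contrapositive = [implies(not q, not p) for p, q in rows]

print(original == contrapositive)  # True: logically equivalent
print(original == converse)        # False
print(original == inverse)         # False
print(converse == inverse)         # True: each is the other's contrapositive
```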

Predicate logic

Predicate logic extends propositional logic by letting you look inside statements. Instead of treating "5 is prime" as an indivisible unit, predicate logic breaks it into a predicate P ("is prime") applied to an object (5). This lets you make general claims about entire collections of objects using quantifiers.

Predicates and variables

  • A predicate is a property or relationship that becomes true or false once you plug in specific values. P(x) might mean "x is even." It's not a statement until x gets a value or a quantifier.
  • Atomic formulas combine predicate symbols with variables: P(x), Q(x, y).
  • A free variable isn't bound by any quantifier and can take any value in the domain. A bound variable is attached to a quantifier. In ∀x P(x, y), x is bound and y is free.

Universal vs existential quantifiers

  • ∀x P(x) is true if and only if P(x) holds for every x in the domain.
  • ∃x P(x) is true if and only if P(x) holds for at least one x in the domain.

Negation rules for quantifiers are critical:

  • ¬(∀x P(x)) ≡ ∃x ¬P(x): "Not everything has property P" means "something lacks property P."
  • ¬(∃x P(x)) ≡ ∀x ¬P(x): "Nothing has property P" means "everything lacks property P."
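Over a finite domain, ∀ behaves like Python's all() and ∃ like any(), so the negation rules can be checked directly. A minimal sketch (domain and predicate are my own choices):

```python
domain = [1, 2, 3, 4]
P = lambda x: x % 2 == 0  # "x is even"

# ¬(∀x P(x)) ≡ ∃x ¬P(x): "not everything is even" ↔ "something is odd"
assert (not all(P(x) for x in domain)) == any(not P(x) for x in domain)

# ¬(∃x P(x)) ≡ ∀x ¬P(x): "nothing is even" ↔ "everything is odd"
assert (not any(P(x) for x in domain)) == all(not P(x) for x in domain)
print("quantifier negation rules verified on this domain")
```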

Nested quantifiers

When a statement uses multiple quantifiers, you read them left to right, and each quantifier's variable depends on the ones before it.

  • ∀x ∃y (x + y = 0): "For every x, there exists a y such that x + y = 0." Here y can depend on x. (True for real numbers: pick y = -x.)
  • ∃y ∀x (x + y = 0): "There exists a single y that works for every x." (False for real numbers.)
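Restricted to a small symmetric set of integers, the same pair of statements can be tested by nesting all() and any(), which mirrors the left-to-right reading of the quantifiers (an illustrative sketch; the domain is my own choice):

```python
# Quantifier order as nested all()/any() over a finite domain.
domain = [-2, -1, 0, 1, 2]

# ∀x ∃y (x + y = 0): every x has an additive inverse in the domain — True here.
forall_exists = all(any(x + y == 0 for y in domain) for x in domain)

# ∃y ∀x (x + y = 0): one y that cancels every x — False.
exists_forall = any(all(x + y == 0 for x in domain) for y in domain)

print(forall_exists, exists_forall)  # True False
```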

Nested quantifiers show up in important definitions. For instance, the definition of a limit uses ∀ε ∃δ, and the order of those quantifiers is the whole point.

To negate nested quantifiers, flip each quantifier and negate the predicate at the end:

¬(∀x ∃y P(x, y)) ≡ ∃x ∀y ¬P(x, y)
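The flip-and-negate recipe can be sanity-checked on a finite domain (an illustrative sketch; the predicate is my own choice):

```python
# Check ¬(∀x ∃y P(x,y)) ≡ ∃x ∀y ¬P(x,y) for P(x, y) = "x < y" on a finite domain.
domain = [0, 1, 2]
P = lambda x, y: x < y

lhs = not all(any(P(x, y) for y in domain) for x in domain)  # ¬(∀x ∃y P(x,y))
rhs = any(all(not P(x, y) for y in domain) for x in domain)  # ∃x ∀y ¬P(x,y)
print(lhs == rhs)  # True: the two forms agree (x = 2 is the witness)
```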

Set theory notation

Set theory gives you a formal language for talking about collections of objects. Nearly every branch of mathematics uses set notation, so fluency here pays off everywhere.

Set builder notation

Set builder notation describes a set by stating the property its elements must satisfy. The general form is:

{x | P(x)}

This reads "the set of all x such that P(x) is true." For example:

  • {x ∈ ℤ | x > 0} is the set of positive integers.
  • {x ∈ ℝ | x² < 4} is the open interval (-2, 2).

You can define both finite and infinite sets this way, and you can combine set builder notation with other set operations.
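Set builder notation maps almost word for word onto Python set comprehensions, which can help internalize the reading order. A sketch using finite slices of the integers (my own illustration):

```python
# {x ∈ ℤ | 0 < x ≤ 10} — restricted here to a finite slice of the integers.
positives = {x for x in range(-10, 11) if x > 0}

# {x ∈ ℤ | x² < 4} — the integer points inside the open interval (-2, 2).
small_squares = {x for x in range(-10, 11) if x**2 < 4}

print(sorted(positives))      # [1, 2, ..., 10]
print(sorted(small_squares))  # [-1, 0, 1]
```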


Set operations and symbols

  • Union (∪): A ∪ B contains every element in A, in B, or in both.
  • Intersection (∩): A ∩ B contains only elements in both A and B.
  • Set difference (∖): A ∖ B contains elements in A that are not in B.
  • Complement (Aᶜ): All elements in the universal set that are not in A.
  • Subset (⊆): A ⊆ B means every element of A is also in B.
  • Proper subset (⊂): A ⊂ B means A ⊆ B but A ≠ B (B has at least one element not in A).
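Python's built-in set type implements all of these operations with operators, which makes a handy playground (illustrative values, my own choice):

```python
A = {1, 2, 3, 4}
B = {3, 4, 5, 6}

print(A | B)       # union A ∪ B: {1, 2, 3, 4, 5, 6}
print(A & B)       # intersection A ∩ B: {3, 4}
print(A - B)       # difference A ∖ B: {1, 2}
print(A <= B)      # subset test A ⊆ B: False
print({3, 4} < A)  # proper subset test {3, 4} ⊂ A: True
```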

Venn diagrams

Venn diagrams are visual representations of sets using overlapping circles (or other closed shapes). Overlapping regions represent intersections, and shading indicates the result of a set operation. They're helpful for building intuition, especially when working with two or three sets, though they become unwieldy with more.

Functions and relations

Functions and relations describe how elements of one set connect to elements of another. They're the formal way to talk about dependencies, mappings, and transformations.

Domain and codomain

  • The domain is the set of all valid inputs to a function.
  • The codomain is the set where outputs are allowed to land.
  • The range (or image) is the set of outputs the function actually produces. The range is always a subset of the codomain, but it might not be the whole codomain.

For example, if f: ℝ → ℝ is defined by f(x) = x², the domain is ℝ, the codomain is ℝ, but the range is [0, ∞).

Function notation

  • f(x) denotes the output of function f for input x.
  • Piecewise functions use different expressions for different parts of the domain.
  • Composition: (f ∘ g)(x) = f(g(x)). Apply g first, then f.
  • Inverse functions: f⁻¹(x) reverses f, but only exists when f is bijective (one-to-one and onto).
  • Common special notations include sin, cos, log, ln.
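Composition's inside-out evaluation order is easy to get backwards; a short Python sketch makes it explicit (the helper and example functions are my own):

```python
# Composition applies the inner function first: (f ∘ g)(x) = f(g(x)).
def compose(f, g):
    return lambda x: f(g(x))

f = lambda x: x ** 2  # f(x) = x²
g = lambda x: x + 1   # g(x) = x + 1

fg = compose(f, g)
gf = compose(g, f)
print(fg(3))  # f(g(3)) = f(4) = 16
print(gf(3))  # g(f(3)) = g(9) = 10 — composition is not commutative
```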

Relation symbols

  • = (equality): two expressions have the same value
  • <, > (strict ordering): less than, greater than
  • ≤, ≥ (non-strict ordering): less than or equal to, greater than or equal to
  • ≡ (equivalence): logical or structural equivalence
  • ≅ (congruence): same shape and size in geometry
  • ≈ (approximation): values are close but not exactly equal

Proof writing

Proof writing is how mathematicians establish that statements are true beyond doubt. A proof is a chain of logical steps, each justified by a definition, axiom, or previously proven result. Writing clear proofs is a skill you build with practice.

Structure of mathematical proofs

A well-organized proof typically follows these steps:

  1. State the claim. Write the theorem or proposition clearly.
  2. List assumptions. Identify what's given or assumed.
  3. Build the argument. Proceed through logical steps, each one following from what came before.
  4. Justify each step. Cite the definition, axiom, or theorem that supports it.
  5. Conclude. Restate what you've shown and mark the end of the proof (QED or ∎).

Common proof techniques

  • Direct proof: Start from the hypotheses and reason forward to the conclusion.
  • Proof by contraposition: Instead of proving p → q, prove the equivalent ¬q → ¬p.
  • Proof by cases: Split the problem into exhaustive subcases and prove each one separately.
  • Existence proof: Show that an object with the desired properties exists (either by constructing it or by arguing indirectly).
  • Uniqueness proof: Show that at most one object satisfies the given conditions, often by assuming two such objects exist and showing they must be equal.
  • Constructive proof: Explicitly build or exhibit the object in question.

Proof by contradiction

This technique works by assuming the opposite of what you want to prove and showing that assumption leads to an impossibility.

  1. Assume ¬P (the negation of the statement you want to prove).
  2. Reason logically from that assumption.
  3. Arrive at a contradiction (something you know is false, or two statements that conflict).
  4. Conclude that ¬P must be false, so P is true.

Classic examples include proving that √2 is irrational and that there are infinitely many primes. Contradiction is especially powerful when a direct approach seems blocked.

Mathematical induction

Mathematical induction is a proof technique for showing that a statement holds for every natural number (or every integer from some starting point onward). It works like a chain of dominoes: knock over the first one, and if each domino knocks over the next, they all fall.

Principle of mathematical induction

  1. Base case: Prove the statement holds for the smallest value (usually n = 0 or n = 1).
  2. Inductive step: Assume the statement holds for some arbitrary n = k (this assumption is called the inductive hypothesis). Then prove it holds for n = k + 1.
  3. Conclusion: By the principle of induction, the statement holds for all natural numbers ≥ the base case.

Induction is commonly used to prove summation formulas (like 1 + 2 + ⋯ + n = n(n+1)/2) and divisibility properties.
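A finite check is not a proof, but it's a useful sanity test before attempting the induction. A quick Python sketch for the summation formula (my own illustration):

```python
# Spot-check 1 + 2 + ... + n = n(n+1)/2 for small n.
# Checking finitely many cases is evidence, not proof — induction supplies the proof.
for n in range(1, 101):
    assert sum(range(1, n + 1)) == n * (n + 1) // 2
print("formula matches for n = 1..100")
```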

Strong induction

Strong induction modifies the inductive step: instead of assuming the statement holds only for n = k, you assume it holds for all values from the base case up through k. Then you prove it for k + 1.

This is useful when the proof for k + 1 needs to reference cases earlier than k. For example, proving that every integer ≥ 2 can be written as a product of primes uses strong induction because factoring a composite number might produce factors much smaller than k.

Strong induction and standard induction are logically equivalent (they prove the same things), but strong induction can make certain proofs much cleaner.
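The prime-factorization argument has a direct computational counterpart: a recursive function whose recursive calls land on factors smaller than n, exactly where the strong inductive hypothesis applies. A sketch (my own illustration, not an optimized factorizer):

```python
# Recursive prime factorization mirroring the strong-induction argument:
# a composite n splits into two smaller factors, each covered by the
# strong inductive hypothesis; a prime n is the base case.
def prime_factors(n):
    if n < 2:
        raise ValueError("need an integer >= 2")
    for d in range(2, int(n ** 0.5) + 1):
        if n % d == 0:
            return prime_factors(d) + prime_factors(n // d)
    return [n]  # no divisor up to √n: n itself is prime

print(prime_factors(360))  # [2, 2, 2, 3, 3, 5]
```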


Structural induction

Structural induction generalizes mathematical induction to recursively defined structures like lists, trees, or formal expressions.

  1. Base case: Prove the property for the simplest structure(s).
  2. Inductive step: Assume the property holds for smaller/simpler sub-structures, and prove it for a structure built from them.

This technique is widely used in computer science and formal language theory, where the objects of interest aren't naturally indexed by integers.
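As a concrete case, consider a claim provable by structural induction on binary trees: a tree with n nodes has exactly n + 1 empty child slots (base case: the empty tree has 1; inductive step: a node combines its subtrees' counts). A Python sketch checking one instance (the encoding is my own):

```python
# A binary tree defined recursively: None, or a node with two subtrees.
class Node:
    def __init__(self, left=None, right=None):
        self.left, self.right = left, right

def count_nodes(t):
    return 0 if t is None else 1 + count_nodes(t.left) + count_nodes(t.right)

def count_empty_slots(t):
    return 1 if t is None else count_empty_slots(t.left) + count_empty_slots(t.right)

tree = Node(Node(Node(), None), Node())
print(count_nodes(tree), count_empty_slots(tree))  # 4 5 — always n and n + 1
```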

Formal systems

A formal system bundles together a language, a set of axioms, and rules of inference into a self-contained framework for reasoning. Understanding formal systems helps you see what mathematics can (and can't) do at a foundational level.

Axioms and theorems

  • Axioms are the starting assumptions you accept without proof. Different choices of axioms give different formal systems.
  • Theorems are statements derived from axioms using the system's rules of inference.
  • A system is consistent if you can never derive both a statement and its negation.
  • A system is complete if every true statement (within the system's language) can be proven.

Formal languages vs natural languages

Formal languages have precisely defined syntax (what counts as a well-formed expression) and semantics (what each expression means). Natural languages like English rely heavily on context and can be ambiguous. "The square root of a number is positive" could mean several things in English, but in formal language you'd have to specify exactly which numbers and exactly what "positive" means. Translating between the two requires careful attention to precision.

Consistency and completeness

These two properties are the gold standard for formal systems, but Gödel showed you can't always have both:

  • Gödel's First Incompleteness Theorem: Any consistent formal system powerful enough to express basic arithmetic contains true statements that cannot be proven within the system. In other words, such systems are necessarily incomplete.
  • Gödel's Second Incompleteness Theorem: Such a system cannot prove its own consistency.

These results don't break mathematics. They tell you there are inherent limits to what any single formal system can achieve.

Symbolic manipulation

Symbolic manipulation is the process of transforming mathematical expressions into equivalent forms. You use it constantly when simplifying, solving equations, or rewriting logical statements.

Algebraic manipulation rules

  • Commutative: a + b = b + a and ab = ba
  • Associative: (a + b) + c = a + (b + c) and (ab)c = a(bc)
  • Distributive: a(b + c) = ab + ac
  • Exponent rules: aᵐ · aⁿ = aᵐ⁺ⁿ, (aᵐ)ⁿ = aᵐⁿ
  • Factoring: Extracting common factors or recognizing patterns like a² − b² = (a + b)(a − b)

Logical equivalence transformations

These let you rewrite logical expressions into equivalent forms:

  • De Morgan's Laws: ¬(p ∧ q) ≡ (¬p ∨ ¬q) and ¬(p ∨ q) ≡ (¬p ∧ ¬q)
  • Contrapositive: (p → q) ≡ (¬q → ¬p)
  • Double negation: ¬¬p ≡ p
  • Implication elimination: (p → q) ≡ (¬p ∨ q)
  • Quantifier negation: ¬(∀x P(x)) ≡ ∃x ¬P(x) and ¬(∃x P(x)) ≡ ∀x ¬P(x)
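The propositional transformations above can all be verified by brute force over the four truth assignments. A compact Python check (my own illustration; implication is defined from its truth condition to avoid circularity):

```python
from itertools import product

for p, q in product([True, False], repeat=2):
    impl = not (p and not q)              # p → q: false only when p ∧ ¬q
    assert impl == ((not p) or q)         # implication elimination
    assert impl == (not ((not q) and p))  # contrapositive ¬q → ¬p
    assert (not (not p)) == p             # double negation
print("equivalences verified over all assignments")
```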

Simplification techniques

  • Combining like terms in algebraic expressions
  • Canceling common factors in fractions
  • Applying trigonometric identities (e.g., sin²θ + cos²θ = 1)
  • Using logarithm properties (e.g., log(ab) = log a + log b)
  • Partial fraction decomposition for breaking complex rational expressions into simpler pieces

Formal definitions

A formal definition pins down exactly what a mathematical term means, leaving no room for interpretation. Good definitions are the foundation of everything else: you can't prove a theorem about "continuous functions" if you haven't defined continuity precisely.

Precision in mathematical definitions

  • Use well-defined terms and symbols to eliminate ambiguity.
  • Specify the domain or context (e.g., "for all real numbers" vs. "for all integers").
  • Avoid circular definitions that use the term being defined in its own definition.
  • Make sure the definition is neither too broad (capturing things it shouldn't) nor too narrow (excluding things it should include).
  • Use quantifiers and connectives to express complex conditions precisely.

Necessary vs sufficient conditions

  • A necessary condition must hold for a statement to be true. "Being a mammal" is necessary for "being a dog."
  • A sufficient condition guarantees the statement is true. "Being a dog" is sufficient for "being a mammal."
  • An "if and only if" statement combines both: the condition is exactly what's needed, no more and no less.

Recognizing which conditions are necessary, which are sufficient, and which are both is a key skill for writing definitions and constructing proofs.

Constructive vs non-constructive definitions

  • Constructive definitions tell you how to build or find the object. For example, defining the GCD of two numbers via the Euclidean algorithm is constructive.
  • Non-constructive definitions specify what properties the object must have without telling you how to produce it. For example, defining a limit as "the value L such that for every ε > 0..." describes what L must satisfy but doesn't tell you how to compute it.

Constructive definitions often lead directly to algorithms, while non-constructive definitions can be more general or elegant. The choice depends on what you need from the definition.