
๐Ÿคน๐ŸผFormal Logic II Unit 1 Review


1.3 Resolution and theorem proving

Written by the Fiveable Content Team • Last updated August 2025
๐Ÿคน๐ŸผFormal Logic II
Unit & Topic Study Guides

Resolution Rule for Clause Derivation

Resolution is a single, powerful inference rule that lets you derive new clauses from existing ones. When combined with a proof-by-contradiction strategy, it becomes the backbone of automated theorem proving: you can mechanically determine whether a set of clauses is satisfiable, or whether a conclusion follows from given premises.

Applying the Resolution Rule

The resolution rule works on two parent clauses that contain complementary literals, meaning some literal L appears in one clause and ¬L appears in the other. The rule produces a new clause, called the resolvent, by combining all the remaining literals from both parents and dropping the complementary pair.

For example, given the clauses {P, Q} and {¬P, R}, the complementary literals are P and ¬P. Resolving on P yields the resolvent {Q, R}.

A few things to keep in mind:

  • You resolve on exactly one complementary pair at a time. If two clauses share more than one complementary pair, you still pick just one per resolution step.
  • The resolvent is a logical consequence of the parent clauses. If both parents are true under some interpretation, the resolvent must be true under that interpretation too.
  • Duplicate literals in the resolvent are removed (since {P, P, Q} is the same clause as {P, Q}).
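
The rule can be sketched in a few lines of Python. This is an illustrative sketch, not library code: clauses are represented as frozensets of literal strings, with a `~` prefix marking negation (both conventions are assumptions of the sketch).

```python
def negate(lit):
    """Return the complementary literal: P <-> ~P."""
    return lit[1:] if lit.startswith("~") else "~" + lit

def resolve(c1, c2, lit):
    """Resolve parent clauses c1 and c2 on one complementary pair.

    lit must occur in c1 and negate(lit) in c2. The resolvent keeps all
    remaining literals of both parents; duplicate literals disappear
    automatically because clauses are sets.
    """
    assert lit in c1 and negate(lit) in c2, "literals are not complementary"
    return frozenset((c1 - {lit}) | (c2 - {negate(lit)}))

# {P, Q} resolved with {~P, R} on P gives {Q, R}
resolvent = resolve(frozenset({"P", "Q"}), frozenset({"~P", "R"}), "P")
```

Using frozensets means the clause-identity point above comes for free: a resolvent that would contain a repeated literal simply collapses to the smaller clause.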

Properties of the Resolution Rule

Each clause is a disjunction of literals (propositional variables or their negations), and a clause set represents the conjunction of those clauses. Resolution operates within this conjunctive normal form (CNF) framework.

  • Soundness. Resolution preserves logical consequence. Any resolvent is entailed by its parent clauses, so you never derive something false from something true.
  • Refutation completeness. If a set of clauses is unsatisfiable, resolution is guaranteed to derive the empty clause □. This is a subtly different claim from "you can derive any entailed clause directly." Resolution is complete for refutation, which is exactly what theorem proving by contradiction needs.

Resolution Proofs for Satisfiability

Constructing Resolution Proofs

A resolution proof is a sequence of resolution steps aimed at deriving the empty clause □, which is a clause with no literals. Since a clause is a disjunction, a clause with no disjuncts is trivially false. Deriving □ means the clause set is unsatisfiable.

To prove that a conclusion φ follows from premises Γ:

  1. Convert all premises in Γ to CNF.
  2. Negate the conclusion: take ¬φ and convert it to CNF as well.
  3. Combine all resulting clauses into a single clause set.
  4. Repeatedly apply the resolution rule to pairs of clauses, adding each resolvent to the set.
  5. If you derive □, the original set Γ ∪ {¬φ} is unsatisfiable, which means Γ ⊨ φ.

Resolution proofs are often drawn as a tree (or DAG) where each internal node is a resolvent and its two children are the parent clauses used to produce it. The root of the tree is □.
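
As a miniature of steps 1–5, here is the modus ponens inference (Γ = {P → Q, P}, φ = Q) carried out as a resolution refutation in Python. This is an illustrative sketch; clauses are frozensets of literal strings with a `~` prefix for negation.

```python
def negate(lit):
    """Complementary literal: P <-> ~P."""
    return lit[1:] if lit.startswith("~") else "~" + lit

def resolve(c1, c2, lit):
    """Resolve c1 and c2 on lit (lit in c1, negate(lit) in c2)."""
    return frozenset((c1 - {lit}) | (c2 - {negate(lit)}))

# Premises: P -> Q (CNF clause {~P, Q}) and P.  Conclusion: Q.
# Step 2 adds the negated conclusion {~Q}; then resolve toward the empty clause.
clauses = [frozenset({"~P", "Q"}), frozenset({"P"}), frozenset({"~Q"})]

step1 = resolve(clauses[0], clauses[2], "Q")   # {~P}
step2 = resolve(step1, clauses[1], "~P")       # frozenset(): the empty clause
unsat = (step2 == frozenset())                 # so the premises entail Q
```

Drawn as a tree, {~P, Q} and {~Q} are the children of {~P}, which together with {P} produces the root □.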

Interpreting Resolution Proofs

  • Empty clause derived. The clause set is unsatisfiable. If you added ¬φ to your premises, this means the premises entail φ.
  • No new clauses can be generated. Resolution has saturated the clause set without producing □. The clause set is satisfiable, so the premises do not entail the conclusion. A satisfying assignment can be read off from the saturated set.

This gives resolution a decision-procedure character for propositional logic: it always terminates (the number of distinct clauses over a finite set of variables is finite) and gives a definitive yes-or-no answer.


Unification in Resolution

Role of Unification

Unification becomes relevant when you move beyond propositional logic into first-order logic, where clauses can contain variables, constants, and function symbols. In propositional resolution, checking for complementary literals is just a syntactic match. In first-order resolution, two literals might become complementary only after substituting appropriate terms for variables.

Unification is the process of finding a substitution θ (a mapping from variables to terms) that makes two expressions syntactically identical. For instance, P(x, f(a)) and P(b, y) unify under θ = {x ↦ b, y ↦ f(a)}.

Unification Algorithm

The standard unification algorithm takes two expressions as input and either returns a substitution or reports failure:

  1. If both expressions are identical, return the empty substitution.
  2. If one is a variable x and the other is a term t that does not contain x (the occurs check), return {x ↦ t}.
  3. If both are compound expressions with the same top-level symbol, recursively unify their arguments, composing substitutions as you go.
  4. Otherwise, fail.

The result, when it exists, is the most general unifier (MGU): the substitution that imposes the fewest constraints on variables. Any other unifier can be obtained by further instantiating the MGU.
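
The four steps above can be sketched in Python. The encoding is an assumption of the sketch, not standard notation: variables are strings prefixed with `?`, and compound terms are tuples whose first element is the function or predicate symbol.

```python
def is_var(t):
    """Variables are strings prefixed with '?' in this encoding."""
    return isinstance(t, str) and t.startswith("?")

def walk(t, theta):
    """Chase variable bindings in theta until t is not a bound variable."""
    while is_var(t) and t in theta:
        t = theta[t]
    return t

def occurs(v, t, theta):
    """Occurs check: does variable v appear inside term t under theta?"""
    t = walk(t, theta)
    if t == v:
        return True
    if isinstance(t, tuple):
        return any(occurs(v, arg, theta) for arg in t[1:])
    return False

def unify(x, y, theta=None):
    """Return an MGU extending theta, or None if unification fails."""
    if theta is None:
        theta = {}
    x, y = walk(x, theta), walk(y, theta)
    if x == y:                                  # step 1: identical
        return theta
    if is_var(x):                               # step 2: bind a variable
        return None if occurs(x, y, theta) else {**theta, x: y}
    if is_var(y):
        return None if occurs(y, x, theta) else {**theta, y: x}
    if (isinstance(x, tuple) and isinstance(y, tuple)
            and len(x) == len(y) and x[0] == y[0]):
        for a, b in zip(x[1:], y[1:]):          # step 3: recurse on arguments
            theta = unify(a, b, theta)
            if theta is None:
                return None
        return theta
    return None                                 # step 4: clash -> fail

# P(x, f(a)) and P(b, y) unify under {x -> b, y -> f(a)}
mgu = unify(("P", "?x", ("f", "a")), ("P", "b", "?y"))
```

Note how the occurs check in step 2 rejects cyclic bindings such as unifying x with f(x), which would otherwise produce an infinite term.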

While unification is primarily a first-order concept, understanding it here gives you the full picture of how resolution scales beyond propositional logic. For this unit, the propositional case (where complementary literals are identified by simple syntactic matching) is the main focus.

Resolution-Based Theorem Proving Algorithms

Basic Steps

A resolution-based theorem prover automates the refutation procedure described above. The core algorithm:

  1. Convert to CNF. Transform all input formulas (premises and the negated conclusion) into conjunctive normal form.
  2. Initialize the clause set. Collect all resulting clauses.
  3. Select a pair of clauses that contain complementary literals.
  4. Resolve them to produce a resolvent.
  5. Add the resolvent to the clause set (if it isn't already present).
  6. Terminate if the empty clause □ is derived (unsatisfiable) or if no new resolvents can be produced (satisfiable).
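
Putting steps 2–6 together, a naive saturation loop for the propositional case might look like the following sketch (frozensets of literal strings with `~` for negation; CNF conversion in step 1 is assumed done, and real provers add the optimizations discussed next):

```python
from itertools import combinations

def negate(lit):
    """Complementary literal: P <-> ~P."""
    return lit[1:] if lit.startswith("~") else "~" + lit

def resolvents(c1, c2):
    """All clauses obtainable by resolving c1 and c2 on one complementary pair."""
    out = set()
    for lit in c1:
        if negate(lit) in c2:
            out.add(frozenset((c1 - {lit}) | (c2 - {negate(lit)})))
    return out

def satisfiable(clauses):
    """Saturate the clause set; return False iff the empty clause is derived."""
    clauses = set(clauses)
    while True:
        new = set()
        for c1, c2 in combinations(clauses, 2):
            for r in resolvents(c1, c2):
                if not r:                # empty clause: unsatisfiable
                    return False
                new.add(r)
        if new <= clauses:               # saturated without the empty clause
            return True
        clauses |= new

# Does {P -> Q, P} entail Q?  Test the set with the negated conclusion {~Q}.
entails = not satisfiable([frozenset({"~P", "Q"}),
                           frozenset({"P"}),
                           frozenset({"~Q"})])
```

Termination is guaranteed because only finitely many distinct clauses exist over a finite set of variables, so the loop must eventually either derive □ or stop producing new clauses.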

Efficiency and Optimizations

Naively resolving every possible pair of clauses leads to an explosion of redundant clauses. Several strategies keep the search manageable:

  • Unit preference. Prioritize clauses with a single literal (unit clauses). Resolving with a unit clause always produces a resolvent shorter than the other parent, driving the proof toward □ faster.
  • Set of support. Only resolve clause pairs where at least one clause descends from the negated conclusion. This prevents aimless derivations among the premises alone.
  • Subsumption. If clause C₁ is a subset of clause C₂ (every literal in C₁ appears in C₂), then C₂ is redundant and can be discarded. C₁ is strictly more informative.
  • Tautology elimination. Discard any clause containing both L and ¬L, since it's trivially true and contributes nothing to a refutation.
  • Ordering strategies. Fix an ordering on literals and only resolve on the maximal literal in each clause. This restricts the number of possible resolution steps without sacrificing completeness (under appropriate conditions).
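
Two of these pruning rules, tautology elimination and subsumption, are easy to state in code. A sketch over the same frozenset clause representation used earlier (`~` marks negation):

```python
def negate(lit):
    """Complementary literal: P <-> ~P."""
    return lit[1:] if lit.startswith("~") else "~" + lit

def is_tautology(clause):
    """A clause containing both L and ~L is trivially true."""
    return any(negate(lit) in clause for lit in clause)

def prune(clauses):
    """Drop tautologies, then drop any clause properly subsumed by another."""
    kept = [c for c in clauses if not is_tautology(c)]
    return {c for c in kept if not any(other < c for other in kept)}

clause_set = [frozenset({"P", "~P", "Q"}),   # tautology: discarded
              frozenset({"P", "Q", "R"}),    # subsumed by {P, Q}: discarded
              frozenset({"P", "Q"})]
pruned = prune(clause_set)                   # only {P, Q} survives
```

The proper-subset test `other < c` is exactly the subsumption condition from the bullet above: the smaller clause constrains interpretations at least as much, so the larger one can never contribute to a refutation that the smaller one doesn't.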

Resolution-based provers built on these ideas are used in formal verification, automated mathematics, and AI reasoning systems. The key takeaway for this course: resolution gives you a mechanical, complete method for propositional theorem proving, and the same architecture extends naturally to first-order logic once you add unification.