
🤹🏼Formal Logic II Unit 6 Review


6.2 Unification and the resolution algorithm

Written by the Fiveable Content Team • Last updated August 2025

Unification for Resolution

Understanding Unification in First-Order Logic

Unification is the process of finding a substitution that makes two logical expressions syntactically identical. In the context of resolution, this is what lets you combine clauses that contain complementary literals and derive new clauses from them.

Two literals are complementary when one is the negation of the other and they can be unified. For instance, P(x) and ¬P(a) are complementary because the substitution {x ↦ a} makes P(x) identical to P(a), which directly contradicts ¬P(a). Without unification, resolution would only work on ground (variable-free) literals, which severely limits its power.

Most General Unifier (MGU)

When two expressions can be unified, there are often many substitutions that work. The most general unifier (MGU) is the one that commits to as little as possible while still making the expressions identical. Any other valid unifier is just a more specific instance of the MGU.

  • Given P(x, f(y)) and P(a, f(g(z))), the MGU is {x ↦ a, y ↦ g(z)}. Notice that z remains a free variable; the MGU doesn't bind it to anything because it doesn't need to.
  • A substitution like {x ↦ a, y ↦ g(c), z ↦ c} also unifies these expressions, but it's more specific than necessary. The MGU keeps maximum generality, which matters for the efficiency of the resolution process.

If a set of clauses is unsatisfiable, the resolution algorithm can derive the empty clause (□) using unification. Deriving □ constitutes a proof of unsatisfiability.

Applying the Unification Algorithm

Recursive Matching of Terms

The unification algorithm takes two expressions as input and attempts to build a substitution that makes them identical. Here's how it proceeds:

  1. Compare outermost symbols. If both expressions are the same constant or the same function symbol with the same arity, proceed to their arguments. If they're different function symbols (or different arities), unification fails immediately.
  2. Handle variables. If one expression is a variable, bind it to the other expression (subject to the occurs check, described below). Apply this binding throughout both expressions before continuing.
  3. Recurse on arguments. For function terms, unify corresponding arguments left to right, accumulating bindings as you go. Each new binding is applied to all remaining arguments before the next comparison.

For example, to unify f(x, g(y)) and f(a, g(b)):

  1. The outermost symbol f matches in both, so recurse on the argument pairs.
  2. First argument pair: x and a. Bind x ↦ a.
  3. Second argument pair: g(y) and g(b). The symbol g matches, so recurse. Bind y ↦ b.
  4. Result: MGU is {x ↦ a, y ↦ b}.
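The recursive procedure above can be sketched in Python. The term encoding is an illustrative assumption: variables are plain strings, while constants and function applications are tuples (symbol, arg1, …, argn), a constant being a zero-argument tuple like ('a',).

```python
def walk(t, subst):
    """Follow variable bindings until an unbound variable or a compound term."""
    while isinstance(t, str) and t in subst:
        t = subst[t]
    return t

def occurs(v, t, subst):
    """Occurs check: does variable v appear inside term t (under subst)?"""
    t = walk(t, subst)
    if t == v:
        return True
    return not isinstance(t, str) and any(occurs(v, a, subst) for a in t[1:])

def bind(v, t, subst):
    """Bind variable v to term t, failing (None) on the occurs check."""
    if occurs(v, t, subst):
        return None
    return {**subst, v: t}

def unify(t1, t2, subst=None):
    """Return an MGU of t1 and t2 as a dict, or None if they don't unify."""
    subst = {} if subst is None else subst
    t1, t2 = walk(t1, subst), walk(t2, subst)
    if t1 == t2:
        return subst
    if isinstance(t1, str):                    # t1 is an unbound variable
        return bind(t1, t2, subst)
    if isinstance(t2, str):                    # t2 is an unbound variable
        return bind(t2, t1, subst)
    if t1[0] != t2[0] or len(t1) != len(t2):   # symbol or arity clash: fail
        return None
    for a, b in zip(t1[1:], t2[1:]):           # recurse on arguments, left to right
        subst = unify(a, b, subst)
        if subst is None:
            return None
    return subst

# The worked example: unify f(x, g(y)) with f(a, g(b))
print(unify(('f', 'x', ('g', 'y')), ('f', ('a',), ('g', ('b',)))))
# → {'x': ('a',), 'y': ('b',)}
```

One design note: bindings are kept in "triangular" form, so a binding's right-hand side may itself contain bound variables; `walk` resolves such chains on demand. In the examples here every binding is already fully resolved.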

Occurs Check and Failure Conditions

Unification can fail in two ways:

  • Symbol clash: Two non-variable terms have different outermost function symbols or constants (e.g., f(x) and g(x)).
  • Occurs check failure: A variable would need to be bound to a term that contains that same variable, producing an infinite term.

The occurs check is the subtler failure. Consider unifying P(x, f(x)) with P(y, y). The first argument pair gives x ↦ y. Applying this, the second pair becomes: unify f(y) with y. This would require y ↦ f(y), but y appears inside f(y). Allowing this would create the infinite structure f(f(f(…))), so unification correctly fails.

If no failure occurs, the algorithm returns the MGU.


Implementing Resolution with Unification

Applying the Resolution Inference Rule

The resolution rule combines two clauses that contain complementary literals. The full procedure for a single resolution step:

  1. Select two clauses from your clause set, say C₁ and C₂.
  2. Identify a complementary pair: a literal L in C₁ and a literal ¬L′ in C₂ (or vice versa) such that L and L′ are unifiable.
  3. Compute the MGU θ of L and L′.
  4. Form the resolvent: Apply θ to all literals in both clauses, then remove the complementary pair. The resolvent is (C₁θ ∖ {Lθ}) ∪ (C₂θ ∖ {¬L′θ}).

Example: Given clauses {P(x), Q(x)} and {¬P(a), R(b)}:

  • Complementary pair: P(x) and ¬P(a).
  • MGU: {x ↦ a}.
  • Apply MGU to both clauses: {P(a), Q(a)} and {¬P(a), R(b)}.
  • Remove the complementary literals P(a) and ¬P(a).
  • Resolvent: {Q(a), R(b)}.
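A single resolution step can be sketched in Python, with the MGU θ passed in already computed. The encoding — variables as strings, atoms and compound terms as tuples (symbol, arg1, …, argn), literals as (sign, atom) pairs, clauses as frozensets — is an illustrative assumption:

```python
def apply_subst(term, subst):
    """Apply a substitution to a term or atom (variables are strings,
    compound terms are tuples (symbol, arg1, ..., argn))."""
    if isinstance(term, str):
        return subst.get(term, term)
    return (term[0],) + tuple(apply_subst(a, subst) for a in term[1:])

def resolvent(c1, lit1, c2, lit2, mgu):
    """Resolve clauses c1 and c2 on the complementary pair lit1 in c1,
    lit2 in c2, given their MGU.  A literal is (sign, atom), sign True
    meaning positive.  Apply the MGU everywhere, drop the pair, union."""
    r1 = {(s, apply_subst(a, mgu)) for (s, a) in c1 if (s, a) != lit1}
    r2 = {(s, apply_subst(a, mgu)) for (s, a) in c2 if (s, a) != lit2}
    return frozenset(r1 | r2)

# The example above: {P(x), Q(x)} and {¬P(a), R(b)} with MGU {x ↦ a}
c1 = frozenset({(True, ('P', 'x')), (True, ('Q', 'x'))})
c2 = frozenset({(False, ('P', ('a',))), (True, ('R', ('b',)))})
res = resolvent(c1, (True, ('P', 'x')), c2, (False, ('P', ('a',))), {'x': ('a',)})
# res is {Q(a), R(b)}: frozenset({(True, ('Q', ('a',))), (True, ('R', ('b',)))})
```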

Systematic Exploration and Unsatisfiability Proof

The full resolution algorithm works as follows:

  1. Start with the initial set of clauses S.
  2. Select two clauses from S that contain a complementary pair.
  3. Compute the resolvent and add it to S.
  4. If the resolvent is the empty clause □, stop: the original clause set is unsatisfiable.
  5. If no new resolvents can be generated (every possible resolution has been tried), stop: the clause set is satisfiable, and resolution cannot derive a contradiction. (In first-order logic this case may never be reached: resolution is only a semi-decision procedure, and the loop can run forever on some satisfiable clause sets.)
  6. Otherwise, return to step 2.

Deriving □ means you resolved two unit clauses like {L} and {¬L′} where L and L′ unify. Once both literals are removed, nothing remains, giving you the empty clause.
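The loop can be sketched compactly in Python. To keep the sketch self-contained it works on ground clauses, where a complementary pair is simply the same atom with opposite signs; in the first-order case the equality test would become a unifiability test, with the MGU applied when forming the resolvent. The encoding (literals as (sign, atom) pairs, clauses as frozensets) is an illustrative assumption:

```python
def saturate(clauses):
    """Resolution saturation for ground clauses.

    Returns True if the empty clause is derived (unsatisfiable),
    False if the set saturates without deriving it.
    """
    s = set(clauses)
    while True:
        new = set()
        for c1 in s:                         # step 2: try clause pairs
            for c2 in s:
                for (sign, atom) in c1:      # look for a complementary pair
                    if (not sign, atom) in c2:
                        # step 3: form the resolvent
                        r = frozenset((c1 - {(sign, atom)}) |
                                      (c2 - {(not sign, atom)}))
                        if not r:
                            return True      # step 4: empty clause derived
                        new.add(r)
        if new <= s:
            return False                     # step 5: saturated, nothing new
        s |= new                             # step 6: repeat with the larger set
```

For example, {P}, {¬P, Q}, {¬Q} yields {Q} and then □, so `saturate` reports unsatisfiability; {P}, {Q} has no complementary pair at all and saturates immediately.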

Unification's Role in Resolution Efficiency

Efficiency Benefits of Unification

The MGU keeps derived clauses as general as possible, which directly reduces redundant work. Consider clauses {P(x), Q(x)} and {¬P(y), R(y)}. Unification yields MGU {x ↦ y} (or equivalently {y ↦ x}), producing the resolvent {Q(y), R(y)}. This single resolvent covers every ground instance at once. If you had instead picked a specific constant and resolved with {x ↦ a, y ↦ a}, you'd get {Q(a), R(a)}, which only covers one case. You'd need to repeat the resolution for every constant in the domain.

Unification also prunes the search space: when two literals fail to unify, you know immediately that no resolution is possible between those clauses on those literals, so you skip them entirely.

Optimizations and Challenges

Even with MGUs, the resolution search space can grow rapidly. Several factors affect performance:

  • Term depth: Deeply nested function terms increase the cost of each unification call and can produce large, complex resolvents.
  • Variable count: Many variables in a clause increase the number of potential unification targets.

Practical resolution systems use additional techniques to manage this:

  • Indexing organizes clauses by their predicate symbols and argument structure, so you can quickly find candidate clauses for resolution instead of checking every pair.
  • Subsumption removes clauses that are strictly less general than another clause already in the set. If {P(x)} is in your clause set, there's no reason to keep {P(a)} since it's a special case.
  • Linear-time unification algorithms (such as those based on union-find structures) keep the per-step cost of unification low even for large terms.
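Subsumption rests on one-way matching rather than full unification: a clause subsumes another when some substitution applied to it yields a subset of the other. A minimal matcher, enough for the unit-clause case like {P(x)} versus {P(a)}, can be sketched as follows (the term encoding — variables as strings, compound terms as tuples — is an illustrative assumption):

```python
def match(pattern, target, subst=None):
    """One-way matching: find a substitution s with pattern·s == target,
    or return None.  Only variables in the pattern may be bound."""
    if subst is None:
        subst = {}
    if isinstance(pattern, str):               # pattern variable
        if pattern in subst:                   # must agree with earlier binding
            return subst if subst[pattern] == target else None
        return {**subst, pattern: target}
    # compound pattern: target must be compound with same symbol and arity
    if isinstance(target, str) or pattern[0] != target[0] or len(pattern) != len(target):
        return None
    for a, b in zip(pattern[1:], target[1:]):
        subst = match(a, b, subst)
        if subst is None:
            return None
    return subst

# {P(x)} subsumes {P(a)}: P(x) matches P(a) under {x ↦ a}
print(match(('P', 'x'), ('P', ('a',))))   # → {'x': ('a',)}
print(match(('P', ('a',)), ('P', 'x')))   # → None (matching is one-way)
```

For multi-literal clauses, subsumption additionally requires finding a single substitution under which every literal of the subsuming clause matches some literal of the other, which involves a search over literal pairings.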