
Coding Theory

Key Concepts of Convolutional Codes


Why This Matters

Convolutional codes are the workhorses of modern communication systems—from deep-space probes transmitting data across billions of miles to your smartphone maintaining a call in a noisy environment. Unlike block codes that process fixed chunks of data, convolutional codes operate on continuous streams, making them ideal for real-time applications. You're being tested on understanding how memory-based encoding creates redundancy, why certain code parameters affect error correction, and how decoding algorithms exploit the code's structure.

The key insight is that convolutional codes introduce controlled dependencies between output bits. This memory structure is what gives them power—and what makes concepts like state diagrams, trellis representations, and the Viterbi algorithm so interconnected. Don't just memorize definitions; know how each concept relates to the fundamental trade-off between redundancy (more protection) and efficiency (higher data rates).


Code Structure and Parameters

The foundation of any convolutional code lies in its defining parameters—these determine everything from encoding complexity to error-correcting power. The constraint length controls memory depth, while the code rate balances protection against throughput.

Definition and Basic Structure

  • Convolutional codes process continuous data streams—unlike block codes, they encode bits sequentially using memory of previous inputs
  • Constraint length (K) determines how many input bits influence each output—each output depends on the current input bit plus the K − 1 past bits held in memory
  • Code rate R = k/n specifies that for every k input bits, the encoder produces n output bits—lower rates mean more redundancy
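These parameters can be made concrete with a small sketch. Below is a minimal, hypothetical rate-1/2, K = 3 encoder using the classic generators (111) and (101), written with the leftmost bit as the current input; the function name `conv_encode` is illustrative, not from any library.

```python
def conv_encode(bits, g1=(1, 1, 1), g2=(1, 0, 1)):
    """Rate-1/2 convolutional encoder: two output bits per input bit."""
    K = len(g1)                # constraint length
    state = [0] * (K - 1)      # K-1 memory elements, initially zero
    out = []
    for b in bits:
        window = [b] + state   # current input plus stored past bits
        out.append(sum(w & g for w, g in zip(window, g1)) % 2)  # XOR of tapped positions
        out.append(sum(w & g for w, g in zip(window, g2)) % 2)
        state = window[:-1]    # shift register: newest bit in, oldest bit out
    return out

print(conv_encode([1, 0, 1, 1]))   # [1, 1, 1, 0, 0, 0, 0, 1] -- 8 coded bits for 4 inputs
```

Note that the output is exactly twice as long as the input, matching R = 1/2.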

Rate and Constraint Length Trade-offs

  • Higher constraint length improves error correction—more memory means outputs depend on more input history, creating stronger dependencies
  • Complexity scales exponentially with K, as the number of encoder states equals 2^(K-1)
  • Typical practical values range from K = 3 to K = 9, balancing performance against decoder computational requirements

Compare: A rate R = 1/2 code vs. rate R = 1/3 code—both can use the same constraint length, but the R = 1/3 code provides more redundancy at the cost of bandwidth. If an exam asks about improving reliability without changing encoder memory, adjusting rate is your answer.


Encoding Mechanisms

Understanding how input bits transform into coded output is essential for analyzing code behavior. Generator polynomials define the specific connections between memory elements and output bits.

Encoding Process and Generator Polynomials

  • Generator polynomials specify tap connections—each polynomial g_i(D) indicates which memory positions contribute to output bit i
  • Binary representation shows connections directly: g = (111) means all three positions (current input plus two memory elements) are XORed
  • Linear combination of inputs produces each output bit, where the encoding operation is equivalent to polynomial multiplication in GF(2)
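The polynomial-multiplication view can be checked directly. The sketch below (an illustrative helper `gf2_poly_mul`, with coefficient lists indexed by power of D) multiplies an input polynomial by g_1(D) = 1 + D + D² over GF(2); the resulting coefficients are the corresponding output stream of a shift-register encoder, tail included as the register empties.

```python
def gf2_poly_mul(a, b):
    """Multiply two GF(2) polynomials given as coefficient lists (index = power of D)."""
    result = [0] * (len(a) + len(b) - 1)
    for i, ai in enumerate(a):
        for j, bj in enumerate(b):
            result[i + j] ^= ai & bj   # mod-2 accumulation is XOR
    return result

u  = [1, 0, 1, 1]   # input u(D) = 1 + D^2 + D^3
g1 = [1, 1, 1]      # g1(D) = 1 + D + D^2 (binary 111)
print(gf2_poly_mul(u, g1))   # [1, 1, 0, 0, 0, 1] -- the first output stream, tail included
```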

Catastrophic Error Propagation

  • Catastrophic codes allow finite input errors to cause infinite output errors—a devastating failure mode where decoder never recovers
  • Occurs when generator polynomials share common factors—specifically, when gcd(g_1(D), g_2(D)) ≠ 1
  • Always verify polynomial coprimality during code design; catastrophic behavior makes a code practically useless despite good distance properties
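One way to run the coprimality check is sketched below: polynomials packed into integer bitmasks (bit i = coefficient of D^i) and Euclid's algorithm over GF(2); the helper names are illustrative. For generators with a nonzero constant term, gcd = 1 means the code is noncatastrophic.

```python
def gf2_mod(a, b):
    """Remainder of GF(2) polynomial division, polynomials as int bitmasks."""
    db = b.bit_length()
    while a.bit_length() >= db:
        a ^= b << (a.bit_length() - db)   # cancel the leading term
    return a

def gf2_gcd(a, b):
    """Euclid's algorithm over GF(2)[D]."""
    while b:
        a, b = b, gf2_mod(a, b)
    return a

# Coprime pair: g1 = 1 + D + D^2 (0b111), g2 = 1 + D^2 (0b101)
print(gf2_gcd(0b111, 0b101))   # 1 -> noncatastrophic
# Shared factor: g1 = 1 + D (0b011), g2 = 1 + D^2 = (1 + D)^2 (0b101)
print(gf2_gcd(0b011, 0b101))   # 3 = 0b11, i.e. 1 + D -> catastrophic
```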

Compare: Generator polynomials (111, 101) vs. (110, 101)—the first pair is coprime and safe, while the second shares the common factor (11), i.e. 1 + D (since 1 + D² = (1 + D)² over GF(2)), creating catastrophic behavior. Exam questions often ask you to identify which polynomial sets are catastrophic.


Graphical Representations

Visualizing encoder behavior through diagrams is crucial for understanding decoding. State and trellis diagrams transform abstract polynomial operations into tractable path-finding problems.

State Diagram Representation

  • States represent encoder memory contents—for K = 3, there are 2^2 = 4 states corresponding to the two stored bits
  • Transitions show input/output relationships—each arrow labeled with input bit and resulting output bits
  • Closed graph structure means all paths eventually revisit states, enabling analysis of code properties like free distance
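For a small code, the full state diagram can simply be enumerated. The sketch below assumes the K = 3 code with generators (111, 101), leftmost bit = current input, and prints every transition with its input/output label.

```python
G1, G2 = (1, 1, 1), (1, 0, 1)   # generators, leftmost bit = current input

def step(state, b):
    """One transition: returns (next_state, output_bits) for input bit b."""
    window = (b,) + state
    o1 = sum(w & g for w, g in zip(window, G1)) % 2
    o2 = sum(w & g for w, g in zip(window, G2)) % 2
    return window[:-1], (o1, o2)

for s in [(0, 0), (0, 1), (1, 0), (1, 1)]:
    for b in (0, 1):
        nxt, out = step(s, b)
        print(f"state {s} --in {b} / out {out}--> {nxt}")
```

Each of the 4 states has exactly two outgoing transitions (input 0 or 1), so the diagram has 8 labeled edges in total.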

Trellis Diagram Representation

  • Time-expanded state diagram unfolds transitions across discrete time steps, showing all possible encoding paths
  • Branches connect states at adjacent time instants, with branch labels indicating output bits for each transition
  • Decoding becomes path selection—finding the path through the trellis that best matches the received sequence

Compare: State diagrams vs. trellis diagrams—state diagrams are compact and show the encoder's complete behavior in one figure, while trellis diagrams explicitly show time evolution needed for decoding. FRQs asking about "decoding complexity" want trellis analysis; questions about "code structure" want state diagrams.


Decoding and Performance

The power of convolutional codes lies in efficient optimal decoding. The Viterbi algorithm exploits trellis structure to find maximum likelihood paths without exhaustive search.

Viterbi Decoding Algorithm

  • Maximum likelihood decoding finds the codeword sequence closest (in Hamming distance) to the received sequence
  • Survivor path selection keeps only the best path into each state, reducing complexity from exponential to linear in block length
  • Computational complexity scales as O(2^(K-1)) per decoded bit—manageable for practical constraint lengths
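A minimal hard-decision Viterbi decoder is sketched below for the K = 3 code with generators (111, 101), assuming the encoder starts in the all-zero state and using Hamming distance as the branch metric; the function names are illustrative. The key line is the survivor selection: each trellis step keeps only the single best path into every state.

```python
G1, G2 = (1, 1, 1), (1, 0, 1)            # (111, 101) generators
STATES = [(0, 0), (0, 1), (1, 0), (1, 1)]

def branch(state, b):
    """Encoder transition: (next_state, output_pair) for input bit b."""
    window = (b,) + state
    o1 = sum(w & g for w, g in zip(window, G1)) % 2
    o2 = sum(w & g for w, g in zip(window, G2)) % 2
    return window[:-1], (o1, o2)

def viterbi(received):
    """Hard-decision Viterbi; received is a flat bit list, two bits per step."""
    metric = {s: float("inf") for s in STATES}
    metric[(0, 0)] = 0                    # encoder assumed to start in state 00
    paths = {s: [] for s in STATES}
    for t in range(0, len(received), 2):
        r = tuple(received[t:t + 2])
        new_metric = {s: float("inf") for s in STATES}
        new_paths = {}
        for s in STATES:
            if metric[s] == float("inf"):
                continue                  # state not yet reachable
            for b in (0, 1):
                nxt, out = branch(s, b)
                # branch metric = Hamming distance to the received pair
                m = metric[s] + sum(o != x for o, x in zip(out, r))
                if m < new_metric[nxt]:   # survivor selection: keep best path only
                    new_metric[nxt] = m
                    new_paths[nxt] = paths[s] + [b]
        metric, paths = new_metric, new_paths
    best = min(STATES, key=lambda s: metric[s])
    return paths[best]

rx = [1, 1, 1, 0, 0, 0, 0, 1]            # codeword for input [1, 0, 1, 1]
rx[2] ^= 1                               # inject a single channel error
print(viterbi(rx))                       # [1, 0, 1, 1] -- the error is corrected
```

Exhaustive search over 4 input bits would compare 2^4 candidate codewords; the trellis version does a constant amount of work per state per step instead.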

Free Distance and Error-Correcting Capability

  • Free distance d_free is the minimum Hamming distance between any two valid infinite-length code sequences
  • Error correction capability allows correction of up to t = ⌊(d_free − 1)/2⌋ errors within a span of a few constraint lengths
  • Dominates performance at high SNR—asymptotic error probability decreases exponentially with d_free
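For a small code, d_free can be computed with a shortest-path search: find the minimum output weight of a detour that leaves the all-zero state and first returns to it. A sketch for the K = 3, (111, 101) code, using Dijkstra's algorithm over the state graph:

```python
import heapq

G1, G2 = (1, 1, 1), (1, 0, 1)   # (111, 101) generators

def branch(state, b):
    """Encoder transition: (next_state, output_weight) for input bit b."""
    window = (b,) + state
    o1 = sum(w & g for w, g in zip(window, G1)) % 2
    o2 = sum(w & g for w, g in zip(window, G2)) % 2
    return window[:-1], o1 + o2

def free_distance():
    """Minimum weight of a path that diverges from state 00 and remerges."""
    start, w0 = branch((0, 0), 1)        # forced divergence: input 1
    heap = [(w0, start)]
    best = {}
    while heap:
        w, s = heapq.heappop(heap)
        if s == (0, 0):
            return w                     # first remerge has minimum weight
        if s in best and best[s] <= w:
            continue
        best[s] = w
        for b in (0, 1):
            nxt, bw = branch(s, b)
            heapq.heappush(heap, (w + bw, nxt))

print(free_distance())                   # 5, so t = floor((5 - 1)/2) = 2
```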

Compare: Free distance in convolutional codes vs. minimum distance in block codes—both measure separation between codewords, but d_free applies to infinite sequences and determines asymptotic performance. When asked about "error floor" or "high-SNR behavior," free distance is the key parameter.


Code Modifications and Variants

Practical systems often modify basic convolutional codes to meet specific requirements. Puncturing trades error protection for rate, while termination strategies affect boundary behavior.

Punctured Convolutional Codes

  • Selective bit deletion increases code rate—removing output bits according to a puncturing pattern yields rates like 2/3 or 3/4 from a base rate-1/2 code
  • Rate-compatible families use nested puncturing patterns, enabling adaptive coding without redesigning the encoder
  • Performance degrades gracefully—punctured codes maintain good distance properties if patterns are chosen carefully
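Puncturing is just periodic deletion. The sketch below applies a common rate-2/3 pattern (keep, keep, keep, drop) to a rate-1/2 output stream; the pattern choice and function name are illustrative.

```python
def puncture(coded, pattern=(1, 1, 1, 0)):
    """Keep bit i of the rate-1/2 stream iff pattern[i % len(pattern)] is 1."""
    return [bit for i, bit in enumerate(coded) if pattern[i % len(pattern)]]

coded = [1, 1, 1, 0, 0, 0, 0, 1]   # 8 coded bits from 4 input bits (rate 1/2)
pr = puncture(coded)
print(pr, len(pr))                 # 6 bits survive for 4 inputs: rate 4/6 = 2/3
```

The decoder runs the same rate-1/2 trellis, treating punctured positions as erasures, which is why puncturing needs no encoder redesign.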

Terminated and Tail-Biting Codes

  • Terminated codes flush encoder memory—appending K − 1 known bits forces return to the zero state, enabling clean trellis boundaries
  • Tail-biting codes wrap circularly—initial state equals final state, eliminating rate loss from termination bits
  • Decoding differs significantly: terminated codes use standard Viterbi; tail-biting requires multiple passes or circular trellis processing
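Zero-tail termination can be seen in a few lines. This sketch (an illustrative encoder with the same (111, 101) generators as the earlier examples) shows that appending K − 1 = 2 zero bits drives the register back to the zero state.

```python
def conv_encode(bits, g1=(1, 1, 1), g2=(1, 0, 1)):
    """Rate-1/2 encoder that also reports the final register contents."""
    K = len(g1)
    state = [0] * (K - 1)
    out = []
    for b in bits:
        window = [b] + state
        out.append(sum(w & g for w, g in zip(window, g1)) % 2)
        out.append(sum(w & g for w, g in zip(window, g2)) % 2)
        state = window[:-1]    # shift register update
    return out, state

data = [1, 0, 1, 1]
tail = [0, 0]                  # K - 1 = 2 flush bits
coded, final_state = conv_encode(data + tail)
print(final_state)             # [0, 0]: the trellis ends in the zero state
```

For this (non-recursive) encoder the flush bits are simply zeros; a tail-biting encoder would instead preload the register so that the initial and final states coincide.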

Compare: Terminated vs. tail-biting codes—termination wastes K − 1 bits per block but simplifies decoding, while tail-biting preserves rate but complicates the decoder. Short blocks favor tail-biting (rate loss matters more); long blocks favor termination (simpler decoding, negligible overhead).


Quick Reference Table

Concept | Best Examples
--- | ---
Code parameters | Constraint length K, code rate R = k/n, generator polynomials
Encoder representations | State diagram, trellis diagram, polynomial notation
Distance metrics | Free distance d_free, Hamming distance between paths
Decoding methods | Viterbi algorithm, maximum likelihood, survivor paths
Rate modification | Puncturing patterns, rate-compatible codes
Termination strategies | Zero-tail termination, tail-biting, direct truncation
Failure modes | Catastrophic error propagation, common polynomial factors
Complexity factors | Number of states 2^(K-1), trellis depth, survivor storage

Self-Check Questions

  1. A convolutional code has generator polynomials g_1 = (111) and g_2 = (101) with K = 3. How many states does its trellis have, and what is the code rate?

  2. Compare the Viterbi algorithm's approach to decoding with exhaustive search—what property of the trellis allows Viterbi to achieve optimal decoding with reduced complexity?

  3. Two codes have identical constraint lengths but different free distances. Which code performs better at high SNR, and why does free distance matter more than constraint length for error probability?

  4. Explain why punctured convolutional codes are useful for adaptive communication systems. What trade-off does puncturing represent?

  5. Given generator polynomials (110) and (100), determine whether this code is catastrophic. What mathematical condition must you check, and what would happen during decoding if errors occurred?