
Key Concepts of Turbo Codes


Why This Matters

Turbo codes represent one of the most significant breakthroughs in coding theory since Shannon's 1948 theorem—they were the first practical codes to approach the Shannon limit within a fraction of a decibel. When you study Turbo codes, you're learning about the clever engineering that makes modern wireless communication, deep-space missions, and 3G/4G networks possible. The concepts here—parallel concatenation, iterative decoding, soft information exchange—form the foundation for understanding not just Turbo codes but also LDPC codes and other modern near-capacity-achieving schemes.

You're being tested on more than just definitions. Exam questions will probe whether you understand why iterative decoding works, how interleavers break up error patterns, and what makes the BCJR algorithm optimal for soft-output decoding. Don't just memorize that Turbo codes use two encoders—know that the parallel structure with interleaving creates statistically independent error patterns that iterative decoding can exploit. That conceptual understanding is what separates strong answers from weak ones.


Encoder Architecture

The power of Turbo codes begins with how they structure redundancy. Rather than using a single powerful code, Turbo codes combine simpler convolutional codes in a way that creates pseudo-random codewords with excellent distance properties.

Parallel Concatenated Convolutional Codes (PCCC)

  • Two or more convolutional encoders operate in parallel on the same input data—this parallel structure is the defining feature that distinguishes Turbo codes from serial concatenation
  • Each encoder sees a different sequence because an interleaver permutes the input before the second encoder, creating statistically independent parity streams
  • Redundancy compounds intelligently—the parallel arrangement means errors that are hard for one decoder to correct often become easy for the other due to the interleaving (see the encoder sketch below)
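
To make the parallel structure concrete, here is a minimal Python sketch of a rate-1/3 PCCC encoder. It assumes a memory-2 RSC component code with octal generators (1, 5/7), a common textbook choice; the function names and the random interleaver are illustrative, not taken from any standard.

```python
import random

def rsc_parity(bits):
    """Parity stream of a recursive systematic convolutional encoder
    with octal generators (1, 5/7): feedback 1+D+D^2, feedforward 1+D^2."""
    s1 = s2 = 0                       # two-bit shift-register state
    parity = []
    for u in bits:
        fb = u ^ s1 ^ s2              # recursive feedback bit
        parity.append(fb ^ s2)        # feedforward taps 1 and D^2
        s1, s2 = fb, s1               # shift the register
    return parity

def pccc_encode(bits, perm):
    """Rate-1/3 PCCC: systematic bits, parity from encoder 1 on the
    original order, parity from encoder 2 on the interleaved order."""
    interleaved = [bits[i] for i in perm]
    return bits, rsc_parity(bits), rsc_parity(interleaved)

random.seed(42)
u = [random.randint(0, 1) for _ in range(16)]
perm = random.sample(range(len(u)), len(u))   # stand-in interleaver
systematic, parity1, parity2 = pccc_encode(u, perm)
```

Both encoders process the same information bits in different orders, and the three output streams form the rate-1/3 mother code that puncturing (covered later) can thin to higher rates.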

Interleaver Design and Function

  • Interleavers permute the input sequence to spread adjacent bits far apart before the second encoding—this is what breaks correlation between error events
  • Burst error mitigation results from ensuring that bits neighboring in one encoder's view are distant in the other's, so channel burst errors don't overwhelm both decoders simultaneously
  • Interleaver choice critically affects performance—random interleavers work well for long blocks, while structured designs like S-random interleavers optimize minimum distance for shorter codes (a construction is sketched after this list)
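
One way to realize the S-random constraint is a greedy construction: place indices one at a time, rejecting any candidate that lands within S of the values chosen in the previous S positions. This is an illustrative sketch with invented parameter names; practical designs refine it considerably.

```python
import random

def s_random_interleaver(n, s, max_restarts=100):
    """Greedy S-random construction: each new index must differ by at
    least s from the indices placed in the previous s positions."""
    for _ in range(max_restarts):
        remaining, perm = set(range(n)), []
        while remaining:
            recent = perm[-s:]
            candidates = [x for x in remaining
                          if all(abs(x - r) >= s for r in recent)]
            if not candidates:
                break                     # dead end: restart from scratch
            pick = random.choice(candidates)
            perm.append(pick)
            remaining.remove(pick)
        if len(perm) == n:
            return perm
    raise RuntimeError("construction failed; reduce s")

perm = s_random_interleaver(64, 4)        # s up to roughly sqrt(n/2) works
print(sorted(perm[i] for i in range(10, 14)))  # adjacent inputs land far apart
```

With s = 4, any four adjacent input bits are guaranteed to sit at least four positions apart after permutation, which is exactly the decorrelation the bullets above describe.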

Trellis Termination

  • Tail bits force the encoder to a known final state—typically the all-zero state—ensuring the trellis has defined boundaries for decoding (a tail-bit sketch follows this list)
  • Termination improves finite-length performance by eliminating edge effects where the decoder would otherwise have uncertain boundary conditions
  • Rate loss occurs because tail bits add overhead without carrying information, though techniques like tail-biting can avoid this penalty
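
For a recursive encoder such as the (1, 5/7) RSC sketched earlier, the tail bits are generally nonzero: each one must cancel the feedback so the state drains to zero. A minimal sketch under that assumption:

```python
def rsc_tail_bits(s1, s2):
    """Tail bits that drive the (1, 5/7) RSC from state (s1, s2) to the
    all-zero state: choosing u = s1 ^ s2 makes the feedback bit
    fb = u ^ s1 ^ s2 equal to zero at every tail step."""
    tail = []
    for _ in range(2):          # one tail bit per memory element
        u = s1 ^ s2
        tail.append(u)
        s1, s2 = 0, s1          # fb = 0, so a zero shifts in
    return tail

print(rsc_tail_bits(1, 1))      # [0, 1]: two steps back to state (0, 0)
```

Because the second encoder processes interleaved data, it usually ends in a different state and needs its own tail, which is the rate loss noted above.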

Compare: Interleaver vs. Trellis Termination—both improve Turbo code performance but address different problems. The interleaver decorrelates error patterns between decoders, while termination ensures clean trellis boundaries within each decoder. If asked about design trade-offs, note that interleaver size affects latency while termination affects rate.


Iterative Decoding Mechanism

The real magic of Turbo codes lies in their decoder. By passing refined probability estimates back and forth, two relatively simple decoders achieve what neither could alone—this is the turbo principle that gives these codes their name.

Iterative Decoding Process

  • Multiple decoding passes refine bit estimates—each component decoder improves upon the other's output, with gains typically continuing for 6–18 iterations before diminishing returns set in
  • Soft information flows between decoders rather than hard bit decisions, preserving uncertainty that allows subsequent iterations to make better-informed corrections
  • Stopping criteria include reaching a maximum iteration count, achieving a valid codeword (syndrome check), or detecting convergence when estimates stop changing significantly (a simple convergence check is sketched below)
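
A stopping rule can be as simple as comparing successive LLR vectors between iterations. The check below is illustrative, with an invented tolerance, rather than any standard's specified criterion:

```python
import numpy as np

def has_converged(llr_prev, llr_new, tol=0.01):
    """Stop when hard decisions agree between passes and the average
    LLR change is small (both thresholds are illustrative)."""
    same_decisions = np.array_equal(llr_prev < 0, llr_new < 0)
    small_change = np.mean(np.abs(llr_new - llr_prev)) < tol
    return same_decisions and small_change

print(has_converged(np.array([1.2, -3.0]), np.array([1.21, -3.0])))  # True
```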

Extrinsic Information Exchange

  • Extrinsic information is the new knowledge one decoder contributes—specifically, it excludes information that decoder already received as input, preventing feedback loops from reinforcing errors
  • Separation of intrinsic and extrinsic is mathematically crucial: $L_{total} = L_{channel} + L_{apriori} + L_{extrinsic}$, where only the extrinsic term passes to the other decoder and becomes its a priori input (see the snippet after this list)
  • Convergence depends on this exchange—well-designed codes produce extrinsic information that is approximately independent of the input, enabling the iterative gain
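
The bookkeeping behind this exchange is plain subtraction in the LLR domain. All values below are invented purely to show the arithmetic:

```python
import numpy as np

L_channel = np.array([ 2.1, -0.4,  1.3, -3.0])  # demodulator LLRs (invented)
L_apriori = np.array([ 0.5,  0.2, -0.1,  0.6])  # extrinsic from the other decoder
L_total   = np.array([ 4.0, -1.1,  2.0, -3.5])  # this decoder's APP output

# Pass on only the new information; the receiving decoder must not get
# its own earlier contribution fed back as if it were fresh evidence.
L_extrinsic = L_total - L_channel - L_apriori
```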

Compare: Iterative Decoding vs. Single-Pass Decoding—traditional Viterbi decoding makes one pass and outputs hard decisions. Turbo decoding makes multiple passes with soft outputs, achieving 2–3 dB better performance at the cost of latency and complexity. FRQ tip: when discussing complexity-performance trade-offs, iteration count is your key variable.


Soft-Decision Decoding Mathematics

Turbo decoding operates entirely in the probability domain. Understanding log-likelihood ratios and the BCJR algorithm is essential—these are the mathematical tools that make soft iterative decoding tractable.

Log-Likelihood Ratios (LLRs) in Decoding

  • LLRs quantify bit confidence as $L(b) = \ln\frac{P(b=0)}{P(b=1)}$—positive values indicate likely 0, negative values indicate likely 1, and magnitude indicates confidence
  • Soft decisions preserve information that hard decisions discard—a received value of +0.1 and +5.0 both decode to 0, but the LLR captures that the second is far more reliable
  • LLRs enable optimal combining of information from multiple sources (channel, other decoder) through simple addition, thanks to the logarithmic representation; the snippet below computes channel LLRs for an AWGN channel
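
For BPSK over an AWGN channel (bit 0 mapped to +1, bit 1 to -1), the channel LLR takes the closed form $L = 2y/\sigma^2$. A small sketch with invented received values, echoing the +0.1 versus +5.0 example above:

```python
import numpy as np

sigma2 = 0.5                        # assumed noise variance
y = np.array([0.1, 5.0, -1.2])      # received samples (invented)
llr = 2 * y / sigma2                # channel LLR ln P(b=0|y) / P(b=1|y)

hard = (llr < 0).astype(int)        # +0.1 and +5.0 both decode to bit 0,
print(llr)                          # but llr = [0.4, 20.0, -4.8] shows the
print(hard)                         # second decision is far more reliable
```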

BCJR Algorithm for Component Decoders

  • BCJR computes bit-wise MAP estimates by calculating forward ($\alpha$) and backward ($\beta$) probabilities through the trellis, then combining them with branch metrics ($\gamma$)
  • Optimal soft-output decoding results because BCJR considers all paths through the trellis, not just the single best path like Viterbi
  • Complexity scales as $O(N \cdot 2^m)$ per iteration, where $N$ is block length and $m$ is encoder memory—manageable for typical constraint lengths of 3–4 (the recursions are sketched below)
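
The heart of BCJR is a pair of recursions over precomputed branch metrics. The sketch below assumes the metrics $\gamma_k(s, s')$ have already been formed from channel LLRs and a priori information, and that the trellis starts and ends in state 0; it illustrates the structure rather than a complete decoder.

```python
import numpy as np

def forward_backward(gamma):
    """BCJR recursions. gamma[k, s, s_next] holds the branch metric for
    moving from state s to s_next at step k (assumed precomputed)."""
    K, S, _ = gamma.shape
    alpha = np.zeros((K + 1, S)); alpha[0, 0] = 1.0    # start in state 0
    beta  = np.zeros((K + 1, S)); beta[K, 0]  = 1.0    # terminated trellis
    for k in range(K):                                  # forward pass
        alpha[k + 1] = alpha[k] @ gamma[k]
        alpha[k + 1] /= alpha[k + 1].sum()              # scale for stability
    for k in range(K - 1, -1, -1):                      # backward pass
        beta[k] = gamma[k] @ beta[k + 1]
        beta[k] /= beta[k].sum()
    return alpha, beta

# The APP of bit k sums alpha[k, s] * gamma[k, s, s'] * beta[k+1, s'] over
# all transitions (s, s') caused by u_k = 0, versus those caused by u_k = 1.
gamma = np.random.rand(4, 2, 2)     # toy metrics just to exercise the shapes
alpha, beta = forward_backward(gamma)
```

Every state contributes at every step, which is why BCJR yields per-bit probabilities while Viterbi, keeping only one survivor per state, yields a single sequence.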

Compare: BCJR vs. Viterbi Algorithm—both operate on the same trellis structure, but Viterbi finds the single most likely sequence (ML) while BCJR finds the most likely value of each bit (MAP). For Turbo codes, BCJR's soft outputs are essential; Viterbi's hard outputs would break the iterative exchange. This distinction is a common exam topic.


Performance and Design Trade-offs

Practical Turbo code design requires balancing error correction capability against bandwidth, latency, and complexity constraints. These trade-offs appear frequently in system design questions.

Performance Analysis

  • BER vs. SNR curves show Turbo codes operating within 0.5–1.0 dB of the Shannon limit for large block sizes—this near-capacity performance revolutionized the field
  • Waterfall and error floor regions characterize performance: the waterfall shows rapid BER improvement with SNR, while the error floor at low BER results from low-weight codewords
  • Block length matters significantly—longer interleavers improve performance by better approximating the random coding arguments that underlie Shannon's theorem

Code Rate and Puncturing

  • Code rate $R = k/n$ measures information efficiency—a rate-1/3 Turbo code transmits 3 bits for every information bit, while puncturing can raise this to rate-1/2 or higher
  • Puncturing selectively deletes parity bits according to a puncturing pattern, increasing rate at the cost of reduced error correction—the systematic bits are typically preserved
  • Rate-compatible puncturing enables adaptive systems that adjust protection level based on channel conditions, a key feature for practical implementations like 3GPP standards (a pattern sketch follows this list)
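
As an illustration, here is a hypothetical pattern that thins the rate-1/3 output of the earlier encoder sketch to rate 1/2: every systematic bit is kept, and the two parity streams alternate. The pattern is a common textbook example, not quoted from any standard.

```python
import numpy as np

# Columns are consecutive info-bit positions; rows are the three streams.
pattern = np.array([[1, 1],    # systematic: always transmitted
                    [1, 0],    # parity 1: even positions only
                    [0, 1]])   # parity 2: odd positions only

def puncture(systematic, parity1, parity2):
    """Emit 2 coded bits per information bit (rate 1/2) from the
    rate-1/3 mother code, according to the pattern above."""
    out = []
    for k in range(len(systematic)):
        for row, bit in enumerate((systematic[k], parity1[k], parity2[k])):
            if pattern[row, k % 2]:
                out.append(bit)
    return out
```

At the receiver, the punctured positions are re-inserted as zero LLRs (no channel information), so the same decoding machinery handles every rate.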

Compare: High-Rate vs. Low-Rate Turbo Codes—lower rates (more redundancy) achieve better BER at a given SNR but consume more bandwidth. Higher rates are more bandwidth-efficient but require better channel conditions. Design question tip: always frame this as a trade-off, not a simple "lower is better" answer.


Quick Reference Table

| Concept | Best Examples |
| --- | --- |
| Encoder Structure | PCCC architecture, parallel convolutional encoders |
| Error Decorrelation | Interleaver design, S-random interleavers |
| Soft Information | LLRs, extrinsic information, soft decisions |
| Optimal Decoding | BCJR algorithm, MAP criterion, forward-backward recursion |
| Iterative Processing | Extrinsic exchange, convergence behavior, stopping criteria |
| Boundary Handling | Trellis termination, tail bits, tail-biting |
| Rate Adaptation | Puncturing, rate-compatible codes, systematic bits |
| Performance Metrics | BER curves, waterfall region, error floor, Shannon gap |

Self-Check Questions

  1. Conceptual link: Why must Turbo decoders exchange extrinsic information rather than total LLRs? What would happen if they exchanged total information instead?

  2. Compare and contrast: Both the BCJR and Viterbi algorithms operate on trellis structures. Explain why BCJR is preferred for Turbo decoding while Viterbi is standard for non-iterative convolutional decoding.

  3. Design trade-off: A system designer wants to reduce Turbo decoder latency. What are two approaches they could take, and what performance penalty would each incur?

  4. Mechanism identification: Which two components of a Turbo code system work together to ensure that error patterns affecting one decoder are statistically independent from those affecting the other?

  5. FRQ-style: Explain how puncturing allows a single Turbo encoder design to achieve multiple code rates. Why might a communication system want this capability, and what is the fundamental trade-off involved?