📡Advanced Signal Processing Unit 2 Review

2.2 Discrete-time signals and systems

Written by the Fiveable Content Team • Last updated August 2025

Properties of discrete-time signals

Discrete-time signals are functions of an integer variable n, represented as sequences x[n]. Unlike continuous-time signals, they're defined only at integer time indices. Classifying these signals by their properties determines which analysis techniques apply and how you design systems to process them.

Energy and power signals

Every discrete-time signal falls into one of three categories: energy signal, power signal, or neither.

Energy signals have finite total energy and are square-summable:

E = \sum_{n=-\infty}^{\infty} |x[n]|^2 < \infty

Finite-length pulses and decaying exponentials are typical energy signals. Their energy is concentrated in a finite interval (or decays fast enough to sum finitely).

Power signals have finite average power but infinite total energy:

P = \lim_{N \to \infty} \frac{1}{2N+1} \sum_{n=-N}^{N} |x[n]|^2

Sinusoids and constant sequences are power signals. They persist indefinitely, so their energy is infinite, but the energy per sample stays bounded.

A signal can't be both an energy signal and a power signal. Energy signals have zero average power, and power signals have infinite energy.
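As a quick numerical check (a NumPy sketch, with a long truncated window standing in for the infinite sum), an energy signal's sum converges while a power signal's per-sample average stays bounded:

```python
import numpy as np

# Sketch: a decaying exponential (energy signal) vs. a constant sequence
# (power signal); the 10,000-sample window approximates n -> infinity.
n = np.arange(10_000)
x_energy = 0.9 ** n                      # decays fast enough to be square-summable
x_power = np.ones(10_000)                # persists forever

E = np.sum(np.abs(x_energy) ** 2)        # converges: sum of 0.81^n = 1/0.19
P = np.mean(np.abs(x_power) ** 2)        # average power stays at 1.0

print(E, P)
```

The energy sum lands on the geometric-series value 1/(1 - 0.81), while the constant sequence's truncated energy grows linearly with the window length even though its average power stays at 1.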

Even and odd signals

  • Even signals satisfy x[n] = x[-n] for all n (symmetric about n = 0). Cosine sequences and constant signals are even.
  • Odd signals satisfy x[n] = -x[-n] for all n (antisymmetric about n = 0). Sine sequences are odd. Note that odd signals must satisfy x[0] = 0.

Any signal can be decomposed into even and odd components:

x_e[n] = \frac{x[n] + x[-n]}{2}, \quad x_o[n] = \frac{x[n] - x[-n]}{2}

This decomposition is useful for simplifying Fourier analysis, since even components contribute only to the real part of the spectrum and odd components only to the imaginary part.
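The decomposition formulas can be verified numerically. A minimal NumPy sketch over a symmetric index range n = -N..N (the test signal is an arbitrary asymmetric example):

```python
import numpy as np

# Sketch: even/odd decomposition over symmetric indices n = -N..N.
N = 4
n = np.arange(-N, N + 1)
x = n + 3.0 * (n >= 0)           # arbitrary asymmetric test signal

x_flip = x[::-1]                 # x[-n] over the same symmetric index range
x_even = (x + x_flip) / 2
x_odd = (x - x_flip) / 2

assert np.allclose(x_even, x_even[::-1])   # even: symmetric about n = 0
assert np.allclose(x_odd, -x_odd[::-1])    # odd: antisymmetric about n = 0
assert np.allclose(x_even + x_odd, x)      # the parts reconstruct x
assert x_odd[N] == 0                       # odd part vanishes at n = 0
```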

Periodic and aperiodic signals

A discrete-time signal is periodic with fundamental period N if:

x[n] = x[n + N] \quad \text{for all } n

where N is the smallest positive integer satisfying this condition. Discrete-time sinusoids are periodic only when the normalized frequency \omega_0 = 2\pi f_0 / f_s is a rational multiple of 2\pi (equivalently, when f_0 / f_s is rational), a subtlety that doesn't exist in continuous time.

Aperiodic signals (like a unit impulse or a decaying exponential) don't repeat. Periodic signals can be represented using the discrete Fourier series, while aperiodic signals require the DTFT or Z-transform for frequency-domain analysis.

Deterministic vs. random signals

  • Deterministic signals are completely specified by a mathematical expression or rule. Given the rule, you can compute x[n] for any n. Fourier analysis and Z-transforms are the standard tools here.
  • Random (stochastic) signals have values that can't be predicted exactly. They're characterized by statistical properties: mean, variance, autocorrelation, and power spectral density. Noise is the classic example.

In practice, most real-world signals contain both deterministic and random components. Statistical signal processing methods (Wiener filtering, spectral estimation) handle the random parts.

Discrete-time systems

A discrete-time system maps an input sequence x[n] to an output sequence y[n]. The system's properties constrain what operations it can perform and determine which analysis tools apply.

Properties of systems

Linearity: A system T is linear if superposition holds:

T[a x_1[n] + b x_2[n]] = a T[x_1[n]] + b T[x_2[n]]

This means scaling and addition pass through the system unchanged. Linearity is what makes convolution-based analysis possible.

Time-invariance: A system is time-invariant if shifting the input by k samples shifts the output by the same amount:

x[n] \rightarrow y[n] \implies x[n-k] \rightarrow y[n-k]

The system's behavior doesn't change over time.

Causality: A causal system's output at time n depends only on inputs at times k \leq n. This is required for real-time implementation since you can't use future samples that haven't arrived yet.

Stability (BIBO): A system is bounded-input bounded-output (BIBO) stable if every bounded input produces a bounded output. For LTI systems, BIBO stability is equivalent to the impulse response being absolutely summable:

\sum_{n=-\infty}^{\infty} |h[n]| < \infty
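This test can be sketched numerically for the assumed example h[n] = a^n u[n], truncating the infinite sum:

```python
import numpy as np

# Sketch: checking absolute summability for h[n] = a^n u[n].
# For |a| < 1 the sum converges to 1/(1-|a|); for |a| >= 1 it diverges.
def abs_sum(a, n_max=10_000):
    n = np.arange(n_max)
    return np.sum(np.abs(a ** n))

stable = abs_sum(0.5)                 # geometric series: converges to 2.0
unstable = abs_sum(1.1, n_max=200)    # keeps growing as n_max increases

print(stable, unstable)
```

The stable case settles on the geometric-series limit; the unstable case grows without bound as the truncation length increases, signaling a non-summable impulse response.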

Linear time-invariant (LTI) systems

LTI systems are the workhorse of signal processing because they combine linearity and time-invariance, which gives you two powerful results:

  1. The system is completely characterized by its impulse response h[n].
  2. The output for any input is computed via the convolution sum:

y[n] = \sum_{k=-\infty}^{\infty} x[k] h[n-k]

In the frequency domain, convolution becomes multiplication: Y(e^{j\omega}) = X(e^{j\omega}) H(e^{j\omega}). This is why LTI systems are so tractable for filter design, modulation analysis, and deconvolution.

Causal vs. non-causal systems

  • Causal systems have impulse responses with h[n] = 0 for n < 0. They're realizable in real time. Moving average filters and causal IIR filters fall in this category.
  • Non-causal systems have h[n] \neq 0 for some n < 0, meaning they require future input values. These are only usable in offline (batch) processing or when you can tolerate a processing delay. Signal interpolation and certain optimal filters (like the non-causal Wiener filter) are non-causal.

Stable vs. unstable systems

  • Stable systems keep outputs bounded for any bounded input. For LTI systems, this means all poles of the transfer function lie inside the unit circle (for causal systems).
  • Unstable systems can produce outputs that grow without bound. A classic example is an IIR filter with poles outside the unit circle in the z-plane. In practice, unstable systems cause overflow and divergence, so stability verification is a critical design step.

Convolution in discrete-time

Convolution is the operation that connects an LTI system's impulse response to its input-output behavior. It's also the basis for understanding filtering in both time and frequency domains.

Discrete-time convolution

The convolution of x[n] and h[n] is:

y[n] = x[n] * h[n] = \sum_{k=-\infty}^{\infty} x[k] h[n-k]

To compute this by hand:

  1. Flip h[k] to get h[-k].
  2. Shift the flipped sequence by n to get h[n-k].
  3. Multiply x[k] and h[n-k] element-wise.
  4. Sum all the products. This gives y[n] for that particular n.
  5. Repeat for each value of n you need.

Convolution is commutative (x * h = h * x), associative ((x * h_1) * h_2 = x * (h_1 * h_2)), and distributive over addition. The associative property is especially useful: cascading two LTI systems is equivalent to convolving their impulse responses.
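The flip-shift-multiply-sum recipe can be sketched directly (NumPy assumed; conv_sum is a hypothetical helper name, checked against np.convolve):

```python
import numpy as np

# A direct sketch of the convolution sum, restricted to finite-length
# sequences, checked against NumPy's built-in convolution.
def conv_sum(x, h):
    L, M = len(x), len(h)
    y = np.zeros(L + M - 1)
    for n in range(L + M - 1):
        for k in range(L):
            if 0 <= n - k < M:            # h[n-k] is zero outside its support
                y[n] += x[k] * h[n - k]
    return y

x = np.array([1.0, 2.0, 3.0])
h = np.array([1.0, 1.0])
print(conv_sum(x, h))                     # [1. 3. 5. 3.]
assert np.allclose(conv_sum(x, h), np.convolve(x, h))
```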


Linear convolution

Linear convolution is the standard, unrestricted form. If x[n] has length L and h[n] has length M, the output y[n] has length L + M - 1. This is the convolution you use for FIR filtering and deconvolution when you need the complete, undistorted output.

Circular convolution

Circular convolution treats both sequences as periodic with period N. The output also has length N, and the computation "wraps around" at the boundaries.

The key relationship: the DFT converts circular convolution into pointwise multiplication. That is, if y[n] is the N-point circular convolution of x[n] and h[n], then:

Y[k] = X[k] \cdot H[k]

To use the DFT/FFT for linear convolution (avoiding wrap-around artifacts), you zero-pad both sequences to at least length L + M - 1 before computing the circular convolution. This is the basis of fast convolution via overlap-add and overlap-save methods.

Convolution vs. correlation

Correlation measures similarity between two signals as a function of a time lag. The cross-correlation of x[n] and y[n] is:

r_{xy}[n] = \sum_{k=-\infty}^{\infty} x[k] y^*[k-n]

where the conjugate drops out for real signals.

The difference from convolution: in convolution you flip one sequence, in correlation you don't. Equivalently, r_{xy}[n] = x[n] * y^*[-n] (which reduces to x[n] * y[-n] for real signals).

  • Convolution computes the output of an LTI system.
  • Correlation measures signal similarity and is used in matched filtering, pattern recognition, time-delay estimation, and system identification.
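A small sketch of correlation-based time-delay estimation (NumPy assumed; the signal and the 15-sample delay are made-up examples, and the signals are real, so no conjugation is needed):

```python
import numpy as np

# Sketch: estimating a time delay from the peak of the cross-correlation.
rng = np.random.default_rng(0)
x = rng.standard_normal(200)
delay = 15
y = np.concatenate([np.zeros(delay), x])[:len(x)]   # y[n] = x[n - delay]

r = np.correlate(y, x, mode="full")        # all lags, length 2*len(x)-1
lag = np.argmax(r) - (len(x) - 1)          # index 0 corresponds to lag -(len(x)-1)
print(lag)                                  # 15
```

The correlation peaks at the lag where the two signals line up, which is exactly how matched filtering and time-delay estimation use it.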

Discrete-time Fourier transform (DTFT)

The DTFT maps a discrete-time sequence to a continuous function of frequency, giving you the complete spectral representation of a signal.

Definition and properties of DTFT

The forward DTFT:

X(e^{j\omega}) = \sum_{n=-\infty}^{\infty} x[n] e^{-j\omega n}

The inverse DTFT:

x[n] = \frac{1}{2\pi} \int_{-\pi}^{\pi} X(e^{j\omega}) e^{j\omega n} \, d\omega

Here \omega is the normalized angular frequency in radians per sample, and X(e^{j\omega}) is always periodic in \omega with period 2\pi. This periodicity is a direct consequence of discrete-time sampling.

Key properties:

  • Linearity: a x_1[n] + b x_2[n] \leftrightarrow a X_1(e^{j\omega}) + b X_2(e^{j\omega})
  • Time shift: x[n - n_0] \leftrightarrow e^{-j\omega n_0} X(e^{j\omega})
  • Frequency shift (modulation): e^{j\omega_0 n} x[n] \leftrightarrow X(e^{j(\omega - \omega_0)})
  • Convolution: x[n] * h[n] \leftrightarrow X(e^{j\omega}) H(e^{j\omega})
  • Parseval's theorem: \sum_{n=-\infty}^{\infty} |x[n]|^2 = \frac{1}{2\pi} \int_{-\pi}^{\pi} |X(e^{j\omega})|^2 \, d\omega

Convergence of DTFT

The DTFT converges uniformly if x[n] is absolutely summable:

\sum_{n=-\infty}^{\infty} |x[n]| < \infty

All energy signals satisfy this condition. Power signals (like sinusoids or the unit step) don't converge in the ordinary sense, but their DTFTs can be expressed using impulse functions (Dirac deltas in frequency). For example, the DTFT of x[n] = e^{j\omega_0 n} is 2\pi\delta(\omega - \omega_0), periodically extended.

Relationship between DTFT and Fourier series

For a periodic signal with period N, the DTFT consists of impulses spaced at intervals of \frac{2\pi}{N} in frequency:

X(e^{j\omega}) = \frac{2\pi}{N} \sum_{k=0}^{N-1} c_k \, \delta\left(\omega - \frac{2\pi k}{N}\right)

where c_k = \sum_{n=0}^{N-1} x[n] e^{-j \frac{2\pi}{N} k n} are the discrete Fourier series (DFS) coefficients, defined here without a 1/N normalization, which is why the \frac{2\pi}{N} factor appears out front. This connects the DFS representation of periodic signals to the DTFT framework.

Frequency response of LTI systems using DTFT

The frequency response of an LTI system is the DTFT of its impulse response:

H(e^{j\omega}) = \sum_{n=-\infty}^{\infty} h[n] e^{-j\omega n}

For any input x[n], the output spectrum is:

Y(e^{j\omega}) = X(e^{j\omega}) H(e^{j\omega})

This multiplicative relationship is why frequency-domain analysis is so powerful. You can read off the system's behavior directly:

  • |H(e^{j\omega})| is the magnitude response (gain at each frequency)
  • \angle H(e^{j\omega}) is the phase response (phase shift at each frequency)

Filter design amounts to shaping H(e^{j\omega}) to pass desired frequencies and reject others.
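As an illustration (a sketch, with a 5-point moving-average filter as an assumed example), the frequency response can be evaluated by summing h[n] e^{-j\omega n} directly on a frequency grid:

```python
import numpy as np

# Sketch: evaluate H(e^{jw}) for a 5-point moving-average filter by
# summing h[n] * exp(-j*w*n) on a dense frequency grid.
h = np.ones(5) / 5.0
n = np.arange(len(h))
w = np.linspace(-np.pi, np.pi, 1001)
H = np.array([np.sum(h * np.exp(-1j * wi * n)) for wi in w])

mag = np.abs(H)       # magnitude response: gain at each frequency
phase = np.angle(H)   # phase response: phase shift at each frequency

# The filter passes DC unchanged: gain 1 at w = 0 (the grid midpoint).
print(mag[500])
```

Plotting mag against w would show the low-pass shape of the moving average, with spectral nulls at multiples of 2\pi/5.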

Z-transform

The Z-transform generalizes the DTFT by replacing e^{j\omega} with a complex variable z. This gives you algebraic tools for analyzing system stability, causality, and transfer functions that the DTFT alone can't provide.

Definition and properties of Z-transform

The bilateral Z-transform:

X(z) = \sum_{n=-\infty}^{\infty} x[n] z^{-n}

Key properties parallel those of the DTFT:

  • Linearity: a x_1[n] + b x_2[n] \leftrightarrow a X_1(z) + b X_2(z)
  • Time shift: x[n - n_0] \leftrightarrow z^{-n_0} X(z)
  • Convolution: x[n] * h[n] \leftrightarrow X(z) H(z)
  • Scaling: a^n x[n] \leftrightarrow X(a^{-1} z)

The convolution property is particularly important: it turns convolution sums into polynomial multiplication in z, which is far easier to manipulate algebraically.
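A tiny sketch of this equivalence with NumPy's polynomial helpers (the example coefficient lists are made up):

```python
import numpy as np

# Sketch: convolving coefficient sequences equals multiplying the
# corresponding polynomials in z^{-1}.
x = [1.0, 2.0, 1.0]        # X(z) = 1 + 2z^-1 + z^-2
h = [1.0, -1.0]            # H(z) = 1 - z^-1

y_conv = np.convolve(x, h)
y_poly = np.polymul(x, h)  # polynomial product: same coefficients

assert np.allclose(y_conv, y_poly)
print(y_conv)              # [ 1.  1. -1. -1.]
```

Both routes give the coefficients of Y(z) = X(z) H(z) = 1 + z^{-1} - z^{-2} - z^{-3}.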

Region of convergence (ROC)

The Z-transform only converges for certain values of z. The set of z values where the sum converges is the region of convergence (ROC).

The ROC determines uniqueness: different signals can have the same X(z) expression but different ROCs, leading to different time-domain sequences. Key rules:

  • Right-sided signals (x[n] = 0 for n < n_0): ROC is the exterior of a circle, |z| > r
  • Left-sided signals (x[n] = 0 for n > n_0): ROC is the interior of a circle, |z| < r
  • Two-sided signals: ROC is an annular region, r_1 < |z| < r_2
  • A causal LTI system is BIBO stable if and only if the ROC of H(z) includes the unit circle, which requires all poles to be inside the unit circle

Inverse Z-transform

Three standard methods for computing the inverse Z-transform:

  1. Partial fraction expansion: Decompose X(z) into simpler rational terms, then use known transform pairs. This is the most common approach for rational X(z).
  2. Power series expansion: Expand X(z) as a Laurent series in z^{-1}. The coefficients directly give x[n]. Useful for non-rational transforms or for reading off a few specific values.
  3. Contour integration: Evaluate x[n] = \frac{1}{2\pi j} \oint X(z) z^{n-1} \, dz using the residue theorem. More general but typically reserved for complex cases.

The ROC must be specified to get the correct time-domain signal, since the same X(z) expression with different ROCs yields different sequences.
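The power-series method can be sketched for the assumed example H(z) = 1/(1 - a z^{-1}) with ROC |z| > |a|: running the corresponding difference equation y[n] = x[n] + a y[n-1] on a unit impulse carries out the long division term by term and recovers h[n] = a^n:

```python
import numpy as np

# Sketch of the power-series (long-division) method for
# H(z) = 1 / (1 - a z^{-1}), ROC |z| > |a|.
a = 0.5
N = 10
h = np.zeros(N)
prev = 0.0
for n in range(N):
    x_n = 1.0 if n == 0 else 0.0          # unit impulse input
    h[n] = x_n + a * prev                 # y[n] = x[n] + a*y[n-1]
    prev = h[n]

print(h)                                   # successive powers of 0.5
assert np.allclose(h, a ** np.arange(N))
```

The Laurent-series coefficients of 1/(1 - a z^{-1}) = 1 + a z^{-1} + a^2 z^{-2} + ... appear one at a time as the recursion runs.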

Relationship between Z-transform and DTFT

The DTFT is the Z-transform evaluated on the unit circle:

X(e^{j\omega}) = X(z) \big|_{z = e^{j\omega}}

This evaluation is valid only if the ROC of X(z) includes the unit circle. For stable causal systems, the ROC always includes the unit circle, so the frequency response is always well-defined.

The Z-transform is more general than the DTFT because it handles signals and systems whose DTFT doesn't converge (e.g., unstable systems with poles outside the unit circle).

System analysis using Z-transform

The transfer function of an LTI system is:

H(z) = \frac{Y(z)}{X(z)} = \frac{\sum_{k=0}^{M} b_k z^{-k}}{\sum_{k=0}^{N} a_k z^{-k}}

for a system described by a linear constant-coefficient difference equation. The transfer function's poles and zeros encode the system's behavior:

  • Poles (roots of the denominator) determine stability and natural response modes. Poles inside the unit circle correspond to decaying modes; poles outside correspond to growing (unstable) modes.
  • Zeros (roots of the numerator) determine where the frequency response goes to zero, shaping the filter's stopband.
  • The magnitude response is shaped by the proximity of poles and zeros to the unit circle. A pole near the unit circle at angle \omega_0 creates a peak in |H(e^{j\omega})| near \omega_0; a zero creates a notch.

Filter design in the z-domain involves placing poles and zeros strategically to achieve desired frequency-selective behavior.
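A sketch of the pole-placement intuition (an assumed example: a two-pole resonator with poles at r e^{\pm j\omega_0}, evaluated on the unit circle):

```python
import numpy as np

# Sketch: H(z) = 1 / (1 - 2 r cos(w0) z^-1 + r^2 z^-2) has poles at
# r*exp(+-j*w0); its magnitude on the unit circle peaks near w0.
r, w0 = 0.95, np.pi / 4
a1, a2 = -2 * r * np.cos(w0), r * r        # denominator coefficients

w = np.linspace(0, np.pi, 2001)
z = np.exp(1j * w)                          # points on the unit circle
H = 1.0 / (1.0 + a1 * z ** -1 + a2 * z ** -2)

w_peak = w[np.argmax(np.abs(H))]
print(w_peak)                               # close to w0 = pi/4 ~ 0.785
```

Moving r closer to 1 sharpens the peak; moving it past 1 would put the poles outside the unit circle and make the system unstable.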

Discrete Fourier transform (DFT)

The DFT is the computable version of the DTFT. It operates on finite-length sequences and produces finite-length frequency-domain representations, making it the practical tool for spectral analysis and fast filtering.

Definition and properties of DFT

For an N-point sequence x[n]:

X[k] = \sum_{n=0}^{N-1} x[n] e^{-j\frac{2\pi}{N} k n}, \quad k = 0, 1, \ldots, N-1

x[n] = \frac{1}{N} \sum_{k=0}^{N-1} X[k] e^{j\frac{2\pi}{N} k n}, \quad n = 0, 1, \ldots, N-1

The DFT maps N time-domain samples to N frequency-domain samples. Both the input and output are finite and implicitly periodic (with period N), which is why all DFT operations are circular.

Key properties:

  • Linearity: Standard superposition applies
  • Circular time shift: x[(n - m) \bmod N] \leftrightarrow e^{-j\frac{2\pi}{N} k m} X[k]
  • Circular convolution: Pointwise multiplication in the DFT domain corresponds to circular convolution in time
  • Parseval's theorem: \sum_{n=0}^{N-1} |x[n]|^2 = \frac{1}{N} \sum_{k=0}^{N-1} |X[k]|^2
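The definition can be checked against a library FFT. A sketch with the DFT written as an explicit matrix-vector product (NumPy assumed; the test signal is made up):

```python
import numpy as np

# Sketch: the N-point DFT as a matrix-vector product, checked against
# np.fft.fft and against Parseval's relation.
N = 8
n = np.arange(N)
x = np.cos(2 * np.pi * n / N) + 0.5 * np.sin(4 * np.pi * n / N)

W = np.exp(-2j * np.pi * np.outer(n, n) / N)   # W[k, n] = e^{-j 2pi k n / N}
X = W @ x

assert np.allclose(X, np.fft.fft(x))
# Parseval: sum |x[n]|^2 == (1/N) sum |X[k]|^2
assert np.isclose(np.sum(np.abs(x) ** 2), np.sum(np.abs(X) ** 2) / N)
```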

Relationship between DFT and DTFT

The DFT samples the DTFT at N equally spaced frequencies:

X[k] = X(e^{j\omega}) \big|_{\omega = \frac{2\pi k}{N}}

This means the DFT gives you N snapshots of the continuous spectrum. Increasing N (by zero-padding) gives you finer frequency resolution in the sense of more densely sampled points on the same underlying DTFT, but it doesn't add new spectral information.
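A quick sketch of that statement: zero-padding to 2N interleaves new samples between the old ones, so every other bin of the padded DFT reproduces the original DFT exactly (NumPy assumed; the signal is a made-up example):

```python
import numpy as np

# Sketch: zero-padding an N-point sequence to 2N samples the same DTFT
# twice as densely -- the even-indexed bins match the original DFT.
N = 16
x = np.random.default_rng(1).standard_normal(N)

X_N = np.fft.fft(x)
X_2N = np.fft.fft(x, n=2 * N)      # zero-pads x to length 2N first

assert np.allclose(X_2N[::2], X_N)
```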

Two artifacts to watch for:

  • Spectral leakage: Occurs when the signal isn't periodic within the N-point window. Energy from one frequency "leaks" into adjacent bins. Windowing functions (Hamming, Hann, Blackman) reduce leakage at the cost of wider main lobes.
  • Aliasing in time: If the DTFT has features that aren't captured by N samples, the implicit periodicity of the DFT causes time-domain aliasing.

Circular convolution using DFT

The circular convolution theorem makes the DFT practical for filtering:

x[n] \circledast h[n] \;\xleftrightarrow{\text{DFT}}\; X[k] \cdot H[k]

To perform linear convolution using the DFT:

  1. Zero-pad both x[n] (length L) and h[n] (length M) to length N \geq L + M - 1.

  2. Compute the N-point DFTs: X[k] and H[k].

  3. Multiply pointwise: Y[k] = X[k] \cdot H[k].

  4. Compute the inverse DFT to get y[n].

Without sufficient zero-padding, the circular convolution wraps around and corrupts the result. The overlap-add and overlap-save methods use this approach to efficiently filter long signals in blocks.
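The four steps above can be sketched in a few lines (NumPy assumed; the sequences are made-up examples):

```python
import numpy as np

# Sketch: linear convolution via the DFT, with enough zero-padding that
# the circular convolution doesn't wrap around.
x = np.array([1.0, 2.0, 3.0, 4.0])        # L = 4
h = np.array([1.0, 0.0, -1.0])            # M = 3
N = len(x) + len(h) - 1                   # N >= L + M - 1, here N = 6

X = np.fft.fft(x, N)                      # steps 1-2: zero-pad and transform
H = np.fft.fft(h, N)
Y = X * H                                 # step 3: pointwise multiply
y = np.fft.ifft(Y).real                   # step 4: inverse DFT

assert np.allclose(y, np.convolve(x, h))  # matches linear convolution
```

Choosing N smaller than L + M - 1 here would make the first samples wrap-around-corrupted, which is exactly the artifact the zero-padding prevents.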

Fast Fourier transform (FFT) algorithms

The FFT isn't a different transform; it's a family of algorithms that compute the DFT efficiently. Direct computation of an N-point DFT requires O(N^2) complex multiplications. The FFT reduces this to O(N \log_2 N).

The Cooley-Tukey radix-2 algorithm is the most common variant. It recursively splits an N-point DFT (where N is a power of 2) into two N/2-point DFTs using the decimation-in-time or decimation-in-frequency decomposition. The speedup is dramatic: for N = 1024, the FFT requires roughly 10,000 operations versus 1,000,000 for direct computation.

Other FFT variants include split-radix, mixed-radix (for non-power-of-2 lengths), and prime-factor algorithms. The FFT is what makes real-time spectral analysis, fast convolution, and large-scale signal processing computationally feasible.
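For intuition, the decimation-in-time recursion can be written out in a few lines (an educational sketch only, not a production FFT; it assumes the length is a power of 2):

```python
import numpy as np

# Sketch of the radix-2 decimation-in-time idea: split into even/odd
# samples, solve two half-size DFTs, recombine with twiddle factors.
def fft_radix2(x):
    x = np.asarray(x, dtype=complex)
    N = len(x)
    if N == 1:
        return x
    even = fft_radix2(x[::2])                          # N/2-point DFT of evens
    odd = fft_radix2(x[1::2])                          # N/2-point DFT of odds
    tw = np.exp(-2j * np.pi * np.arange(N // 2) / N)   # twiddle factors
    return np.concatenate([even + tw * odd, even - tw * odd])

x = np.random.default_rng(2).standard_normal(64)
assert np.allclose(fft_radix2(x), np.fft.fft(x))
```

Each level of recursion does O(N) work across all subproblems and there are log2 N levels, which is where the O(N log2 N) count comes from.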

Sampling and reconstruction

Sampling converts continuous-time signals to discrete-time sequences; reconstruction does the reverse. Getting this right is the bridge between the analog and digital worlds.

Sampling theorem

The Nyquist-Shannon sampling theorem states that a band-limited continuous-time signal with maximum frequency f_{max} can be perfectly reconstructed from its samples if the sampling rate satisfies:

f_s > 2 f_{max}

The quantity 2 f_{max} is the Nyquist rate; the strict inequality matters, since a sinusoid at exactly f_s / 2 can land on its zero crossings and be lost. The corresponding Nyquist frequency is f_s / 2, which is the highest frequency that can be represented without distortion at sampling rate f_s.

Perfect reconstruction uses an ideal low-pass filter (sinc interpolation) applied to the sample sequence. In practice, reconstruction filters approximate this ideal.

Aliasing and anti-aliasing filters

When f_s < 2 f_{max}, frequency components above f_s / 2 fold back into the baseband spectrum. This is aliasing, and it's irreversible once the signal has been sampled. A 1.5 kHz tone sampled at 2 kHz, for example, appears as a 500 Hz tone in the discrete-time signal.
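The 1.5 kHz example can be verified numerically (a sketch; the 32-sample window is an arbitrary choice):

```python
import numpy as np

# Sketch: a 1.5 kHz cosine sampled at 2 kHz produces exactly the same
# sample values as a 500 Hz cosine -- the alias at |1500 - 2000| Hz.
fs = 2000.0
n = np.arange(32)
tone_1500 = np.cos(2 * np.pi * 1500.0 * n / fs)
tone_500 = np.cos(2 * np.pi * 500.0 * n / fs)

assert np.allclose(tone_1500, tone_500)   # indistinguishable after sampling
```

Once sampled, no processing can tell the two tones apart, which is why the filtering has to happen before the sampler.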

Anti-aliasing filters are analog low-pass filters placed before the sampler. They attenuate frequency content above f_s / 2 so that aliasing is negligible. The ideal anti-aliasing filter has a brick-wall cutoff at f_s / 2, but real filters have a finite transition band. This is why practical systems oversample slightly and use a transition band between the signal bandwidth and f_s / 2.

In the reconstruction path, a similar low-pass filter (the reconstruction filter or anti-imaging filter) removes the spectral replicas created by the digital-to-analog conversion process.