Properties of discrete-time signals
Discrete-time signals are functions of an integer variable n, represented as sequences x[n]. Unlike continuous-time signals, they're defined only at integer time indices. Classifying these signals by their properties determines which analysis techniques apply and how you design systems to process them.
Energy and power signals
Every discrete-time signal falls into one of three categories: energy signal, power signal, or neither.
Energy signals have finite total energy and are square-summable:

E = Σ_{n=-∞}^{∞} |x[n]|² < ∞
Finite-length pulses and decaying exponentials are typical energy signals. Their energy is concentrated in a finite interval (or decays fast enough to sum finitely).
Power signals have finite average power but infinite total energy:

P = lim_{N→∞} (1/(2N+1)) Σ_{n=-N}^{N} |x[n]|², with 0 < P < ∞
Sinusoids and constant sequences are power signals. They persist indefinitely, so their energy is infinite, but the energy per sample stays bounded.
A signal can't be both an energy signal and a power signal. Energy signals have zero average power, and power signals have infinite energy.
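These definitions are easy to check numerically. A minimal Python sketch (the signal 0.5ⁿu[n] and the window lengths are arbitrary illustrative choices):

```python
# Energy of the decaying exponential x[n] = 0.5^n * u[n].
# Closed form: E = sum_{n>=0} 0.25^n = 1 / (1 - 0.25) = 4/3.
energy = sum((0.5 ** n) ** 2 for n in range(200))  # tail beyond n = 200 is negligible
print(energy)  # ~1.3333

# Average power of the constant sequence x[n] = 1 over a symmetric window:
# finite (= 1) even though its total energy grows without bound.
N = 1000
power = sum(1.0 for _ in range(-N, N + 1)) / (2 * N + 1)
print(power)  # 1.0
```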
Even and odd signals
- Even signals satisfy x[n] = x[-n] for all n (symmetric about n = 0). Cosine sequences and constant signals are even.
- Odd signals satisfy x[n] = -x[-n] for all n (antisymmetric about n = 0). Sine sequences are odd. Note that odd signals must satisfy x[0] = 0.
Any signal can be decomposed into even and odd components:

x_e[n] = (x[n] + x[-n]) / 2,  x_o[n] = (x[n] - x[-n]) / 2,  x[n] = x_e[n] + x_o[n]
This decomposition is useful for simplifying Fourier analysis, since even components contribute only to the real part of the spectrum and odd components only to the imaginary part.
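The decomposition can be sketched in a few lines of Python, assuming the sequence is given on a symmetric index range n = -N … N so that reversing the list implements x[-n]:

```python
def even_odd(x):
    """Split a sequence defined on n = -N..N (odd-length list) into even/odd parts."""
    xr = x[::-1]  # time reversal: x[-n]
    even = [(a + b) / 2 for a, b in zip(x, xr)]
    odd = [(a - b) / 2 for a, b in zip(x, xr)]
    return even, odd

x = [3, -1, 2, 5, 0]                     # samples at n = -2, -1, 0, 1, 2
xe, xo = even_odd(x)
print([e + o for e, o in zip(xe, xo)])   # sums back to the original x
```

Note that the even part comes out palindromic, the odd part antisymmetric with xo[0] = 0 at the center, exactly as the definitions require.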
Periodic and aperiodic signals
A discrete-time signal is periodic with fundamental period N if:

x[n + N] = x[n] for all n

where N is the smallest positive integer satisfying this condition. Discrete-time sinusoids cos(ω₀n) are periodic only when ω₀ is a rational multiple of 2π, which is a subtlety that doesn't exist in continuous time.
Aperiodic signals (like a unit impulse or a decaying exponential) don't repeat. Periodic signals can be represented using the discrete Fourier series, while aperiodic signals require the DTFT or Z-transform for frequency-domain analysis.
Deterministic vs. random signals
- Deterministic signals are completely specified by a mathematical expression or rule. Given the rule, you can compute x[n] for any n. Fourier analysis and Z-transforms are the standard tools here.
- Random (stochastic) signals have values that can't be predicted exactly. They're characterized by statistical properties: mean, variance, autocorrelation, and power spectral density. Noise is the classic example.
In practice, most real-world signals contain both deterministic and random components. Statistical signal processing methods (Wiener filtering, spectral estimation) handle the random parts.
Discrete-time systems
A discrete-time system maps an input sequence x[n] to an output sequence y[n]. The system's properties constrain what operations it can perform and determine which analysis tools apply.
Properties of systems
Linearity: A system T is linear if superposition holds:

T{a x₁[n] + b x₂[n]} = a T{x₁[n]} + b T{x₂[n]}
This means scaling and addition pass through the system unchanged. Linearity is what makes convolution-based analysis possible.
Time-invariance: A system is time-invariant if shifting the input by n₀ samples shifts the output by the same amount:

if x[n] → y[n], then x[n - n₀] → y[n - n₀]
The system's behavior doesn't change over time.
Causality: A causal system's output at time n depends only on inputs at times m ≤ n. This is required for real-time implementation since you can't use future samples that haven't arrived yet.
Stability (BIBO): A system is bounded-input bounded-output (BIBO) stable if every bounded input produces a bounded output. For LTI systems, BIBO stability is equivalent to the impulse response being absolutely summable:

Σ_{n=-∞}^{∞} |h[n]| < ∞
Linear time-invariant (LTI) systems
LTI systems are the workhorse of signal processing because they combine linearity and time-invariance, which gives you two powerful results:
- The system is completely characterized by its impulse response h[n].
- The output for any input is computed via the convolution sum:

y[n] = x[n] * h[n] = Σ_{k=-∞}^{∞} x[k] h[n - k]
In the frequency domain, convolution becomes multiplication: Y(e^{jω}) = H(e^{jω}) X(e^{jω}). This is why LTI systems are so tractable for filter design, modulation analysis, and deconvolution.
Causal vs. non-causal systems
- Causal systems have impulse responses with h[n] = 0 for n < 0. They're realizable in real time. Moving average filters and causal IIR filters fall in this category.
- Non-causal systems have h[n] ≠ 0 for some n < 0, meaning they require future input values. These are only usable in offline (batch) processing or when you can tolerate a processing delay. Signal interpolation and certain optimal filters (like the non-causal Wiener filter) are non-causal.
Stable vs. unstable systems
- Stable systems keep outputs bounded for any bounded input. For LTI systems, this means all poles of the transfer function lie inside the unit circle (for causal systems).
- Unstable systems can produce outputs that grow without bound. A classic example is an IIR filter with poles outside the unit circle in the z-plane. In practice, unstable systems cause overflow and divergence, so stability verification is a critical design step.
Convolution in discrete-time
Convolution is the operation that connects an LTI system's impulse response to its input-output behavior. It's also the basis for understanding filtering in both time and frequency domains.
Discrete-time convolution
The convolution of x[n] and h[n] is:

y[n] = x[n] * h[n] = Σ_{k=-∞}^{∞} x[k] h[n - k]
To compute this by hand:
- Flip h[k] to get h[-k].
- Shift the flipped sequence by n to get h[n - k].
- Multiply x[k] and h[n - k] element-wise.
- Sum all the products. This gives y[n] for that particular n.
- Repeat for each value of n you need.
Convolution is commutative (x * h = h * x), associative ((x * h₁) * h₂ = x * (h₁ * h₂)), and distributive over addition (x * (h₁ + h₂) = x * h₁ + x * h₂). The associative property is especially useful: cascading two LTI systems is equivalent to convolving their impulse responses.
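The flip-shift-multiply-sum recipe translates directly into code. A plain-Python sketch for finite-length sequences (names are illustrative):

```python
def conv(x, h):
    """Linear convolution y[n] = sum_k x[k] * h[n-k] for finite sequences."""
    y = []
    for n in range(len(x) + len(h) - 1):
        acc = 0
        for k in range(len(x)):
            if 0 <= n - k < len(h):   # h[n-k] is zero outside its support
                acc += x[k] * h[n - k]
        y.append(acc)
    return y

print(conv([1, 2, 3], [1, 1]))   # [1, 3, 5, 3]
print(conv([1, 1], [1, 2, 3]))   # [1, 3, 5, 3] -- convolution is commutative
```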

Linear convolution
Linear convolution is the standard, unrestricted form. If x[n] has length M and h[n] has length L, the output has length M + L - 1. This is the convolution you use for FIR filtering and deconvolution when you need the complete, undistorted output.
Circular convolution
Circular convolution treats both sequences as periodic with period N. The output also has length N, and the computation "wraps around" at the boundaries.
The key relationship: the DFT converts circular convolution into pointwise multiplication. That is, if y[n] is the N-point circular convolution of x[n] and h[n], then:

Y[k] = X[k] H[k]

To use the DFT/FFT for linear convolution (avoiding wrap-around artifacts), you zero-pad both sequences to at least length M + L - 1 before computing the circular convolution. This is the basis of fast convolution via overlap-add and overlap-save methods.
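The wrap-around effect is easy to see with a direct time-domain sketch of the definition (plain Python; the example sequences are arbitrary):

```python
def circ_conv(x, h, N):
    """N-point circular convolution: y[n] = sum_k x[k] * h[(n-k) mod N]."""
    xp = x + [0] * (N - len(x))
    hp = h + [0] * (N - len(h))
    return [sum(xp[k] * hp[(n - k) % N] for k in range(N)) for n in range(N)]

x, h = [1, 2, 3], [1, 1]      # linear convolution is [1, 3, 5, 3]
print(circ_conv(x, h, 3))     # [4, 3, 5]: the tail sample 3 wraps onto y[0]
print(circ_conv(x, h, 4))     # [1, 3, 5, 3]: N >= M + L - 1, so no wrap-around
```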
Convolution vs. correlation
Correlation measures similarity between two signals as a function of a time lag. The cross-correlation of x[n] and y[n] is:

r_xy[l] = Σ_{n=-∞}^{∞} x[n] y[n - l]

The difference from convolution: in convolution you flip one sequence, in correlation you don't. Equivalently, r_xy[l] = x[l] * y[-l] (or x[l] * y*[-l] for complex signals).
- Convolution computes the output of an LTI system.
- Correlation measures signal similarity and is used in matched filtering, pattern recognition, time-delay estimation, and system identification.
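As a sketch of time-delay estimation with cross-correlation (plain Python; the pulse shape and delay are arbitrary test data, and the lag convention matches r_xy[l] = Σ x[n] y[n-l]):

```python
def xcorr(x, y):
    """Cross-correlation r_xy[l] = sum_n x[n] * y[n-l]; returns (lags, values)."""
    lags = list(range(-(len(y) - 1), len(x)))
    r = []
    for l in lags:
        acc = 0
        for n in range(len(x)):
            if 0 <= n - l < len(y):
                acc += x[n] * y[n - l]
        r.append(acc)
    return lags, r

y = [0, 1, 2, 1, 0, 0, 0, 0]   # reference pulse
x = [0, 0, 0, 1, 2, 1, 0, 0]   # the same pulse, delayed by 2 samples
lags, r = xcorr(x, y)
print(lags[r.index(max(r))])   # prints 2: the correlation peak locates the delay
```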
Discrete-time Fourier transform (DTFT)
The DTFT maps a discrete-time sequence to a continuous function of frequency, giving you the complete spectral representation of a signal.
Definition and properties of DTFT
The forward DTFT:

X(e^{jω}) = Σ_{n=-∞}^{∞} x[n] e^{-jωn}

The inverse DTFT:

x[n] = (1/2π) ∫_{-π}^{π} X(e^{jω}) e^{jωn} dω

Here ω is the normalized angular frequency in radians per sample, and X(e^{jω}) is always periodic in ω with period 2π. This periodicity is a direct consequence of discrete-time sampling.
Key properties:
- Linearity: a x₁[n] + b x₂[n] ↔ a X₁(e^{jω}) + b X₂(e^{jω})
- Time shift: x[n - n₀] ↔ e^{-jωn₀} X(e^{jω})
- Frequency shift (modulation): e^{jω₀n} x[n] ↔ X(e^{j(ω - ω₀)})
- Convolution: x[n] * h[n] ↔ X(e^{jω}) H(e^{jω})
- Parseval's theorem: Σ_{n=-∞}^{∞} |x[n]|² = (1/2π) ∫_{-π}^{π} |X(e^{jω})|² dω
Convergence of DTFT
The DTFT converges uniformly if x[n] is absolutely summable:

Σ_{n=-∞}^{∞} |x[n]| < ∞

All energy signals satisfy this condition. Power signals (like sinusoids or the unit step) don't converge in the ordinary sense, but their DTFTs can be expressed using impulse functions (Dirac deltas in frequency). For example, the DTFT of x[n] = cos(ω₀n) is π[δ(ω - ω₀) + δ(ω + ω₀)], periodically extended.
Relationship between DTFT and Fourier series
For a periodic signal with period N, the DTFT consists of impulses spaced at intervals of 2π/N in frequency:

X(e^{jω}) = 2π Σ_{k=-∞}^{∞} c_k δ(ω - 2πk/N)

where c_k are the discrete Fourier series (DFS) coefficients. This connects the DFS representation of periodic signals to the DTFT framework.
Frequency response of LTI systems using DTFT
The frequency response of an LTI system is the DTFT of its impulse response:

H(e^{jω}) = Σ_{n=-∞}^{∞} h[n] e^{-jωn}

For any input x[n], the output spectrum is:

Y(e^{jω}) = H(e^{jω}) X(e^{jω})
This multiplicative relationship is why frequency-domain analysis is so powerful. You can read off the system's behavior directly:
- |H(e^{jω})| is the magnitude response (gain at each frequency)
- ∠H(e^{jω}) is the phase response (phase shift at each frequency)
Filter design amounts to shaping H(e^{jω}) to pass desired frequencies and reject others.
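A sketch of reading the frequency response numerically for an FIR system (the 5-point moving average is just an illustrative choice):

```python
import cmath
import math

def freq_response(h, w):
    """H(e^{jw}) = sum_n h[n] * e^{-jwn} for a finite impulse response h."""
    return sum(hn * cmath.exp(-1j * w * n) for n, hn in enumerate(h))

h = [0.2] * 5                                   # 5-point moving average
print(abs(freq_response(h, 0.0)))               # DC gain: 1.0 (passes constants)
print(abs(freq_response(h, 2 * math.pi / 5)))   # ~0: a spectral zero of the averager
```

The moving average passes DC unchanged and completely rejects the frequency 2π/5, where the five complex exponentials sum to zero.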
Z-transform
The Z-transform generalizes the DTFT by replacing e^{jω} with a complex variable z. This gives you algebraic tools for analyzing system stability, causality, and transfer functions that the DTFT alone can't provide.
Definition and properties of Z-transform
The bilateral Z-transform:

X(z) = Σ_{n=-∞}^{∞} x[n] z^{-n}
Key properties parallel those of the DTFT:
- Linearity: a x₁[n] + b x₂[n] ↔ a X₁(z) + b X₂(z)
- Time shift: x[n - n₀] ↔ z^{-n₀} X(z)
- Convolution: x[n] * h[n] ↔ X(z) H(z)
- Scaling: aⁿ x[n] ↔ X(z/a)
The convolution property is particularly important: it turns convolution sums into polynomial multiplication in z⁻¹, which is far easier to manipulate algebraically.
Region of convergence (ROC)
The Z-transform only converges for certain values of z. The set of z values for which the sum converges is the region of convergence (ROC).
The ROC determines uniqueness: different signals can have the same X(z) expression but different ROCs, leading to different time-domain sequences. Key rules:
- Right-sided signals (x[n] = 0 for n < n₀): ROC is the exterior of a circle, |z| > r_max
- Left-sided signals (x[n] = 0 for n > n₀): ROC is the interior of a circle, |z| < r_min
- Two-sided signals: ROC is an annular region, r₁ < |z| < r₂
- A causal LTI system is BIBO stable if and only if the ROC of H(z) includes the unit circle |z| = 1, which requires all poles to be inside the unit circle

Inverse Z-transform
Three standard methods for computing the inverse Z-transform:
- Partial fraction expansion: Decompose X(z) into simpler rational terms, then use known transform pairs. This is the most common approach for rational X(z).
- Power series expansion: Expand X(z) as a Laurent series in z⁻¹. The coefficients directly give x[n]. Useful for non-rational transforms or for reading off a few specific values.
- Contour integration: Evaluate x[n] = (1/2πj) ∮_C X(z) z^{n-1} dz using the residue theorem, where C is a closed contour in the ROC. More general but typically reserved for complex cases.
The ROC must be specified to get the correct time-domain signal, since the same X(z) expression with different ROCs yields different sequences.
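The power-series method can be sketched as polynomial long division of B(z⁻¹) by A(z⁻¹), assuming a causal (right-sided) ROC; coefficient lists are in ascending powers of z⁻¹, and the example transform is the standard pair X(z) = 1/(1 - a z⁻¹) ↔ aⁿu[n]:

```python
def inverse_z_power_series(b, a, num_terms):
    """Long division of B(z^-1)/A(z^-1); returns x[0..num_terms-1] (causal ROC)."""
    rem = list(b) + [0.0] * num_terms   # running remainder of the division
    x = []
    for n in range(num_terms):
        coef = rem[n] / a[0]            # next quotient coefficient = x[n]
        x.append(coef)
        for i, ai in enumerate(a):      # subtract coef * A(z^-1), shifted by n
            if n + i < len(rem):
                rem[n + i] -= coef * ai
    return x

# X(z) = 1 / (1 - 0.5 z^-1), ROC |z| > 0.5  ->  x[n] = 0.5^n u[n]
print(inverse_z_power_series([1.0], [1.0, -0.5], 5))  # [1.0, 0.5, 0.25, 0.125, 0.0625]
```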
Relationship between Z-transform and DTFT
The DTFT is the Z-transform evaluated on the unit circle:

X(e^{jω}) = X(z) |_{z = e^{jω}}

This evaluation is valid only if the ROC of X(z) includes the unit circle. For stable causal systems, the ROC always includes the unit circle, so the frequency response is always well-defined.
The Z-transform is more general than the DTFT because it handles signals and systems whose DTFT doesn't converge (e.g., unstable systems with poles outside the unit circle).
System analysis using Z-transform
The transfer function of an LTI system is:

H(z) = Y(z)/X(z) = (Σ_{k=0}^{M} b_k z^{-k}) / (Σ_{k=0}^{N} a_k z^{-k})

for a system described by a linear constant-coefficient difference equation. The transfer function's poles and zeros encode the system's behavior:
- Poles (roots of the denominator) determine stability and natural response modes. Poles inside the unit circle correspond to decaying modes; poles outside correspond to growing (unstable) modes.
- Zeros (roots of the numerator) determine where the frequency response goes to zero, shaping the filter's stopband.
- The magnitude response is shaped by the proximity of poles and zeros to the unit circle. A pole near the unit circle at angle ω₀ creates a peak in |H(e^{jω})| near ω = ω₀; a zero creates a notch.
Filter design in the z-domain involves placing poles and zeros strategically to achieve desired frequency-selective behavior.
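A sketch of pole placement, assuming a purely recursive second-order section (the radius and angle values are arbitrary illustrative choices):

```python
import cmath
import math

def h_at(b, a, z):
    """Evaluate H(z) = B(z)/A(z), with b and a in ascending powers of z^-1."""
    B = sum(bk * z ** (-k) for k, bk in enumerate(b))
    A = sum(ak * z ** (-k) for k, ak in enumerate(a))
    return B / A

# Conjugate pole pair at radius r = 0.95, angle w0 = pi/4 -> resonance near w0.
r, w0 = 0.95, math.pi / 4
b = [1.0]
a = [1.0, -2 * r * math.cos(w0), r * r]   # A(z) = (1 - p z^-1)(1 - p* z^-1)

peak = abs(h_at(b, a, cmath.exp(1j * w0)))   # gain at the pole angle
away = abs(h_at(b, a, cmath.exp(1j * 2.5)))  # gain far from the pole angle
print(peak > 10 * away)  # True: the pole near the unit circle creates a sharp peak
```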
Discrete Fourier transform (DFT)
The DFT is the computable version of the DTFT. It operates on finite-length sequences and produces finite-length frequency-domain representations, making it the practical tool for spectral analysis and fast filtering.
Definition and properties of DFT
For an N-point sequence x[n]:

X[k] = Σ_{n=0}^{N-1} x[n] e^{-j2πkn/N},  k = 0, 1, ..., N - 1

x[n] = (1/N) Σ_{k=0}^{N-1} X[k] e^{j2πkn/N},  n = 0, 1, ..., N - 1

The DFT maps N time-domain samples to N frequency-domain samples. Both the input and output are finite and periodic (with period N), which is why all DFT operations are circular.
Key properties:
- Linearity: Standard superposition applies
- Circular time shift: x[(n - m) mod N] ↔ e^{-j2πkm/N} X[k]
- Circular convolution: Pointwise multiplication in the DFT domain corresponds to circular convolution in time
- Parseval's theorem: Σ_{n=0}^{N-1} |x[n]|² = (1/N) Σ_{k=0}^{N-1} |X[k]|²
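These properties are easy to verify with a direct implementation of the definition (a plain-Python O(N²) sketch for illustration only; the test vector is arbitrary):

```python
import cmath

def dft(x):
    """Direct DFT: X[k] = sum_n x[n] * e^{-j 2 pi k n / N}."""
    N = len(x)
    return [sum(x[n] * cmath.exp(-2j * cmath.pi * k * n / N) for n in range(N))
            for k in range(N)]

x = [1.0, 2.0, 0.0, -1.0]
X = dft(x)
time_energy = sum(abs(v) ** 2 for v in x)
freq_energy = sum(abs(v) ** 2 for v in X) / len(x)
print(abs(time_energy - freq_energy) < 1e-9)  # True: Parseval's theorem holds
```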
Relationship between DFT and DTFT
The DFT samples the DTFT at N equally spaced frequencies:

X[k] = X(e^{jω}) |_{ω = 2πk/N},  k = 0, 1, ..., N - 1

This means the DFT gives you snapshots of the continuous spectrum. Increasing N (by zero-padding) gives you finer frequency resolution in the sense of more densely sampled points on the same underlying DTFT, but it doesn't add new spectral information.
Two artifacts to watch for:
- Spectral leakage: Occurs when the signal isn't periodic within the N-point window. Energy from one frequency "leaks" into adjacent bins. Windowing functions (Hamming, Hanning, Blackman) reduce leakage at the cost of wider main lobes.
- Aliasing in time: If the DTFT has features that aren't captured by N samples, the implicit periodicity of the DFT causes time-domain aliasing.
Circular convolution using DFT
The circular convolution theorem makes the DFT practical for filtering:

y[n] = x[n] ⊛ h[n]  ↔  Y[k] = X[k] H[k]
To perform linear convolution using the DFT:
- Zero-pad both x[n] (length M) and h[n] (length L) to length N ≥ M + L - 1.
- Compute the N-point DFTs: X[k] and H[k].
- Multiply pointwise: Y[k] = X[k] H[k].
- Compute the inverse DFT to get y[n].
Without sufficient zero-padding, the circular convolution wraps around and corrupts the result. The overlap-add and overlap-save methods use this approach to efficiently filter long signals in blocks.
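The overlap-add idea can be sketched in a few lines (plain Python; in practice each per-block convolution would be done with zero-padded FFTs, but here a direct convolution stands in to show the block decomposition):

```python
def conv(x, h):
    """Direct linear convolution of finite sequences."""
    y = [0.0] * (len(x) + len(h) - 1)
    for k, xk in enumerate(x):
        for m, hm in enumerate(h):
            y[k + m] += xk * hm
    return y

def overlap_add(x, h, block=4):
    """Filter x block by block; each block's tail overlaps and adds into y."""
    y = [0.0] * (len(x) + len(h) - 1)
    for start in range(0, len(x), block):
        seg = conv(x[start:start + block], h)   # length block + len(h) - 1
        for i, v in enumerate(seg):
            y[start + i] += v                   # the tail overlaps the next block
    return y

x = [1, 2, 3, 4, 5, 6, 7, 8, 9]
h = [1, 0, -1]
print(overlap_add(x, h) == conv(x, h))  # True: block filtering matches direct filtering
```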
Fast Fourier transform (FFT) algorithms
The FFT isn't a different transform; it's a family of algorithms that compute the DFT efficiently. Direct computation of an N-point DFT requires O(N²) complex multiplications. The FFT reduces this to O(N log N).
The Cooley-Tukey radix-2 algorithm is the most common variant. It recursively splits an N-point DFT (where N is a power of 2) into two N/2-point DFTs using the decimation-in-time or decimation-in-frequency decomposition. The speedup is dramatic: for N = 1024, the FFT requires roughly 10,000 operations versus 1,000,000 for direct computation.
Other FFT variants include split-radix, mixed-radix (for non-power-of-2 lengths), and prime-factor algorithms. The FFT is what makes real-time spectral analysis, fast convolution, and large-scale signal processing computationally feasible.
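A minimal decimation-in-time radix-2 sketch (pure Python, power-of-2 lengths only), checked against the direct O(N²) DFT:

```python
import cmath

def dft(x):
    """Direct O(N^2) DFT, used as a reference."""
    N = len(x)
    return [sum(x[n] * cmath.exp(-2j * cmath.pi * k * n / N) for n in range(N))
            for k in range(N)]

def fft(x):
    """Recursive radix-2 decimation-in-time FFT; len(x) must be a power of 2."""
    N = len(x)
    if N == 1:
        return list(x)
    E, O = fft(x[0::2]), fft(x[1::2])   # half-size DFTs of even/odd-index samples
    out = [0j] * N
    for k in range(N // 2):
        t = cmath.exp(-2j * cmath.pi * k / N) * O[k]  # twiddle factor
        out[k] = E[k] + t
        out[k + N // 2] = E[k] - t
    return out

x = [1, 2, 3, 4, 5, 6, 7, 8]
print(max(abs(a - b) for a, b in zip(fft(x), dft(x))) < 1e-9)  # True
```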
Sampling and reconstruction
Sampling converts continuous-time signals to discrete-time sequences; reconstruction does the reverse. Getting this right is the bridge between the analog and digital worlds.
Sampling theorem
The Nyquist-Shannon sampling theorem states that a band-limited continuous-time signal with maximum frequency f_max can be perfectly reconstructed from its samples if the sampling rate f_s satisfies:

f_s > 2 f_max

The quantity 2 f_max is the Nyquist rate. The corresponding Nyquist frequency is f_s/2, which is the highest frequency that can be represented without distortion at sampling rate f_s.
Perfect reconstruction uses an ideal low-pass filter (sinc interpolation) applied to the sample sequence. In practice, reconstruction filters approximate this ideal.
Aliasing and anti-aliasing filters
When f_s < 2 f_max, frequency components above f_s/2 fold back into the baseband spectrum. This is aliasing, and it's irreversible once the signal has been sampled. A 1.5 kHz tone sampled at 2 kHz, for example, appears as a 500 Hz tone in the discrete-time signal.
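That folding is easy to see numerically: sampled at 2 kHz, the 1.5 kHz cosine and the 500 Hz cosine produce identical samples (a sketch; the 16-sample window is arbitrary):

```python
import math

fs = 2000.0   # sampling rate in Hz
n = range(16)
x_1500 = [math.cos(2 * math.pi * 1500 * k / fs) for k in n]  # 1.5 kHz tone, sampled
x_500 = [math.cos(2 * math.pi * 500 * k / fs) for k in n]    # 500 Hz tone, sampled
print(max(abs(a - b) for a, b in zip(x_1500, x_500)) < 1e-9)  # True: indistinguishable
```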
Anti-aliasing filters are analog low-pass filters placed before the sampler. They attenuate frequency content above f_s/2 so that aliasing is negligible. The ideal anti-aliasing filter has a brick-wall cutoff at f_s/2, but real filters have a finite transition band. This is why practical systems oversample slightly and use a transition band between the signal bandwidth and f_s/2.
In the reconstruction path, a similar low-pass filter (the reconstruction filter or anti-imaging filter) removes the spectral replicas created by the digital-to-analog conversion process.