📚Signal Processing Unit 1 Review

1.1 Classification of Signals and Systems

Written by the Fiveable Content Team • Last updated August 2025
Continuous vs. Discrete Signals and Systems

Continuous-Time Signals and Systems

A continuous-time signal is defined for every value of time and is represented using a continuous variable, typically t. Think of an analog audio waveform or a voltage reading from a sensor: the signal has a value at every instant, with no gaps.

Continuous-time systems take in continuous-time signals and produce continuous-time outputs. Their input-output relationships are described by differential equations.

Discrete-Time Signals and Systems

A discrete-time signal is defined only at specific, equally spaced time instants, represented by an integer index n. Sampled audio (like a .wav file) is a classic example: you only have values at the sample points, not between them.

Discrete-time systems operate on these signals and produce discrete-time outputs. Their input-output relationships are described by difference equations rather than differential equations.

The process of converting a continuous-time signal into a discrete-time signal is called sampling. Going the other direction (discrete back to continuous) is called reconstruction.
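To make sampling concrete, here's a minimal Python sketch: evaluating a continuous-time cosine at the instants t = nT_s produces the discrete-time sequence x[n] = x(nT_s). (The 5 Hz frequency and 50 Hz sampling rate are arbitrary example choices, not values from this guide.)

```python
import numpy as np

# Hypothetical "continuous" reference signal: a 5 Hz cosine
f0 = 5.0          # signal frequency in Hz
fs = 50.0         # sampling rate in Hz (samples per second)
Ts = 1.0 / fs     # sampling interval

# Sampling: evaluate x(t) at t = n*Ts, giving x[n] = x(n*Ts)
n = np.arange(0, 20)                      # integer sample index
x_n = np.cos(2 * np.pi * f0 * n * Ts)     # discrete-time signal

print(x_n[:6])    # starts at cos(0) = 1 and hits cos(pi) = -1 at n = 5
```

Reconstruction would run the other way, interpolating between these samples to rebuild a continuous waveform.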

Signal Classification

Periodicity

A periodic signal repeats at regular intervals. Formally, a continuous-time signal is periodic if:

x(t) = x(t + T)

for some smallest positive value T, called the fundamental period. For discrete-time signals, the condition is:

x[n] = x[n + N]

where N is a positive integer. Sinusoidal waves and square waves are standard examples.

An aperiodic signal never repeats. Exponential decays and random noise fall into this category.

One thing to watch for: in discrete time, a sinusoid is only periodic if its normalized frequency is a rational number. That's a subtlety that doesn't exist in continuous time, where every sinusoid is periodic.
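You can see this subtlety numerically. In the sketch below (a hypothetical illustration, with example frequencies chosen here), a discrete sinusoid with rational normalized frequency 1/8 repeats exactly every 8 samples, while cos(n), whose normalized frequency 1/(2π) is irrational, never lines up with a shifted copy of itself:

```python
import numpy as np

def is_periodic(x, N):
    """Check x[n] == x[n + N] numerically over the available samples."""
    return np.allclose(x[:-N], x[N:])

n = np.arange(200)
x1 = np.cos(2 * np.pi * (1 / 8) * n)   # normalized frequency 1/8: rational
x2 = np.cos(n)                          # frequency 1/(2*pi): irrational

print(is_periodic(x1, 8))                               # True: period N = 8
print(any(is_periodic(x2, N) for N in range(1, 100)))   # False: no period works
```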

Energy and Power

Energy signals have finite total energy. You calculate this by integrating (or summing) the squared magnitude over all time:

  • Continuous-time: E = \int_{-\infty}^{\infty} |x(t)|^2 \, dt
  • Discrete-time: E = \sum_{n=-\infty}^{\infty} |x[n]|^2

A single pulse and a damped sinusoid are typical energy signals. They die out eventually, so their total energy stays finite.

Power signals have finite average power but infinite total energy. Average power is computed as:

  • Continuous-time: P = \lim_{T \to \infty} \frac{1}{2T} \int_{-T}^{T} |x(t)|^2 \, dt
  • Discrete-time: P = \lim_{N \to \infty} \frac{1}{2N+1} \sum_{n=-N}^{N} |x[n]|^2

Periodic signals (like a constant DC value or a cosine wave) are power signals. They persist forever, so their energy is infinite, but their average power is finite.

A signal is either an energy signal, a power signal, or neither. It can't be both.
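These two classes are easy to tell apart numerically. In this hypothetical Python sketch (the decay rate 0.9 and DC value 2 are example choices), a damped exponential's energy converges to a finite value, while a DC signal's energy grows without bound but its average power stays finite:

```python
import numpy as np

n = np.arange(0, 10000)

# Energy signal: damped exponential x[n] = 0.9^n (dies out)
x_energy = 0.9 ** n
E = np.sum(np.abs(x_energy) ** 2)   # geometric series -> 1/(1 - 0.81) ~ 5.26

# Power signal: constant DC value x[n] = 2 (persists forever)
x_power = 2.0 * np.ones(len(n))
P = np.mean(np.abs(x_power) ** 2)   # average power = |2|^2 = 4

print(E)   # finite energy
print(P)   # finite average power
```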

Signals can also be classified as deterministic (future values can be predicted exactly from a mathematical expression) or random (future values are described statistically).

System Properties

Linearity and Nonlinearity

A linear system satisfies two conditions, often tested together:

  1. Superposition (additivity): If input x_1 produces output y_1 and input x_2 produces output y_2, then input x_1 + x_2 produces output y_1 + y_2.
  2. Homogeneity (scaling): If input x produces output y, then input ax produces output ay for any constant a.

These two conditions are often combined into a single test: input ax_1 + bx_2 must produce output ay_1 + by_2. Linear filters and ideal integrators are examples of linear systems.

A nonlinear system violates at least one of these conditions. Systems with saturation, clipping, or squaring operations (like y(t) = x^2(t)) are nonlinear.
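The combined test translates directly into code. This sketch (a hypothetical check, using random test inputs) applies it to a running sum, which passes, and to the squaring system just mentioned, which fails:

```python
import numpy as np

rng = np.random.default_rng(0)
x1, x2 = rng.standard_normal(100), rng.standard_normal(100)
a, b = 2.0, -3.0

def is_linear(system):
    """Numerically test superposition + homogeneity:
    T{a*x1 + b*x2} must equal a*T{x1} + b*T{x2}."""
    return np.allclose(system(a * x1 + b * x2),
                       a * system(x1) + b * system(x2))

print(is_linear(np.cumsum))          # running sum (accumulator): True
print(is_linear(lambda x: x ** 2))   # squaring: False, scaling is violated
```

A numerical pass on a few inputs doesn't prove linearity in general, but a single failure does prove nonlinearity.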

Time-Invariance and Time-Variance

A time-invariant system behaves the same regardless of when the input is applied. Formally: if input x(t) produces output y(t), then input x(t - t_0) produces output y(t - t_0) for any shift t_0. Delaying the input simply delays the output by the same amount. Systems with constant coefficients in their differential or difference equations are time-invariant.

A time-varying system changes its behavior over time. Adaptive filters and communication channels with fading are examples. Shifting the input does not necessarily shift the output in the same way.
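The shift test can also be checked numerically. In this hypothetical sketch (the sinusoidal test input and 5-sample shift are example choices), a constant-gain system commutes with the delay, while a gain that depends on the sample index n does not:

```python
import numpy as np

x = np.sin(0.2 * np.arange(50))   # example test input
n0 = 5                             # shift amount in samples

def shift(v, k):
    """Delay v by k samples, padding the front with zeros."""
    return np.concatenate([np.zeros(k), v[:len(v) - k]])

def is_time_invariant(system):
    """Check that delaying the input just delays the output."""
    return np.allclose(system(shift(x, n0)), shift(system(x), n0))

print(is_time_invariant(lambda v: 3 * v))                  # constant gain: True
print(is_time_invariant(lambda v: np.arange(len(v)) * v))  # gain n*x[n]: False
```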

Systems that are both linear and time-invariant (LTI systems) are especially important because they can be fully characterized by their impulse response, and powerful tools like Fourier analysis, Laplace transforms, and the Z-transform apply directly to them.

Causality, Stability, and Invertibility

Causality

A causal system produces outputs that depend only on current and past inputs, never on future inputs. This is the condition required for real-time operation.

For the impulse response, causality means:

  • Continuous-time: h(t) = 0 for t < 0
  • Discrete-time: h[n] = 0 for n < 0

Real-time control systems and live audio filters must be causal. Non-causal systems rely on future input values, which is only possible in offline processing (e.g., editing a pre-recorded audio file where all samples are already available).

Stability

A BIBO-stable (Bounded-Input, Bounded-Output) system guarantees that every bounded input produces a bounded output. The condition on the impulse response is:

  • Continuous-time: \int_{-\infty}^{\infty} |h(t)| \, dt < \infty (absolutely integrable)
  • Discrete-time: \sum_{n=-\infty}^{\infty} |h[n]| < \infty (absolutely summable)

Stable filters and well-designed feedback controllers satisfy this. An unstable system can produce outputs that grow without bound even from a perfectly reasonable input. Positive feedback loops without proper gain control are a common source of instability.
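The absolute-summability condition is easy to probe numerically. In this hypothetical sketch (the decay/growth rates 0.5 and 1.1 are example choices), the stable impulse response's absolute sum converges to 2, while the unstable one's partial sums keep growing:

```python
import numpy as np

n = np.arange(0, 500)

h_stable = 0.5 ** n     # decays: absolute sum converges to 1/(1 - 0.5) = 2
h_unstable = 1.1 ** n   # grows without bound: not absolutely summable

print(np.sum(np.abs(h_stable)))     # ~2.0, a finite limit
print(np.sum(np.abs(h_unstable)))   # enormous, and growing with more terms
```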

Invertibility

An invertible system maps each distinct input to a distinct output, so you can uniquely recover the input from the output. If the system has impulse response h(t), then an inverse system with impulse response g(t) exists such that:

h(t) * g(t) = δ(t)

where * denotes convolution and δ(t) is the Dirac delta function. The same idea applies in discrete time with h[n], g[n], and δ[n].
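A classic discrete-time pair illustrates this: the accumulator (running sum, h[n] = u[n]) is inverted by the first difference (g[n] = δ[n] - δ[n-1]). This hypothetical sketch convolves a truncated accumulator response with the difference and recovers an impulse:

```python
import numpy as np

h = np.ones(20)              # accumulator h[n] = u[n], truncated for the demo
g = np.array([1.0, -1.0])    # first difference g[n] = delta[n] - delta[n-1]

d = np.convolve(h, g)
print(d[:20])   # [1, 0, 0, ...]: an impulse, so g undoes h
```

(The very last sample of `d` is -1 only because `h` was truncated; with the full infinite u[n], the convolution is exactly δ[n].)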

Lossless compression and reversible transformations are invertible. Non-invertible systems map multiple inputs to the same output, making recovery impossible. Lossy compression and squaring operations (where you lose sign information) are non-invertible.