📡 Systems Approach to Computer Networks

Network Performance Metrics


Why This Matters

Understanding performance metrics is about grasping how networks behave under real conditions and why certain applications succeed or fail. These metrics form the foundation for network design decisions, troubleshooting, and optimization. You'll encounter questions that ask you to diagnose performance problems, compare trade-offs between metrics, or explain why a particular application requires specific network characteristics.

The metrics are interconnected: bandwidth constrains throughput, latency affects round-trip time, and jitter is really just packet delay variation by another name. Don't just memorize what each metric measures. Understand which metrics matter for which applications and how they influence each other. When you see a question about video streaming quality or TCP performance, you should immediately know which metrics are relevant and why.


Capacity vs. Actual Performance

One of the most fundamental distinctions in networking is between what a link could carry and what it actually carries. This difference explains why a "fast" connection can still feel slow.

Bandwidth

Bandwidth is the maximum theoretical data rate of a network link, measured in bits per second (bps). Think of it as the width of a pipe: it sets the upper bound on how much data can flow through.

  • Often confused with throughput. Bandwidth represents potential, not what you actually get.
  • If bandwidth is insufficient, it creates a bottleneck that no other optimization can fix.
  • Typically determined by the physical medium and link-layer encoding (e.g., a Cat6 Ethernet cable supports 10 Gbps, though only over runs up to about 55 m; longer runs need Cat6a).

Throughput

Throughput is the actual data transfer rate achieved over a network path. It's always less than or equal to bandwidth.

  • Reduced by protocol overhead, congestion, packet loss, and physical layer limitations.
  • This is the metric users actually experience. A 1 Gbps link with 50% throughput delivers only 500 Mbps of useful data.
  • Goodput is a related term you may see: it refers to throughput minus protocol overhead, capturing only application-level useful data.
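The distinction shows up clearly in a quick back-of-the-envelope calculation. The sketch below uses hypothetical numbers (1500-byte frames, each carrying 40 bytes of TCP/IP headers) to contrast raw throughput with goodput:

```python
def goodput_bps(payload_bytes: int, duration_s: float) -> float:
    """Application-level useful data rate in bits per second."""
    return payload_bytes * 8 / duration_s

# Hypothetical 10-second transfer: 100,000 frames of 1500 bytes,
# each with 40 bytes of TCP/IP header overhead.
frames = 100_000
throughput = frames * 1500 * 8 / 10             # every bit on the wire
useful = goodput_bps(frames * (1500 - 40), 10)  # headers excluded
print(throughput, useful)  # 120000000.0 116800000.0
```

Even with zero loss and no congestion, roughly 2.7% of the bits on the wire are headers rather than application data.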

Network Utilization

Network utilization is the percentage of available bandwidth currently in use:

\text{Utilization} = \frac{\text{Throughput}}{\text{Bandwidth}} \times 100\%

  • High utilization (above ~80%) typically signals impending congestion and increased queuing delays.
  • Low utilization may indicate overprovisioned resources or underperforming applications.
  • Monitoring utilization over time helps you spot trends before they become outages.
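The formula translates directly into code; the numbers below are illustrative:

```python
def utilization_pct(throughput_bps: float, bandwidth_bps: float) -> float:
    """Percentage of link capacity currently in use."""
    return throughput_bps / bandwidth_bps * 100

# A 1 Gbps link carrying 850 Mbps is already in the danger zone above ~80%.
print(utilization_pct(850e6, 1e9))
```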

Compare: Bandwidth vs. Throughput: both measured in bps, but bandwidth is capacity while throughput is reality. If asked to explain poor performance on a "high-speed" link, start by distinguishing these two.


Time-Based Metrics

These metrics capture when data arrives, which matters as much as how much data arrives for many applications. Time-based metrics are critical for understanding TCP behavior and real-time application quality.

Latency

Latency is the time for a packet to travel from source to destination, measured in milliseconds (ms). It breaks down into four components:

  1. Propagation delay: determined by the physical distance and the speed of the signal in the medium (close to the speed of light in fiber, slower in copper).
  2. Transmission delay: the time to push all bits of a packet onto the link, calculated as \frac{\text{packet size}}{\text{bandwidth}}. On high-bandwidth links this is tiny; on slow links it adds up.
  3. Queuing delay: time spent waiting in router buffers. This is the most variable component and the one most affected by congestion.
  4. Processing delay: time a router spends examining the header, performing a lookup, and forwarding the packet. Usually very small on modern hardware.

Interactive applications like online gaming and video calls become unusable above roughly 150 ms of one-way latency.
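The four components simply add. A minimal sketch, assuming a 3,000 km fiber path where signals propagate at roughly 2×10⁸ m/s, and ignoring queuing and processing delay:

```python
def total_latency_s(distance_m: float, prop_speed_mps: float,
                    packet_bits: int, bandwidth_bps: float,
                    queuing_s: float = 0.0, processing_s: float = 0.0) -> float:
    propagation = distance_m / prop_speed_mps   # distance / signal speed
    transmission = packet_bits / bandwidth_bps  # packet size / bandwidth
    return propagation + transmission + queuing_s + processing_s

# 3,000 km of fiber, 1500-byte packet, 100 Mbps link:
delay = total_latency_s(3_000_000, 2e8, 1500 * 8, 100e6)
print(round(delay * 1000, 3))  # 15.12 (ms); propagation dominates here
```

On a long path like this, propagation delay (15 ms) dwarfs transmission delay (0.12 ms), which is why adding bandwidth does nothing for distance-induced latency.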

Round-Trip Time (RTT)

RTT is the time for a packet to travel to its destination and for a response to come back:

\text{RTT} \approx 2 \times \text{one-way latency} + \text{processing time at destination}

  • Directly impacts TCP throughput because TCP's sliding window mechanism requires acknowledgments to return before the sender can advance. Higher RTT means the sender waits longer, reducing effective throughput.
  • Measured easily with ping. Lower RTT means more responsive connections and faster TCP window growth during slow start and congestion avoidance.
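The RTT dependence can be made concrete: with a fixed window, TCP can send at most one window of data per round trip. A sketch with illustrative numbers:

```python
def window_limited_throughput_bps(window_bytes: int, rtt_s: float) -> float:
    """Upper bound on TCP throughput: one full window per round trip."""
    return window_bytes * 8 / rtt_s

# A 64 KB window over a 100 ms RTT path tops out near 5.2 Mbps,
# no matter how much raw bandwidth the link has.
print(window_limited_throughput_bps(64 * 1024, 0.100))
```

Halving the RTT doubles this ceiling, which is why RTT matters so much to TCP.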

Compare: Latency vs. RTT: latency is one-way, RTT is round-trip. TCP performance depends on RTT (the sender waits for ACKs), while streaming video cares more about one-way latency. Know which metric applies to which protocol behavior.


Variability and Consistency Metrics

Networks rarely deliver perfectly consistent performance. These metrics capture how much performance varies, which can matter more than average performance for certain applications.

Jitter

Jitter is the variation in packet arrival times. If packets are sent at even intervals but arrive at uneven intervals, that unevenness is jitter.

  • Devastating for real-time applications. VoIP and video conferencing use jitter buffers to smooth out arrival times, but those buffers add latency as a trade-off.
  • Caused primarily by variable queuing delays as packets encounter different congestion levels at each hop, or take different paths through the network.
  • A network with low average latency but high jitter can perform worse for real-time traffic than one with slightly higher but consistent latency.
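One simple way to quantify jitter is the average absolute difference between consecutive packet delays (RFC 3550 specifies a smoothed variant of this idea for RTP). The delay traces below are made up for illustration:

```python
def mean_jitter_ms(delays_ms: list) -> float:
    """Average absolute change between consecutive one-way delays."""
    diffs = [abs(b - a) for a, b in zip(delays_ms, delays_ms[1:])]
    return sum(diffs) / len(diffs)

# Two flows with the same average delay (30 ms) but very different consistency:
print(mean_jitter_ms([30, 30, 31, 29, 30]))  # 1.0   (smooth: fine for VoIP)
print(mean_jitter_ms([10, 55, 12, 60, 13]))  # 45.75 (erratic: audio stutters)
```

Both traces would look identical on a dashboard that only reports average latency.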

Packet Delay Variation

Packet delay variation (PDV) is the formal ITU-T term for what most people call jitter. It measures the statistical distribution of delays across packets in a flow.

  • High variation forces larger jitter buffers, which increases end-to-end latency to maintain smooth playback.
  • QoS mechanisms target this metric by providing consistent treatment (priority queuing, traffic shaping) for time-sensitive traffic.

Compare: Jitter vs. Packet Delay Variation: these terms are essentially synonymous. "Jitter" appears more in practical and informal contexts, while "packet delay variation" is the formal ITU-T terminology. Both describe the same phenomenon of inconsistent timing.


Reliability and Error Metrics

Not all packets make it to their destination intact. These metrics quantify what goes wrong during transmission and help diagnose whether problems stem from congestion or physical layer issues.

Packet Loss

Packet loss is the percentage of packets that never arrive at their destination, typically due to buffer overflow during congestion or link failures.

  • TCP retransmits lost packets, which adds latency (the sender must detect the loss, then resend). UDP applications simply lose data permanently since UDP provides no retransmission mechanism.
  • Even 1-2% packet loss can severely degrade TCP throughput because TCP interprets loss as a congestion signal and reduces its sending rate. For video streams, loss causes visible artifacts or freezes.
  • Distinguishing where loss occurs (congested router vs. flaky wireless link) is a key troubleshooting skill.
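The outsized impact of small loss rates on TCP can be estimated with the well-known Mathis et al. approximation, throughput ≈ (MSS/RTT) · √(3/2)/√p. The numbers below are illustrative:

```python
import math

def mathis_throughput_bps(mss_bytes: int, rtt_s: float, loss_rate: float) -> float:
    """Mathis et al. steady-state TCP throughput approximation."""
    return (mss_bytes * 8 / rtt_s) * math.sqrt(3 / 2) / math.sqrt(loss_rate)

# 1460-byte MSS, 50 ms RTT: 1% loss caps TCP below 3 Mbps,
# even on a multi-gigabit link.
print(mathis_throughput_bps(1460, 0.050, 0.01))
```

Because loss appears under a square root in the denominator, cutting loss from 1% to 0.01% raises the ceiling tenfold.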

Bit Error Rate (BER)

BER is the ratio of corrupted bits to total bits transmitted:

\text{BER} = \frac{\text{Erroneous bits}}{\text{Total bits transmitted}}

  • This is a physical layer metric, caused by signal degradation, electromagnetic interference, or noise on the transmission medium.
  • When errors exceed what forward error correction (FEC) can fix, the corrupted frame is dropped, contributing to packet loss at higher layers.
  • Fiber optic links typically have extremely low BER (on the order of $10^{-12}$), while wireless links tend to have higher BER due to interference and fading.
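BER translates into frame loss at higher layers. Assuming independent bit errors and no FEC, the probability a frame survives is (1 - BER) raised to the frame size in bits:

```python
def frame_error_rate(ber: float, frame_bits: int) -> float:
    """Probability a frame contains at least one bit error
    (assumes independent errors and no forward error correction)."""
    return 1 - (1 - ber) ** frame_bits

# 1500-byte (12,000-bit) frames:
print(frame_error_rate(1e-12, 12_000))  # fiber: ~1.2e-8, negligible
print(frame_error_rate(1e-6, 12_000))   # noisy wireless: ~1.2% of frames lost
```

A seemingly small BER of $10^{-6}$ already drops more than 1% of full-size frames, enough to throttle TCP badly.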

Compare: Packet Loss vs. BER: BER measures physical layer corruption (bits flipped), while packet loss measures network/transport layer failure (packets dropped). High BER causes packet loss, but packet loss can also occur from congestion with zero bit errors. Knowing which one is the root cause changes how you fix the problem.


Service Quality Management

These metrics and mechanisms focus on managing performance rather than just measuring it, ensuring that critical applications get the network resources they need.

Quality of Service (QoS)

QoS is a framework for prioritizing traffic to guarantee performance levels for specific applications or users. It's not a single measurement but a set of mechanisms that manage multiple metrics simultaneously.

  • Traffic classification identifies flows (e.g., marking VoIP packets with DSCP values) so routers know how to treat them.
  • Queuing disciplines (priority queuing, weighted fair queuing) determine the order in which packets are forwarded.
  • Traffic shaping and policing control the rate at which traffic enters the network to prevent bursts from causing congestion.
  • QoS is essential for converged networks where voice, video, and data compete for the same links. Without it, a large file transfer could starve a voice call of the consistent, low-latency delivery it needs.
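Classification often starts at the endpoint. As a small illustration, a Linux application can mark its own UDP traffic as Expedited Forwarding (DSCP 46, the class conventionally used for VoIP) through the IP TOS byte, which carries the DSCP value in its top six bits:

```python
import socket

DSCP_EF = 46  # Expedited Forwarding per-hop behavior
sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
# The TOS/DS byte holds DSCP in its upper 6 bits, so shift left by 2.
sock.setsockopt(socket.IPPROTO_IP, socket.IP_TOS, DSCP_EF << 2)
tos = sock.getsockopt(socket.IPPROTO_IP, socket.IP_TOS)
print(tos)  # 184 (0xB8) on Linux
sock.close()
```

Marking only expresses intent; routers along the path must be configured to honor the DSCP value, or the packets receive default best-effort treatment.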

Compare: QoS vs. Individual Metrics: QoS isn't a single measurement but a system that manages multiple metrics (latency, jitter, packet loss, bandwidth) simultaneously. When asked about ensuring application performance, QoS is the mechanism; the other metrics are what you're optimizing.


Quick Reference Table

Concept | Best Examples
Capacity measurement | Bandwidth, Network Utilization
Actual performance | Throughput, Goodput
Timing (absolute) | Latency, RTT
Timing (variability) | Jitter, Packet Delay Variation
Reliability | Packet Loss, BER
Management framework | QoS
TCP performance factors | RTT, Packet Loss, Throughput
Real-time application factors | Latency, Jitter, Packet Loss

Self-Check Questions

  1. A user complains their 100 Mbps connection "feels slow." Which two metrics would you check first to distinguish between capacity problems and actual performance problems?

  2. Compare and contrast how TCP and UDP applications respond differently to packet loss. Which metric becomes more critical for each protocol type?

  3. A video conferencing application works fine on a wired connection but stutters on WiFi despite similar throughput measurements. Which variability metric best explains this, and why?

  4. If you measured RTT as 80 ms and estimated one-way latency as 35 ms, what accounts for the remaining 10 ms? How would this affect TCP window calculations?

  5. An engineer proposes solving network congestion by simply adding more bandwidth. Using at least three metrics from this guide, explain why this solution might be insufficient and what else should be monitored.