Understanding performance metrics is about grasping how networks behave under real conditions and why certain applications succeed or fail. These metrics form the foundation for network design decisions, troubleshooting, and optimization. You'll encounter questions that ask you to diagnose performance problems, compare trade-offs between metrics, or explain why a particular application requires specific network characteristics.
The metrics are interconnected: bandwidth constrains throughput, latency affects round-trip time, and jitter is really just packet delay variation by another name. Don't just memorize what each metric measures. Understand which metrics matter for which applications and how they influence each other. When you see a question about video streaming quality or TCP performance, you should immediately know which metrics are relevant and why.
One of the most fundamental distinctions in networking is between what a link could carry and what it actually carries. This difference explains why a "fast" connection can still feel slow.
Bandwidth is the maximum theoretical data rate of a network link, measured in bits per second (bps). Think of it as the width of a pipe: it sets the upper bound on how much data can flow through.
Throughput is the actual data transfer rate achieved over a network path. It's always less than or equal to bandwidth.
Network utilization is the percentage of available bandwidth currently in use: Utilization (%) = (Throughput ÷ Bandwidth) × 100.
Compare: Bandwidth vs. Throughput: both measured in bps, but bandwidth is capacity while throughput is reality. If asked to explain poor performance on a "high-speed" link, start by distinguishing these two.
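As a quick illustration of that utilization calculation (the function name and traffic numbers are invented for this sketch), utilization is just throughput expressed as a fraction of bandwidth:

```python
def utilization_pct(throughput_bps: float, bandwidth_bps: float) -> float:
    """Percentage of a link's capacity currently in use."""
    return 100.0 * throughput_bps / bandwidth_bps

# A 100 Mbps link carrying 42 Mbps of actual traffic is 42% utilized.
print(utilization_pct(42e6, 100e6))  # → 42.0
```

Note that the inputs must be measured over the same interval: averaging throughput over a minute can hide short bursts that saturate the link.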
These metrics capture when data arrives, which matters as much as how much data arrives for many applications. Time-based metrics are critical for understanding TCP behavior and real-time application quality.
Latency is the time for a packet to travel from source to destination, measured in milliseconds (ms). It breaks down into four components: propagation delay (distance divided by signal speed), transmission delay (packet size divided by link rate), queuing delay (time spent waiting in router buffers), and processing delay (header inspection and forwarding decisions at each hop).
Interactive applications like online gaming and video calls become unusable above roughly 150 ms of one-way latency.
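The four delay components simply add up to the one-way latency. A toy calculation with made-up values shows how a single congested queue can dominate the total:

```python
# One-way latency as the sum of its four components (all values in ms,
# invented for illustration — real values depend on distance, link rate,
# and buffer occupancy).
propagation_ms = 25   # distance / signal propagation speed
transmission_ms = 2   # packet size / link rate
queuing_ms = 7        # time waiting in router buffers (varies with load)
processing_ms = 1     # header inspection, routing lookups

latency_ms = propagation_ms + transmission_ms + queuing_ms + processing_ms
print(latency_ms)  # → 35
```

Propagation and transmission delay are fixed by physics and link speed; queuing delay is the component that grows under congestion, which is why latency rises as utilization climbs.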
RTT is the time for a packet to travel to its destination and for a response to come back, commonly measured with the ping utility. Lower RTT means more responsive connections and faster TCP window growth during slow start and congestion avoidance.

Compare: Latency vs. RTT: latency is one-way, RTT is round-trip. TCP performance depends on RTT (the sender waits for ACKs), while streaming video cares more about one-way latency. Know which metric applies to which protocol behavior.
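TCP's dependence on RTT can be made concrete with the well-known Mathis model, which approximates steady-state throughput as (MSS/RTT) × (C/√p) with C ≈ 1.22. The sketch below (parameter values are illustrative) shows that halving RTT roughly doubles achievable throughput at the same loss rate:

```python
import math

def mathis_throughput_bps(mss_bytes: int, rtt_s: float, loss_rate: float) -> float:
    """Rough upper bound on steady-state TCP throughput (Mathis model):
    throughput ≈ (MSS / RTT) * (1.22 / sqrt(p))."""
    return (mss_bytes * 8 / rtt_s) * (1.22 / math.sqrt(loss_rate))

# Same 0.1% loss rate, two different RTTs: the 40 ms path gets
# twice the throughput of the 80 ms path.
fast = mathis_throughput_bps(1460, 0.040, 0.001)
slow = mathis_throughput_bps(1460, 0.080, 0.001)
print(fast / slow)  # → 2.0
```

This is why a "high-bandwidth" long-distance link can still deliver poor TCP throughput: the model does not contain bandwidth at all, only RTT and loss.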
Networks rarely deliver perfectly consistent performance. These metrics capture how much performance varies, which can matter more than average performance for certain applications.
Jitter is the variation in packet arrival times. If packets are sent at even intervals but arrive at uneven intervals, that unevenness is jitter.
Packet delay variation (PDV) is the formal ITU-T term for what most people call jitter. It measures the statistical distribution of delays across packets in a flow.
Compare: Jitter vs. Packet Delay Variation: these terms are essentially synonymous. "Jitter" appears more in practical and informal contexts, while "packet delay variation" is the formal ITU-T terminology. Both describe the same phenomenon of inconsistent timing.
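One concrete way to quantify it is the smoothed interarrival-jitter estimator defined in RFC 3550 (RTP): for each pair of consecutive packets, the jitter estimate moves 1/16 of the way toward the latest transit-time difference. A minimal sketch (the input transit times are invented):

```python
def rfc3550_jitter(transit_times_ms):
    """Smoothed interarrival jitter per RFC 3550:
    J += (|D| - J) / 16 for each successive transit-time difference D."""
    jitter = 0.0
    for prev, cur in zip(transit_times_ms, transit_times_ms[1:]):
        d = abs(cur - prev)
        jitter += (d - jitter) / 16
    return jitter

# Perfectly regular transit times → zero jitter; uneven ones → positive jitter.
print(rfc3550_jitter([20, 20, 20, 20]))      # → 0.0
print(rfc3550_jitter([20, 35, 18, 40]) > 0)  # → True
```

The 1/16 smoothing factor keeps a single delayed packet from swinging the estimate, which is what a de-jitter buffer in a VoIP client actually needs.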
Not all packets make it to their destination intact. These metrics quantify what goes wrong during transmission and help diagnose whether problems stem from congestion or physical layer issues.
Packet loss is the percentage of packets that never arrive at their destination, typically due to buffer overflow during congestion or link failures.
BER is the ratio of corrupted bits to total bits transmitted: BER = errored bits ÷ total bits sent. A BER of 10⁻⁶, for instance, means one flipped bit per million transmitted.
Compare: Packet Loss vs. BER: BER measures physical layer corruption (bits flipped), while packet loss measures network/transport layer failure (packets dropped). High BER causes packet loss, but packet loss can also occur from congestion with zero bit errors. Knowing which one is the root cause changes how you fix the problem.
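Assuming independent bit errors, the link between the two metrics can be made explicit: the chance that a packet is corrupted (and therefore dropped by a checksum) grows with packet size. A small illustrative sketch:

```python
def corruption_probability(ber: float, packet_bytes: int) -> float:
    """Probability that at least one bit in a packet is flipped,
    assuming independent bit errors at the given BER."""
    bits = packet_bytes * 8
    return 1 - (1 - ber) ** bits

# Even a "small" BER of 1e-6 corrupts roughly 1.2% of 1500-byte frames,
# enough loss to cripple TCP on its own.
p = corruption_probability(1e-6, 1500)
print(round(p, 4))  # → 0.0119
```

This is one way to distinguish root causes: loss that scales with packet size points at the physical layer, while loss that spikes with traffic load points at congestion.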
These metrics and mechanisms focus on managing performance rather than just measuring it, ensuring that critical applications get the network resources they need.
QoS is a framework for prioritizing traffic to guarantee performance levels for specific applications or users. It's not a single measurement but a set of mechanisms that manage multiple metrics simultaneously.
Compare: QoS vs. Individual Metrics: QoS isn't a single measurement but a system that manages multiple metrics (latency, jitter, packet loss, bandwidth) simultaneously. When asked about ensuring application performance, QoS is the mechanism; the other metrics are what you're optimizing.
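A strict-priority queue captures the core scheduling idea in a few lines. This is only a sketch with invented packet labels; real QoS frameworks such as DiffServ also involve classification, marking, policing, and shaping:

```python
import heapq
from itertools import count

# Minimal strict-priority scheduler: lower number = higher priority.
queue = []
seq = count()  # tie-breaker so packets of equal priority stay FIFO

def enqueue(priority: int, packet: str) -> None:
    heapq.heappush(queue, (priority, next(seq), packet))

enqueue(2, "bulk download")  # best-effort traffic
enqueue(0, "VoIP frame")     # latency-sensitive, highest priority
enqueue(1, "video frame")

order = [heapq.heappop(queue)[2] for _ in range(3)]
print(order)  # → ['VoIP frame', 'video frame', 'bulk download']
```

Note the trade-off this illustrates: prioritizing VoIP lowers its latency and jitter at the cost of added queuing delay for bulk traffic, so QoS redistributes performance rather than creating capacity.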
| Concept | Best Examples |
|---|---|
| Capacity measurement | Bandwidth, Network Utilization |
| Actual performance | Throughput, Goodput |
| Timing (absolute) | Latency, RTT |
| Timing (variability) | Jitter, Packet Delay Variation |
| Reliability | Packet Loss, BER |
| Management framework | QoS |
| TCP performance factors | RTT, Packet Loss, Throughput |
| Real-time application factors | Latency, Jitter, Packet Loss |
A user complains their 100 Mbps connection "feels slow." Which two metrics would you check first to distinguish between capacity problems and actual performance problems?
Compare and contrast how TCP and UDP applications respond differently to packet loss. Which metric becomes more critical for each protocol type?
A video conferencing application works fine on a wired connection but stutters on WiFi despite similar throughput measurements. Which variability metric best explains this, and why?
If you measured RTT as 80 ms and estimated one-way latency as 35 ms, what accounts for the remaining 10 ms? How would this affect TCP window calculations?
An engineer proposes solving network congestion by simply adding more bandwidth. Using at least three metrics from this guide, explain why this solution might be insufficient and what else should be monitored.