When you're studying computer networks, understanding performance metrics isn't just about memorizing definitions—it's about grasping how networks behave under real conditions and why certain applications succeed or fail. These metrics form the foundation for network design decisions, troubleshooting, and optimization. You'll encounter questions that ask you to diagnose performance problems, compare trade-offs between metrics, or explain why a particular application requires specific network characteristics.
The key insight here is that metrics are interconnected: bandwidth constrains throughput, latency affects round-trip time, and jitter is really just packet delay variation by another name. Don't just memorize what each metric measures—understand which metrics matter for which applications and how they influence each other. When you see an exam question about video streaming quality or TCP performance, you should immediately know which metrics are relevant and why.
One of the most fundamental distinctions in networking is between what a link could carry and what it actually carries. This difference explains why a "fast" connection can still feel slow.
Compare: Bandwidth vs. Throughput—both measured in bps, but bandwidth is capacity while throughput is reality. If asked to explain poor performance on a "high-speed" link, start by distinguishing these two.
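To make the capacity-versus-reality distinction concrete, here is a minimal Python sketch that computes measured throughput from a transfer and compares it against the link's nominal bandwidth. The link speed and transfer figures are illustrative assumptions, not measurements from any real network.

```python
# Minimal sketch: nominal bandwidth (capacity) vs. measured throughput (reality).
# The link_bandwidth_bps value and the transfer numbers below are hypothetical.

def throughput_bps(bytes_transferred: int, elapsed_seconds: float) -> float:
    """Effective throughput: data actually delivered per unit time, in bits/s."""
    return bytes_transferred * 8 / elapsed_seconds

link_bandwidth_bps = 100_000_000  # advertised capacity: 100 Mbps
measured = throughput_bps(bytes_transferred=25_000_000, elapsed_seconds=4.0)

utilization = measured / link_bandwidth_bps
print(f"Throughput: {measured / 1e6:.1f} Mbps "
      f"({utilization:.0%} of the {link_bandwidth_bps / 1e6:.0f} Mbps capacity)")
```

In this made-up example the "100 Mbps" link delivers only 50 Mbps of useful data, which is exactly the kind of gap the bandwidth/throughput distinction explains.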
These metrics capture when data arrives, which matters as much as how much data arrives for many applications. Time-based metrics are critical for understanding TCP behavior and real-time application quality.
Compare: Latency vs. RTT—latency is one-way, RTT is round-trip. TCP performance depends on RTT (waiting for ACKs), while streaming video cares more about one-way latency. Know which metric applies to which protocol behavior.
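One way to see why RTT, and not just bandwidth, governs TCP performance is the window-per-round-trip bound: a sender with a fixed receive window can have at most one window of data in flight per RTT. The sketch below assumes the classic 64 KB maximum window with no window scaling; the RTT values are illustrative.

```python
# Rough sketch of why RTT caps TCP throughput: with a fixed receive window,
# the sender can transmit at most one window per round trip.
# The window size and RTT values are illustrative assumptions.

def tcp_throughput_ceiling_bps(window_bytes: int, rtt_seconds: float) -> float:
    """Upper bound on a single TCP flow's throughput: window / RTT."""
    return window_bytes * 8 / rtt_seconds

window = 65_535  # classic maximum window without window scaling
for rtt_ms in (10, 50, 100):
    ceiling = tcp_throughput_ceiling_bps(window, rtt_ms / 1000)
    print(f"RTT {rtt_ms:3d} ms -> at most {ceiling / 1e6:.1f} Mbps")
```

Notice that the same window yields roughly 52 Mbps at 10 ms RTT but only about 5 Mbps at 100 ms, with no change in bandwidth at all.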
Networks rarely deliver perfectly consistent performance. These metrics capture how much performance varies, which can matter more than average performance for certain applications.
Compare: Jitter vs. Packet Delay Variation—these terms are essentially synonymous, but "jitter" appears more in practical contexts while "packet delay variation" is the formal ITU-T terminology. Both describe the same phenomenon of inconsistent timing.
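For a concrete sense of how jitter is computed in practice, the sketch below follows the interarrival jitter estimator defined in RFC 3550 for RTP, which smooths per-packet delay variation with an exponential filter. The transit times are made-up values for illustration.

```python
# Sketch of the interarrival jitter estimator from RFC 3550 (RTP).
# Transit time = arrival time - send time for each packet; the values
# below (in milliseconds) are invented for illustration.

def update_jitter(jitter: float, transit_prev: float, transit_curr: float) -> float:
    """RFC 3550: J(i) = J(i-1) + (|D(i-1, i)| - J(i-1)) / 16."""
    d = abs(transit_curr - transit_prev)
    return jitter + (d - jitter) / 16

transit_times = [40.0, 42.5, 39.0, 55.0, 41.0]

jitter = 0.0
for prev, curr in zip(transit_times, transit_times[1:]):
    jitter = update_jitter(jitter, prev, curr)
print(f"Estimated jitter: {jitter:.2f} ms")
```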
Not all packets make it to their destination intact. These metrics quantify what goes wrong during transmission and help diagnose whether problems stem from congestion or physical layer issues.
Compare: Packet Loss vs. BER—BER measures physical layer corruption (bits flipped), while packet loss measures transport layer failure (packets dropped). High BER causes packet loss, but packet loss can also occur from congestion with zero bit errors.
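The BER-to-packet-loss relationship can be made concrete with a quick calculation: assuming independent bit errors and no forward error correction (both simplifying assumptions), the probability that a packet is corrupted is one minus the probability that every bit survives. The packet size and BER values below are illustrative.

```python
# Sketch of how bit error rate translates into packet-level loss,
# assuming independent bit errors and no forward error correction.

def packet_loss_from_ber(ber: float, packet_bytes: int) -> float:
    """Probability that a packet contains at least one flipped bit."""
    bits = packet_bytes * 8
    return 1 - (1 - ber) ** bits

for ber in (1e-9, 1e-6, 1e-4):
    p = packet_loss_from_ber(ber, packet_bytes=1500)
    print(f"BER {ber:.0e} -> ~{p:.2%} of 1500-byte packets corrupted")
```

Even a seemingly tiny BER of 1e-4 corrupts the majority of full-size packets, while congestion-driven loss can be high even when the BER is effectively zero.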
These metrics and mechanisms focus on managing performance rather than just measuring it, ensuring that critical applications get the network resources they need.
Compare: QoS vs. Individual Metrics—QoS isn't a single measurement but a system that manages multiple metrics (latency, jitter, packet loss, bandwidth) simultaneously. When asked about ensuring application performance, QoS is the mechanism while the other metrics are what you're optimizing.
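As one concrete QoS building block, the sketch below marks outgoing UDP packets with a DSCP value (Expedited Forwarding, commonly used for voice) so that routers configured for differentiated services can prioritize them. This assumes a Linux host; the destination address and port are placeholders, and actual prioritization still depends on how the network is configured.

```python
# Minimal sketch: mark outgoing packets with a DSCP value so QoS-aware
# routers can prioritize them. Assumes Linux; the endpoint is a placeholder.

import socket

DSCP_EF = 46            # Expedited Forwarding, typically used for voice
tos = DSCP_EF << 2      # DSCP occupies the upper 6 bits of the TOS byte

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.setsockopt(socket.IPPROTO_IP, socket.IP_TOS, tos)
sock.sendto(b"voice payload", ("192.0.2.10", 5004))  # placeholder endpoint
sock.close()
```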
| Concept | Best Examples |
|---|---|
| Capacity measurement | Bandwidth, Network Utilization |
| Actual performance | Throughput |
| Timing (absolute) | Latency, RTT |
| Timing (variability) | Jitter, Packet Delay Variation |
| Reliability | Packet Loss, BER |
| Management framework | QoS |
| TCP performance factors | RTT, Packet Loss, Throughput (see the sketch after this table) |
| Real-time application factors | Latency, Jitter, Packet Loss |
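To tie the TCP performance factors row together, the sketch below uses the Mathis et al. approximation, throughput ≈ MSS / (RTT · √p), which relates loss rate and RTT to achievable TCP throughput; the small constant factor in the full formula is omitted, and the MSS, RTT, and loss values are illustrative.

```python
# Sketch of the Mathis et al. approximation for steady-state TCP throughput
# under random loss: throughput ≈ MSS / (RTT * sqrt(p)), constant factor omitted.

import math

def mathis_throughput_bps(mss_bytes: int, rtt_seconds: float, loss_rate: float) -> float:
    """Rough TCP throughput estimate from segment size, RTT, and loss rate."""
    return (mss_bytes * 8) / (rtt_seconds * math.sqrt(loss_rate))

estimate = mathis_throughput_bps(mss_bytes=1460, rtt_seconds=0.080, loss_rate=0.001)
print(f"~{estimate / 1e6:.1f} Mbps at 80 ms RTT and 0.1% loss")
```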
A user complains their 100 Mbps connection "feels slow." Which two metrics would you check first to distinguish between capacity problems and actual performance problems?
Compare and contrast how TCP and UDP applications respond differently to packet loss. Which metric becomes more critical for each protocol type?
A video conferencing application works fine on a wired connection but stutters on WiFi despite similar throughput measurements. Which variability metric best explains this, and why?
If you measured RTT as 80ms but estimated one-way latency as only 35ms in each direction, what accounts for the remaining 10ms? How would this extra delay affect TCP window calculations?
An engineer proposes solving network congestion by simply adding more bandwidth. Using at least three metrics from this guide, explain why this solution might be insufficient and what else should be monitored.