
📡Systems Approach to Computer Networks Unit 3 Review


3.1 Packet Switching Principles


Written by the Fiveable Content Team • Last updated August 2025

Packet Switching Fundamentals

Packet switching is the core technique that makes modern networks work: instead of reserving a dedicated path between two endpoints (as circuit switching does), it breaks data into smaller units called packets and sends them independently across shared infrastructure. This approach is what allows the internet to serve billions of users simultaneously without requiring a dedicated wire for every conversation.

This section covers how packet switching works, the two main forwarding strategies (store-and-forward vs. cut-through), how packet size affects performance, and how statistical multiplexing makes it all efficient.

Fundamentals of Packet Switching

Data is divided into smaller, manageable units called packets before being sent across a network. Each packet contains a portion of the original data plus control information: source and destination addresses, sequence numbers, and error-detection fields. Packets are independently routed through the network and reassembled at the destination.

This design has several advantages over circuit switching:

  • Resource sharing. Network bandwidth and buffer space are shared among many users and applications rather than reserved for a single connection. This means a link sitting idle during one user's pause can carry another user's traffic.
  • Fault tolerance. If a link or router fails, packets can be rerouted through alternative paths. The network can still deliver data even when individual components go down (cable cuts, power outages, etc.).
  • Scalability. Adding new users doesn't require provisioning new dedicated circuits. The shared infrastructure accommodates growth more naturally.
  • Cost-effectiveness. No dedicated end-to-end circuit is needed between communicating parties, which reduces infrastructure costs significantly.

Store-and-Forward vs. Cut-Through Switching

These are the two main strategies a network device (router or switch) can use when it receives a packet.

Store-and-forward switching waits until the entire packet has arrived before sending it to the next hop.

  1. The device receives the full packet into its buffer.
  2. It runs error checking (typically a CRC check) on the complete packet.
  3. If the packet is valid, it looks up the destination and forwards it. If corrupted, it discards the packet.

The tradeoff: you get strong data integrity guarantees, but each hop adds latency equal to the full packet reception time. For a packet of size L on a link of rate R, each hop adds at least L/R seconds of store-and-forward delay. Most routers on the internet use store-and-forward.
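The per-hop cost is easy to compute directly. A minimal sketch (the function name and the 1500-byte/1 Mbps example numbers are illustrative, not from the text):

```python
def store_and_forward_delay(packet_bits, link_rate_bps, hops):
    """Total transmission delay over `hops` store-and-forward links,
    ignoring propagation and queuing: each hop must receive all L bits
    before sending, so each hop adds L/R seconds."""
    return hops * packet_bits / link_rate_bps

# A 1500-byte (12,000-bit) packet crossing three 1 Mbps links:
# each hop adds 12 ms, for 36 ms of store-and-forward delay total.
delay = store_and_forward_delay(12_000, 1_000_000, 3)
```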

Cut-through switching begins forwarding a packet as soon as it reads the destination address from the header, without waiting for the rest of the packet to arrive.

There are two variants:

  1. Fast-forward switching forwards immediately after reading the destination address (typically the first few bytes). This gives the lowest latency but performs no error checking, so corrupted packets propagate through the network.
  2. Fragment-free switching waits for the first 64 bytes before forwarding. Why 64 bytes? That's the minimum Ethernet frame size, and most collision-induced errors show up within that window. This catches the most common errors while still being faster than full store-and-forward.

Key tradeoff: Store-and-forward prioritizes integrity (corrupted packets are caught early). Cut-through prioritizes speed (lower per-hop latency). The right choice depends on whether your network values low latency or reliable delivery at the link layer.
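The latency gap can be sketched numerically. In this toy model (function name and example numbers are illustrative), a cut-through device pays the full serialization cost only once, because intermediate hops start forwarding after reading just the lookahead bytes:

```python
def cut_through_delay(packet_bits, link_rate_bps, hops, lookahead_bits):
    """Approximate end-to-end transmission delay with cut-through
    switching, ignoring propagation and queuing: intermediate hops
    begin forwarding after only `lookahead_bits` have arrived
    (e.g. 64 * 8 for fragment-free), so the full L/R serialization
    cost is paid once rather than at every hop."""
    L, R, H = packet_bits, link_rate_bps, lookahead_bits
    return L / R + (hops - 1) * H / R

# A 12,000-bit packet over three 1 Mbps links with fragment-free
# (64-byte) lookahead: 12 ms + 2 * 0.512 ms, or about 13 ms,
# versus 3 * 12 ms = 36 ms under store-and-forward.
delay = cut_through_delay(12_000, 1_000_000, 3, 64 * 8)
```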


Impact of Packet Size

Packet size directly affects two competing factors: transmission delay and overhead efficiency.

Transmission delay is the time to push all bits of a packet onto a link:

d_{trans} = \frac{L}{R}

where L is the packet size in bits and R is the link bandwidth in bits per second. Larger packets take longer to transmit, which increases delay at every hop (especially under store-and-forward).

Overhead ratio goes the other direction. Every packet carries a header (and sometimes a trailer) for addressing, sequencing, and error detection. If a header is 40 bytes and your payload is 20 bytes, two-thirds of your transmission is overhead. Larger packets amortize that fixed header cost over more payload data, so the overhead-to-payload ratio drops.

This creates a fundamental tradeoff when choosing packet size:

  • Smaller packets have lower per-packet transmission delay and better responsiveness, which matters for latency-sensitive applications like VoIP or online gaming. But they carry proportionally more overhead and increase the number of packets the network must process.
  • Larger packets are more efficient in terms of overhead and suit bulk transfers like file downloads or video streaming. But they increase transmission delay per packet and can cause longer queuing delays for other traffic sharing the same link.

In practice, protocols like TCP use a Maximum Segment Size (MSS) that balances these concerns, typically around 1460 bytes of payload within a 1500-byte Ethernet frame.
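The header-amortization effect is easy to quantify. A minimal sketch, assuming the 40-byte header from the example above (the function name is illustrative):

```python
def efficiency(payload_bytes, header_bytes=40):
    """Fraction of transmitted bytes that carry useful payload."""
    return payload_bytes / (payload_bytes + header_bytes)

# 20-byte payload:  20 / 60  -> only about 33% of bytes are payload
# 1460-byte payload (typical TCP MSS): 1460 / 1500 -> about 97%
small = efficiency(20)
large = efficiency(1460)
```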

Role of Statistical Multiplexing

Statistical multiplexing is the reason packet-switched networks can serve far more users than the raw bandwidth would suggest under dedicated allocation. Instead of reserving a fixed slice of bandwidth for each user (as time-division or frequency-division multiplexing would), it allocates bandwidth dynamically based on who actually needs it right now.

The core insight: not all users transmit at peak rate at the same time. A user reading a web page generates zero traffic for seconds at a stretch, and that idle bandwidth can carry someone else's video stream. This bursty nature of most network traffic is what makes statistical multiplexing so effective.

How it improves efficiency:

  • Unused bandwidth from idle connections is immediately available to active ones, so overall link utilization stays high.
  • More users and applications can share the same infrastructure than would be possible with fixed allocation.
  • Resources are allocated on demand rather than pre-provisioned, which avoids waste.
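A back-of-the-envelope model makes the gain concrete. If each user independently transmits with some fixed probability (a toy model of bursty traffic; the specific numbers below are illustrative), the chance that simultaneous demand exceeds link capacity follows a binomial distribution:

```python
from math import comb

def prob_overload(n_users, p_active, capacity_users):
    """Probability that more than `capacity_users` of `n_users` are
    transmitting at once, each active independently with probability
    p_active (binomial tail)."""
    return sum(comb(n_users, k) * p_active**k * (1 - p_active)**(n_users - k)
               for k in range(capacity_users + 1, n_users + 1))

# 35 bursty users, each active 10% of the time, sharing a link that
# can carry 10 simultaneous users: the overload probability is well
# under 0.1%, so statistical multiplexing supports 35 users on
# capacity that fixed allocation would hand to only 10.
p = prob_overload(35, 0.10, 10)
```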

The catch: congestion. Statistical multiplexing works on the assumption that demand won't exceed capacity most of the time. When many users burst simultaneously (think a major live event or a sudden traffic spike), the aggregate demand can exceed link capacity. This leads to:

  • Packets queuing in router buffers, increasing delay
  • Buffer overflow, causing packet loss
  • Overall performance degradation for all users sharing the link

This is why packet-switched networks need congestion control mechanisms (like TCP's congestion window) and traffic management strategies (like quality-of-service policies). Careful capacity planning and monitoring are essential to keep the statistical assumptions valid and maintain acceptable performance.