📡Systems Approach to Computer Networks Unit 15 Review

15.2 Switches and Switch Operation

Written by the Fiveable Content Team • Last updated August 2025

Ethernet Switches

Function of Ethernet Switches

Ethernet switches connect multiple devices within a LAN, operating at the data link layer (Layer 2) of the OSI model. Their core job is to receive Ethernet frames and forward them to the correct destination based on the destination MAC address in each frame's header.

Each switch port is its own collision domain, which means devices don't compete for bandwidth the way they do on a shared hub segment. This enables full-duplex communication, where a device can send and receive simultaneously.

  • Every switch maintains a MAC address table that maps MAC addresses to specific ports
  • Connected devices (PCs, printers, servers, etc.) are identified by their MAC addresses, not IP addresses, at this layer

Address Learning in Switches

Switches populate their MAC address table automatically through a process called address learning. Here's how it works:

  1. A frame arrives on a switch port
  2. The switch reads the source MAC address and records it alongside the port number in the MAC address table
  3. The switch then looks up the destination MAC address in the table
  4. If a match is found, the switch forwards the frame out the corresponding port only (known unicast)
  5. If no match is found, the switch floods the frame out all ports except the one it arrived on

That flooding behavior is sometimes called an unknown unicast flood. It's not the same as a broadcast, though the effect looks similar. The difference: a broadcast frame has a destination MAC of FF:FF:FF:FF:FF:FF and is intentionally sent to all ports, while flooding is what the switch does when it simply doesn't know where the destination lives yet. Once the destination device replies, the switch learns its MAC address and won't need to flood again for that address.
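The learning, forwarding, and flooding steps above can be sketched in a few lines of Python. This is an illustrative model, not a real switch implementation; the class and method names are hypothetical.

```python
# Minimal sketch of a learning switch's forwarding decision.
# Illustrative only: names and structure are hypothetical.

BROADCAST = "ff:ff:ff:ff:ff:ff"

class LearningSwitch:
    def __init__(self, num_ports):
        self.ports = list(range(num_ports))
        self.mac_table = {}  # MAC address -> port number

    def handle_frame(self, in_port, src_mac, dst_mac):
        """Return the list of ports the frame is forwarded out of."""
        # Step 2: learn the source address on the arrival port
        self.mac_table[src_mac] = in_port

        if dst_mac == BROADCAST:
            # Broadcast: intentionally sent to all other ports
            return [p for p in self.ports if p != in_port]

        # Steps 3-4: known unicast goes out exactly one port
        if dst_mac in self.mac_table:
            return [self.mac_table[dst_mac]]

        # Step 5: unknown unicast flood (all ports except arrival port)
        return [p for p in self.ports if p != in_port]

sw = LearningSwitch(4)
# Device A on port 0 sends to unknown B: flooded out ports 1, 2, 3
print(sw.handle_frame(0, "aa:aa:aa:aa:aa:aa", "bb:bb:bb:bb:bb:bb"))
# B replies from port 2; A is already learned, so only port 0 is used
print(sw.handle_frame(2, "bb:bb:bb:bb:bb:bb", "aa:aa:aa:aa:aa:aa"))
```

Note how the reply frame is the moment the flood stops: once B's source address is learned, later frames to B go out a single port.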

MAC address table entries are not permanent. They age out after a configurable timeout (typically 300 seconds) if no new frames arrive from that source address.
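Aging can be modeled by storing a last-seen timestamp with each entry and purging anything older than the timeout. A minimal sketch, with illustrative names and timestamps passed in explicitly so the behavior is easy to follow:

```python
# Sketch of MAC address table aging (illustrative, not a real
# implementation). Entries record when the source was last seen.
import time

AGING_TIMEOUT = 300  # seconds; a common default

mac_table = {}  # MAC -> (port, last_seen_timestamp)

def learn(mac, port, now=None):
    """Record (or refresh) an entry with the current time."""
    mac_table[mac] = (port, now if now is not None else time.time())

def age_out(now=None):
    """Remove entries that have not been refreshed within the timeout."""
    now = now if now is not None else time.time()
    for mac in [m for m, (_, seen) in mac_table.items()
                if now - seen > AGING_TIMEOUT]:
        del mac_table[mac]

learn("aa:aa:aa:aa:aa:aa", 1, now=0)
learn("bb:bb:bb:bb:bb:bb", 2, now=200)
age_out(now=350)          # the t=0 entry is now 350 s old and is purged
print(sorted(mac_table))  # only bb:bb:bb:bb:bb:bb remains
```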

Video: Function of Ethernet switches — Collision Domain and Broadcast Domain (ICND1 100-105)

Switch Operation Modes and Performance

Types of Switching Modes

Three switching modes determine how a switch handles incoming frames, each making a different trade-off between latency and error checking:

  1. Cut-through switching begins forwarding the frame as soon as it reads the destination MAC address (the first 6 bytes of the frame, immediately after the preamble and start-of-frame delimiter). This gives the lowest latency but provides no error checking, so corrupted frames get forwarded along with good ones.

  2. Store-and-forward switching waits until the entire frame has been received, then runs a CRC (cyclic redundancy check) on it. If the CRC passes, the frame is forwarded; if not, it's dropped. This guarantees data integrity but adds latency proportional to the frame size.

  3. Fragment-free switching is a middle ground. It buffers the first 64 bytes of each frame before forwarding. Why 64 bytes? That's the minimum Ethernet frame size, and collision fragments are always shorter than 64 bytes. So this mode filters out collision damage without the full latency cost of store-and-forward.

Most modern managed switches default to store-and-forward because link speeds are fast enough that the added latency is negligible, and the error-checking benefit is worth it.
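The latency difference between the three modes comes down to how many bytes the switch must receive before it can start forwarding. A back-of-the-envelope sketch (illustrative function name; ignores processing and queuing delay):

```python
# Serialization delay before forwarding can begin, per switching mode.
# Illustrative sketch only; real switches add processing delay on top.

def wait_before_forwarding_us(mode, frame_bytes, link_mbps):
    """Microseconds the switch waits before it can start forwarding."""
    bytes_needed = {
        "cut-through": 6,                  # destination MAC only
        "fragment-free": 64,               # minimum Ethernet frame size
        "store-and-forward": frame_bytes,  # the whole frame
    }[mode]
    # bits / (Mbit/s) comes out directly in microseconds
    return bytes_needed * 8 / link_mbps

# A 1500-byte frame on a 1 Gbps (1000 Mbps) link:
for mode in ("cut-through", "fragment-free", "store-and-forward"):
    print(f"{mode}: {wait_before_forwarding_us(mode, 1500, 1000):.3f} us")
```

On a gigabit link even the store-and-forward case is only about 12 microseconds for a full-size frame, which is why the added latency is usually considered negligible.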

Video: Function of Ethernet switches — Network Switches Explained | Packet Switching (ICND1 100-105)

Switch Buffering and Performance

When an outgoing port is already busy transmitting, the switch needs somewhere to hold incoming frames. That's what buffering does: frames are temporarily stored in memory until the port is free.

Buffering also handles mismatches in port speed. If traffic arrives on a 1 Gbps port but needs to exit through a 100 Mbps port, the buffer absorbs the difference.

  • If buffers fill up completely, the switch has no choice but to drop frames, which degrades performance
  • Dynamic buffer allocation lets the switch assign more memory to congested ports rather than giving every port a fixed share
  • Quality of Service (QoS) policies can prioritize certain traffic (like VoIP) so that high-priority frames get buffer space and forwarding preference even during congestion

Proper buffer sizing matters. Too little buffer memory leads to drops during traffic bursts; too much can increase latency if frames sit in queues for too long.
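The speed-mismatch case gives a quick way to estimate buffer requirements: during a burst, the buffer fills at the difference between the arrival rate and the drain rate. A rough sketch with illustrative numbers:

```python
# Rough estimate of buffer needed to absorb a burst without drops when
# a fast ingress port feeds a slower egress port. Illustrative only.

def buffer_needed_bytes(in_mbps, out_mbps, burst_ms):
    """Bytes of buffer required to hold a burst of the given duration."""
    fill_rate_mbps = max(in_mbps - out_mbps, 0)  # net fill rate
    bytes_per_sec = fill_rate_mbps * 1_000_000 / 8
    return bytes_per_sec * burst_ms / 1000

# A 5 ms burst arriving at 1 Gbps, draining at 100 Mbps:
print(buffer_needed_bytes(1000, 100, 5))  # 562500.0 bytes (~550 KiB)
```

Even a few milliseconds of burst at a 10:1 speed mismatch demands hundreds of kilobytes of buffer, which is why sustained mismatches inevitably lead to drops once the buffer is exhausted.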

Switches vs. Hubs in Networks

Hubs are simple Layer 1 repeaters: any signal received on one port is regenerated out all other ports, so every frame reaches every device. All devices share a single collision domain and must use half-duplex CSMA/CD to detect and recover from collisions. Switches improve on this in several ways:

  • Collision domains: Each switch port is its own collision domain, enabling full-duplex operation. A 100 Mbps hub gives devices a shared 100 Mbps (half-duplex), while a 100 Mbps switch port provides 100 Mbps in each direction (200 Mbps aggregate per port).
  • Intelligent forwarding: Switches send frames only to the port where the destination device resides, rather than broadcasting everything. This dramatically reduces unnecessary traffic.
  • Security: Because frames aren't copied to every port, it's harder for an attacker to passively eavesdrop on traffic not destined for their port. (Note: this isn't foolproof, since techniques like MAC flooding can force a switch into hub-like behavior.)
  • VLAN support: Switches can logically segment a single physical network into multiple Virtual LANs, isolating traffic between departments, projects, or security zones without additional hardware.
  • Advanced features: Managed switches support QoS for traffic prioritization (e.g., giving VoIP packets priority over file downloads), port mirroring for traffic analysis and troubleshooting, and link aggregation for combining multiple physical links into one logical link for increased bandwidth and redundancy.