Understanding distributed computing architectures isn't just about memorizing definitions—it's about recognizing why different systems are designed the way they are and when to apply each model. You're being tested on your ability to analyze trade-offs: centralization vs. decentralization, latency vs. throughput, and scalability vs. complexity. These architectures form the backbone of everything from web applications to scientific simulations, and exam questions frequently ask you to compare approaches or recommend solutions for specific scenarios.
The key insight is that each architecture solves a particular set of problems while introducing its own limitations. Whether you're dealing with fault tolerance, resource sharing, real-time processing, or service modularity, the architecture you choose determines your system's behavior under load, failure, and growth. Don't just memorize what each architecture does—know what problem it solves and what trade-offs it accepts.
The fundamental architectural decision in distributed systems is where control and coordination live. Centralized models simplify management but create bottlenecks and single points of failure; decentralized models distribute risk but increase coordination complexity.
Compare: Client-Server vs. P2P—both enable resource sharing, but client-server centralizes control (easier management, single point of failure) while P2P distributes it (fault-tolerant, harder to secure). If an FRQ asks about trade-offs in system design, this contrast is your go-to example.
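The client-server contrast above can be made concrete with a minimal sketch using Python's standard `socket` library. This is a hypothetical one-shot echo service, trimmed to the essentials: note that every client depends on the single server process, which is exactly the "easier management, single point of failure" trade-off.

```python
import socket
import threading

def run_server(host="127.0.0.1"):
    """Central server: all state and logic live in this one process.
    One place to manage -- but if it dies, every client is stranded."""
    srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    srv.bind((host, 0))          # port 0 -> OS picks a free port
    srv.listen()
    port = srv.getsockname()[1]

    def handle_one():
        conn, _ = srv.accept()   # serve a single request, then shut down
        with conn:
            data = conn.recv(1024)
            conn.sendall(b"echo: " + data)
        srv.close()

    threading.Thread(target=handle_one, daemon=True).start()
    return port

def client_request(port, message):
    """Client: a thin requester that holds no shared state of its own."""
    with socket.create_connection(("127.0.0.1", port)) as sock:
        sock.sendall(message)
        return sock.recv(1024)

port = run_server()
print(client_request(port, b"hello"))  # b'echo: hello'
```

In a P2P design, by contrast, every node would run both halves of this code, acting as client and server simultaneously—which removes the single point of failure but means there is no one socket to firewall, patch, or audit.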
Modern distributed systems often break applications into discrete services that communicate over networks. The granularity and coupling of these services determine flexibility, deployment speed, and operational complexity.
Compare: SOA vs. Microservices—both decompose applications into services, but SOA uses larger, shared services with enterprise-wide governance while microservices favor smaller, autonomous units with decentralized control. Microservices trade coordination simplicity for deployment flexibility.
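The microservices side of this comparison can be sketched with two toy services (the names `InventoryService` and `OrderService` are invented for illustration). The key property shown is autonomy: each service owns its own data, and other services may only reach it through its public interface.

```python
class InventoryService:
    """Autonomous microservice: owns its data store outright.
    No other service may touch `_stock` directly."""
    def __init__(self):
        self._stock = {"widget": 3}

    def reserve(self, item):
        if self._stock.get(item, 0) > 0:
            self._stock[item] -= 1
            return True
        return False

class OrderService:
    """Depends on InventoryService only through its interface.
    In a real deployment this method call would be an HTTP or gRPC
    request across the network, not an in-process call."""
    def __init__(self, inventory):
        self.inventory = inventory

    def place_order(self, item):
        return "confirmed" if self.inventory.reserve(item) else "rejected"

inventory = InventoryService()
orders = OrderService(inventory)
print(orders.place_order("widget"))  # confirmed
```

Because the services only share an interface, each team can redeploy its service independently—the "deployment flexibility" microservices buy. The cost is coordination: versioning that interface, handling network failures between services, and tracing requests across process boundaries, concerns an SOA with fewer, larger shared services centralizes under enterprise-wide governance.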
Where computation happens matters enormously for latency, bandwidth, and real-time responsiveness. These architectures represent a spectrum from fully centralized (cloud) to fully distributed (edge) processing.
Compare: Cloud vs. Edge vs. Fog—cloud centralizes processing (high latency, effectively unlimited scale), edge pushes it out to the devices themselves (low latency, limited compute), and fog occupies the middle ground with gateways and local servers. FRQs about IoT or real-time systems often require you to justify which layer should handle specific tasks.
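The "which layer handles which task" reasoning can be captured as a toy placement heuristic. The thresholds and parameter names below are illustrative assumptions, not an industry standard—the point is that latency budget and aggregation needs drive the decision.

```python
def choose_tier(latency_budget_ms, needs_local_aggregation=False):
    """Toy placement heuristic (illustrative thresholds only):
    hard real-time work stays at the edge, regional aggregation
    lands on fog nodes, and everything else ships to the cloud."""
    if latency_budget_ms < 10:
        return "edge"   # e.g., obstacle detection on the vehicle itself
    if needs_local_aggregation or latency_budget_ms < 100:
        return "fog"    # e.g., a gateway summarizing ward sensor streams
    return "cloud"      # e.g., training models on months of history

print(choose_tier(5))                               # edge
print(choose_tier(50, needs_local_aggregation=True))  # fog
print(choose_tier(5000))                            # cloud
```

On an FRQ, the same three-way split is your answer skeleton: identify the task's latency budget and data volume first, then justify the tier from those numbers rather than from the architecture's name.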
Some architectures focus primarily on aggregating distributed resources into unified pools for large-scale computation. The key mechanism is abstracting heterogeneous resources into a coherent, schedulable whole.
Compare: Grid Computing vs. Cluster Computing—both aggregate multiple machines, but clusters are typically homogeneous, tightly coupled, and locally managed, while grids span organizations, tolerate heterogeneity, and coordinate loosely. Clusters optimize for performance; grids optimize for resource sharing across boundaries.
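The grid side—coordinating work across a heterogeneous pool—can be sketched as a greedy scheduler that accounts for per-node speed. This is a minimal sketch, not how real grid middleware (e.g., HTCondor) works; node speeds and task sizes are made-up illustrative units.

```python
import heapq

def schedule(tasks, nodes):
    """Greedy list scheduling over a heterogeneous pool: repeatedly
    assign the largest remaining task to whichever node would finish
    it earliest, given that node's relative speed.

    tasks: {task_name: work_units}
    nodes: {node_name: speed (work_units per second)}
    """
    # Min-heap of (projected_finish_time, node_name, speed).
    heap = [(0.0, name, speed) for name, speed in nodes.items()]
    heapq.heapify(heap)

    assignment = {}
    for task, work in sorted(tasks.items(), key=lambda kv: -kv[1]):
        finish, name, speed = heapq.heappop(heap)
        finish += work / speed            # this node is busy until then
        assignment[task] = name
        heapq.heappush(heap, (finish, name, speed))
    return assignment

print(schedule({"a": 4, "b": 4, "c": 2}, {"fast": 2.0, "slow": 1.0}))
# {'a': 'fast', 'b': 'slow', 'c': 'fast'}
```

A cluster scheduler can skip the `speed` term entirely—homogeneous, tightly coupled nodes are interchangeable—which is one concrete reason clusters optimize for raw performance while grids spend effort abstracting heterogeneity.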
| Concept | Best Examples |
|---|---|
| Centralized control | Client-Server |
| Decentralized control | Peer-to-Peer, Blockchain applications |
| Fault tolerance through redundancy | P2P, Cluster Computing, Grid Computing |
| Service decomposition | SOA, Microservices, Distributed Object Architecture |
| Latency optimization | Edge Computing, Fog Computing |
| Elastic scalability | Cloud Computing, Microservices |
| Legacy integration | SOA |
| High-performance computing | Cluster Computing, Grid Computing |
Which two architectures both eliminate single points of failure but differ in how tightly coupled their nodes are? Explain the trade-off.
A hospital needs to process patient monitoring data in real-time while maintaining long-term records in a central system. Which combination of architectures would you recommend, and why?
Compare and contrast SOA and Microservices: What problem does each solve best, and when would you choose one over the other?
An FRQ describes a system that must scale automatically with demand, minimize infrastructure costs, and support rapid feature deployment. Which architectures address each requirement?
Why might an autonomous vehicle system use edge computing rather than cloud computing for obstacle detection, even though cloud computing offers more computational power?