Edge computing architectures aren't just technical implementations—they represent fundamentally different approaches to solving the core challenge of modern computing: where should processing happen? You're being tested on your ability to understand latency optimization, resource distribution, network topology, and scalability trade-offs. Each architecture makes different assumptions about connectivity, device capabilities, and workload characteristics, and knowing these distinctions is what separates surface-level memorization from genuine understanding.
When you encounter these architectures on an exam, don't just recall their names. Ask yourself: What problem does this architecture solve? Where does processing occur? How does it handle the tension between local responsiveness and centralized power? The items below are grouped by their fundamental design philosophy—master these categories, and you'll be able to reason through any scenario an FRQ throws at you.
These architectures take traditional cloud capabilities and push them toward the network edge. The core principle: maintain cloud-like services while reducing the physical and logical distance data must travel.
Compare: Fog Computing vs. Cloudlets—both extend cloud capabilities to the edge, but fog computing creates a distributed mesh across many nodes while cloudlets concentrate resources in discrete, localized data centers. If an FRQ asks about supporting bandwidth-constrained IoT sensors, fog is your answer; for mobile offloading scenarios, think cloudlets.
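To make the bandwidth argument concrete: a fog node typically aggregates raw sensor readings locally and sends only summaries upstream. Here's a minimal Python sketch of that pattern (the function name, window size, and record fields are illustrative, not taken from any particular fog framework):

```python
from statistics import mean

def aggregate_readings(readings, window=10):
    """Hypothetical fog-node helper: collapse each window of raw sensor
    readings into one summary record, cutting upstream traffic by
    roughly a factor of `window`."""
    summaries = []
    for i in range(0, len(readings), window):
        chunk = readings[i:i + window]
        summaries.append({
            "count": len(chunk),
            "mean": mean(chunk),
            "min": min(chunk),
            "max": max(chunk),
        })
    return summaries

# 100 raw readings become 10 summary records on the cloud uplink.
raw = [20.0 + (i % 7) * 0.1 for i in range(100)]
print(len(aggregate_readings(raw)))  # 10
```

A cloudlet, by contrast, would typically receive the full workload (e.g., an offloaded video frame) rather than a pre-reduced summary, which is why the mesh-style aggregation above is the fog-specific answer for bandwidth-constrained sensors.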
These architectures leverage existing network infrastructure to embed computing capabilities. The key insight: networks aren't just pipes for data—they can be platforms for processing.
Compare: MEC vs. Multi-access Edge Computing—MEC originally stood for Mobile Edge Computing and focuses specifically on mobile/cellular networks, while Multi-access Edge Computing (ETSI's later, broadened form of the term) abstracts across all access network types. On exams, MEC questions typically involve 5G or mobile applications; multi-access questions emphasize heterogeneous connectivity scenarios.
These architectures distribute control and resources across multiple independent nodes. The fundamental principle: resilience and scalability through elimination of single points of failure.
Compare: Peer-to-Peer vs. Distributed Edge Computing—both spread resources across multiple nodes, but peer-to-peer emphasizes device-to-device collaboration without hierarchy, while distributed edge computing typically involves coordinated placement of dedicated edge servers. Think peer-to-peer for crowdsourced computing; distributed for enterprise deployments.
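The "no single point of failure" principle often shows up in practice as deterministic, coordinator-free task placement: every peer computes the same mapping from task to node, so no central scheduler is required. A minimal sketch, with hypothetical names (this is a simplified stand-in for real schemes like consistent hashing or DHTs):

```python
import hashlib

def assign_peer(task_id, peers):
    """Hypothetical peer-to-peer placement: hash the task ID and map it
    onto the peer list. Any node running this same function reaches the
    same answer, so no coordinator (single point of failure) is needed."""
    digest = int(hashlib.sha256(task_id.encode()).hexdigest(), 16)
    return peers[digest % len(peers)]

peers = ["node-a", "node-b", "node-c"]
print(assign_peer("task-42", peers))  # same peer on every node that asks
```

A distributed edge deployment with dedicated servers would more likely use a coordinated placement service instead; the hash-based approach above is what "collaboration without hierarchy" looks like in code.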
These architectures organize edge resources into structured layers or optimize for specific use cases. The design philosophy: match processing location to task requirements.
Compare: Hierarchical vs. IoT Edge Computing—hierarchical architectures organize any edge workload into tiers based on complexity, while IoT edge computing specifically optimizes for the characteristics of sensor data (high volume, simple structure, time-sensitivity). Use hierarchical when discussing general-purpose edge deployments; IoT edge for sensor-specific scenarios.
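The IoT edge pattern described above—discard normal readings locally, forward only anomalies—can be sketched in a few lines of Python (the threshold values are illustrative, not from any standard):

```python
def filter_anomalies(readings, low=18.0, high=25.0):
    """Hypothetical edge-side filter: keep only readings outside the
    normal band, so only anomalies travel to the central system."""
    return [r for r in readings if r < low or r > high]

readings = [21.5, 22.0, 30.2, 21.8, 17.1, 22.3]
print(filter_anomalies(readings))  # [30.2, 17.1]
```

Note how this handles the three sensor-data characteristics listed above: high volume (most readings never leave the device), simple structure (a flat list of numbers), and time-sensitivity (anomalies are flagged immediately, without a cloud round trip).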
| Concept | Best Examples |
|---|---|
| Cloud extension to edge | Fog Computing, Cloudlets |
| Network integration | MEC, Multi-access Edge Computing, Edge-Cloud Hybrid |
| Decentralization & resilience | Peer-to-Peer Edge, Distributed Edge Computing |
| Tiered processing | Hierarchical Edge Computing |
| Domain-specific optimization | IoT Edge Computing, Edge-Centric Computing |
| Mobile/5G applications | MEC, Multi-access Edge Computing |
| Latency minimization | MEC, Edge-Centric Computing, IoT Edge |
| Scalability focus | Edge-Cloud Hybrid, Distributed Edge Computing |
Which two architectures both extend cloud capabilities to the edge but differ in whether resources are concentrated (discrete locations) or distributed (mesh across nodes)?
If an autonomous vehicle needs guaranteed single-digit-millisecond response times while transitioning between cellular and Wi-Fi networks, which architecture best addresses this requirement, and why?
Compare and contrast Peer-to-Peer Edge Computing and Hierarchical Edge Computing in terms of their approach to workload distribution and fault tolerance.
An industrial facility needs to process thousands of sensor readings per second, filter out normal readings locally, and only send anomalies to a central system. Which architecture category best fits this use case, and what specific architecture would you recommend?
FRQ-style prompt: Explain how Edge-Cloud Hybrid Architecture balances the trade-offs between latency and scalability. Provide a specific application scenario where this architecture would outperform a purely edge-centric or purely cloud-centric approach.