The System Development Life Cycle (SDLC) is the structured framework that guides how information systems are planned, built, and maintained, and it helps explain why projects succeed or fail. When you're tested on the SDLC, you're really being asked to demonstrate understanding of project management principles, requirements engineering, quality assurance, and continuous improvement. These phases show up in every real-world IT project, from building a simple database to deploying enterprise-wide systems.
For your exam, you need to understand the logical flow between phases and recognize that each phase addresses a specific type of risk. Planning mitigates scope creep. Analysis prevents building the wrong system. Testing catches defects before users do. Don't just memorize the phase names. Know what problem each phase solves and how skipping or rushing a phase creates downstream failures.
Before any code gets written or any system gets designed, organizations must establish what they're building and why. These phases prevent the most expensive mistake in systems development: building something nobody needs.
Planning is about answering one big question: should we do this project, and if so, what are its boundaries?
Analysis shifts from "should we build it?" to "what exactly should it do?" This is where vague business needs get translated into specific, documented requirements.
Compare: Planning vs. Analysis: both gather requirements, but planning focuses on feasibility and scope while analysis dives into detailed specifications. If a question asks about "determining whether to proceed with a project," that's planning. If it asks about "documenting user needs," that's analysis.
Once requirements are locked, the focus shifts to creating the system. These phases transform abstract requirements into concrete, functional technology through systematic design and development practices.
Design is the blueprint phase. No code is written yet, but every major technical decision gets made here.
Implementation is where the system actually gets built and deployed. It covers coding, integration, and preparing users for the transition.
Compare: Design vs. Implementation: design answers "what will we build, and how?" while implementation actually builds and deploys it. Exam questions often test whether you can identify which phase a specific activity belongs to. Creating wireframes = design. Writing code = implementation. Training users = implementation.
Testing isn't just "checking if it works." It's a systematic process of verifying that the system meets specifications and validating that it solves the original business problem. Those two words matter: verification asks "did we build it right?" and validation asks "did we build the right thing?"
Testing follows a hierarchy, where each level catches different types of problems:

- Unit testing checks individual components in isolation.
- Integration testing verifies that components work together correctly.
- System testing validates the complete system against technical specifications.
- User acceptance testing (UAT) confirms the system supports real business workflows.
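The bottom rung of that hierarchy can be sketched in a few lines. Here's a minimal unit-test example in Python; `apply_discount` is a hypothetical function invented purely for illustration, not from any real system.

```python
# A minimal sketch of unit testing: one component, checked in isolation.
# `apply_discount` is a hypothetical function used only for illustration.

def apply_discount(price: float, percent: float) -> float:
    """Return price reduced by percent, rounded to cents."""
    if not 0 <= percent <= 100:
        raise ValueError("percent must be between 0 and 100")
    return round(price * (1 - percent / 100), 2)

def test_typical_discount():
    # Verification: did we build it right? Check a known input/output pair.
    assert apply_discount(100.0, 25) == 75.0

def test_rejects_invalid_percent():
    # Invalid input should fail loudly, not produce a silently wrong price.
    try:
        apply_discount(100.0, 150)
    except ValueError:
        pass
    else:
        raise AssertionError("expected ValueError for percent > 100")

test_typical_discount()
test_rejects_invalid_percent()
```

Note how each test targets exactly one behavior: that isolation is what lets unit tests pinpoint defects before integration and system testing ever run.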
Beyond functional testing, performance and load testing verify that the system handles real-world conditions: peak usage times, large data volumes, and concurrent users.
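The core idea of a load test is simple: fire many simulated users at once and check that latency stays within a target. Here's a toy sketch using Python threads; `handle_request` is a stand-in invented for illustration, and real load tests would drive an actual endpoint with a purpose-built tool.

```python
# A minimal load-test sketch: simulate concurrent users and check that
# every request meets a latency target. `handle_request` is a stand-in.

import time
from concurrent.futures import ThreadPoolExecutor

def handle_request(user_id: int) -> float:
    """Pretend request handler; returns its own latency in seconds."""
    start = time.perf_counter()
    time.sleep(0.01)  # stand-in for real work (query, render, etc.)
    return time.perf_counter() - start

def load_test(concurrent_users: int, max_latency_s: float) -> bool:
    """Fire requests from many simulated users at once; pass only if
    the slowest request still meets the service-level target."""
    with ThreadPoolExecutor(max_workers=concurrent_users) as pool:
        latencies = list(pool.map(handle_request, range(concurrent_users)))
    return max(latencies) <= max_latency_s

print(load_test(concurrent_users=50, max_latency_s=1.0))
```

The exam-relevant point: this is exactly the check that would have caught the "can't handle peak holiday traffic" failure described in the review questions, and it belongs in the testing phase, before launch.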
Test case documentation creates repeatable, traceable validation procedures. Undocumented testing is essentially worthless for audit purposes because you can't prove what was tested or demonstrate compliance.
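To make "traceable" concrete, a documented test case ties an ID, the requirement it covers, the steps, and the expected result into one auditable record. Here's an illustrative sketch; the field names and the `TC-042`/`REQ-7` identifiers are assumptions invented for this example, not a standard schema.

```python
# A sketch of test case documentation: each case records what was tested,
# against which requirement, and what happened, so an auditor can verify
# coverage. All field names and IDs here are illustrative assumptions.

from dataclasses import dataclass
from typing import List, Optional

@dataclass
class TestCase:
    case_id: str               # unique, citable identifier
    requirement: str           # requirement this case traces back to
    steps: List[str]           # repeatable procedure
    expected: str              # pass criterion, stated up front
    actual: str = ""           # filled in at execution time
    passed: Optional[bool] = None

def execute(case: TestCase, actual_result: str) -> TestCase:
    """Record the outcome so the run is traceable, not just a green light."""
    case.actual = actual_result
    case.passed = (actual_result == case.expected)
    return case

login_case = TestCase(
    case_id="TC-042",
    requirement="REQ-7: lock account after 3 failed login attempts",
    steps=["Enter wrong password 3 times", "Attempt a 4th login"],
    expected="Account locked; lockout message shown",
)
execute(login_case, "Account locked; lockout message shown")
print(login_case.case_id, login_case.passed)
```

The requirement-to-case link is what auditors look for: it proves each requirement was validated, which an undocumented ad-hoc test run cannot do.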
Compare: System Testing vs. User Acceptance Testing: system testing is performed by IT teams against technical specifications, while UAT is performed by business users against real-world scenarios. Both must pass before go-live, but they catch different types of defects. A system can pass every technical test and still fail UAT because it doesn't match how users actually work.
A system's launch is just the beginning. These phases ensure the investment continues delivering value and adapts to changing business needs over time.
Once a system is live, it needs ongoing care. Maintenance keeps the system running and relevant.
Evaluation zooms out from day-to-day operations to ask the bigger question: is this system actually delivering the value we expected?
Compare: Maintenance vs. Evaluation: maintenance is operational (keeping the system running day to day), while evaluation is strategic (determining if the system delivers business value). Both happen post-implementation, but they serve different purposes and involve different stakeholders.
| Concept | Best Examples |
|---|---|
| Risk Mitigation | Planning (risk assessment), Testing (defect detection) |
| Requirements Engineering | Analysis (detailed requirements), Planning (initial scope) |
| Technical Construction | Design (architecture), Implementation (development) |
| Quality Assurance | Testing (all levels), Evaluation (performance review) |
| Change Management | Implementation (training), Maintenance (user support) |
| Continuous Improvement | Evaluation (lessons learned), Maintenance (updates) |
| Stakeholder Engagement | Planning (identification), Analysis (validation), Evaluation (feedback) |
Which two phases both involve gathering requirements, and how do they differ in scope and depth?
A company discovers after launch that its new system can't handle peak holiday traffic. Which phase failed, and what specific activity should have caught this problem?
Compare and contrast system testing and user acceptance testing. Who performs each, what do they validate, and why are both necessary?
If a post-implementation review reveals that the system doesn't align with business goals, which earlier phase likely had deficiencies, and what activities should have prevented this?
A scenario describes users refusing to adopt a new system despite it being technically functional. Which phase activities address this problem, and what should have been done differently?