
💻 Information Systems

Fundamental System Development Life Cycle Phases


Why This Matters

The System Development Life Cycle (SDLC) isn't just a checklist—it's the conceptual framework that explains how and why information systems succeed or fail. When you're tested on SDLC, you're really being asked to demonstrate understanding of project management principles, requirements engineering, quality assurance, and continuous improvement. These phases show up in every real-world IT project, from building a simple database to deploying enterprise-wide systems.

Here's what matters for your exam: you need to understand the logical flow between phases and recognize that each phase addresses a specific type of risk. Planning mitigates scope creep, analysis prevents building the wrong system, testing catches defects before users do. Don't just memorize the phase names—know what problem each phase solves and how skipping or rushing a phase creates downstream failures.


Foundation Phases: Defining the Problem

Before any code gets written or any system gets designed, organizations must establish what they're building and why. These phases prevent the most expensive mistake in systems development: building something nobody needs.

Planning

  • Project scope and objectives—establishes boundaries that prevent scope creep and ensures alignment with strategic business goals
  • Stakeholder identification and initial requirements gathering creates the foundation for all subsequent phases; missing a key stakeholder here causes costly rework later
  • Risk assessment and mitigation planning addresses potential failures proactively, including budget overruns, timeline delays, and technical obstacles

Analysis

  • Requirements analysis distinguishes between what users say they want and what they actually need—a critical skill tested in FRQ scenarios
  • Gap analysis compares current-state systems against desired functionality to identify specific improvements needed
  • Functional vs. non-functional requirements documentation separates what the system does from how well it performs (speed, security, scalability)
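One way to keep the two categories distinct in a requirements document is to tag each entry explicitly. A minimal sketch in Python — the requirement IDs and texts here are invented for illustration, not from any real specification:

```python
from dataclasses import dataclass

@dataclass
class Requirement:
    req_id: str
    description: str
    kind: str  # "functional" (what the system does) or "non-functional" (how well it performs)

# Hypothetical requirements for an order-entry system
requirements = [
    Requirement("FR-1", "The system shall let a clerk create a customer order.", "functional"),
    Requirement("FR-2", "The system shall email an invoice when an order ships.", "functional"),
    Requirement("NFR-1", "Order search shall return results within 2 seconds.", "non-functional"),
    Requirement("NFR-2", "The system shall support 500 concurrent users.", "non-functional"),
]

functional = [r for r in requirements if r.kind == "functional"]
non_functional = [r for r in requirements if r.kind == "non-functional"]
print(len(functional), len(non_functional))  # 2 2
```

Note how the non-functional entries carry measurable targets (2 seconds, 500 users) — that measurability is what makes them testable later in the life cycle.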

Compare: Planning vs. Analysis—both gather requirements, but planning focuses on feasibility and scope while analysis dives into detailed specifications. If an FRQ asks about "determining whether to proceed with a project," that's planning; if it asks about "documenting user needs," that's analysis.


Construction Phases: Building the Solution

Once requirements are locked, the focus shifts to creating the system. These phases transform abstract requirements into concrete, functional technology through systematic design and coding practices.

Design

  • System architecture defines how components interact—this is where decisions about centralized vs. distributed systems, cloud vs. on-premise, and integration points get made
  • User interface (UI) design directly impacts adoption rates; poor usability is a leading cause of system rejection
  • Data models and database design establish how information flows and is stored, affecting system performance and reporting capabilities
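To make the data-modeling bullet concrete, here is a minimal two-table design using Python's built-in sqlite3 module. The customer/order schema is invented for illustration; a real design phase would derive it from the documented requirements:

```python
import sqlite3

# In-memory database; the schema below stands in for a design-phase data model
conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE customer (
        customer_id INTEGER PRIMARY KEY,
        name        TEXT NOT NULL
    );
    CREATE TABLE customer_order (
        order_id    INTEGER PRIMARY KEY,
        customer_id INTEGER NOT NULL REFERENCES customer(customer_id),
        total       REAL NOT NULL
    );
""")
conn.execute("INSERT INTO customer (customer_id, name) VALUES (1, 'Acme Corp')")
conn.execute("INSERT INTO customer_order (order_id, customer_id, total) VALUES (10, 1, 99.50)")

# The foreign key captures how information flows: every order belongs to a customer
row = conn.execute("""
    SELECT c.name, o.total
    FROM customer_order o JOIN customer c ON c.customer_id = o.customer_id
""").fetchone()
print(row)  # ('Acme Corp', 99.5)
```

Decisions like this foreign-key relationship are exactly what the design phase locks in, because changing them after implementation means rewriting queries and migrating data.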

Implementation

  • Development execution follows design specifications; deviations here introduce technical debt and maintenance headaches
  • System integration and data migration connect new systems to existing infrastructure—often the riskiest technical activities in the entire SDLC
  • User training bridges the gap between a technically functional system and one that actually gets used; change management is as important as code quality

Compare: Design vs. Implementation—design specifies how the system will be built (architecture, wireframes, data models), while implementation actually builds and deploys it. Exam questions often test whether you can identify which phase a specific activity belongs to (creating wireframes = design; writing code = implementation).


Quality Assurance Phase: Validating the Solution

Testing isn't just "checking if it works"—it's a systematic process of verifying that the system meets specifications and validating that it solves the original business problem.

Testing

  • Testing strategy hierarchy—unit testing validates individual components, integration testing checks component interactions, system testing evaluates end-to-end functionality, and user acceptance testing (UAT) confirms business requirements are met
  • Test case documentation creates repeatable, traceable validation procedures; undocumented testing is essentially worthless for audit purposes
  • Performance and load testing ensures the system handles real-world conditions—peak usage times, large data volumes, concurrent users
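The base of the testing hierarchy above can be sketched with Python's built-in unittest module. The `apply_discount` function and its business rule are invented for illustration — the point is that unit tests validate one component in isolation against its specification, before integration or system testing ever runs:

```python
import unittest

def apply_discount(subtotal: float, member: bool) -> float:
    """Hypothetical business rule: members get 10% off orders of $100 or more."""
    if member and subtotal >= 100:
        return round(subtotal * 0.90, 2)
    return subtotal

class ApplyDiscountUnitTests(unittest.TestCase):
    # Documented, repeatable test cases: each method is a traceable validation step
    def test_member_large_order_gets_discount(self):
        self.assertEqual(apply_discount(200.0, member=True), 180.0)

    def test_small_order_pays_full_price(self):
        self.assertEqual(apply_discount(50.0, member=True), 50.0)

    def test_non_member_pays_full_price(self):
        self.assertEqual(apply_discount(200.0, member=False), 200.0)

# Run the suite programmatically and report the result
suite = unittest.defaultTestLoader.loadTestsFromTestCase(ApplyDiscountUnitTests)
result = unittest.TextTestRunner(verbosity=0).run(suite)
print(result.wasSuccessful())  # True
```

Integration and system tests would sit above this, exercising `apply_discount` inside a full checkout flow; UAT would then have business users confirm the discount rule matches what was actually promised to customers.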

Compare: System Testing vs. User Acceptance Testing—system testing is performed by IT teams against technical specifications, while UAT is performed by business users against real-world scenarios. Both must pass before go-live, but they catch different types of defects.


Sustainment Phases: Ensuring Long-Term Value

A system's launch is just the beginning. These phases ensure the investment continues delivering value and adapts to changing business needs over time.

Maintenance and Support

  • Ongoing user support resolves issues and maintains productivity; support ticket patterns often reveal design flaws or training gaps
  • System updates fall into three standard categories: bug fixes (corrective), changes that keep the system working in a changed environment such as a new OS or regulation (adaptive), and enhancements like new features or performance improvements (perfective)
  • Performance monitoring uses metrics and dashboards to detect degradation before users experience problems
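A simple way to detect degradation before users experience it is to compare a rolling response-time average against an alert threshold. A minimal sketch — the window size, threshold, and sample values are invented for illustration:

```python
from collections import deque

class ResponseTimeMonitor:
    """Tracks a rolling average of response times and flags degradation."""
    def __init__(self, window: int, threshold_ms: float):
        self.samples = deque(maxlen=window)
        self.threshold_ms = threshold_ms

    def record(self, ms: float) -> None:
        self.samples.append(ms)

    def degraded(self) -> bool:
        # Alert only once the window is full, so one slow request doesn't page anyone
        if len(self.samples) < self.samples.maxlen:
            return False
        return sum(self.samples) / len(self.samples) > self.threshold_ms

monitor = ResponseTimeMonitor(window=3, threshold_ms=500.0)
for ms in [120.0, 130.0]:
    monitor.record(ms)
print(monitor.degraded())  # False: window not yet full
for ms in [900.0, 950.0, 980.0]:
    monitor.record(ms)
print(monitor.degraded())  # True: rolling average now exceeds 500 ms
```

Production monitoring stacks do the same thing at scale with dashboards and alerting rules, but the underlying logic is this kind of metric-versus-threshold comparison.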

Evaluation

  • Performance assessment compares actual outcomes against the objectives established during planning—closing the accountability loop
  • Post-implementation review captures lessons learned while they're fresh; organizations that skip this phase repeat the same mistakes
  • Stakeholder feedback informs the next iteration or project, connecting evaluation back to planning in a continuous improvement cycle

Compare: Maintenance vs. Evaluation—maintenance is operational (keeping the system running), while evaluation is strategic (determining if the system delivers value). Both happen post-implementation, but they serve different purposes and involve different stakeholders.


Quick Reference Table

  • Risk Mitigation: Planning (risk assessment), Testing (defect detection)
  • Requirements Engineering: Analysis (detailed requirements), Planning (initial scope)
  • Technical Construction: Design (architecture), Implementation (development)
  • Quality Assurance: Testing (all levels), Evaluation (performance review)
  • Change Management: Implementation (training), Maintenance (user support)
  • Continuous Improvement: Evaluation (lessons learned), Maintenance (updates)
  • Stakeholder Engagement: Planning (identification), Analysis (validation), Evaluation (feedback)

Self-Check Questions

  1. Which two phases both involve gathering requirements, and how do they differ in scope and depth?

  2. A company discovers after launch that their new system can't handle peak holiday traffic. Which phase failed, and what specific activity should have caught this problem?

  3. Compare and contrast system testing and user acceptance testing—who performs each, what do they validate, and why are both necessary?

  4. If a post-implementation review reveals that the system doesn't align with business goals, which earlier phase likely had deficiencies, and what activities should have prevented this?

  5. An FRQ describes a scenario where users refuse to adopt a new system despite it being technically functional. Which phase activities address this problem, and what should have been done differently?