Debugging isn't just about fixing broken code—it's about developing a systematic problem-solving mindset that separates competent programmers from struggling ones. In Programming Languages and Techniques II, you're expected to work with increasingly complex systems: data structures, algorithms, object-oriented designs, and multi-file projects. When something breaks (and it will), your ability to efficiently locate and fix the issue determines whether you spend 10 minutes or 10 hours on a single bug.
The strategies below aren't random tips—they represent core diagnostic principles that professional developers use daily. You're being tested not just on whether you can write code, but on whether you can reason about code behavior, trace execution flow, and isolate failures methodically. Don't just memorize these techniques—understand when each approach is most effective and how they complement each other.
Before you can fix a bug, you need to understand exactly when and how it manifests. Consistent reproduction is the foundation of all debugging—without it, you're just guessing.
Compare: Error messages vs. stack traces—both indicate where something went wrong, but error messages describe what failed while stack traces show how execution got there. For complex bugs involving multiple function calls, start with the stack trace to understand flow, then use the error message to understand the specific failure.
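As a minimal sketch (the file name, function names, and line numbers below are hypothetical), here is how a Python traceback pairs the two: the last line is the error message, and the frames above it are the stack trace showing the call path.

```python
# Hypothetical helper functions, used only to illustrate reading a traceback
def load_scores(path):
    with open(path) as f:
        return parse_line(f.readline())

def parse_line(line):
    return int(line.split(",")[2])  # assumes at least three comma-separated fields

# Calling load_scores("scores.csv") on a one-field file prints something like:
#
#   Traceback (most recent call last):
#     File "scores.py", line 12, in <module>
#       load_scores("scores.csv")
#     File "scores.py", line 4, in load_scores
#       return parse_line(f.readline())
#     File "scores.py", line 7, in parse_line
#       return int(line.split(",")[2])
#   IndexError: list index out of range
#
# The last line (the error message) says what failed; the frames above it
# (the stack trace) show how execution got there: module -> load_scores -> parse_line.
```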
Once you've observed the bug, your goal is to narrow down exactly which code is responsible. The faster you can isolate the problem, the faster you can fix it.
Compare: Isolation (commenting out code) vs. binary search debugging—both narrow down bug location, but isolation works well for small, modular code while binary search excels in large files or when you have no hypothesis about the bug's location. If an exam asks about efficient debugging of a 500-line function, binary search is your answer.
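As a rough sketch of binary-search debugging (the pipeline below is hypothetical), you pick a midpoint in the suspect code, check the program's state there, and then repeat on whichever half still misbehaves:

```python
# Hypothetical data pipeline with a bug somewhere in its four steps
def process(records):
    cleaned = [r.strip() for r in records]   # step 1
    parsed = [int(r) for r in cleaned]       # step 2
    # Checkpoint at the midpoint: inspect an intermediate value (or return early).
    # If 'parsed' already looks wrong, the bug is in steps 1-2; otherwise it is in steps 3-4.
    # print("DEBUG parsed:", parsed)
    scaled = [x * 10 for x in parsed]        # step 3
    return sum(scaled) / len(scaled)         # step 4
```

Each checkpoint halves the amount of code you still need to suspect, which is why this approach scales to long functions where commenting out individual blocks would be tedious.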
Sometimes observation isn't enough—you need to actively probe your program's state during execution. These techniques let you see inside your running code.
```python
# Example: Strategic print debugging
def binary_search(arr, target):
    left, right = 0, len(arr) - 1
    while left <= right:
        mid = (left + right) // 2
        print(f"DEBUG: left={left}, right={right}, mid={mid}, arr[mid]={arr[mid]}")
        if arr[mid] == target:
            return mid
        elif arr[mid] < target:
            left = mid + 1
        else:
            right = mid - 1
    return -1
```
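For instance, a call like the one below (sample values chosen purely for illustration) traces how the search bounds narrow toward the answer:

```python
binary_search([1, 3, 5, 7, 9], 7)
# DEBUG: left=0, right=4, mid=2, arr[mid]=5
# DEBUG: left=3, right=4, mid=3, arr[mid]=7
# returns 3
```

Tagging each line with a prefix like DEBUG: makes the output easy to spot and easy to strip out once the bug is fixed.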
`assert` statements that crash immediately when invariants are violated:

```python
def calculate_average(numbers):
    assert len(numbers) > 0, "Cannot calculate average of empty list"
    assert all(isinstance(n, (int, float)) for n in numbers), "All elements must be numeric"
    return sum(numbers) / len(numbers)
```
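A quick look at how these assertions behave (the sample calls are illustrative):

```python
calculate_average([10, 20, 30])  # returns 20.0
calculate_average([])            # raises AssertionError: Cannot calculate average of empty list
```

The assertion fails right where the bad value enters the function, with a readable message, instead of surfacing later as a less obvious ZeroDivisionError.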
Compare: Print statements vs. debugger tools—prints are quick, portable, and work anywhere, but debuggers offer richer inspection without modifying code. Use prints for quick checks or when a debugger isn't available; use debuggers for complex state inspection or when you need to pause and explore interactively.
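As a minimal sketch of the debugger route in Python (the merge_intervals function is just a placeholder), the built-in breakpoint() call drops you into pdb, where you can inspect variables and step line by line without adding any print statements:

```python
def merge_intervals(intervals):
    intervals.sort()
    merged = [intervals[0]]
    for start, end in intervals[1:]:
        breakpoint()  # pauses here; in pdb, try 'p merged', 'p start, end', 'n' to step, 'c' to continue
        if start <= merged[-1][1]:
            merged[-1] = (merged[-1][0], max(merged[-1][1], end))
        else:
            merged.append((start, end))
    return merged
```

IDE debuggers (for example, those in PyCharm or VS Code) give you the same pause-and-inspect workflow through graphical breakpoints.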
Bugs don't appear from nowhere—they're introduced by changes. Understanding what changed and when is often the fastest path to a fix.
- `git diff` to see exactly what changed between working and broken versions
- `git bisect` to automatically binary-search through commit history and find the exact commit that introduced a bug

```bash
# Git bisect example
git bisect start
git bisect bad            # Current version is broken
git bisect good abc123    # This old commit worked
# Git will checkout commits for you to test
git bisect good           # or 'bad' based on your test
# Repeat until the offending commit is found
```
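If you have an automated test that detects the bug, `git bisect run <command>` can execute it at each step and mark commits good or bad from the command's exit code, so the search runs hands-free; `git bisect reset` returns you to your original branch when you're done.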
Compare: Checking recent changes vs. git bisect—manual review works when you suspect a specific recent change, while git bisect automates the search across many commits. For bugs that appeared "sometime in the last month," git bisect can save hours of manual investigation.
Some bugs hide in plain sight. Careful, methodical review catches issues that scattered debugging misses.
Watch especially for edge cases such as null/None, empty strings "", and zero values, which often behave unexpectedly (a quick sketch follows the comparison below).

Compare: Systematic review vs. rubber duck debugging—both involve careful examination, but review is silent and visual while rubber ducking forces verbalization. Use systematic review for syntax and logic errors; use rubber ducking when you're stuck and need to break out of your current mental model.
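A minimal sketch of edge-case checks, using a hypothetical normalize() helper and plain assert statements as throwaway tests:

```python
def normalize(values):
    """Scale a list of numbers so they sum to 1 (hypothetical example function)."""
    if not values:                      # empty list: nothing to normalize
        return []
    total = sum(values)
    if total == 0:                      # all-zero input would otherwise divide by zero
        return [0.0 for _ in values]
    return [v / total for v in values]

# Throwaway edge-case checks
assert normalize([]) == []
assert normalize([0, 0]) == [0.0, 0.0]
assert normalize([2, 2]) == [0.5, 0.5]
```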
You're not debugging in a vacuum—leverage the knowledge of others who've encountered similar issues.
| Concept | Best Strategies |
|---|---|
| First Response | Reproduce consistently, read error messages, analyze stack traces |
| Narrowing Down | Isolate the problem, binary search debugging, break into smaller parts |
| Runtime Inspection | Print statements, debugger tools, assertions |
| Change Analysis | Check recent changes, version control history, git bisect |
| Careful Examination | Systematic review, rubber duck debugging, edge case testing |
| Prevention | Assertions, error handling, frequent commits |
| External Help | Documentation, community resources, code review |
You've introduced a bug somewhere in the last 50 commits but aren't sure which one. Which two strategies would most efficiently locate the problematic commit, and how do they differ in approach?
Compare and contrast print statement debugging with using an IDE debugger. In what scenario would you choose prints over breakpoints, and vice versa?
A function works correctly for most inputs but fails for an empty list. Which debugging strategy category does this fall under, and what specific technique should you apply?
Your code throws a NullPointerException with a 15-line stack trace. Explain the systematic process you would use to interpret this trace and locate the root cause.
FRQ-style: You're debugging a recursive function that works for small inputs but causes a stack overflow for large ones. Describe three different debugging strategies you would apply, explaining why each is appropriate for this specific type of bug.