Computing innovations don't exist in a vacuum—they reshape societies, economies, and individual lives in ways that creators often can't predict. The AP exam tests your ability to analyze these impacts critically, recognizing that the same innovation can be both beneficial and harmful depending on context, perspective, and implementation. You're being tested on your understanding of legal frameworks, algorithmic accountability, privacy rights, and the digital divide—not just what these concepts mean, but how they play out in real-world scenarios.
When you encounter ethical issues on the exam, think beyond simple "good vs. bad" framings. The College Board wants you to demonstrate nuanced reasoning: How do intellectual property protections balance creator rights against public access? Why might algorithmic bias persist even when developers have good intentions? Don't just memorize definitions—know what principle each issue illustrates and be ready to apply that understanding to unfamiliar examples.
Personal data has become one of the most valuable commodities in the digital economy, creating tension between innovation and individual rights. The core principle here is informed consent—users should understand and control how their information is collected, used, and shared.
Compare: Corporate data collection vs. government surveillance—both involve gathering personal information without full user awareness, but they differ in purpose (profit vs. security) and legal frameworks. FRQs often ask you to evaluate tradeoffs between security benefits and privacy costs.
Algorithms increasingly make decisions that affect people's lives—from loan approvals to content recommendations. The key insight is that algorithms reflect the biases present in their training data and the assumptions of their creators, even when no one intends harm.
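To see how bias can emerge with no malicious rule anywhere in the code, consider a minimal sketch with hypothetical data: a "model" that simply learns historical approval rates per group will reproduce past disparities, because the disparity lives in the training data, not in any line of logic.

```python
# Hypothetical historical loan decisions: (group, approved).
# The skew here is invented purely for illustration.
history = [
    ("A", True), ("A", True), ("A", True), ("A", False),
    ("B", True), ("B", False), ("B", False), ("B", False),
]

def learned_approval_rate(group):
    # "Training" is just counting past outcomes for the group.
    outcomes = [approved for g, approved in history if g == group]
    return sum(outcomes) / len(outcomes)

def approve(group, threshold=0.5):
    # The model approves whoever historically got approved more often,
    # so bias in the training data becomes bias in the output.
    return learned_approval_rate(group) >= threshold

print(approve("A"))  # True  (historical rate 0.75)
print(approve("B"))  # False (historical rate 0.25)
```

No rule here says "treat group B worse," yet the outcomes are unequal, which is exactly why good intentions alone don't prevent algorithmic bias.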
Compare: Algorithmic bias vs. AI accountability—bias is about what goes wrong (unfair outcomes), while accountability is about who answers for it when things go wrong. If an FRQ asks about harmful effects of computing innovations, these concepts often work together.
Digital technology makes copying and sharing trivially easy, creating unprecedented challenges for protecting creators while ensuring public access to knowledge. The tension here is between incentivizing innovation through ownership rights and enabling the free flow of information that drives further innovation.
Compare: Traditional copyright vs. Creative Commons—both rest on copyright law and protect creators, but copyright defaults to "all rights reserved" while CC licenses default to sharing with conditions. Know specific license types: CC BY requires attribution, and software licenses like the GPL require derivative works to remain open source.
Technology's benefits aren't distributed equally, and computing innovations can either bridge or widen existing social gaps. The underlying principle is that access to technology increasingly determines access to opportunity—in education, employment, healthcare, and civic participation.
Compare: Digital divide vs. accessibility—both concern who can use technology, but digital divide focuses on access (having devices and connectivity) while accessibility focuses on usability (whether technology works for people with different abilities). Both are equity issues with legal dimensions.
The internet has transformed how people communicate, organize, and form opinions, creating new categories of harm and benefit that didn't exist before. The key concept is emergent behavior—large-scale effects that arise from millions of individual interactions in ways no one designed or predicted.
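A tiny simulation (with made-up numbers) shows how emergent harm can arise from one innocuous rule. Here a feed is ranked purely by engagement; if false content happens to draw more clicks, the ranking amplifies it even though no developer wrote a "spread misinformation" rule.

```python
# Hypothetical posts; click counts are invented for illustration.
posts = [
    {"title": "Careful fact-check",  "clicks": 120, "false": False},
    {"title": "Sensational rumor",   "clicks": 900, "false": True},
    {"title": "Local news update",   "clicks": 300, "false": False},
]

def rank_feed(posts):
    # The only rule in the system: show the most-clicked posts first.
    return sorted(posts, key=lambda p: p["clicks"], reverse=True)

feed = rank_feed(posts)
print(feed[0]["title"])  # "Sensational rumor" tops the feed
```

The harmful outcome is emergent: it comes from the interaction between an engagement-maximizing rule and millions of individual clicks, not from any intent encoded in the algorithm.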
Compare: Misinformation spread vs. targeted advertising—both exploit algorithmic amplification and user data, but one spreads false information while the other manipulates purchasing or voting behavior. Both illustrate how computing innovations can have unintended harmful effects at scale.
Digital systems are only as valuable as they are trustworthy, and security vulnerabilities can undermine both individual safety and societal confidence in technology. The principle here is that security is not just a technical problem but an ethical obligation—developers have responsibility to anticipate and mitigate threats.
Compare: Data breaches vs. hacking—breaches are the outcome (information exposed), while hacking is the method (unauthorized access). Security measures aim to prevent hacking; privacy regulations address what happens after breaches occur.
| Concept | Best Examples |
|---|---|
| Privacy and consent | GDPR, data collection practices, surveillance |
| Algorithmic accountability | Bias in training data, explainable AI, fairness audits |
| Intellectual property | Copyright, fair use, DMCA |
| Open access frameworks | Creative Commons, GPL, MIT License, open source |
| Digital equity | Digital divide, accessibility/ADA, broadband access |
| Dual-use effects | Automation (efficiency vs. job loss), AI (benefits vs. risks) |
| Emergent harms | Misinformation, filter bubbles, viral content |
| Security obligations | Vulnerability assessment, responsible disclosure |
Compare and contrast algorithmic bias and the digital divide. How do both create inequitable outcomes, and what distinguishes their causes?
Which two ethical issues both involve tension between protecting individual rights and enabling broader access or security? Explain the tradeoff each represents.
A social media platform's recommendation algorithm increases user engagement but also amplifies misinformation. Using the concept of emergent behavior, explain why this outcome might occur even if developers didn't intend it.
How do Creative Commons licenses and traditional copyright represent different approaches to the same underlying tension in intellectual property? When might a creator choose each option?
An FRQ asks you to evaluate a computing innovation's effects on different stakeholders. Using automation as your example, identify one beneficial effect and one harmful effect, and explain how the same feature of the innovation produces both outcomes.