When you're working with digital images, you're constantly making tradeoffs between file size and image quality—and the AP exam expects you to understand why different compression methods exist and when to use each one. These algorithms aren't just technical trivia; they represent fundamental concepts in computer science: lossy vs. lossless compression, data redundancy, and how mathematical transformations can reduce information while preserving what matters most.
You're being tested on your ability to analyze how images are stored and transmitted as data, which connects directly to broader themes of data representation, abstraction, and algorithmic efficiency. Don't just memorize which format does what—know the underlying principle each algorithm demonstrates and be ready to explain why JPEG works well for photos but terribly for logos, or how run-length encoding exploits patterns in data.
Lossy formats (JPEG, WebP's lossy mode, HEIF) achieve smaller file sizes by permanently discarding some image data. The key principle: human perception doesn't notice certain details, so they can be removed strategically.
Compare: JPEG vs. WebP—both handle photographic images well, but WebP achieves 25-35% smaller files at equivalent quality and supports transparency. If an FRQ asks about optimizing web performance, WebP is your modern answer.
Lossless formats (PNG, GIF, WebP's lossless mode) reduce file size without losing any original data. The tradeoff: larger files than lossy compression produces, but perfect reconstruction.
Compare: PNG vs. GIF—both are lossless, but PNG supports millions of colors and smooth transparency while GIF is limited to 256 colors with binary transparency. Use GIF only when you need animation; otherwise, PNG wins.
The compression techniques themselves—DCT, RLE, Huffman coding, and LZW—are the mathematical and algorithmic methods that power the formats above. Understanding them helps you explain how compression actually works.
Compare: DCT vs. RLE—DCT transforms data mathematically into frequency components so the least noticeable ones can be discarded (lossy), while RLE simply replaces runs of identical values with counts (lossless). They solve different problems and are often used together in compression pipelines.
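To make the lossless side of that contrast concrete, here is a minimal Python sketch of run-length encoding: each run of identical values becomes a (value, count) pair, and decoding expands the pairs back out. Because nothing is discarded, the round trip is exact. (This illustrates the idea only—it is not the byte-level scheme any particular format uses.)

```python
def rle_encode(data):
    """Run-length encoding: collapse each run of repeated values into a (value, count) pair."""
    encoded = []
    i = 0
    while i < len(data):
        count = 1
        while i + count < len(data) and data[i + count] == data[i]:
            count += 1
        encoded.append((data[i], count))
        i += count
    return encoded

def rle_decode(pairs):
    """Invert the encoding: expand each (value, count) pair back into a run."""
    return [value for value, count in pairs for _ in range(count)]

# A row of logo pixels: long runs of identical colors compress well.
row = ["W"] * 6 + ["B"] * 3 + ["W"] * 6
encoded = rle_encode(row)
print(encoded)                      # [('W', 6), ('B', 3), ('W', 6)]
assert rle_decode(encoded) == row   # lossless: perfect reconstruction
```

Notice why RLE suits logos but not photographs: a photo row rarely contains long runs of identical pixels, so the (value, count) pairs can end up larger than the raw data.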
Compare: Huffman Coding vs. LZW—both are lossless, but Huffman assigns codes based on individual value frequency while LZW identifies and replaces repeated sequences. Think of Huffman as letter-level optimization and LZW as word-level optimization.
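The "word-level" half of that comparison can be sketched as well. This minimal LZW encoder in Python starts with a dictionary of single characters and adds every new sequence it sees, so sequences that repeat later in the input collapse to a single code. (A simplified illustration; real implementations also handle dictionary-size limits and bit-packing, and GIF's LZW variant differs in detail.)

```python
def lzw_encode(text):
    """LZW: replace repeated sequences with dictionary codes (lossless).
    Unlike Huffman coding, which shortens individual frequent symbols,
    LZW shortens whole repeated substrings."""
    dictionary = {chr(c): c for c in range(256)}  # seed with single characters
    next_code = 256
    current = ""
    output = []
    for ch in text:
        candidate = current + ch
        if candidate in dictionary:
            current = candidate          # keep extending the known sequence
        else:
            output.append(dictionary[current])
            dictionary[candidate] = next_code  # learn the new sequence
            next_code += 1
            current = ch
    if current:
        output.append(dictionary[current])
    return output

# 8 input characters shrink to 5 output codes as "AB" and "ABA" are learned.
print(lzw_encode("ABABABAB"))  # [65, 66, 256, 258, 66]
```

The dictionary is rebuilt identically during decoding, so it never has to be transmitted—one reason LZW worked well for early formats like GIF.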
| Concept | Best Examples |
|---|---|
| Lossy compression for photos | JPEG, HEIF, WebP (lossy mode) |
| Lossless compression for graphics | PNG, GIF, WebP (lossless mode) |
| Transparency support | PNG, GIF, WebP, HEIF |
| Animation support | GIF, WebP, HEIF |
| Frequency-domain transformation | DCT (used in JPEG) |
| Pattern-based lossless encoding | RLE, LZW |
| Statistical encoding | Huffman Coding |
| Modern/next-gen formats | WebP, HEIF, JPEG 2000 |
1. Which two compression techniques are both lossless but use fundamentally different approaches—one based on repeated sequences and one based on value frequency?
2. A web developer needs to display a company logo with a transparent background. Why would JPEG be a poor choice, and which format would you recommend instead?
3. Compare and contrast JPEG and PNG: What type of compression does each use, and what types of images is each best suited for?
4. If an FRQ asks you to explain how JPEG achieves smaller file sizes than the original image, which underlying technique (DCT, RLE, or Huffman) would be most important to discuss, and why?
5. WebP and HEIF are both considered "next-generation" formats. What advantages do they share over older formats like JPEG and PNG, and why haven't they completely replaced those legacy formats yet?