2.6 MC Answers and Review

7 min read • March 13, 2023

Answers and Review for Multiple Choice Practice on Data

https://cdn.pixabay.com/photo/2019/06/17/19/48/source-4280758_1280.jpg

Image courtesy of Pixabay

STOP ⛔ Before you look at the answers, make sure you gave this practice quiz a try so you can assess your understanding of the concepts covered in Big Idea 2. Click here for the practice questions: AP CS: Principles Big Idea 2 Multiple Choice Questions.

Facts about the test: The AP CS: Principles exam has 70 multiple-choice questions, and you will be given 2 hours (120 minutes) to complete the section. That means it should take you around 10 minutes to complete every 6 questions. The following questions were not written by College Board and, although they cover information outlined in the AP Computer Science Principles Course and Exam Description, the formatting on the exam may be different.


A. Filter videos to those recorded within the city

B. Filter videos to those recorded in the last 3 months

C. Filter videos to those recorded in a particular city

D. Filter videos to those recorded with his favorite camera

Explanation: The metadata stores only the date, time, location, and device used for each video recording. Anything not captured in the metadata cannot be used as a filter. Read this guide about Extracting Info from Data!


2. This type of data makes working with other data easier, allowing the user to sort or locate specific information. It is also referred to as "data about data". What type of data is this?

A. Quantitative Data

B. Programming Data

C. Metadata

D. Machine Data

Explanation: The word "meta" means to refer to self or to the conventions of its genre. Hence, metadata is data about data. Read this guide about Extracting Info from Data!
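Metadata's value is that it lets you sort and locate items without examining the underlying data itself. A minimal sketch in Python, using hypothetical video records (the filenames, dates, and fields here are invented for illustration):

```python
# Hypothetical example: each video file carries metadata (data about the data).
# Sorting on a metadata field lets us locate items without opening the videos.
videos = [
    {"file": "beach.mp4",  "date": "2023-03-01", "location": "Miami",  "device": "GoPro"},
    {"file": "park.mp4",   "date": "2023-01-15", "location": "Austin", "device": "iPhone"},
    {"file": "street.mp4", "date": "2023-02-20", "location": "Miami",  "device": "iPhone"},
]

# Sort by the "date" metadata field (ISO-format dates sort correctly as strings).
by_date = sorted(videos, key=lambda v: v["date"])
print([v["file"] for v in by_date])  # oldest first: ['park.mp4', 'street.mp4', 'beach.mp4']
```

The videos themselves could be gigabytes each, but the sort only touches the small metadata records.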


3. A coffee shop is interested in learning about the beverage preferences of coffee drinkers living near the coffee shop and intends to use survey data to help decide which new beverage options to add to the menu. Which of the following is LEAST likely to be part of the process used to analyze the data?

A. Filtering data to view responses based on gender.

B. Cleaning data to remove inconsistencies.

C. Filtering data to view responses based on age.

D. Cleaning up data visualization to remove unwanted patterns.

Explanation: We don't manipulate data to remove unwanted patterns; when we analyze data, we analyze what we have, not what we would like it to show. Cleaning data is a process that makes the data uniform without changing its meaning. Removing unwanted patterns from the data, by contrast, would most likely change the meaning of the data when analyzed. Read this guide about Extracting Info from Data!


4. An ornithologist is interested in learning more about the different kinds of birds living in different areas of the state he lives in. The ornithologist creates an app that allows residents of the state to photograph birds in their area using a smartphone and record the date, time, and location of the photograph. Afterwards the ornithologist will analyze the data to try to determine where different kinds of birds live in the state. Which of the following does this situation best demonstrate?

A. Crowd Sourcing

B. Crowd funding

C. Citizen Science

D. Open Data

Explanation: Citizen Science can be defined as scientific research conducted in whole or in part by distributed individuals, many of whom may not be scientists, who contribute relevant data to research using their own computing devices.

Read this guide about crowdsourcing and other methods of collection!


5. A local health department decides to publicize data it has collected about the spread of COVID-19 around the city. The data is freely available for all to use and analyze in the hopes that it is possible to identify more efficient strategies to avoid the spread of the virus. Which of the following does this situation best demonstrate?

A. Open Data

B. Citizen Science

C. Crowd funding

D. Machine Data

Explanation: Open data is research data that is freely available on the Internet for anyone to download, modify, and distribute without any legal or financial restrictions.

Read this guide about crowdsourcing and other methods of collection!


6. A videographer stores videos on his laptop. In this case the videos are considered the data. Each video includes metadata such as:

- Date: the date the video was shot.
- Time: the time the video was shot.
- Location: the location the video was shot.
- Device: the camera the video was recorded with.

Filtering the data to analyze specific portions of the metadata would be considered what?

A. Machine Data

B. Data Filtering

C. Crowd funding

D. Crowd Sourcing

Explanation: Data filtering is choosing a smaller subset of a data set to use for analysis, for example by eliminating or keeping only certain rows in a table.

Read this guide about using programs with data!
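Filtering is easy to picture in code: keep only the records that match a condition and analyze the smaller subset. A short sketch using made-up video records (the field names and values are hypothetical):

```python
# Hypothetical example: filtering keeps only the rows of a data set that
# match a condition, producing a smaller subset for analysis.
videos = [
    {"file": "beach.mp4",  "location": "Miami",  "device": "GoPro"},
    {"file": "park.mp4",   "location": "Austin", "device": "iPhone"},
    {"file": "street.mp4", "location": "Miami",  "device": "iPhone"},
]

# Keep only the videos whose location metadata is "Miami".
miami_only = [v for v in videos if v["location"] == "Miami"]
print(len(miami_only))  # 2
```

Note that this only works because "location" exists in the metadata; as question 1 showed, you cannot filter on information the metadata never recorded.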


7. Data showing that individuals who smoked 5 packs of cigarettes a week all developed lung cancer would be an example of what?

A. Machine Data

B. Correlation

C. Open Data

D. Citizen Science

Explanation: Correlation is a statistical relationship between two pieces of data; it describes how one variable changes along with another. Keep in mind that correlation alone does not prove that one variable causes the other.

Read this guide about using programs with data!
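Correlation can be measured numerically; the Pearson correlation coefficient ranges from -1 to 1, where values near 1 mean the two variables rise together. A small sketch with hypothetical, invented numbers (not real medical data):

```python
import math

def pearson(xs, ys):
    """Pearson correlation coefficient between two equal-length lists."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Hypothetical data: packs smoked per week vs. an invented health-risk score.
packs = [0, 1, 2, 3, 4, 5]
risk  = [1, 2, 3, 4, 5, 6]
print(round(pearson(packs, risk), 2))  # 1.0 — a perfect positive correlation
```

A coefficient of 1.0 says the two lists move together exactly; it still does not, by itself, say that one caused the other.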


8. Providing users with read-only access to data is an example of

A. Security

B. Privacy

C. Encryption

D. Filtering

Explanation: Security means giving each user the appropriate level of access to data or software functionality.

Read this guide about Safe Computing!


9. Which data compression technique comes at the expense of losing data?

A. Classification

B. Filtering

C. Lossless

D. Lossy

Explanation: Lossy data compression involves the loss of some data in the compression process. The original data that is lost can never be restored, but this technique offers the greatest compression.

Read this guide about Data Compression!
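A toy way to see what "lossy" means: throw away precision when storing values. This sketch is a simplified stand-in for real lossy formats like JPEG or MP3, which discard detail in far more sophisticated ways:

```python
# Toy sketch of lossy compression: store sample values at lower precision.
samples = [0.12, 0.57, 0.33, 0.91]

# "Compress" by rounding each sample to one decimal place.
# The discarded digits are gone for good.
compressed = [round(s, 1) for s in samples]

print(compressed)             # [0.1, 0.6, 0.3, 0.9]
print(compressed == samples)  # False — the lost precision cannot be recovered
```

The rounded list takes fewer digits to store, but no decompression step can reconstruct the original samples, which is exactly the trade-off lossy compression makes.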


10. Information about the location of a photograph within the data of a picture is

A. Content

B. Open Data

C. Metadata

D. Maxdata

Explanation: The word "meta" means to refer to self or to the conventions of its genre. Hence, metadata is data about data. Read this guide about Extracting Info from Data!


11. The ability to add or remove resources to store large data sets is called

A. Filtering

B. Metadata

C. Scalability

D. Routing

Explanation: Being able to scale means resources can be added or removed to store and process large data sets. Read this guide about Extracting Info from Data!


12. This process ensures that incomplete data does not hide or skew results by repairing bad or incomplete data. What is this process called?

A. Scalability

B. Cleaning Data

C. Filtering

D. Encryption

Explanation: Sometimes it is necessary for data to be cleaned, removed, or repaired to ensure valid data is used for research and analysis. Read this guide about Extracting Info from Data!
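Cleaning in practice often means normalizing formats and flagging (not inventing) missing values. A minimal sketch over hypothetical survey rows, tying back to the coffee-shop scenario in question 3 (the field names and values are made up):

```python
# Hypothetical survey rows with inconsistent and missing values.
rows = [
    {"age": "34", "drink": "Latte"},
    {"age": "",   "drink": "latte "},   # missing age, messy casing/whitespace
    {"age": "29", "drink": "MOCHA"},
]

def clean(row):
    """Make a row uniform without changing its meaning."""
    return {
        # Mark missing ages as None rather than guessing a value.
        "age": int(row["age"]) if row["age"] else None,
        # Normalize casing and whitespace so "Latte" and "latte " match.
        "drink": row["drink"].strip().lower(),
    }

cleaned = [clean(r) for r in rows]
print(cleaned[1])  # {'age': None, 'drink': 'latte'}
```

After cleaning, "Latte" and "latte " count as the same response, and the missing age is visible as missing instead of silently skewing an average.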


13. Why is it important to analyze big data?

A. It helps identify patterns that humans cannot see or consistently discover on their own

B. It verifies existing issues and solutions within the internet

C. It increases the speed and protection of internet access.

D. It increases redundancy, liability, and scalability

Explanation: Taking a deep dive to analyze big data ensures that we are able to identify patterns that could help solve problems or pinpoint new possibilities that people likely could not process without computing power.

Read this introduction to Big Idea 2!


14. This compression technique requires no data to be lost and ensures that the original image can be restored.

A. Data Storage

B. Lossless

C. Lossy

D. Ciphering

Explanation: Lossless data compression does not result in data being permanently deleted. All data is still intact, and the original image can be completely restored.

Read this guide about Data Compression!
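You can demonstrate the lossless round trip directly with Python's built-in zlib module: compress some bytes, decompress them, and get back exactly what you started with. The sample data below is invented for illustration:

```python
import zlib

# Repetitive data compresses well with lossless algorithms.
data = b"the quick brown fox " * 100

# Lossless compression: smaller representation, nothing discarded.
packed = zlib.compress(data)

# Decompression restores the original bytes exactly.
restored = zlib.decompress(packed)
print(restored == data)              # True — every byte recovered
print(len(packed) < len(data))       # True — and it really is smaller here
```

This is why lossless formats are required when every bit matters (program files, text documents), while lossy formats are acceptable for media where some detail can be sacrificed.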


15. Large data sets can be defined by grouping data with common features and values based on criteria provided by the data analysts. This process is called

A. Cleaning  

B. Patterns

C. Classifying

D. Filtering

Explanation: Classifying data allows people to make meaning of large data sets by grouping data with similar features and values together.

Read this guide about Extracting Info from Data!
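Classification is just grouping records by a shared feature. A small sketch reusing the bird-watching scenario from question 4, with invented sighting records:

```python
from collections import defaultdict

# Hypothetical records: classify (group) bird sightings by species.
sightings = [
    {"species": "cardinal", "region": "north"},
    {"species": "sparrow",  "region": "south"},
    {"species": "cardinal", "region": "south"},
]

groups = defaultdict(list)
for s in sightings:
    groups[s["species"]].append(s["region"])  # group on the species feature

print(dict(groups))  # {'cardinal': ['north', 'south'], 'sparrow': ['south']}
```

Once the data is grouped, questions like "which regions does each species appear in?" fall out of the structure instead of requiring a scan of every record.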








© 2024 Fiveable Inc. All rights reserved.

AP® and SAT® are trademarks registered by the College Board, which is not affiliated with, and does not endorse this website.

