5.3 Computing Bias

3 min read • March 13, 2023

Minna Chow
What is Bias?

As we've discussed throughout these guides, computing innovations can reflect existing biases.

Biases are tendencies or inclinations, especially those that are unfair or prejudicial. Everyone has their own biases, but certain biases, especially those based on someone's identity, can be actively harmful to society.

Biases exist in the world and in individuals. Computing innovations use data from the world around them, data that people pick out and feed to the computing innovation. Therefore, computing innovations can reflect those existing biases.

Examples of Bias

Bias can be embedded at all levels of development, from the brainstorming phase to the work done after release. This can take the form of a bias written into the algorithm itself or into the data used. Let's look at some examples.

  • Criminal risk assessment tools are used to determine the chances that a defendant will re-offend, or commit another crime. This information is then used to influence decisions across the judicial process.

    • However, these algorithms are trained to pick out patterns and make decisions based on historical data, and historical data is biased against certain races and classes. As a result, risk assessment tools may disproportionately flag people from certain groups as risks.

  • Facial recognition systems are often trained on data sets that contain fewer images of women and minorities than of white men. Algorithms might be biased by exclusion because they're trained on sets of data that aren't as diverse as they need to be.

  • Recruiting algorithms used by companies to help them sort through large quantities of applicants can be biased against certain races or genders. For example, if successful candidates for a position tend to be men because historically only men have applied, a recruiting algorithm might teach itself that male candidates are preferred over female ones, as the sketch below illustrates.
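To make that last example concrete, here is a minimal, hypothetical sketch. All names and numbers are invented for illustration: a naive "model" that simply learns the historical hire rate for each group ends up preferring the group that dominated past hiring.

```python
from collections import Counter

# Invented historical hiring records: (gender, was_hired).
# Past applicants skew heavily male, and most past hires were men.
history = [
    ("male", True), ("male", True), ("male", True), ("male", True),
    ("male", False), ("female", True), ("female", False), ("female", False),
]

# "Train" by computing the historical hire rate for each group.
hired = Counter(gender for gender, was_hired in history if was_hired)
total = Counter(gender for gender, _ in history)
score = {gender: hired[gender] / total[gender] for gender in total}

print(score)  # {'male': 0.8, 'female': 0.33...}
# The model now scores male applicants higher, not because of merit,
# but because of who happened to apply and be hired in the past.
```

A real recruiting system is far more complex, but the feedback loop is the same: patterns in historical data become preferences in the model.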

What can we do to prevent bias in computing?

Luckily, people can take steps to combat these biases, and the first step is understanding and acknowledging that bias could exist. Here are some working suggestions for preventing biases:

  • Use diverse and representative data sets: Using data sets that are diverse and representative of the overall population can help to reduce bias in machine learning models (such as the facial recognition systems above).

  • Review algorithms for potential biases: Carefully reviewing algorithms for potential biases, and testing them on diverse data sets, can help to identify and mitigate any biases that may be present.

  • Incorporate fairness metrics: Using fairness metrics, such as demographic parity or equal opportunity, can help to ensure that machine learning models do not produce discriminatory outcomes (see the sketch after this list).

  • Address human bias: It is important to be aware of the potential for human bias in the development and use of technical systems, and to actively seek out and address potential sources of bias.

  • Increase diversity in the tech industry: Increasing diversity in the tech industry can help to bring a wider range of perspectives and experiences to the development of technical systems, reducing the potential for bias.
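As a rough illustration of the fairness metrics suggested above, the sketch below computes demographic parity (comparing each group's rate of positive predictions) and equal opportunity (comparing each group's true positive rate). The groups, labels, and predictions are all assumptions invented for this example.

```python
# Each record is (group, actual_label, model_prediction), where 1 means a
# positive outcome (e.g., "flagged as a good hire"). All values are invented.
records = [
    ("A", 1, 1), ("A", 1, 1), ("A", 0, 1), ("A", 0, 0),
    ("B", 1, 1), ("B", 1, 0), ("B", 0, 0), ("B", 0, 0),
]

def positive_rate(group):
    """Demographic parity compares P(predicted positive) across groups."""
    preds = [p for g, _, p in records if g == group]
    return sum(preds) / len(preds)

def true_positive_rate(group):
    """Equal opportunity compares P(predicted positive | actually positive)."""
    preds = [p for g, actual, p in records if g == group and actual == 1]
    return sum(preds) / len(preds)

for g in ("A", "B"):
    print(g, positive_rate(g), true_positive_rate(g))
# A 0.75 1.0  -- group A is selected 75% of the time, with a perfect TPR
# B 0.25 0.5  -- group B is selected far less often, even when qualified
# Large gaps between groups on either metric are a signal to investigate.
```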

By taking these actions, we're not only benefiting our programs, but also society as a whole. After all, algorithms are written by people. Being able to find and eliminate bias in computers can help us eliminate bias in ourselves as well.

Key Terms to Review (11)

Algorithm: A step-by-step procedure or set of rules for solving a specific problem or accomplishing a task within a finite number of steps.

Bias: The tendency of a system or process to favor certain outcomes or groups over others, often resulting in unfair or unequal treatment.

Computing Innovations: New ideas, technologies, or applications that bring about significant advancements in the field of computer science.

Criminal risk assessment tools: Algorithms or software programs used to evaluate the likelihood of an individual committing future criminal offenses based on factors such as past behavior, demographics, and social environment.

Data: Information that is collected, stored, and processed by computers. It can be in the form of numbers, text, images, or any other type of digital content.

Demographic Parity: Achieving equal or proportional representation of different demographic groups in a particular context, such as employment or education. It aims to ensure that individuals from various backgrounds have fair and equitable opportunities.

Equal Opportunity: The principle that every individual should have the same access to resources, rights, and opportunities without discrimination based on factors such as race, gender, or socioeconomic status. It promotes fairness and aims to level the playing field for everyone.

Facial recognition systems: Technologies that analyze facial features from images or videos and match them against existing databases for identification purposes. They use algorithms to detect and compare unique facial patterns.

Fairness metrics: Measures used to evaluate how fair or unbiased a machine learning model is in its decision-making process. They assess whether the model treats different groups of individuals equally and without discrimination.

Machine learning models: Computer programs that can learn from data and make predictions or decisions without being explicitly programmed. They use statistical techniques to identify patterns in data and generalize from them.

Recruiting algorithms: Machine learning models used by companies to automate and streamline the hiring process. These algorithms analyze applicant data and make predictions about applicants' suitability for a job.


