Big Idea 5.3 Computing Bias
This tech talk discusses bias in computing
Computing Bias
Earlier we talked about beneficial and harmful effects of computing. Such conversations often lead to discussions of computing bias, particularly when bias creates a harmful effect.
As programmers, you now have the ability to create algorithms. It has been said that "humans are error-prone and biased." So does that mean algorithms, and the computers they run on, are any better?
Intentional or Purposeful Bias (Crossover Group Up, 10 minutes)
- Google “What age groups use Facebook” vs “… TikTok”? What does the data say? Is there purposeful exclusion in these platforms? Is it harmful? Should it be corrected? Is it good business?
- The data says that 41.7% of Facebook’s audience is 18-34 years old, and Facebook’s smallest audience is 13-17-year-olds at about 4.7%. The data also says that 62% of TikTok’s users are 10-29 years old, and TikTok’s smallest audience is people who are 50+ at 7.1%. I don’t think there is purposeful exclusion on these platforms, but the fast-paced videos of TikTok might not appeal to older people, and the photos and videos on Facebook don’t appeal as much to younger people. I don’t think it is harmful, though I think people of all ages could use one platform instead of splitting across two. However, this separation is good business because the platforms can use targeted ads to gain more revenue.
- Why do virtual assistants have female voices? Amazon Alexa, Google Assistant, Apple Siri. Was this purposeful? Is it harmful? Should it be corrected? Is it good business?
- Virtual assistants have female voices because they are typically perceived as more friendly-sounding and appealing. I think this was purposeful because female voices are often seen as more trustworthy and calming. Companies are starting to offer different voice options, including male and gender-neutral voices, to be more inclusive.
- Talk about an algorithm that influences your decisions; think about these companies (i.e., FAANG: Facebook, Amazon, Apple, Netflix, Google).
- An algorithm that influences my decisions is probably my YouTube recommendations. It analyzes the videos I’ve already watched and recommends new videos based on them. For example, if I watched a video about the new Apple announcement, my recommended videos would be filled with Apple product reviews. A rough sketch of how this kind of content-based recommendation could work is shown after this list.
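Here is a minimal, hedged sketch of a content-based recommender like the one described above. The video titles, tags, and scoring are made up for illustration; this is not YouTube’s actual algorithm, just one simple way "recommend based on what you already watched" can be implemented.

```python
# Minimal sketch of a content-based recommender (illustrative only;
# the videos and tags below are hypothetical).
from collections import Counter

watch_history = {
    "Apple September Event Recap": ["apple", "iphone", "announcement"],
    "iPhone 15 First Impressions": ["apple", "iphone", "review"],
}

catalog = {
    "iPhone 15 Pro Review": ["apple", "iphone", "review"],
    "Best Budget Android Phones": ["android", "review", "budget"],
    "MacBook Air M3 Review": ["apple", "macbook", "review"],
    "Top 10 Cooking Hacks": ["cooking", "lifestyle"],
}

def recommend(history, catalog, k=2):
    """Score unwatched videos by how often their tags appear in the
    viewer's watch history, then return the top k titles."""
    watched_tags = Counter(tag for tags in history.values() for tag in tags)
    scores = {
        title: sum(watched_tags[tag] for tag in tags)
        for title, tags in catalog.items()
        if title not in history
    }
    return sorted(scores, key=scores.get, reverse=True)[:k]

print(recommend(watch_history, catalog))
# ['iPhone 15 Pro Review', 'MacBook Air M3 Review']
```

Notice how the output only reinforces what the viewer already watches; that feedback loop is one way bias can creep into recommendation systems.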
As Pairs (5 minutes)
- Watch the video… HP computers are racist
- Come up with some thoughts on the video and be ready to discuss them as I call on you. Here are some ideas…
- Does the owner of the computer think this was intentional?
- Based on the video, it’s not explicitly stated whether the owner of the computer thinks this was intentional. However, they do state that they believe HP computers are racist.
- How do you think this happened?
- I think this happened because of a flaw in testing, where they only tested with lighter-skinned people and didn’t think to test with darker-skinned people.
- Is this harmful? Was it intended to be harmful or exclude?
- This is harmful because it excludes people with darker skin, but it was not intended to be.
- Should it be corrected?
- This should be corrected because it gives the company a bad reputation due to an unintended mistake. The mistake also excludes a large part of the target audience.
- What would you or should you do to produce a better outcome?
- To produce a better outcome, I would have done more testing with people of all skin tones to make sure the face-tracking feature works for everyone (see the testing sketch after this list).
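Below is a hedged sketch of the kind of per-group testing described above. The detector and the test data are toy stand-ins (a deliberately biased fake detector and made-up brightness values); with a real face tracker and a labeled test set covering many skin tones and lighting conditions, the same per-group comparison would reveal a gap like the one in the video.

```python
# Sketch of measuring a face detector's performance per skin-tone group.
# toy_detector and test_set are hypothetical placeholders for a real
# detector and a real labeled dataset.
import random
from collections import defaultdict

random.seed(0)

def toy_detector(sample):
    # Deliberately biased stand-in: works well in bright conditions and
    # poorly in darker ones, mimicking the HP webcam issue.
    return random.random() < (0.95 if sample["brightness"] > 0.5 else 0.55)

def detection_rate_by_group(test_set, detector):
    hits, totals = defaultdict(int), defaultdict(int)
    for sample in test_set:
        totals[sample["group"]] += 1
        hits[sample["group"]] += detector(sample)
    return {g: round(hits[g] / totals[g], 2) for g in totals}

test_set = (
    [{"group": "lighter skin", "brightness": 0.8} for _ in range(200)]
    + [{"group": "darker skin", "brightness": 0.3} for _ in range(200)]
)

print(detection_rate_by_group(test_set, toy_detector))
# A large gap between the two rates is the signal that more diverse
# test data and a fix are needed before the feature ships.
```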
Hacks
Write summary/thoughts/conclusions from each of the exercises above. Focus on avoiding bias in algorithms or code you write.
- Computing bias refers to systemic prejudice in artificial intelligence and machine learning algorithms, resulting in unequal treatment of certain groups. It occurs when the data used to train a model or the way the algorithm is designed leads to inaccurate, unfair, or biased outcomes. This can have serious consequences, including discrimination in areas such as employment, credit, and law enforcement. It is important for organizations to identify and address computing bias to ensure that AI systems are fair and just for all people.
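As a small, concrete way to apply that conclusion to code I write, here is a hedged sketch of checking a model's decisions per group before trusting it. The predictions and group names are made-up placeholders, and the 80% threshold is just a common rule of thumb (the "four-fifths rule"), not a universal standard.

```python
# Sketch of a per-group check on a model's decisions (made-up data).
# The idea: before trusting predictions, compare outcomes across groups
# so a skewed training set doesn't silently become a skewed product.
from collections import defaultdict

# Hypothetical (group, model_said_yes) pairs, e.g. loan approvals.
predictions = [
    ("group_a", True), ("group_a", True), ("group_a", False), ("group_a", True),
    ("group_b", False), ("group_b", False), ("group_b", True), ("group_b", False),
]

def selection_rate(preds):
    """Fraction of positive decisions per group."""
    yes, total = defaultdict(int), defaultdict(int)
    for group, approved in preds:
        total[group] += 1
        yes[group] += approved
    return {g: yes[g] / total[g] for g in total}

rates = selection_rate(predictions)
print(rates)  # {'group_a': 0.75, 'group_b': 0.25}

# Rule-of-thumb check: flag if one group's rate is far below another's.
worst, best = min(rates.values()), max(rates.values())
if best > 0 and worst / best < 0.8:
    print("Warning: possible bias -- investigate the data and the model.")
```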