Gender and Racial Bias in Tech Design and AI

Technology is constantly changing and improving. Is it improving in all of the ways we need it to?

As someone who has worked in the technology industry for years, specifically in UX and crowdtesting, I have seen firsthand the bias in tech design and AI.

Artificial Intelligence (AI) is the simulation of human intelligence processes by computer systems. AI works from the information and data that humans create and collect. Biases exist in humanity, whether they are based on gender, race, sexual orientation, socioeconomic class, or something else. Without proper diversity and testing, these biases inevitably make their way into technology, and that is extremely harmful and discriminatory.

One of the biggest reasons for bias in AI is the lack of diversity in the tech industry. Only 22% of professionals in AI and data science are women. A lack of diversity on AI teams leads to design that does not account for the needs or wants of the entire population. For example, women are more likely than men to feel motion sickness when using virtual-reality headsets, possibly because roughly 90% of women have pupils closer together than the typical default setting assumes. Without the necessary testing and diversity, that default setting was most likely designed by and for men.

Joy Buolamwini described her firsthand experience with biased tech when she was a grad student at MIT. She discovered that some facial analysis software couldn’t detect her dark-skinned face until she put on a white mask. 

Joy Buolamwini and Timnit Gebru conducted a study, called Gender Shades, that audited commercial facial-analysis systems from IBM, Microsoft, and Face++. They found that the systems were built on image data that lacked diversity and representation, and every system performed better on male faces than on female faces (an 8.1%–20.6% difference in error rates). All of the systems also performed better on lighter-skinned subjects than on darker-skinned subjects (an 11.8%–19.2% difference in error rates), and every one of them performed worst on darker-skinned women.
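To make that kind of disparity concrete, here is a minimal sketch (not the authors' actual methodology or data) of how per-group error rates might be compared once a model's predictions have been collected alongside demographic labels. The records, group names, and field names below are hypothetical.

```python
from collections import defaultdict

def error_rates_by_group(records):
    """Compute the classification error rate for each demographic group.

    Each record is a dict with hypothetical fields:
      'group'     - demographic subgroup label (e.g. 'darker female')
      'true'      - ground-truth label
      'predicted' - the model's prediction
    """
    totals = defaultdict(int)
    errors = defaultdict(int)
    for r in records:
        totals[r["group"]] += 1
        if r["predicted"] != r["true"]:
            errors[r["group"]] += 1
    return {g: errors[g] / totals[g] for g in totals}

# Hypothetical audit results, not real data from the study.
sample = [
    {"group": "lighter male",  "true": "male",   "predicted": "male"},
    {"group": "lighter male",  "true": "male",   "predicted": "male"},
    {"group": "darker female", "true": "female", "predicted": "male"},
    {"group": "darker female", "true": "female", "predicted": "female"},
]

rates = error_rates_by_group(sample)
print(rates)
# The gap between the best- and worst-served groups is the kind of
# disparity the Gender Shades audit reported.
print("disparity:", max(rates.values()) - min(rates.values()))
```

The point of an audit like this is that an overall accuracy number can look fine while hiding a large gap between the best- and worst-served groups, which is exactly what the study exposed.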

At this point, AI and machine learning are pervasive in all of our lives. It's completely unacceptable for these biases to remain so prevalent in the world's technology, as they can lead to detrimental mistakes and irreversible damage. Companies need to make sure they're hiring diverse talent and conducting inclusive product testing if they want to create systems that work for all of humanity.
