Is AI Sexist or Racist?

Given the recent hype around ChatGPT, I got curious and started poking around different generative AI platforms. Aside from the possibilities of content generation, I was especially intrigued by image AIs: type in a short description, and within a few seconds the AI generates an image for you. It felt mind-blowing, a glimpse of where the future is heading, until I noticed something disturbing.

Based on my research, I started experimenting with prompts related to Asian culture. The language AI did fine and was informative, but the image AI results came back mediocre and somewhat offensive.

(images generated based on "images that are significant to Asian culture or Asian women.")

This experiment sent me down another inquisitive path and left me wondering: is AI racist? A recent CNN article sheds a lot of light on the subject and validates what I was a bit reluctant to acknowledge. I wanted AI to be a force for good information, but the reality is that it still draws its data from existing sources, so the same bias and discrimination we see in today’s world recycle right back into the AI’s mind.

Of course, the first step to combating discrimination and bias is bringing awareness to the problem. So let’s examine what AI bias is, how it differs internationally, and how we can combat it.

AI bias refers to the unfair and unintended effects that can occur when machine learning algorithms are trained on biased data. For example, facial recognition software may perform worse on people of color, or a hiring algorithm may be biased against women. This can lead to unequal treatment and discrimination even when the algorithm is intended to be impartial, and it is particularly concerning for important decisions that affect people's lives, such as hiring, lending, and criminal justice.
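To make the hiring example concrete, here is a minimal sketch (using entirely hypothetical decision data) of how one might audit a model's output for group bias. It applies the "four-fifths rule" sometimes used in US employment analysis: the selection rate for any group should be at least 80% of the highest group's rate.

```python
# Illustrative sketch with hypothetical data: auditing hiring decisions
# for group bias using the four-fifths rule.

def selection_rate(decisions):
    """Fraction of candidates selected (decision == 1)."""
    return sum(decisions) / len(decisions)

# Hypothetical hiring decisions (1 = hired, 0 = rejected) by group.
decisions_by_group = {
    "group_a": [1, 1, 1, 0, 1, 1, 0, 1, 1, 1],  # 8 of 10 selected
    "group_b": [1, 0, 0, 1, 0, 0, 0, 1, 0, 0],  # 3 of 10 selected
}

rates = {g: selection_rate(d) for g, d in decisions_by_group.items()}
highest = max(rates.values())

for group, rate in rates.items():
    ratio = rate / highest
    flag = "possible bias" if ratio < 0.8 else "ok"
    print(f"{group}: selection rate {rate:.0%}, ratio {ratio:.2f} -> {flag}")
```

With this made-up data, group_b's selection rate is only 38% of group_a's, so the audit flags it. Real audits are far more involved, but the core idea of comparing outcomes across groups is the same.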

Differences in AI bias can vary internationally and can be influenced by factors such as cultural differences, government regulations, and access to data. For example, in the United States, there have been concerns about AI bias in the criminal justice system, where algorithms have been used to predict the likelihood of reoffending. Studies have shown that these algorithms are biased against people of color, leading to unequal treatment in sentencing. In China, there have been concerns about the use of AI for surveillance, which has been used to monitor the Uighur Muslim minority. There have been many examples of Chinese AI perpetuating gender inequality, assigning CEO roles to men and caregiving identities to women.

In Russia, there have been concerns about the use of AI to influence elections using social media bots. In the United States, AI bias has been a focus of research and regulation. In 2019, the Algorithmic Accountability Act was introduced, which would require companies to assess the impact of their algorithms on fairness, accuracy, bias, and discrimination. This act was updated in 2022 to include AI algorithms. However, this bill has yet to be passed into law.

In China, the government has taken a more proactive approach, introducing new regulations to govern the use of AI in areas such as healthcare and finance, in line with its centralized, state-led model of governance. In Russia, there has been less public discussion of AI bias, and it is unclear how the government is addressing this issue.

To combat AI bias, it is essential to ensure that algorithms are trained on diverse and representative data. This can involve using data from a variety of sources and ensuring that the data is not biased toward certain groups or outcomes. Additionally, it is important to involve a diverse group of stakeholders in the development and deployment of AI, including people from different backgrounds and perspectives. This can help to identify potential biases and ensure that the technology is fair and inclusive.
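One simple form the "diverse and representative data" idea can take is reweighting: if one group dominates the training set, its examples can be down-weighted so every group contributes equally to what the model learns. The sketch below uses hypothetical group counts to show the arithmetic; it is one illustrative technique among many, not a complete fix.

```python
# Minimal sketch with hypothetical data: reweight training examples so
# each demographic group carries equal total weight during training.

from collections import Counter

# Hypothetical training records labelled only with a group identifier.
training_groups = ["a"] * 800 + ["b"] * 150 + ["c"] * 50

counts = Counter(training_groups)
n_groups = len(counts)
total = len(training_groups)

# Weight each example so every group contributes 1/n_groups of the
# total weight, regardless of how many examples it has.
weights = {g: total / (n_groups * c) for g, c in counts.items()}

for group in sorted(counts):
    print(f"group {group}: {counts[group]} examples, "
          f"per-example weight {weights[group]:.2f}")

# Sanity check: every group's total weight is now identical.
group_totals = {g: counts[g] * weights[g] for g in counts}
```

Here the majority group's examples get a weight of about 0.42 each while the smallest group's get about 6.67, so all three groups end up with equal influence. Many training libraries accept such per-example weights directly.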

AI bias is a growing concern that can have serious consequences for individuals and society as a whole. While AI bias takes different forms internationally, it is important to address this issue globally through a combination of research, regulation, and education tailored to individual cultures. By working together, we can ensure that AI is developed and deployed in a way that is fair, unbiased, and beneficial for all.
