AI is human-designed, making it unsurprising that it reflects human bias.
In 2020, Microsoft laid off dozens of newsroom employees and replaced them with AI. Unfortunately, the company did not account for bias in algorithms, including their frequent inability to tell people of color apart. Shortly after Microsoft’s robot reporters made their debut, the news-skimming algorithm published a story to MSN about Jade Thirlwall of the band Little Mix and her personal reflections on racism… with a photo of the wrong person. Instead of a photo of Thirlwall, the algorithm posted a picture of her bandmate Leigh-Anne Pinnock.
AI’s inability to recognize the faces of people of color is a topic of great concern. In 2021, the documentary Coded Bias followed Joy Buolamwini, a computer scientist at the M.I.T. Media Lab, after she made the startling discovery that AI facial recognition software could not accurately detect dark-skinned faces or recognize women.
Why are algorithms racist?
In her book Artificial Unintelligence, data journalism professor Meredith Broussard explains that the term “machine learning” is somewhat misleading on its own. When computer scientists say that AI applications “learn,” they don’t quite mean learning in the human sense. AI learns from training data: large datasets that teach it statistical patterns in the world. In effect, the AI gets better and faster at solving problems because it can predict outcomes from the data it was trained on. The result, however, is that the machine misses much of the nuance of human intelligence and communication; it likely won’t be able to detect sarcasm or figures of speech, for example.
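To make that concrete, here is a minimal sketch of what “learning statistical patterns” means in practice. The training sentences, labels, and the word-counting approach are all invented for illustration; real systems are far more sophisticated, but the failure mode is the same: the model only counts patterns, so sarcastic praise still looks like praise.

```python
from collections import Counter

# Toy "training data": the model only ever sees these labeled examples.
training_data = [
    ("great story well reported", "positive"),
    ("clear and helpful article", "positive"),
    ("boring and confusing piece", "negative"),
    ("sloppy reporting bad sources", "negative"),
]

# "Learning" here is just counting which words co-occur with which label.
word_counts = {"positive": Counter(), "negative": Counter()}
for text, label in training_data:
    word_counts[label].update(text.split())

def predict(text):
    # Score each label by how often its training words appear in the text.
    scores = {
        label: sum(counts[w] for w in text.split())
        for label, counts in word_counts.items()
    }
    return max(scores, key=scores.get)

print(predict("helpful and clear article"))  # overlaps with "positive" words
# Sarcasm: "great" was only ever seen as positive evidence,
# so the sarcastic sentence is confidently mislabeled.
print(predict("oh great just great"))
```

The model has no idea what “great” means; it only knows which label the word appeared next to in the past. That is the gap Broussard describes between statistical prediction and human understanding.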
Additionally, AI is created by humans, and humans have bias. If a dataset reflects human biases, the AI will produce biased output. For example, when Amazon used AI to screen resumes and filter job applicants, it was quickly discovered that the algorithm was filtering out the resumes of women.
The algorithm was trained on the resumes of successful employees, and Silicon Valley is not known for its gender diversity. As a result, the application began rejecting resumes with feminine language, penalizing those that contained the word “women’s” or the names of certain women’s colleges. Amazon stopped using the application shortly after introducing it. Although the tool was edited to make it more neutral, there was no way to verify that it would not be discriminatory again, so it has not come back into use.
How can I use AI while taking bias into account?
While it is important to consider the drawbacks of algorithmic bias, we don’t have to throw out all AI. Responsible use of AI means acknowledging that humans pass their prejudices on to machines, and that we still need human intervention in many cases. In the case of the AI editor that published the photo of the wrong Black woman, the mistake could have been avoided if a human editor had simply fact-checked the post.
An understanding of algorithmic bias is therefore helpful to any newsroom looking to add machine learning applications to parts of the news cycle. It is unlikely that AI will replace human journalists anytime soon, which is consistent with the responses from local news decision-makers who participated in the Associated Press study on local newsrooms’ use of AI.
Here are ways to take algorithmic bias into consideration when using AI in the newsroom:
Confirm spelling of names in transcriptions;
Use a human fact-checker;
Make sure photos are of the correct people before publishing the story;
Regularly audit AI applications to screen for bias.
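The last item on the list, a regular bias audit, can start very simply: compare the tool’s error rate across groups instead of looking only at overall accuracy. The log records and group labels below are invented for illustration; a real audit would use the newsroom’s own records of the tool’s decisions and what the correct decision turned out to be.

```python
# Hypothetical audit log: each record is (group, model_decision, correct_decision).
log = [
    ("group_a", "approve", "approve"), ("group_a", "approve", "approve"),
    ("group_a", "reject", "reject"),   ("group_a", "approve", "approve"),
    ("group_b", "reject", "approve"),  ("group_b", "reject", "approve"),
    ("group_b", "approve", "approve"), ("group_b", "reject", "reject"),
]

def error_rate(records):
    # Fraction of decisions where the model disagreed with the correct call.
    wrong = sum(1 for _, got, want in records if got != want)
    return wrong / len(records)

# Group the log by demographic group, then compare error rates.
by_group = {}
for record in log:
    by_group.setdefault(record[0], []).append(record)

rates = {group: error_rate(records) for group, records in by_group.items()}
print(rates)  # a large gap between groups is a red flag worth investigating
```

Overall, this toy model is right 6 times out of 8, which sounds acceptable, yet every one of its mistakes falls on one group. That is exactly the kind of pattern an audit surfaces and a single accuracy number hides.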
AI functions better as a helper than as an unchecked agent. Algorithmic applications in the newsroom are still a developing field, but we can start by building a basic understanding of how they work and how to engage in better service journalism with the support of technology.
This article was originally published by the Reynolds Journalism Institute and is republished on the International Journalists’ Network (IJNet) with permission.