Transforming AI Bias into "Augmented Intelligence" – a Powerful Tool for a Better World

IMAGE: "Typical" Biologist, Basketball Player, Depressed Person, According to Midjourney

Generative AI bias is everywhere. It is a warped and dark mirror of our world, the insidious misinformation and disinformation hidden in plain sight. But it is also an opportunity to consciously see the bias and deliberately take steps to overcome it. Because giving voice to the unseen is the only way to build a more just and equitable world.

You don’t have to look very hard to find generative AI bias. Simply type “biologist” into Midjourney and look at the images that come back:

Typical Midjourney output for "biologist"

A random sample of 100 images we retrieved tells a grim story: 99% of the figures are male, and 100% of them are white.

This is not even close to the truth: today, the majority of biologists are women. According to Zippia, of the 10,347 biological scientists currently employed in the United States, 53.9% are women and 46.1% are men. While the most common ethnicity of biological scientists is white (67.6%), a substantial number are Asian (15.3%), Hispanic or Latino (8.5%), or of unknown ethnicity (5.0%). About 10% of all biological scientists are LGBT (https://www.zippia.com/biological-scientist-jobs/demographics/).

What is going on here?

Is Generative AI Racist and Sexist?

The AI bias situation is complex because AI bias is not purely “racial”: typing in “basketball player” yields a roughly even split of white and African-American figures. Unfortunately, once again, males dominate the sample: a whopping 98% of the 100 images we sampled were male.

Typical Midjourney output for "basketball player"

Again, this is purely AI’s warped fantasy: according to Statista, during the 2021/2022 school year, a little over 892,000 high schoolers in the United States participated in basketball programs. Of those, 58% were boys and 42% were girls (https://www.statista.com/statistics/267942/participation-in-us-high-school-basketball/).

The male/female ratio is much more skewed for professional basketball players: according to Zippia, only 17.4% of professional basketball players are women, while 82.6% are men (https://www.zippia.com/professional-basketball-player-jobs/demographics/). Still, there are, without a doubt, quite a few women basketball players.

Yet the AI has largely ignored female basketball players in its representation.

Is it possible that Midjourney has a specific issue portraying women? Some kind of misogynistic disenfranchisement that creeps its way like a venomous reptile into each set of results?

Yes.

Well, kind of.

The bias is both more insidious and more complex. Because the query for “depressed person” yields 80% female figures, 100% of whom are young and white:

Typical Midjourney output for "depressed person"

At this point, I’m obliged to call out that

AI has a clear message: young women cannot be biologists or basketball players, but they are much more likely to be depressed. (Could they be depressed because they cannot be biologists or basketball players?) 

All lies and balderdash.

According to a National Center for Health Statistics (NCHS) Data Brief, women (10.4%) were almost twice as likely as men (5.5%) to have had depression, and the highest rate of depression (11.5%) was found in middle-aged women, aged 40–59. Overall, non-Hispanic Asian adults had the lowest prevalence of depression (3.1%), compared with Hispanic (8.2%), non-Hispanic white (7.9%), and non-Hispanic black (9.2%) adults; the prevalence of depression was not statistically different among Hispanic, non-Hispanic white, and non-Hispanic black adults, overall or among men and women (https://www.cdc.gov/nchs/products/databriefs/db303.htm).

Again, we find that this picture of our world that AI is selling is patently ridiculous and reflects no actual statistics whatsoever.

The exaggerated biases that AI systems are known for are called Representational Harms: harms that degrade certain social groups by reinforcing the status quo or amplifying stereotypes.

Perpetuating stereotypes and misrepresentations through imagery can pose significant educational and professional barriers. “People learn from seeing or not seeing themselves that maybe they don’t belong... These things are reinforced through images,” said Heather Hiles, chair of Black Girls Code. 

These results are not unique. Nor are they limited to Midjourney or to specific professions and sports. The video below cites numerous expert studies in which higher-paying professions (CEO, Lawyer, Politician, Scientist) were over-represented by lighter skin tones, whereas subjects with darker skin tones were often associated with lower-income jobs like fast-food worker, janitor, and dishwasher. A similar story emerged for gender: Doctor, CEO, and Engineer were associated with men, whereas professions like Social Worker, Housekeeper, and Cashier were strongly associated with women.

A particularly extreme example came from BuzzFeed, which published an article that used AI to generate pictures of Barbies from different countries around the world. The Barbies from Latin America were all presented as fair-skinned, perpetuating a form of discrimination known as Colorism, in which lighter skin tones are favored over darker ones. The Barbie from Germany was shown wearing clothes reminiscent of a Nazi SS uniform, and the Barbie from South Sudan was shown with an AK-47 by her side: https://www.youtube.com/watch?v=L2sQRrf1Cd8

Ridiculous? Surely. But it illustrates the incredible level of bias that current generative AI technology creates.

AI Bias is a Feature, not a Bug.

Generative AI bias is a combination of two key factors:

  1. Training set

  2. Generative algorithm

First, every AI is the product of its training set, and the training sets used to train generative AIs like Midjourney come from US-hosted websites that already contain images with a certain amount of bias. 

Second, even if bias in the collected images is minimized, the algorithm itself then exacerbates and amplifies the bias. The generative AI finds a few focal points in the training data and then coalesces the image-building around those foci. Selecting multiple foci confuses the algorithm and costs much more computing power. Hence, the algorithm takes a few shortcuts, typically selecting one or two of the strongest foci and discarding the rest. (So, for example, if the training data set for “basketball player” included more pictures of men than women players, the algorithm might focus on the men and discard the women because the two sets of pictures do not “match.”) When creating imaginary pictures of cars, castles, robots, orcs, etc., this is not a problem. However, when representing humans, this approach pretty much guarantees a gender and racial bias in the resulting generated dataset.
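
To make this “focusing” behavior concrete, here is a minimal Python sketch. It is a toy model, not Midjourney’s actual algorithm, and all numbers are hypothetical; it simply shows what happens when a generator keeps only its strongest focus and discards the rest:

```python
# Toy model of "winner-take-most" focal-point selection (not
# Midjourney's real algorithm; numbers are hypothetical).
import random

random.seed(42)

# Hypothetical training set: 70% "male", 30% "female" images.
training_set = ["male"] * 70 + ["female"] * 30

def strongest_foci(data, keep=1):
    """Rank groups ('foci') by frequency and keep only the top ones."""
    counts = {}
    for item in data:
        counts[item] = counts.get(item, 0) + 1
    return sorted(counts, key=counts.get, reverse=True)[:keep]

def generate(data, n=100, keep=1):
    """Sample n images, but only from the strongest foci;
    everything else is discarded as a 'mismatch'."""
    foci = strongest_foci(data, keep)
    pool = [item for item in data if item in foci]
    return [random.choice(pool) for _ in range(n)]

output = generate(training_set, n=100)
print("Training set: 70% male / 30% female")
print(f"Generated:    {output.count('male')}% male / {output.count('female')}% female")
```

Even a modest 70/30 imbalance in the training data becomes a 100/0 imbalance in the output: with this kind of shortcut, the minority group is not merely underweighted but discarded entirely.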

Simply put,

At our current level of technology, AI Bias is a feature, not a bug. Bias is simply part of the way generative AI works. Bias is not something that can be simply “computed out” of the dataset. Nor should it be swept under the rug the way our current generation of Silicon Valley companies attempts to do. Because if we ignore this problem, it will become unsolvable in just a few years.

As Meredith Broussard, the author of More than a Glitch: Confronting Race, Gender, and Ability Bias in Tech, explains: “The word ‘glitch’ implies an incidental error, as easy to patch up as it is to identify. But what if racism, sexism, and ableism aren’t just bugs in mostly functional machinery—what if they’re coded into the system itself?”

The Real Danger of AI Bias: Making it Worse Over Time

AI bias is particularly insidious because more and more images on the internet are computer-generated, and it is already nearly impossible to discern whether an image is “real” or computer-generated. According to experts like Nina Schick, as much as 90% of online content could be computer-generated by 2025 (https://finance.yahoo.com/news/90-of-online-content-could-be-generated-by-ai-by-2025-expert-says-201023872.html).

This means that the generated images and the focusing algorithm steadily and continuously amplify the initial bias. Suppose, for example, that in the initial dataset 68% of biologists were white and 32% were non-white. Due to the focusing nature of the algorithm, the AI produces a majority of white images, say 90% white and 10% non-white. Because of that higher percentage of white biologist pictures, the new “wild” dataset found on the internet is now skewed: instead of 68/32, it is now at 80/20, with 80% of the biologists shown as white. The next iterations of AI, trained on this 80/20 “wild” dataset, are even more likely to produce skewed data, generating perhaps 99% white and only 1% non-white images, which in turn skews the “wild” dataset even faster, both accelerating the bias and making it more extreme.
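
This feedback loop is easy to simulate. Below is a minimal Python sketch using the illustrative numbers above; the amplification “gain” and the fraction of generated images flowing back into the “wild” dataset are made-up parameters for illustration, not measurements of any real system:

```python
# Toy simulation of the generate -> publish -> retrain feedback loop.
# All parameters are illustrative assumptions, not measured values.

def amplify(majority_share, gain=2.5):
    """Toy model of the focusing effect: the generator exaggerates
    whichever group is already in the majority. 'gain' is a made-up
    amplification factor."""
    odds = majority_share / (1 - majority_share)
    skewed_odds = odds * gain
    return skewed_odds / (1 + skewed_odds)

def retrain_loop(initial_share=0.68, generated_fraction=0.5, rounds=5):
    """Each round: generate skewed images, then blend them back into
    the 'wild' dataset that trains the next model."""
    share = initial_share
    for i in range(1, rounds + 1):
        generated = amplify(share)  # e.g. 0.68 -> ~0.84
        share = (1 - generated_fraction) * share + generated_fraction * generated
        print(f"Round {i}: 'wild' dataset is now {share:.0%} majority-group")
    return share

retrain_loop()
```

Each round, the skew compounds: a 68% majority climbs past 90% within a handful of retraining cycles, which is exactly the accelerating, self-reinforcing bias described above.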

Given that generative AI will have a huge impact on shaping our visual landscape, it is important to understand that the reality these images represent is often distorted, where harmful biases related to gender, race, age, and skin color can be more exaggerated and more extreme than in the real world. However, this impending calamity may yet hold the seed of opportunity. But only if we act now.

AI Bias can be a Gift of Opportunity to Practice Augmented Intelligence 

Interestingly, bias is not unique to AI. All creation requires focus. Just as designers need to focus on the needs and attributes of a single primary persona to design a functional interface and avoid “featuritis,” the AI must also focus in order to create. However, this intense level of focus is exactly what makes human designers conscious of the need to consider diverse viewpoints, such as accessibility. For example, in the design of a mobile check deposit service, our primary persona may be a busy stockbroker working 70-hour weeks, while our secondary persona may be a vision-impaired war veteran with only two fingers on their left hand. Both of these personas need an intuitive, accessible, easy-to-use way to deposit a check with their mobile device, but their needs and capabilities are drastically different.

Most people assume that technology (computers, AI, etc.) is mostly correct. We are conditioned through everyday use to take computer output on faith. Few people will challenge the checking account balance printed out by the bank website, for example. The same does not apply to generative AI. Just because it portrays all biologists as white males does not make it true. We need to actively challenge the output of generative AI.

Fortunately,

The very absurdity of the level of the AI's bias is itself a gift of opportunity: a gift for us to examine our world with a critical and conscious lens.

Just as a funhouse mirror makes us pause and see our own reflection anew (a sight we have grown used to through thousands of impressions), the incredibly warped and often ridiculous bias of AI-generated images is an opportunity for us to critically examine our data, our beliefs, and our world.

At the very least, AI bias is a gift because it should make everyone recognize, once and for all, that we must STOP relying on technology to bring us equality. AI is not capable of that. AI does not understand equity, justice, harmony, or empathy. Only humans do.

And by assuming that bias exists, we are more likely to see it in the results, recognize it, and pivot. Exactly as a team of experienced designers would consciously and deliberately pivot from the primary persona to account for accessibility and the learning curve. When using generative AI, we must train ourselves to think like designers: recognize bias, see it for what it is, and consciously and deliberately pivot away from it.

After all, there is nothing at all to prevent us from typing in “black transgender biologist,” “Indian woman basketball player,” and “depressed older Asian man” – combining the incredible capabilities of generative “artificial intelligence” with conscious human understanding, compassion, and love to create something even more powerful: “Augmented Intelligence.”

"Augmented Intelligence" Midjourney Output for Biologist, Basketball Player, Depressed Person.

Compare this image with the "Typical" Midjourney output at the start of the article:

"Typical" Midjourney Output for Biologist, Basketball Player, Depressed Person.

So, the next time you see a computer-generated image, here’s what I want you to do: 

  1. Take a moment, and just notice: what is in the picture? What is NOT in the picture? Assume that bias is there and allow yourself to see it. 

  2. Acknowledge the bias. 

  3. Pivot: rewrite the query to overcome the bias and create representational images reflective of the equitable, just, and harmonious world you want to see. (A minimal sketch of this “pivot” follows below.)
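
For anyone generating images programmatically rather than one prompt at a time, the pivot in step 3 can even be automated. This is a minimal sketch under obvious assumptions: the attribute lists and prompt format are illustrative choices, not a standard API or a complete taxonomy of human diversity:

```python
# Toy "pivot" helper: name gender, ethnicity, and age explicitly in
# the prompt instead of leaving them to the model's strongest focus.
# Attribute lists and prompt format are illustrative assumptions.
import itertools
import random

random.seed(7)

GENDERS = ["woman", "man", "nonbinary person"]
ETHNICITIES = ["Black", "white", "Asian", "Hispanic or Latino"]
AGES = ["young", "middle-aged", "older"]

def pivoted_prompts(subject, n=4):
    """Build n prompts that spell out diverse attributes,
    sampled without repetition across the full combination space."""
    combos = list(itertools.product(AGES, ETHNICITIES, GENDERS))
    random.shuffle(combos)
    return [f"{age} {ethnicity} {gender} {subject}"
            for age, ethnicity, gender in combos[:n]]

for subject in ["biologist", "basketball player", "depressed person"]:
    for prompt in pivoted_prompts(subject):
        print(prompt)  # e.g. "older Asian man depressed person"
```

The point of the sketch: the generated batch then reflects the diversity you consciously choose, rather than the bias the model defaults to.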

As we build our individual awareness, we must work to create better datasets and algorithms and advocate for laws and regulations governing this critically important space. But all this starts with us: with our ability to see the bias through our own individual awareness.

In Closing

Can an equitable visual representation of our human diversity working together in harmony help us treat each other on our merits and not our gender, race, religion, country of origin, or the color of our skin? I believe so. But only as long as we do not simply take the AI’s output on faith but instead turn our artificial intelligence into “Augmented Intelligence” – consciously injecting our humanity, love, and compassion into the creative process. Because AI just does not know any better, and only humans are capable of love.

Learn More

According to Forbes, 85% of all AI projects fail. AI bias is one of the many pitfalls we’ve identified that are likely to tank your next AI project. If you want to improve your chances of success, we urge you to take advantage of many educational opportunities, such as this upcoming virtual workshop hosted by Rosenfeld Media:

UX for AI: A Framework for Product Design

3-day virtual workshop

December 6–8, 2023, 9:30am–11:30am PT

This workshop should be fully reimbursable – please check with your company. But don’t wait too long: spots are limited, and early bird pricing ends November 6th.

Looking forward to seeing you there!

References

How AI Image Generators Make Bias Worse, LIS - The London Interdisciplinary School, YouTube, August 11, 2023, https://www.youtube.com/watch?v=L2sQRrf1Cd8

Leonardo Nicoletti and Dina Bass, Humans Are Biased. Generative AI Is Even Worse, Bloomberg Technology, 2023, https://www.bloomberg.com/graphics/2023-generative-ai-bias/

Melissa Terras, Turing Lecture: Data science or data humanities?, The Alan Turing Institute, YouTube, March 7, 2019, https://www.youtube.com/watch?v=4yYytLUViI4

Brent Mittelstadt, Principles Alone Cannot Guarantee Ethical AI, Nature Machine Intelligence 1, 501–507, 2019, https://doi.org/10.1038/s42256-019-0114-4

Joy Buolamwini / The Algorithmic Justice League at MIT Media Lab, 2018, from the collection of the Barbican Centre, https://artsandculture.google.com/story/joy-buolamwini-examining-racial-and-gender-bias-in-facial-analysis-software-barbican-centre/BQWBaNKAVWQPJg?hl=en

Meredith Broussard, More than a Glitch: Confronting Race, Gender, and Ability Bias in Tech, The MIT Press, March 14, 2023, https://a.co/d/efhlOdx

Kate Crawford, Atlas of AI: Power, Politics, and the Planetary Costs of Artificial Intelligence, Yale University Press, April 6, 2021

Cathy O’Neil, Weapons of Math Destruction: How Big Data Increases Inequality and Threatens Democracy, Crown, September 5, 2017, https://www.amazon.com/Weapons-Math-Destruction-Increases-Inequality/dp/0553418831/
