
Images are Worth a Thousand Tokens: How AI Image Generators are Exposing their Bias

AI Best Practices | Mar 08, 2024

Written by Matthäus Huelse

Let's dive right into the thick of things with Google's Gemini AI. I don’t know if you’ve heard, but it's the talk of the town. Specifically, its image-generating capabilities. A little bit of context first: one of the commonly cited issues with AI is its inherent biases. If you haven't checked out this talk by Amy Webb at SXSW 2023, it's a great example of how AI's image generation can be a mirror to our societal biases. In short, it shows us that when we ask AI for something as simple as picturing a school superintendent, we're often handed back a stereotype, not the rich diversity we'd hope to see. In the video, the presenter shows her prompts and the images the AI generated. When she requested a picture of a hospital CEO, it produced four variations of older white men. Adjusting the prompt for an urban, a rural, a large, or a small hospital did not change the AI’s idea of the person it should depict: a white, middle-aged man.

Now, let's geek out a bit on why this happened. It's all about the training data - the information we feed into AI to help it learn and make decisions. Natural language processing allows AI to understand human language, even when it isn’t perfectly standardized. Let’s say you ask Alexa to turn off the lights in your house. If you don’t use the exact set of phrases programmed into the tool, Alexa is going to be confused and probably won't follow your command. Modern-day large language models (LLMs) will understand you without you having to follow a certain set of pre-programmed phrases. To get to that point, AI has to be fed an enormous amount of data: texts of all kinds - articles, blog posts, papers, social media posts, and so on. Where all that data came from is rather murky, but a large portion of the training data is harvested from the internet.
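If you like seeing the contrast spelled out, here is a minimal toy sketch - the phrases and the "assistant" are hypothetical, not any real product's code - of how a rigid, pre-programmed command matcher falls apart the moment you rephrase a request. An LLM, trained on enormous amounts of real human text, would treat all three phrasings as the same intent, which is exactly why the make-up of that training data matters so much.

```python
# Toy illustration (not any real assistant's code): a rigid, pre-programmed
# command matcher only recognizes the exact phrases it was given.
RIGID_COMMANDS = {
    "turn off the lights": "lights -> off",
    "turn on the lights": "lights -> on",
}

def rigid_assistant(utterance: str) -> str:
    """Succeeds only when the utterance exactly matches a known phrase."""
    action = RIGID_COMMANDS.get(utterance.lower().strip())
    return action if action else "Sorry, I didn't understand that."

# The same request, phrased three different ways:
for phrase in [
    "turn off the lights",         # exact match -> works
    "could you kill the lights?",  # paraphrase -> fails
    "it's too bright in here",     # implied intent -> fails
]:
    print(f"{phrase!r}: {rigid_assistant(phrase)}")

# An LLM, by contrast, maps all three phrasings to the same intent because it
# has seen millions of examples of how people actually talk - and it inherits
# whatever patterns (and gaps) exist in those examples.
```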

Imagine if aliens started orbiting our planet and, in order to learn more about us, decided to tap into our information highway. They analyze the content posted by humans across the globe and will decide whether we are worthy of joining their galactic federation of planets. Now ask yourself, “If humanity were judged solely on what is posted online, do you think we’d make the cut?” Personally, I’d be rather nervous, not just because of the content that is posted, but also because of the content that is missing. How much of humanity is simply not online? AI is trained to be human on a data set we don’t fully know or understand, and one that is not complete. Many of the biases we’ve seen in Google search results are a result of this incomplete training.

A couple of years ago, articles started popping up about Google’s image search. Google search results for “professional versus unprofessional hair” showed a very concerning pattern. Searches for images of unprofessional hairstyles mostly returned Black women, while searches for professional hairstyles almost exclusively showed white women. Now, it’s easy to blame Google and say that its algorithm is not up to snuff, but might it behoove us to take an introspective look at what this says about tendencies and biases in society? The vast majority of us access and interact with the internet through Google, an enormous repository of knowledge that saves, indexes, and archives almost anything you have interacted with online. That’s the basis and source of much of the training data. Google’s challenges with its algorithm and AI’s issues with diverse representation are two sides of the same coin: a reflection of an incomplete fraction of human experience. The same data that has allowed us to create AI that naturally processes language has also shaped the way it responds to us. Personally, I believe this is one of the most important realizations we have to make about this tool, especially considering the confidence with which AI responds to our requests.

Let me take a fast, potentially illegal left turn here. I'm no lawyer, but if I asked AI to draft some legal documents for me - let’s say I was driving recklessly around corners - the legal brief the AI presented would probably look very good to me. I’m not an expert on legal documents, court proceedings, or local laws about driving around corners, but give me a presentation or a slide deck and I will go into excruciating detail about your font sizes and colors. Hand over an AI-generated lesson plan to a seasoned teacher or a scrutinizing administrator, and they'll find plenty to tweak. This is where the concept of "Human in the Loop" comes into play. While we love the idea of AI as this all-knowing wizard, we must avoid falling into that trap. My teachers had it right when they told me not to trust at face value what I see on YouTube, Wikipedia, or the news. AI not only sounds human; it sounds and looks convincing.
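For readers who like to see the idea as a workflow, here is a minimal sketch of what "Human in the Loop" means in practice: the AI drafts, but a person reads, edits, and approves before anything gets used. The function names and the lesson-plan text are hypothetical placeholders, not a real product's API.

```python
# Toy "human in the loop" workflow: nothing the AI generates is used
# until a person has reviewed and explicitly approved it.

def ai_draft_lesson_plan(topic: str) -> str:
    """Stand-in for an AI-generated draft; in practice this would call an AI tool."""
    return (
        f"Draft lesson plan for '{topic}':\n"
        "1. Warm-up question\n"
        "2. Direct instruction\n"
        "3. Group activity"
    )

def human_review(draft: str) -> str:
    """The human checkpoint: read the draft, then approve it or revise it."""
    print(draft)
    decision = input("Approve this draft as-is? (y/n) ")
    if decision.lower().startswith("y"):
        return draft
    return input("Enter your revised version: ")

if __name__ == "__main__":
    approved_plan = human_review(ai_draft_lesson_plan("fractions"))
    print("\nFinal, human-approved plan:\n" + approved_plan)
```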

Now, imagine autocorrect just landed as a brand-new feature on your phone, promising to correct your spelling errors on the fly. The immediate reward and benefit are immense. All of a sudden, even the roughest message from that one friend in your group text appears legible. But let’s say I heard from an inside source that there is a small, barely noticeable chance that autocorrect might make a mistake. It's still in its early days, learning the ropes, which means it might occasionally throw in a wild card - a typo, a word that’s slightly out of place, maybe the word "sausage" inserted in the middle of your text. You and I might never notice something like that. And maybe you’d say you’d take the chance. Sure, it may lighten the mood in the aforementioned group text here and there, but I am willing to bet money that you would double-check every single text message you send to your boss from now on.

These types of AI growing pains and errors are much harder to spot in plain text. DALL-E’s lack of diverse representation and Gemini’s overcorrection of it are a lot easier to spot and don’t need an in-depth literary analysis. I don’t want to sound like a conspiracy theorist, but we need to sow some doubt about AI. It's a field that's burgeoning, brimming with potential, yet it's also in a phase where slip-ups and biases can sneak in, largely because it's learning from us - humans with our own set of imperfections.

AI presents its information so confidently and absolutely that it is easy to fall into the trap of not questioning its answers. We may not even realize which biases are present. The idea here isn't to label humans as "faulty," but rather to acknowledge that we, and by extension the AI that learns from our data, are works in progress, constantly evolving and refining. We need to adapt and foster that sense of critical thinking when approaching AI products.
