Shari Dunn
Qualified at the Intersection

“AI” Isn’t the Only One Hallucinating. We Are.

From iPhones to “AI,” Tech’s Greatest Trick Is Making Us Believe It’s Smarter Than It Is

Good morning, and welcome to Qualified at the Intersection. Today, I’m diving into something that came to me at an event where someone casually joked, “I don’t know what Android users are going to do,” while giving iPhone users instructions to scan a QR code. The room laughed.

But beneath that joke is an iceberg: one of expert camouflage, social conditioning, and illusion.

We’ve been sold the story that certain tools signify intelligence, access, or success. But as we’ll explore today, neither iPhones nor “AI” are what they seem. They are mirrors, built to reflect the systems that created them: white supremacy, classism, racialized eugenics, and deeply entrenched bias. So, let’s pull back the curtain.

iPhones, Cameras, and the Color of Access

Let’s start with the device in your hand. Mine is an Android, but statistically speaking, in the United States, yours is likely an iPhone. Just a note: Android accounts for about 71% of smartphone sales worldwide, while iPhones (iOS) account for around 28%.

The divide between iPhone and Android isn’t just about software. It’s about race, class, and power.1

iPhones were marketed from the beginning as elite. They were targeted at wealthier, whiter, Western consumers and those who aspired to join their ranks. Androids, which, as I said above, dominate globally and among the global racial majority, are coded in the U.S. as poor, unsophisticated, even “ghetto.”2

A Business Insider article framed Android as a product for "lower-end" markets while positioning iPhones as occupying the "premium" tier. The subtext is racial and economic: If you use Android, you must lack taste, class, and money. If you use an iPhone, you belong to a superior caste.3

This branding is not incidental; it is a tool of social control and commercial hierarchy. It’s enforced through media bias, peer pressure, and outright erasure. That’s why, at the event, the emcee dismissed Android users entirely. The implication was clear: Android users are something you don’t want to be associated with.

Even beyond price and platform, there’s the camera. For years, iPhone cameras failed to accurately photograph darker skin. That flaw is rooted in the racist history of photo technology, specifically the Shirley card, used to calibrate photos for white skin tones. Kodak didn’t revise its calibration to better capture Black and Brown people. It changed only when customers complained about poor imaging of chocolate and woodgrain.4

Even now, Apple lags behind Android models like Google Pixel, which created the Real Tone initiative to prioritize accurate rendering of diverse skin tones. My own Samsung Galaxy captures richer, truer tones than most iPhones. But somehow, we still believe iPhones are "better."5

We should ask: Better for whom?

This is how racial capitalism works. The iPhone isn’t just a phone, it’s a caste marker. A signifier of social superiority. An object that helps perpetuate myths about worth, beauty, class, and race.

“AI” Isn’t Intelligent. It’s Just Repetition.

Now let’s shift to “AI,” which I will continue to put in quotes because it is neither artificial nor intelligent.6

“AI” can, does, and will create real harm, just not in the ways people are being told.

Numerous studies have now shown how so-called artificial intelligence systems systematically disadvantage marginalized groups. The data that trains these systems is riddled with societal bias: the racism of the internet, the sexism of corporate evaluations, the surveillance practices that target Black and Brown communities. And machine learning doesn’t challenge those patterns; it amplifies them.

For example, facial recognition tools misidentify Black faces at rates up to 100 times higher than white faces. Predictive policing systems send more patrols into communities of color, not because of crime, but because of patterns in historic arrest data. Hiring algorithms have penalized applicants with names that “sound” African-American or Asian. Language models consistently mirror misogynistic and racist tropes.

These aren’t accidents. They’re systemic failures built on systemic prejudice. Discrimination isn’t a glitch; it’s a system feature because “AI” learns from us.

As linguist Emily M. Bender and sociologist Alex Hanna explain in The AI Con, these systems are simply massive machine-learning models run at scale. They predict patterns based on enormous human-made datasets, which means they replicate bias, repeat discrimination, and, most importantly, merely remix misinformation. They are not intelligent on their own, nor are they headed toward sentience; that is the con.7

They don’t think. They guess. And guess what? They hallucinate.
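To make that concrete, here is a toy sketch of what “predicting patterns” means. It is not how ChatGPT or Gemini is actually built; it is a deliberately tiny stand-in, a bigram model that learns only which word tends to follow which in its training text and then chains plausible-looking words together with no notion of truth.

```python
import random
from collections import defaultdict

# A deliberately tiny "language model": it learns nothing except which word
# tends to follow which word in its training text. (Toy illustration only;
# real systems are vastly larger, but the core move of predicting the next
# word from past text is the same.)
corpus = (
    "the ceo said the product is revolutionary . "
    "the product is intelligent . "
    "the model is revolutionary and intelligent . "
    "the model said the ceo is a product ."
).split()

# Count which words follow which.
following = defaultdict(list)
for current_word, next_word in zip(corpus, corpus[1:]):
    following[current_word].append(next_word)

def generate(start, length=12, seed=0):
    """Chain together statistically plausible next words, one at a time."""
    random.seed(seed)
    words = [start]
    for _ in range(length):
        options = following.get(words[-1])
        if not options:
            break
        words.append(random.choice(options))
    return " ".join(words)

# The output reads as fluent-ish English assembled purely from co-occurrence
# statistics; nothing anywhere checks whether the sentence is true.
print(generate("the"))
```

Scale that idea up by billions of parameters and trillions of words and the output gets far smoother, but the underlying move is the same: guess the next word, never check the world.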

As the New York Times reported, models like ChatGPT and Gemini regularly fabricate facts, create imaginary citations, and confidently present falsehoods.8 I've used these tools extensively, and they are getting worse. This isn’t brilliance. It’s badly designed autofill.

And yet, when a machine does it, we call it intelligence. If a human hallucinated this much, we’d worry about brain injury. But because it comes wrapped in silicon and TED talks, we bow to it.

Money, Eugenics, Race, and the Idea of Intelligence

We must dig deeper: The very concept of intelligence in Western science has been racialized for centuries.

From IQ tests built to validate eugenics theories, to standardized testing that reinforces white-coded knowledge, to early computing models that assumed logic and neutrality were the domain of white, male rationality, the idea that intelligence can be defined, measured, and replicated in a machine was never neutral.9

“AI” isn’t drawing from an unbiased pool of knowledge. It’s pulling from a long legacy of white supremacy, colonization, and patriarchal dominance. In fact, early machine learning tools drew explicitly on eugenic models of ideal intelligence, models that classified Black, Brown, disabled, and neurodiverse people as inferior.10

So no, “AI” isn’t just reflecting bias. It’s reinforcing a violent historical architecture that equates whiteness with genius, and everyone else with error.

Scholars such as Joy Buolamwini, Inioluwa Deborah Raji, and Abeba Birhane have detailed how image generators default to white faces, hiring algorithms penalize non-Western names, and predictive policing expands digital redlining. These outcomes aren’t bugs; they are features.

And behind the curtain? White billionaires who need you to believe that the machines they built are smarter than you, because they have a lot of money on the line.

Pattern Recognition

So let’s put this together:

  • iPhone sells exclusivity and calls it excellence.

  • “AI” sells repetition and calls it genius.

  • Both claim neutrality while reinforcing old hierarchies.

  • Both erase, misrepresent, and undervalue people of color.

  • Both depend on our collective hallucination that what is expensive, shiny, and exclusive must be better.

But we don’t need “AI” that hallucinates. We need humans who can tell the truth from fiction.
We don’t need phones that erase our faces or mock our access. We need tools that see us clearly.

Let’s stop mistaking branding for brilliance, let’s stop letting tech define who is worthy, and let’s stop hallucinating progress when we’re just automating the past.

Because “AI” isn’t the only one hallucinating.

We are, too.

1. file:///C:/Users/livel/Downloads/marlusagosling,+6558-en.pdf
2. https://www.pewresearch.org/short-reads/2015/04/30/racial-and-ethnic-differences-in-how-people-use-mobile-technology/
3. https://www.businessinsider.com/android-iphone-market-share-by-price-2014-8
4. https://nofilmschool.com/smartphone-cameras-and-capturing-darker-skin
5. https://abc7news.com/google-pixel-6-camera-real-tone-technology-compared-to-iphone-samsung-galaxy/11213036/
6. https://neontri.com/blog/android-iphone-statistics-report/
7. https://voicesofvr.com/1563-deconstructing-ai-hype-with-the-ai-con-authors-emily-m-bender-and-alex-hanna/
8. https://www.nytimes.com/2025/05/05/technology/ai-hallucinations-chatgpt-google.html
9. https://www.publicbooks.org/eugenics-powers-iq-and-ai/
10. https://www.hbs.edu/faculty/Pages/item.aspx?num=62442
