Creative Bloq
Alfredo Deambrosi

AI has a stereotyping problem


Type 'board meeting' into even the best AI image generators and chances are high the results will be rows of white men in suits. While headlines about AI often centre on hype or alarm, bias is already a very real and very present problem in visual media. And for brands, this isn’t just about fairness. It’s about trust, conversions, and whether your audience sees themselves in your story.

There’s no denying the upside: AI promises faster content creation, scalable campaigns, and limitless creative experimentation. But that promise comes with a catch: these systems learn from past imagery, which means they don’t just reflect history; they replicate its stereotypes, sometimes on repeat.

To make matters worse, images are more memorable than text. People might scroll past a biased sentence, but a biased image sticks. That makes misrepresentation in visuals even more powerful, and more dangerous.

(Image credit: Generated by Adobe Firefly)

Counter AI's implicit bias

Representation in AI images isn’t just about aesthetics – it’s about who gets included in the story. AI systems, like humans, can internalise implicit biases from their training data. If a model learns from biased language or imagery, it may unknowingly generate prejudiced or stereotypical outputs.

These biases can surface in many forms, from who appears in professional roles to how different groups are depicted.

Experts at the IAPP also note that even when an AI's inputs are good quality, training AI is a continual process, so auditing an AI's outputs can reveal whether the model needs to be updated or corrected.

By addressing both explicit and implicit biases, we can foster AI systems that promote inclusivity and fairness. To mitigate implicit bias in AI, it is necessary to:

  • Diversify training datasets to include balanced representation from various groups.
  • Implement bias detection techniques, such as fairness audits and adversarial testing.
  • Encourage transparency in AI decision-making to help users understand potential biases.
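The fairness-audit idea above can be sketched in a few lines of Python. This is a minimal, hypothetical example – the group labels, target shares, and the annotation step that produces the labels are all assumptions for illustration, not part of any specific tool: given demographic labels attached to a batch of generated images, it measures how far each group's observed share drifts from a target distribution.

```python
from collections import Counter

def representation_gap(observed_labels, target_shares):
    """Compare observed group shares in a batch of annotated
    AI-generated images against target shares.

    Returns {group: observed_share - target_share}; a positive value
    means over-representation, a negative value under-representation.
    """
    counts = Counter(observed_labels)
    total = len(observed_labels)
    return {
        group: counts.get(group, 0) / total - target
        for group, target in target_shares.items()
    }

# Hypothetical audit of 10 'board meeting' images, labelled by a
# human annotator or a separate classifier (an assumption here).
labels = ["man"] * 8 + ["woman"] * 2
gap = representation_gap(labels, {"man": 0.5, "woman": 0.5})
# gap["man"] is +0.3: men are over-represented by 30 points.
```

Run regularly, a check like this turns "our outputs feel skewed" into a number a team can track over time.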

These aren’t just technical quirks – these visual patterns shape how people see themselves and others. And when AI paints the world with more bias than reality, the consequences spill over into hiring decisions, media narratives, and even self-perception.

It's a credibility problem

For marketers and brand leaders, bias in visuals isn’t some abstract ethical debate. It hits at the heart of brand performance:

  • Trust erosion: If your campaign visuals reinforce stereotypes, your brand risks being perceived as out of touch – or worse, exclusionary.
  • Customer connection: If audiences don’t see themselves in your imagery, they’re less likely to engage. Representation is relevance.
  • Regulatory risk: From the EU’s AI Act to US equal employment laws, new rules are emerging that hold organisations accountable for biased outputs.

Reframed for marketing leaders: bias is not only a social problem, but also a conversion and credibility problem.

Practical steps to address AI visual bias

Tackling bias doesn’t mean swearing off AI altogether. It means putting in guardrails:

  • Audit your AI pipeline: Know where generative tools are used and check outputs for skewed patterns.
  • Add human oversight: Don’t leave critical campaign visuals on autopilot. Human review is especially vital for high-visibility content.
  • Use editing tools wisely: Adjusting elements like cropping, masking, or backgrounds can help re-balance representation without distorting reality.
  • Stay informed: This space is moving fast. Staying ahead of the conversation means your team will be prepared to pivot when ethical issues arise.
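The audit and human-oversight steps above can be wired together as a simple guardrail. A minimal sketch, assuming hypothetical group labels and an arbitrary drift tolerance: batches whose representation drifts past the tolerance are routed to a human review queue rather than shipping on autopilot.

```python
from collections import Counter

def audit_batch(prompt, labels, target_shares, tolerance=0.15):
    """Flag a batch of generated images for human review when any
    group's observed share drifts past the tolerance.

    `labels` are per-image demographic annotations, assumed to come
    from a separate human or classifier pass.
    """
    counts = Counter(labels)
    total = len(labels)
    skewed = [
        group for group, share in target_shares.items()
        if abs(counts.get(group, 0) / total - share) > tolerance
    ]
    return {"prompt": prompt, "skewed_groups": skewed,
            "needs_review": bool(skewed)}

# A heavily skewed batch gets routed to human review...
report = audit_batch("board meeting", ["man"] * 9 + ["woman"],
                     {"man": 0.5, "woman": 0.5})
# ...while a balanced one passes without manual sign-off.
balanced = audit_batch("board meeting", ["man"] * 5 + ["woman"] * 5,
                       {"man": 0.5, "woman": 0.5})
```

The design choice is deliberately conservative: the check never auto-corrects anything, it only decides when a human must look, which keeps people in the loop for exactly the high-visibility cases the article describes.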

Inclusive visuals are better visuals

(Image credit: Getty Images)

Bias in AI-generated imagery isn’t a future problem – it’s already shaping how people experience brands today. Companies that confront this issue head-on have a chance to stand out with visuals that are more inclusive, more accurate, and ultimately, more powerful in connecting with audiences. By identifying the sources of bias and writing inclusive prompts, we can work towards AI systems that are fairer and more just.
