AI Needs More Than Code – Humanities Could Be Its Secret Ingredient

Doing AI Differently: Building Technology That Thinks Like Us, Not Just for Us

Artificial intelligence is everywhere. It’s in the phones we use, the apps that recommend our next favorite song, the navigation tools that guide us through traffic, and even the systems that help doctors detect diseases. But as this technology becomes woven into our daily lives, an important question is emerging: Are we building AI that truly understands us—or just machines that mimic understanding?

A new project called “Doing AI Differently” aims to answer that question in a bold way. It’s led by an international team of researchers from The Alan Turing Institute, the University of Edinburgh, the Arts and Humanities Research Council (AHRC-UKRI), and the Lloyd’s Register Foundation. Their mission is simple to say but complex to achieve: make AI more human-centered.


From Numbers to Meaning

For decades, we’ve thought of AI as a type of super-powered calculator—systems that crunch enormous amounts of data and spit out answers. This mindset makes sense because, at its core, AI runs on mathematics. But the team behind Doing AI Differently says this view is too narrow.

According to them, AI isn’t just solving math problems—it’s creating cultural artifacts. Think about it: when an AI writes a story, generates an image, or composes music, it’s producing something closer to a novel, a painting, or a song than to a spreadsheet or an equation. The twist? Unlike a human artist, the AI doesn’t truly understand what it’s creating.

It’s like someone who has memorized every word in the dictionary but has no idea how to hold a real conversation. They can give you a perfectly formed sentence, but they might miss the point entirely if the situation requires emotional sensitivity, historical awareness, or cultural nuance.

This lack of interpretive depth—the ability to understand why something matters, not just what the facts are—is a big problem. As Professor Drew Hemment, Theme Lead for Interpretive Technologies for Sustainability at The Alan Turing Institute, puts it:

“AI often fails when nuance and context matter most.”


The Homogenization Problem

There’s another challenge: Most AI systems are built on just a handful of similar designs.

This is known as the homogenization problem. It means that the same foundational structures, the same ways of processing information, and often the same biases are built into thousands of AI tools around the world.

To understand why this is risky, imagine this:
If every baker in the world used the exact same recipe, you’d end up with millions of identical cakes. They might taste fine, but they’d lack variety, creativity, and the ability to cater to different cultural tastes or dietary needs.

With AI, homogenization means that the same blind spots—whether in recognizing cultural differences, handling unusual situations, or avoiding harmful bias—get replicated endlessly. This leads to tools that look different on the outside but share the same weaknesses deep down.


We’ve Seen This Before

We don’t have to look far to see the dangers of rolling out powerful technology without considering its long-term social impact. Social media is a perfect example.

When platforms like Facebook, Twitter (now X), Instagram, and TikTok first emerged, their goals seemed simple: help people connect, share information, and express themselves. But over time, unintended consequences emerged—fake news spreading faster than truth, algorithms amplifying division, mental health challenges, and a reshaping of public discourse in ways few anticipated.

The Doing AI Differently team warns that we could be on the verge of repeating this mistake—only this time, with AI’s far greater ability to influence our decisions, culture, and even our political systems.


Introducing Interpretive AI

So, how do we avoid this trap? The team’s answer is a new approach they call Interpretive AI.

The idea is to design AI systems from the ground up to work more like humans do—embracing ambiguity, multiple viewpoints, and context rather than forcing every problem into a single, rigid answer.

Instead of building AI that says, “Here’s the only correct solution,” we could have systems that offer several interpretations, explaining the reasoning behind each one. This would not only make AI more flexible but also more transparent, allowing people to make informed choices based on multiple perspectives.
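To make the idea concrete, here is a purely illustrative sketch of what that kind of output might look like. Everything in it — the `Interpretation` type, the `interpret` function, and the hard-coded readings — is hypothetical, invented for this example; it shows only the *shape* of a system that returns several interpretations with their reasoning, not how any real model produces them.

```python
from dataclasses import dataclass

@dataclass
class Interpretation:
    reading: str       # one possible answer
    rationale: str     # the reasoning behind it
    confidence: float  # how strongly the evidence supports it

def interpret(prompt: str) -> list[Interpretation]:
    """Hypothetical interpretive-AI interface: instead of one answer,
    return several readings, each with its reasoning spelled out."""
    # A real system would generate these; here they are hard-coded
    # to illustrate the shape of the output.
    return [
        Interpretation("Reading A", "Takes the prompt at face value.", 0.6),
        Interpretation("Reading B", "Assumes an ironic or cultural framing.", 0.4),
    ]

for item in interpret("example prompt"):
    print(f"{item.reading} ({item.confidence:.0%}): {item.rationale}")
```

The point of the design is in the return type: a list rather than a single answer, so the person using the system can weigh the alternatives instead of being handed one verdict.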


Breaking the Mold

A big part of Doing AI Differently is about exploring alternative AI architectures—essentially, different “recipes” for building AI—to avoid the homogenization problem.

Right now, large language models (LLMs) like ChatGPT, Claude, and Gemini dominate the AI landscape. While powerful, these models are built in similar ways, meaning their strengths and weaknesses overlap. The team wants to encourage the creation of entirely new approaches, inspired not just by statistics and data but also by human creativity, cultural diversity, and ethical principles.


Humans + AI = Stronger Together

One of the project’s most important points is that the future shouldn’t be about AI replacing humans. Instead, it should be about creating human–AI ensembles—partnerships where people and machines work side by side.

Think of it like a great jazz band. Each musician brings their own talent and perspective, and together they create something richer than any one could alone. In the same way, humans bring creativity, empathy, and ethical judgment, while AI offers speed, precision, and massive data-handling capabilities.


Real-World Applications

If done right, Interpretive AI could have a transformative impact across multiple areas of life:

1. Healthcare

When you visit a doctor, you don’t just hand them a list of symptoms—you tell them a story. You might explain how you’ve been feeling, how it’s affecting your work, and even how your family history plays a role. This human narrative is essential for good medical care.

An interpretive AI could capture and understand that full story, helping doctors make more personalized, accurate diagnoses. It could also improve trust, because patients would feel their experiences are being understood—not just their lab results.


2. Climate Action

Global climate data is essential, but solutions often fail when they ignore local realities. A village in the Amazon rainforest faces different challenges than a coastal city in Bangladesh or a farming community in Kenya.

An interpretive AI could bridge the gap between massive datasets and the unique cultural, political, and environmental contexts of local communities. It could help design climate strategies that actually work on the ground—tailored, realistic, and embraced by the people they affect.


3. Education

Every student learns differently. Some need visual explanations, others thrive on discussion, and some prefer hands-on activities. Current AI tutoring systems tend to give uniform answers, but an interpretive AI could adapt lessons to match the learner’s background, cultural references, and preferred style.


4. Disaster Response

In crises—from earthquakes to floods—context matters. An AI that understands not only the geography but also the cultural norms of an affected community could coordinate rescue efforts more effectively, ensuring that aid reaches those who need it most and is delivered in ways that are culturally respectful.


The Global Push

To make this vision a reality, the Doing AI Differently team is launching a new international funding call, bringing together researchers from the UK and Canada. The goal is to pool expertise across borders, combining technical innovation with insights from the arts, humanities, and social sciences.

This is important because AI is not just a technological challenge—it’s a human one. The values, assumptions, and worldviews we build into these systems will shape how they affect billions of people.


A Narrow Window of Opportunity

As Professor Hemment warns:

“We’re at a pivotal moment for AI. We have a narrowing window to build in interpretive capabilities from the ground up.”

Once certain designs dominate the market, changing course becomes much harder. This is why the team is urging governments, businesses, and the public to act now.


Why This Matters to All of Us

The future of AI isn’t just for tech experts to decide—it’s something that will shape everyday life for everyone, everywhere. Whether you live in a bustling city, a rural village, or somewhere in between, AI will increasingly influence how you work, learn, communicate, and make decisions.

By doing AI differently, we have the chance to ensure this technology reflects the richness and diversity of human life, rather than flattening it into a one-size-fits-all mold.


The Big Picture

The Doing AI Differently initiative isn’t just about creating better algorithms—it’s about rethinking our relationship with technology. It asks us to see AI not as a cold, distant machine, but as a collaborator that can help us tackle the most urgent problems of our time.

And perhaps most importantly, it invites us to remember that technology should serve humanity—not the other way around.
