81,000 People Told an AI What They Actually Want. The Answer Isn't What You Think.

If you asked most people what they want from AI, you’d expect to hear “make me more productive” or “do my emails.” And you’d be partly right. But when Anthropic, the company behind Claude, actually sat down and asked 80,508 people across 159 countries and 70 languages what they really, truly want from AI, the answers went somewhere far more human than that.

The number one thing people wanted wasn’t faster spreadsheets. It wasn’t better code. It was time. Time with their families. Time to think. Time to stop running on the hamster wheel long enough to remember what they were running for in the first place.

That hit me harder than any spec sheet or model benchmark ever has.

What Anthropic Actually Did

In December 2025, Anthropic invited every Claude user to sit for a conversational interview with a specially built version of Claude called the Anthropic Interviewer. Not a multiple-choice survey. Not a feedback form. An actual adaptive conversation that asked people about their hopes, their fears, and their real experiences with AI. Over 80,000 people took them up on it. Anthropic says it’s the largest qualitative study of its kind ever conducted.

They’d trialled this approach back in late 2025 with a smaller study of 1,250 professionals, and the method clearly worked well enough to scale up massively. The idea is simple but powerful: use AI to interview people about AI, at a depth and scale that would have been impossible with human researchers alone.

The responses were then classified by AI to spot patterns, but the research team also pulled out individual quotes and reviewed everything for personally identifying information before publishing. What came back is one of the most honest snapshots I’ve seen of how real people feel about the technology that’s reshaping their lives.

Sources: Anthropic (18 Mar 2026), CNBC (20 Mar 2026), Euronews (20 Mar 2026)

Productivity Was the Starting Point, Not the Destination

Yes, 19% of respondents said their primary aspiration was “professional excellence,” making it the biggest single category. And yes, 32% said productivity was where AI had already delivered for them. But here’s the thing that matters: when the interviewer dug deeper and asked people what that productivity was actually for, the answers shifted. It wasn’t about doing more work. It was about doing less of the wrong work so they could do more of the right living.

One software engineer in Japan said AI let them leave work on time and pick up their daughter from nursery. A white-collar worker in Colombia said AI helped them finish early enough to cook with their mother. These aren’t people chasing KPIs. They’re people trying to get their lives back.

Fourteen percent of people said what they wanted most was “personal transformation,” things like better mental health, learning, and self-improvement. Another 14% wanted help managing the relentless logistics of modern life. Eleven percent just wanted more time. Not to do more, but to breathe.

When I talk to businesses at Techosaurus about AI, I always try to steer the conversation away from “how do we get more output?” and towards “how do we give people the headspace to do their best work?” This study backs that up with 80,000 voices.

The Light and the Shade

The finding that really stayed with me was what Anthropic calls the “light and shade” of AI. They identified five tensions that kept coming up, not between different groups of people, but within the same person.

People who valued AI for learning were three times more likely to also worry about cognitive atrophy, the fear that they’re losing the ability to think for themselves. People who found emotional support in AI were simultaneously afraid of becoming dependent on it. People who saved time on some tasks watched the treadmill speed up on others. People who dreamed of economic freedom also dreaded economic displacement.

This isn’t an argument between optimists and pessimists. It’s the same person, holding both things at once. A lawyer in Israel put it perfectly: they use AI to review contracts and save time, and at the same time they fear they’re losing the ability to read properly. That’s not contradiction. That’s honesty.

And here’s the part that educators should really pay attention to: teachers and academics were two and a half to three times more likely than average to report witnessing cognitive atrophy firsthand, presumably in their students. Meanwhile, tradespeople, who were learning voluntarily rather than inside institutional structures, reported strong learning benefits with almost no signs of atrophy. That tells you something important about how AI works when it’s chosen versus when it’s mandated.

The Number One Fear Isn’t Job Losses

You’d think the top concern would be “AI is going to take my job.” It’s not. At 26.7%, the most common fear was unreliability. Hallucinations, confident mistakes, outputs you can’t trust. Job displacement came second at 22.3%, followed by concerns about losing autonomy and agency at 21.9%.

That ranking matters. People aren’t primarily scared of being replaced. They’re frustrated that the tool they’re relying on still gets things wrong in ways that are hard to spot. One respondent described getting caught in what they called a “large, slow hallucination”: answers that were internally consistent, confident, and wrong in subtle but compounding ways. Anyone who’s used AI seriously will recognise that feeling.

I talk about this constantly when I’m training people. AI will give you advice the way a confident mate gives you advice. It sounds sure of itself. It makes you feel heard. But that doesn’t mean it’s right. Knowing when to trust and when to verify is the skill that separates good AI users from everyone else.

The Global Divide Is Real, and It’s Not What You’d Expect

Across the board, 67% of participants expressed net positive sentiment about AI. But the split between regions was telling. People in Sub-Saharan Africa, South Asia, and Latin America were consistently more optimistic than those in Western Europe or North America. In some African and Central Asian countries, nearly one in five respondents said they had no concerns about AI at all, roughly double the rate in Europe or the US.

The explanation isn’t complicated. In regions where access to education, healthcare, and economic opportunity is limited, AI looks like a ladder. People described it as a way to start businesses without funding, learn without expensive tutors, and access expertise that would otherwise be locked behind geography and wealth. An entrepreneur in Uganda said the only way they could stake a claim in the market was by building technology that works, because getting funding from Africa is nearly impossible. A stay-at-home mother in the US, in her late 40s, said AI gave her access to knowledge she thought was permanently out of reach.

In wealthier countries, the picture is different. People already have access. What they don’t have is time, headspace, and control. Their concerns lean more towards governance, surveillance, privacy, and the fear that the economic ground is shifting beneath them.

This is something I think about a lot. The AI conversation in the UK tends to centre on risk and regulation. And that’s important, genuinely. But it’s worth remembering that for a huge portion of the world, AI isn’t a threat to be managed. It’s the first real opportunity some people have ever had.

What This Means For the Rest of Us

There are a few things I take from this study that I think matter for anyone using AI, whether you’re running a business, teaching a class, or just trying to figure out what this technology means for your life.

First, the people getting the most out of AI aren’t the ones using it the most. They’re the ones who’ve figured out when to use it and when not to. That clarity matters more than any prompt template.

Second, the tension between benefit and risk isn’t a problem to solve. It’s the reality of using a powerful tool. If someone tells you AI is all upside, they’re selling you something. If someone tells you it’s all risk, they haven’t tried it properly. The honest position, the one 80,000 people described in their own words, is somewhere in the middle. And that’s fine.

Third, the study makes a strong case for something I’ve been saying for a while: AI is a communication and delegation skill, not a technical one. The people in this study who got the most value weren’t developers or engineers. They were freelancers, small business owners, tradespeople, stay-at-home parents. Curious people who saw an opportunity and took it. The future really does belong to the curious.

And finally, the methodology itself is worth noting. Anthropic used AI to conduct 80,000 qualitative interviews in 70 languages. That’s a form of research that simply didn’t exist two years ago. Whatever you think about the findings, the fact that a study this rich and this global is now possible should tell you something about where we’re headed. The tools aren’t just changing what we can do. They’re changing what we can understand about ourselves.

Want to see the results for yourself? Go and have a look at the amazing site they’ve made to showcase their work: Anthropic 81,000 Interviews


Scott Quilter | Co-Founder & Chief AI & Innovation Officer, Techosaurus LTD
