Why Do I Feel Weird Talking to My Computer?

I have a confession. When I’m alone in my office, I talk to AI constantly. I dictate emails because I can speak faster than I type and the results come back polished. I ask questions out loud while I’m thinking through a problem. I narrate what I need and let it do the work while I get on with something else. In those moments, I am genuinely more productive than I have ever been in my working life.

Then my wife walks in. And I’m instantly back on my keyboard.

I don’t know why. I’m not doing anything I’m ashamed of. The work I’m producing is exactly the same work I’d be proud to show her. And yet something switches the moment another person enters the room, and the voice goes off, and I become a very normal person typing at a computer like it’s 2015.

I mentioned this on the latest episode of Prompt Fiction and Reece immediately said he had exactly the same thing. I’ve done a little informal survey since then, and I haven’t found a single person who disagrees. I think that’s worth talking about properly, because the AI industry is heading towards a future built on voice, and if we can’t get comfortable using our voices when other people are around, that future is going to have a very awkward arrival.

It’s Not Just You

Let me be clear that this isn’t a tech literacy problem. I’ve spoken to people who use AI every single day, who are deeply comfortable with it, who have genuinely integrated it into how they work. And almost all of them have this same invisible line between alone and not alone.

Reece had an interesting take on it. He grew up in a house where if someone was on the phone, you left the room. Not because the call was private, but because it was the polite thing to do. There’s something embedded in a lot of us, particularly those of us raised in Britain, about the social contract around voice. Voice is for communicating with people. Using voice to communicate with something that isn’t a person, in front of people, feels like a category error.

There’s also the self-consciousness of being heard asking questions. When you type a question, it’s private. When you ask it out loud, you’ve just broadcast your uncertainty to everyone within earshot. “Hey, can you remind me what the capital of Azerbaijan is?” is a completely normal thing to need to know. It’s not a completely normal thing to want to announce to a room.

The CarPlay Problem

There’s a version of this that I find almost funny because it’s such a perfect illustration of the issue. CarPlay. Most people who use it regularly have had the same experience: you’re driving with a car full of people, the phone is connected, and suddenly it announces that you have a new message from someone, and it’s about to read it out loud. And you’re frantically reaching for the button, saying no no no no no, because you haven’t carefully curated your text message vocabulary for public consumption.

The privacy screen on recent Samsung phones is the visual equivalent of this. It applies a pixel filter so that only the person looking directly at the screen can see it; move a few degrees to the side and it goes blank. The fact that we have invented that technology tells you everything about how we actually feel about sharing our digital lives with the people around us.

So on one hand we’re building AI assistants that are meant to be invoked by voice, that are supposed to be woven into the fabric of daily life. And on the other hand we’re building screens that blank out the moment anyone looks at them. Those two trajectories don’t naturally converge.

The Open Plan Office Problem

The workplace version of this is going to become a genuine challenge. Not in some distant future. Now, or very soon.

OpenAI are reportedly working on a voice-first device. Apple are rumoured to be exploring one. The direction of travel from every major AI company is clear: less typing, more talking. And if you’ve ever sat in an open plan office, you can already see the problem. You’ve got noise-cancelling headphones in because the office is distracting, the person next to you has noise-cancelling headphones in for the same reason, and now you’re both supposed to be having spoken conversations with your AI, broadcast into the shared space, in the same room where you can’t even hear each other think.

The only scenario where voice-first AI works smoothly in an office is the scenario where everyone has noise-cancelling headphones on and nobody can hear anyone else. Which rather defeats the point of being in an office.

I’m not saying it won’t happen. I’m saying the social norms haven’t caught up yet. When mobile phones first arrived, people had full-volume personal conversations in public spaces and it drove everyone around them mad. Over time, social norms shifted. People learnt to step out. They learnt to drop their voice. We built phone booths back into modern offices for exactly that reason. The same process will happen with voice AI. It just takes time and it takes some awkward transitional moments.

Why It Actually Matters

This isn’t just an interesting social observation. There’s a real productivity cost buried in it.

I know for a fact that I get significantly more done when I’m using my voice than when I’m typing. The rate at which I can get ideas out of my head and into a form that AI can work with is completely different when I’m speaking. But I only have access to that productivity boost when I’m alone. The moment I’m around other people, I revert to the slower method because the social cost of talking out loud feels too high.

Multiply that across an organisation. Across a workforce. Across the millions of people who are currently using AI in a typing-only way, when a significant portion of the tasks they’re using it for would be faster and more natural if done by voice. That’s a lot of unrealised potential sitting behind a social awkwardness that nobody wants to admit to.

Reece made the point that it’s getting a bit easier in public because so many people have AirPods in now. If everyone’s wearing headphones, nobody’s really sure whether you’re talking to a person or a machine, so the self-consciousness decreases slightly. That might be where the bridge is, at least for the near term. Not a cultural shift, just a technological fig leaf that gives people enough cover to start talking.

What We Actually Do About It

I don’t think there’s a clean answer here. You can’t force social norms to shift faster than they want to. But you can be conscious of the constraint and work around it. And you can experiment.

If you’ve never used voice with your AI because it feels strange, try it when you’re genuinely alone. Drive to work and speak your emails rather than saving them for when you get to your desk. Go for a walk and use that time to dictate notes, draft blog posts, think out loud into a tool that can hold the thread for you. Get comfortable with how it works when the social friction is removed. Then, when you’re in an environment where you could use it around others, you’ll at least know what you’re giving up by not doing it.

The discomfort is real. It’s not going away quickly. But it’s worth knowing that it’s the discomfort doing the work, not the tool.


I got into this topic on the latest episode of Prompt Fiction, after Reece and I realised we had exactly the same relationship with voice AI. Listen to Chapter 12, Part 1 here.

Scott Quilter | Co-Founder & Chief AI & Innovation Officer, Techosaurus LTD
