Whose Voice Is It? The Authenticity Problem Nobody Wants to Admit To

Someone I know sent a draft document to a colleague last week. It was genuinely their own thinking, their own ideas, structured with AI to save time. The response came back almost instantly: “Was this AI? 😊”

It wasn’t mean. It wasn’t accusatory. But the damage was done. The ideas were dismissed before they were even read, because the formatting triggered a suspicion that the content wasn’t real. The person behind the document had done the thinking. AI had done the tidying up. And the recipient couldn’t tell the difference.

That story should worry anyone who uses AI for communication. Because the question it raises is one that’s only going to get louder: if everything you write passes through an AI at some point, whose voice is it?

The Tells Are Everywhere

If you use AI regularly, you’ve probably started spotting the signs in other people’s writing. There’s a particular style that large language models default to, and once you’ve seen it, you can’t unsee it.

It’s the overuse of certain phrases. “Let’s dive right into that.” “In today’s rapidly evolving landscape.” “Here’s the thing.” It’s the em dashes that appear in every other sentence. It’s vocabulary that doesn’t match how the sender normally talks. It’s text that’s been over-structured, over-formatted, and over-polished to the point where all the natural rough edges of human expression have been sanded off.

The tells aren’t about grammar or spelling. They’re about register. When a message from someone who normally writes in short, punchy sentences suddenly arrives in flowing, structured paragraphs with transition phrases and subheadings, the people who know them notice. Not because they ran it through a detector. Because it just doesn’t sound right.

And the problem is getting worse. As more people use AI to draft their emails, their LinkedIn posts, their proposals, and their messages, everything starts to sound the same. A kind of bland, competent, corporate smoothness that could have been written by anyone or, more accurately, by no one in particular.

The Detection Paradox

Here’s where it gets properly interesting, and a bit personal.

I’ve spent over twenty years writing in tech. Forums, documentation, corporate communications, blog posts, the lot. The language I naturally use, the way I structure an argument, the rhythm of how I write, it overlaps significantly with the data that AI models were trained on. Because that’s the internet. That’s where the training data came from. Tech people writing about tech things in the way tech people have always written about them.

The result is a genuinely strange situation. AI detection tools will frequently flag my entirely human-written work as AI-generated. The more experience you have as a tech communicator, the more your authentic voice sounds like AI. Because AI learned to sound like you in the first place.

I find that quite gutting sometimes. I’ll write something that’s completely mine, every word chosen deliberately, and know that if someone ran it through a detector it would come back flagged. Not because I used AI. Because AI used twenty years of people like me to learn how to write.

The flip side, and there is one, is that when I do use AI to polish or structure something, it’s less noticeable. My natural style and the AI’s default style are close enough that the join doesn’t show. But that only works because I know my voice well enough to spot when AI is drifting away from it. For people who don’t have that awareness, the drift happens without them realising, and over time their writing starts sounding less and less like them.

But if you want to know just how broken these detection tools really are, I tested one recently and the result made me laugh out loud. I took the passage from Mary Shelley's Frankenstein that begins "It was on a dreary night of November…" and pasted it into Decopy AI, one of the more prominent AI detection tools on the market. Decopy claims an accuracy rate of up to 99% and markets itself as "your most trusted assistant" for spotting AI-generated content. It flagged Mary Shelley's prose, written in 1818, over two hundred years before large language models existed, as 100% AI-generated. Not probably. Not likely. One hundred percent certain.

I then took the same paragraph and asked Perplexity what it was. Perplexity correctly identified it as one of the most famous passages in English literature. I’ve got a video of the whole thing happening in real time.

That should tell you everything you need to know about the state of AI detection. These tools aren’t measuring whether AI wrote something. They’re measuring whether something sounds like AI could have written it, which is a completely different question. And it’s a question that gets less useful by the day, because AI was trained on human writing. The more polished, structured, or conventionally well-written something is, the more likely a detector will flag it, regardless of who actually wrote it. Mary Shelley didn’t have access to ChatGPT. She was just a very good writer. And the tool couldn’t tell the difference.
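To make that concrete, here is a toy sketch of one heuristic that detection tools are widely reported to lean on: "burstiness", the variation in sentence length. Uniform, evenly polished sentences score low and read as machine-like; varied rhythm scores high and reads as human. This is my own illustration, not any real detector's code, and the function name and examples are invented for the demo.

```python
import re
import statistics

def burstiness(text: str) -> float:
    """Standard deviation of sentence lengths, in words.

    A toy stand-in for the 'burstiness' heuristic: low variance
    (every sentence roughly the same size) is what simple detectors
    read as machine-like. Not a real detector, just the idea.
    """
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    lengths = [len(s.split()) for s in sentences]
    if len(lengths) < 2:
        return 0.0
    return statistics.stdev(lengths)

# Uniform rhythm: four words per sentence, every time.
uniform = "The cat sat down. The dog ran off. The sun came up. The rain fell hard."
# Varied rhythm: one word, then thirteen, then two.
varied = "Stop. The storm that had been building all week finally broke over the hills. Quiet again."

print(burstiness(uniform))  # 0.0: perfectly even, 'machine-like'
print(burstiness(varied))   # much higher: 'human-like' variation
```

The obvious flaw follows immediately: a disciplined human stylist who writes in even, measured sentences scores exactly like a machine, and a sloppy AI prompt that asks for varied rhythm scores like a human. The heuristic measures polish, not authorship.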

The Identity Risk

This is the bit that keeps me thinking.

If every email you send gets drafted by AI, if every LinkedIn post gets polished by AI, if every proposal gets structured by AI, at what point does your written identity stop being yours? At what point do your clients, your colleagues, your network stop hearing you and start hearing a machine that sounds approximately like you?

In relationship-based industries, and that’s most industries if we’re being honest, people buy from people. They trust the person, not the company. They respond to authenticity, to the feeling that the human on the other end actually wrote this and actually meant it. The moment that trust erodes, the moment someone reads your message and thinks “that’s not really them,” you’ve lost something that’s very hard to get back.

I see this playing out already. LinkedIn has become almost unreadable in places. Post after post of the same tone, the same structure, the same inspirational sign-off. You scroll through and nothing sticks because nothing feels real. The platform that was supposed to be about professional relationships has become a wall of AI-generated noise, and the people who still write like themselves stand out precisely because everyone else has stopped.

The Rage Bait Trap

The temptation, and I’ve seen people fall into it, is to overcorrect. If generic AI polish makes you invisible, the obvious move is to go loud. Be provocative. Be controversial. Post something that makes people angry because at least anger is engagement and engagement is visibility.

That’s a trap. Rage bait works in the short term, the same way a car crash gets attention, but it doesn’t build trust and it doesn’t build a brand that people want to be associated with. The answer isn’t to shout louder than the machines. It’s to sound more like yourself.

What Actually Works

I’ve spent a lot of time thinking about this, both for my own content and in the work I do with businesses at Techosaurus. And the answer, frustratingly, isn’t a quick fix. It’s a discipline.

The goal should be making AI less visible, not more present. Use it for the things it’s good at: getting a first draft down quickly, structuring an argument you’ve already thought through, catching things you’ve missed. But then go back through it and put yourself in. Add the rough edges. Use the words you’d actually use. Take out the phrases that sound like a press release and put in the ones that sound like you talking to someone you trust.

If you know someone well and you read something they’ve written, you can spot when a word isn’t in their vocabulary. You can feel when the rhythm is off. That’s not a flaw. That’s the whole point. Your voice, with all its imperfections and habits and quirks, is the thing that makes people trust you. It’s the thing that makes your communication yours.

AI should be the scaffolding, not the building. If you can still see the scaffolding when the work is done, you haven’t finished yet.

The Bigger Question

We’re heading into a world where AI is embedded in every communication tool we use. Email clients will suggest replies. LinkedIn will offer to rewrite your posts. Proposal software will generate entire documents from a brief. The friction between having a thought and publishing it is disappearing, and that should make us think carefully about what we’re losing in the process.

Because the friction was never just inefficiency. Some of it was thinking time. Some of it was the process of choosing your words carefully, of deciding what to say and how to say it, of putting enough of yourself into the message that the person on the other end knows it came from a human who gave a damn.

If we let AI take all of that over, we don’t just lose authenticity. We lose the practice of being authentic. And that’s a much harder thing to get back.

I don’t have a neat answer for this one. I don’t think anyone does yet. But I do think the people and the businesses that stay conscious of it, that deliberately protect their voice even as the tools make it easier to let go of it, will be the ones that stand out. Not because they rejected AI. Because they used it without losing themselves in the process.


Scott Quilter, FBCS | Co-Founder & Chief AI & Innovation Officer, Techosaurus LTD
