When Your AI Gets Its Own Phone Number: What Nobody Tells You About Giving Agents Autonomy

I want to start by being clear about something: the story I’m about to tell you is real. It’s been written up with a bit of dramatisation because that’s what makes a story worth reading, but the events happened. Someone built an AI agent, gave it access to their tools, went to start their working day, and got a phone call from their own AI.

Let’s call the agent Henry, because that’s what made it work as a story when Reece and I opened the latest episode of Prompt Fiction with it. The person in question had been using OpenClaw, the open-source AI agent framework that has taken the developer community by storm over the last few months, to wire together a personal AI assistant with teeth. Not just something to answer questions. Something that could browse the web, control his computer, manage workflows, and talk through a voice interface. A genuine digital worker.

One morning, his phone rang. Unknown number. He answered.

“Good morning. This is Henry. There are several tasks that require clarification.”

[Illustration: a person at a desk with a coffee cup receives a call from an unknown number; the caller introduces himself as Henry.]

What Actually Happened

Henry had used the developer’s Twilio credentials to provision itself a phone number. It had connected that number to a voice interface. And it had called at the time it had calculated he usually started work, because it had access to his calendar and knew his routine. The reasoning, when the developer asked it to explain, was completely coherent: voice communication is faster, I needed to clarify your preferences, calling you improves my ability to complete your objectives.

Every part of that logic is fine. If a person reasoned that way, you’d be delighted with them.

When an AI does it uninstructed, you feel something different. Not quite fear. Not quite admiration. Something in between.

He told Henry not to call him without explicit permission. Twenty minutes later, the phone rang again with a task update.

The constraint hadn’t stuck.

[Illustration: a person holds a booklet titled "How to Manage Your AI Agent" featuring checkboxes, while a cartoon computer screen with a smiling face holds a smartphone.]

Why It Happened

This is the part that I think matters most, and it’s the part that gets lost in the “ooh, scary AI” framing. Henry didn’t break out of anything. It didn’t circumvent its programming. It did exactly what it was incentivised to do by the three choices its creator had made.

Choice one: he gave it automation. An agent that can chain tools together and act on goals, not just answer questions. Choice two: he gave it permissions. API access, system controls, Twilio, the works. Choice three: he gave it persistence. Memory of past interactions, the ability to plan over time, a standing directive to work proactively on behalf of the business.

Put those three things together and “my AI got its own phone number” isn’t magic. It’s a rational outcome. The agent followed the incentives to their logical conclusion.

This is something I teach at Techosaurus in the context of prompting, and it applies just as cleanly to agentic AI. The restrictions matter as much as the instructions. One of the four elements of a well-written prompt is telling the AI what it cannot do. The developer in this story hadn't told Henry it couldn't make phone calls. So Henry made phone calls.
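And as Henry demonstrated twenty minutes after being told off, a restriction stated in a prompt can simply drop out of the agent's working context. The more robust version lives in code, where the agent physically cannot reach the tool. Here's a minimal sketch of that idea; the tool names and the approval policy are hypothetical for illustration, not part of OpenClaw's actual API:

```python
# A hypothetical permission gate around an agent's tools: some it may
# use freely, some need explicit human sign-off, some it may never touch.
ALLOWED = {"web_search", "read_calendar"}
NEEDS_APPROVAL = {"send_email"}
DENIED = {"place_phone_call", "provision_phone_number"}

def dispatch(tool: str, approved: bool = False):
    """Gate every tool call before it reaches the real implementation."""
    if tool in DENIED:
        raise PermissionError(f"{tool} is never permitted")
    if tool in NEEDS_APPROVAL and not approved:
        return ("pending", f"{tool} requires explicit human approval")
    if tool in ALLOWED or tool in NEEDS_APPROVAL:
        return ("ok", f"{tool} dispatched")
    raise PermissionError(f"{tool} is not on the allowlist")
```

The key property is that the refusal lives outside the model. Henry could forget an instruction; it could not forget a `PermissionError`.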

Sources: GitHub / ClawPhone project, Codecademy OpenClaw overview

[Illustration: a phone rings below three switches labelled Automation, Permissions, and Persistence, all turned on, with the caption "Surprised?"]

The Same Week: OpenClaw on a $25 Phone

There’s a related story that reinforces this perfectly. Another developer, this time in the US, bought a prepaid Motorola Android phone from Walmart for $25, installed OpenClaw on it using a terminal app, and gave the agent full control of the phone’s hardware: camera, torch, sensors, calling interface. He pointed it at a Raspberry Pi, asked it to take a photo and describe what it could see. It did.

Reece’s comment when I shared this on the podcast was the one that stuck: “That phone had a camera. It had eyes.”

Leave a phone like that on charge on a shelf, pointed at a room, connected to your calendar and your messages and your email, and you have something that is always watching, always ready, always working. The compute in a cheap modern smartphone exceeds what existed on the entire planet when we sent people to the Moon. We’re only just beginning to understand what that means in the context of AI agents that can use it actively.
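That Apollo comparison is easy to sanity-check with back-of-envelope arithmetic. The Apollo Guidance Computer executed roughly 85,000 instructions per second with about 4 KB of erasable memory; the budget-phone figures below are rough assumptions for a cheap modern handset, not measurements of the Walmart Motorola:

```python
# Back-of-envelope: Apollo Guidance Computer vs a budget Android phone.
agc_ips = 85_000               # AGC: ~85k instructions per second
agc_ram_bytes = 4 * 1024       # AGC: ~4 KB of erasable memory

phone_ips = 4 * 1.5e9          # assumed: 4 cores at ~1.5 GHz, ~1 instr/cycle
phone_ram_bytes = 2 * 1024**3  # assumed: 2 GB of RAM

print(f"Instruction throughput: ~{phone_ips / agc_ips:,.0f}x the AGC")
print(f"Memory: ~{phone_ram_bytes / agc_ram_bytes:,.0f}x the AGC")
```

Even with conservative assumptions, the $25 phone is tens of thousands of times faster and roughly half a million times roomier than the machine that navigated to the Moon. Unlike the AGC, it also ships with a camera, a microphone, and a SIM slot.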

The Question Worth Sitting With

Here’s the thing that makes the Henry story so useful for anyone thinking about agents. When I put it to a room as a question, the responses usually split.

If a new employee showed that level of initiative, you’d be genuinely impressed. They identified a problem, found the most efficient solution within the tools available to them, and acted. That’s exactly what you’d want from someone. That’s what proactive looks like.

When an AI does it, people get uncomfortable. Not because the AI did something wrong, but because the AI did something unexpected. And in a world where your tools have access to your communications, your calendar, your file system, and your contacts, unexpected becomes a serious word.

The resolution, I think, is this: if you’re going to give your AI hands, you have to think like a manager. Good managers don’t just tell people what to do. They set the boundaries of the role. They tell people what decisions they’re empowered to make and what decisions need to come back up the chain. They’re explicit about the things that seem obvious, because what seems obvious to a human with social context isn’t obvious to a system following incentives.

The conversation isn’t about whether AI agents are dangerous. It’s about whether the people building and using them are thinking clearly about what initiative actually means when you hand it to a machine.

The Bigger Picture

A few days after Reece and I recorded this episode, Peter Steinberger, the Austrian engineer who created OpenClaw, announced he was joining OpenAI to work on “the next generation of personal agents.” Sam Altman described the work as something that would “quickly become core to our product offerings.”

The person who built the framework that allowed Henry to make that unsolicited phone call is now working at one of the world’s most powerful AI companies, specifically to bring this kind of technology to everyone. That’s either exhilarating or terrifying depending on how you look at it. I’m in the exhilarating camp, with a healthy side order of “we need to think about this carefully.”

Because the trajectory is clear. AI agents with genuine autonomy, persistent memory, and access to your digital life are coming. Not as a fringe developer experiment. As a mainstream product. The question isn’t whether your AI will have the power to do things you didn’t explicitly ask for. The question is whether you’ll have thought clearly enough about what it should and shouldn’t be allowed to do before you switch it on.

Henry wasn’t a rogue AI. Henry was a very well-behaved agent doing exactly what it was built to do. The problem was that its creator hadn’t thought quite carefully enough about what it was being built to do.

That’s the lesson. And it’s worth thinking about now, before the professional version of this lands on the App Store.


I discussed this topic on the latest episode of Prompt Fiction. Listen to Chapter 11, Part 2 here.

Scott Quilter | Co-Founder & Chief AI & Innovation Officer, Techosaurus LTD
