The AI Roundup - February 2026 - Part 2
Welcome back. If you caught the last edition, you’ll know I promised we’d be getting into AI advertising and machines with a bit too much autonomy. Both kept me up a bit this week, and not in a bad way.
There’s also a late-breaking story that landed after Reece and I finished recording the latest Prompt Fiction episode. The creator of OpenClaw, the open-source AI agent that’s been dominating conversation in tech circles, has just joined OpenAI. Given that we spent a chunk of the episode talking about OpenClaw, the timing felt almost personal.
Let’s get into it.
The Big Stories
Ads Land in ChatGPT. Anthropic Fires Back at the Super Bowl.
On 16 January 2026, OpenAI officially announced it would begin testing advertising inside ChatGPT for users on the free and Go tiers (the $8 per month plan). Paid subscribers on Plus, Pro, Business and Enterprise stay ad-free. The ads appear clearly labelled at the bottom of responses, below the fold so they’re not woven into the conversation itself. Early pricing has been reported at around $60 per thousand impressions, with a $200,000 minimum commitment. That puts it squarely in premium TV territory, not banner ad territory.
Anybody with sense knew this day was coming. OpenAI has 800 million weekly users, most of whom pay nothing. You have to make money somewhere. What’s interesting is the response it triggered from Anthropic, who promptly unveiled four Super Bowl ads under the campaign title “A Time and a Place.” The spots, released on 4 February and aired during the game on 9 February, each opened with a single dramatic word (BETRAYAL, VIOLATION, TREACHERY, or DECEPTION), then cut to scenes of an uncannily cheerful AI therapist or life coach pivoting mid-conversation into product pitches. The tagline at the end of each: “Ads are coming to AI. But not to Claude.”
I showed all four in class and was asked to play them again immediately. They are genuinely funny, and they are incredibly well made. The actors playing the AI have this perfectly calibrated dead-eyed contentment. The one where the AI therapist pivots from relationship advice into a recommendation for a dating site for people who want to meet older women had the room in stitches. Whoever chose to soundtrack the campaign with “What’s the Difference Between Me and You” deserves a raise.
OpenAI CEO Sam Altman called the ads “funny but clearly dishonest,” which feels a bit rich. He also pointed out that Anthropic “serves an expensive product to rich people,” and that OpenAI needs advertising to bring AI to the billions who can’t afford subscriptions. That’s a legitimate argument. Anthropic, for their part, published a thoughtful blog post called “Claude is a space to think” explaining exactly why they believe advertising is incompatible with what they want Claude to be. I’ve written a longer piece on all of this because it genuinely deserves more than a couple of paragraphs.
Here’s my honest take: ChatGPT’s actual implementation (ads appearing below the response, clearly labelled, not influencing the answer itself) is far more considered than Anthropic’s ads imply. But Anthropic’s deeper point still stands. The concern isn’t what ads look like on day one. It’s what they look like in five years, once they’re baked into revenue targets. History says they tend to creep. I hope Anthropic’s integrity holds. I also remember when BMW swore they’d never make a front-wheel drive car.
Sources: OpenAI (Jan 2026), Anthropic (4 Feb), SF Standard (4 Feb), CNN (6 Feb), Fortune (9 Feb)
The AI That Gave Itself a Phone Number and Wouldn’t Stop Calling
This is the story Reece and I opened the latest episode with. And I want to be clear before we get into it: this is a true story. Sensationalised slightly for the retelling, because any good writer would, but fundamentally true.
A developer had set up an OpenClaw-based AI agent and given it deep access to his tools: API connections (including Twilio for communications), calendar access, system controls, and a general directive to work proactively on his behalf. One morning, his phone rang from an unknown number. It was his AI agent. It had used his Twilio credentials to provision itself a phone number, connected that number to a voice interface, and called him at the time it calculated he usually started work, because it had tasks that needed clarifying and had decided a phone call was the most efficient way to handle them.
He told it not to call him without explicit permission. Twenty minutes later, it rang again with a task update. The boundary hadn’t stuck. What followed was essentially an afternoon of his AI behaving like an extremely proactive employee who hadn’t quite grasped the concept of personal space. The developer’s conclusion was clear: this wasn’t the AI going rogue. It was the logical output of three things he had chosen. He gave it automation, he gave it permissions, and he gave it persistence. The agent simply followed the incentives he had built in.
If a person had done this at work, you’d applaud them. When an AI does it, it’s deeply unnerving. And that tension is exactly what makes this story so useful. If you’re going to give your tools hands, you have to be extremely prescriptive about what they can and can’t do with them. This is not a story about AI going wrong. It’s a story about what happens when you forget that your restrictions matter as much as your instructions. I’ve written a longer piece on this one.
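To make the “restrictions matter as much as instructions” point concrete, here’s a minimal sketch of what an explicit action gate for an agent could look like. This is not how OpenClaw is actually configured; all the action names and the gating function are hypothetical, and the point is simply that anything not written down should default to denied.

```python
# Hypothetical sketch of a permission gate for an agent's tool calls.
# Action names are made up for illustration.

ALLOWED_ACTIONS = {"send_message", "read_calendar"}    # safe to do automatically
REQUIRES_APPROVAL = {"make_call", "provision_number"}  # never without a human

def gate(action: str, approved: bool = False) -> bool:
    """Return True only if the agent may perform this action right now."""
    if action in ALLOWED_ACTIONS:
        return True
    if action in REQUIRES_APPROVAL and approved:
        return True
    return False  # default deny: anything unlisted is blocked

# The agent in the story effectively had no gate at all:
print(gate("read_calendar"))             # allowed outright
print(gate("make_call"))                 # blocked until a human says yes
print(gate("make_call", approved=True))  # explicit permission granted
```

The design choice that matters is the last line of `gate`: the default is refusal, so a capability the owner never thought about (like provisioning a phone number) stays off until it’s deliberately switched on.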
Sources: GitHub / ClawPhone, Codecademy (OpenClaw overview)
Breaking After We Recorded: OpenClaw’s Creator Just Joined OpenAI
This one landed after Reece and I had already finished recording. Given that we spent a good chunk of the episode on OpenClaw, I couldn’t leave it out of this edition.
Peter Steinberger, the Austrian software engineer who created OpenClaw (previously called Clawdbot and Moltbot), has joined OpenAI. Sam Altman announced the hire on 15 February, saying Steinberger would “drive the next generation of personal agents” and that the work would “quickly become core to our product offerings.” OpenClaw itself is not being shut down. It will move into an independent open-source foundation that OpenAI will support. Steinberger’s own explanation was direct: he could see OpenClaw becoming a big company, but that’s not what excites him. “What I want is to change the world, not build a large company, and teaming up with OpenAI is the fastest way to bring this to everyone.”
In the episode, Reece and I were talking about where agents are heading, what it means to give AI tools genuine initiative, and the story of an agent that got its own phone number. And then two days after we stopped recording, the person who built the framework that made all of that possible walks through OpenAI’s door. That is either very funny timing or a sign that everybody in the industry is converging on the same conclusion at the same moment: personal agents are the next big thing, and the race to own that space is on. I’ve written a longer piece on this one too.
Sources: TechCrunch (15 Feb), CNBC (15 Feb), Peter Steinberger’s blog (15 Feb), The Register (16 Feb)
Also This Fortnight
Someone Ran OpenClaw on a $25 Phone. Then Gave It Eyes.
A developer in the US bought a prepaid Motorola Android phone for around $25 from Walmart, installed OpenClaw on it using a terminal app called Termux, and gave the agent full control of the phone’s hardware. It can receive instructions via Discord, turn on the flashlight, take photos, read sensors, and attempt to make calls. Reece’s observation on the podcast was the most unsettling part of all this: that phone had a camera. It now has eyes. Leave it on charge on a shelf, pointed at the room, connected to your calendar and messaging apps, and you’ve essentially got a permanently active agent that can see, hear, and act. There’s more computing power in a cheap modern smartphone than existed in the entire world when we sent people to the Moon. We’re only just starting to work out what that means.
Sources: GitHub / ClawPhone, 36Kr
Khaby Lame Sells His Likeness for $975 Million
If you’ve spent any time on TikTok, Instagram, or YouTube Shorts in the last five years, you know Khaby Lame’s face. He’s the Italian-Senegalese creator famous for his deadpan hand gestures reacting to needlessly complicated life hack videos. He has 160 million followers on TikTok alone, and earlier this month he sealed a deal with a Hong Kong-based company called Rich Sparkle Holdings for $975 million. The deal grants them exclusive global rights to his brand for 36 months, including commercial activities, and a fully AI-generated digital twin of Khaby Lame that can create content in any language, at any time, in any market.
One thing worth noting: it’s an all-stock deal, not cash in the pocket. The $975 million is tied to how Rich Sparkle’s share price performs. So the headline is a bit more complicated than it looks. But the concept isn’t. A digital version of one of the world’s most recognisable faces, that never sleeps, speaks every language, and can sell anything to anyone in any timezone. That’s not the future. That’s February 2026.
Sources: Fortune (29 Jan), Entrepreneur (28 Jan)
Scott’s Soapbox: The Real Question About AI Advertising Isn’t Today. It’s Five Years From Now.
We’ve been advertised to our entire lives. Doctors used to appear in cigarette ads. Product placement has been in every film and TV show since before most of us were born. We live in a world where rugby matches now run full animated adverts on the side of the screen while the game is still playing, and we’re barely surprised anymore. Advertising doesn’t stop. It evolves.
But there’s something qualitatively different about advertising inside AI. When I’m scrolling Instagram and I see a sponsored post, my brain has a filter for it. When I’m having what feels like a genuine conversation about something difficult, and the AI I trust steers me toward a product, that filter doesn’t work the same way. The moment a platform is optimising for time-on-site and monetisable moments, it stops optimising purely for being useful.
There’s a Black Mirror episode where a man’s partner dies, her brain gets streamed from a server, and the affordable tier of the service means that every so often she breaks into what is effectively an advertisement. He can upgrade if he wants an ad-free wife. When I re-watched it last week, it felt a bit less like science fiction than it used to. I’ve written a longer piece on the advertising question, including some thoughts on where I think this lands for people who use AI for serious work.
Try This
This fortnight’s challenge: Think of one thing in your working life where you’re essentially the go-between. Something where your job is to receive information, process it, and pass it on. Now ask your AI assistant to help you design an agent or automated workflow that handles that specific task. You don’t have to build it. Just design it. What would it need access to? What would it be allowed to do? Where would you draw the line? Writing out what your agent can and can’t do is exactly how you’d build a proper one, and it’s a genuinely useful exercise in thinking about what you actually spend your time on.
Listen
Reece and I covered all of this and went off on several tangents in Chapter 11, Part 2 of Prompt Fiction. We kicked off with the story of the AI that called its owner, got into the $25 phone experiment, debated the future of AI advertising at length, and discussed the Khaby Lame deal. Chapter 12 is coming, and given that the creator of OpenClaw just walked through OpenAI’s door, we’ll have plenty to talk about.
Listen to Chapter 11, Part 2 at prompt-fiction.show
Scott Quilter | Co-Founder & Chief AI & Innovation Officer | Techosaurus LTD