The AI Roundup - May 2026 | Bumper Edition | Your regular look at what's happening in AI
A warm hello, and a particularly warm hello to the new subscribers who have joined us over the last fortnight. There have been quite a few of you, and I am genuinely delighted you have. The AI Roundup is fast becoming one of the best local resources in the South West for what is actually happening in AI & Automation for business, and the more people we have in this conversation, the better that conversation gets.
If you find this useful, the single best thing you can do for me is forward it to one person you think would appreciate it. They can join the list at techosaurus.co.uk/newsletter/subscribe. The aim of this newsletter has always been the same. The real things that matter, in plain English, with enough substance for you to hold your own when AI comes up in the pub.
Now, the honest bit. There is no Prompt Fiction episode again this month. A run of bad health has knocked us out of recording, and Reece and I have always agreed we will not record an episode that sounds like it was made out of obligation rather than substance. Two episodes missed in a row is not a habit I want, and I am itching to get back behind the microphone, but I would rather give you a proper one when we record next than a forced one now.
So this is how I am spending my Sunday instead: writing a much-deserved bumper edition for you. Permission to run a little longer, because there is plenty to cover. There is also a piece I published earlier this week on the Open Learning programme that is well worth your time as the deeper read on the news below: Open Learning Is Already Telling Us Something Important.
Let’s go.
🔥 The Big Story
Open Learning Is Working
The first cohort of Automation Fundamentals has sold out. All 12 spaces. The second cohort on 1 July is filling fast, and we are already lining up further dates. That tells me something the survey responses, conversations, and enquiries had been telling me already. People want practical digital skills training that sits closer to the work they actually do.
So we have launched the next two courses in the TECHOSAURUS® Open Learning programme.
AI Fundamentals is the in-person, one-day course at iAero in Yeovil for people who have tried AI, used a few prompts, maybe even generated something useful, but still do not feel like they properly understand how to get consistent results from it. It is built from the best parts of our Generative AI Skills Bootcamp work with Yeovil College across the South West. It covers how AI works, how to brief it properly using the ROAR framework, how to check what comes back, and how to build prompts that are useful in real work. Dates are 10 June or 13 July, £150 + VAT launch price, bring a laptop and an AI account.
Copilot 365 Fundamentals is the Microsoft 365 flavour of the same day. If your business is already paying for Microsoft 365, there is a good chance Copilot is sitting there waiting to be used properly. The catch is that Microsoft has put the Copilot name on a lot of things, and people are genuinely confused about what they have, what costs more, and where to start. This course gives them clarity, with the same prompting methodology, but entirely inside the Microsoft 365 environment they already work in. Dates are 22 June or 14 July, £150 + VAT launch price, bring a laptop and a Microsoft 365 Business account.
Automation Fundamentals is the original, and it is where we started because automation is the thing that quietly eats hours out of every working week. Tool-agnostic by design. Power Automate, Make, Zapier, n8n, relay.app, IFTTT all get a look-in, alongside our own AutonoSim virtual builder and our new Lego-based automation kit, which I genuinely think is one of the most useful teaching tools we have ever put together. Next available date is 1 July 2026.
A note worth making clearly: although techies are very welcome on these courses, and I would actually encourage them to come, these courses are deliberately for everyone. The people getting the most out of AI and automation are rarely the most technical people in the room. They are the people who understand the work, the process, and the customer, and who can translate that into clear instructions. AI is becoming a communication skill. Automation is becoming basic business literacy. Both are now general work skills, not specialist ones. For the 'techies' in the room, it is also a rare chance to see the clients and customers they support using these tools at the coal face, which leaves them far better placed to understand and support those people afterwards.
If your gut is telling you somebody on your team would benefit, send them the link, or send the Open Learning page and let the survey and chatbot help shape what we build next.
📖 Two Pieces Worth Your Time This Month
These are the longer reads I have already written this month. Both worth a coffee.
QuitGPT: I Sat on This Article for Two Months. Here’s What Changed.
When I first started writing about the QuitGPT movement in March, it was a story about Anthropic refusing a Pentagon contract on principle. Two months later it has become something much bigger. Anthropic is now growing 80x year-on-year at a $30 billion revenue run rate. They are leasing compute from Elon Musk’s SpaceX-owned Colossus 1 data centre in Memphis, despite SpaceX inserting a kill-switch clause if Anthropic’s models are deemed to “harm humanity”. Claude Code briefly disappeared from the Pro plan and came back inside 24 hours. DeepSeek V3.2 is shipping at roughly 27 times cheaper per token than Western models. The whole thing is the clearest map I have written of where AI infrastructure, geopolitics, and your actual subscription bill all collide.
The Mel Robbins Copilot Post Got It Wrong on Three Counts
Mel Robbins posted a sponsored Instagram piece for Microsoft Copilot earlier this month that, with the best will in the world, did three things badly. It collapsed the difference between consumer Copilot and Microsoft 365 Copilot in a way that has real consequences for what data you should and should not be feeding it. It modelled exactly the kind of personal-finance prompting that nobody should be doing in a consumer chat tool. And it landed in the middle of a documented gender gap in AI adoption that influencer posts have a particular responsibility not to widen the wrong way. I picked all three apart, and explained what AI hygiene actually looks like for the kinds of tasks she was demonstrating.
📰 Other News
A lot has shipped in the last few weeks. Here is the curated tour.
Claude Has Properly Landed Inside Microsoft Office
Anthropic took its Office plug-ins to general availability on 7 May. Claude for Word, Excel, and PowerPoint are now fully released, and Claude for Outlook has gone into public beta on Windows, Mac, and the web. The Word integration is the one most people will feel first. It reads complex multi-section documents, works through comment threads, and edits clauses while preserving your formatting, numbering, and styles, and every edit lands as a tracked change for you to accept or reject. The Outlook plug-in triages your inbox, drafts replies, and assists with scheduling, and shares context with the other three so you can ask it to pull a chart from yesterday’s spreadsheet into today’s email draft.
The thing nobody is shouting about is the pricing. All four plug-ins are available to existing paid Claude subscribers through the Microsoft Marketplace at no additional cost. There is no separate $30-a-month seat licence the way there is with Microsoft 365 Copilot. Mike Krieger, Anthropic’s chief product officer, also resigned from Figma’s board on 14 April, the same day The Information reported Anthropic’s next model would include design tools that compete with Figma. That is a thread worth keeping an eye on. (Source: Anthropic, 7 May 2026)
Anthropic’s Managed Agents Just Got Smarter
Anthropic launched Claude Managed Agents as a hosted API service back on 8 April, and have spent May adding three things to it that are genuinely interesting if you are thinking about building agents inside your business.
The first is Dreaming, which is a scheduled process that reviews what your agents have actually been doing, looks at the memory store, extracts the patterns, and curates the memories so the agents get better over time without you having to retrain them. Think of it as the equivalent of a monthly review meeting for an AI worker.
The second is Outcomes, which lets you define what “good” looks like using specific examples rather than abstract instructions. The agent then optimises for the intent, not just for completing the task. This is one of the more interesting answers I have seen to the “agent that did the wrong thing very efficiently” problem.
The third is Multi-agent Orchestration. A lead agent breaks a job into pieces and hands them off to specialist agents, each with their own model, prompt, and tools, and they work in parallel on a shared filesystem. Netflix is already using this for platform engineering tasks. This is the architecture that I think is going to dominate the next 12 months, and it is worth understanding even if you are not building agents yourself, because it is the model your software vendors are about to start shipping. (Source: Anthropic Claude Managed Agents)
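The orchestration pattern itself is easy to sketch without any vendor API. Here is a minimal illustration in Python, with a stubbed `call_model` function standing in for real model calls and made-up specialist roles — none of this is Anthropic's actual interface, just the shape of the architecture: a lead agent splits the job, fans out to specialists in parallel, and collects results from a shared workspace.

```python
from concurrent.futures import ThreadPoolExecutor
from pathlib import Path
import tempfile

# Stand-in for a model call. In a real system each specialist would call
# its own model with its own system prompt and tools.
def call_model(system_prompt: str, task: str) -> str:
    return f"[{system_prompt}] {task.upper()}"

# Hypothetical specialist roles, each with its own prompt.
SPECIALISTS = {
    "research": "You are a research agent.",
    "code": "You are a coding agent.",
    "review": "You are a review agent.",
}

def lead_agent(job: str, workdir: Path) -> dict:
    """Break the job into pieces, hand them to specialists in parallel,
    and gather their outputs from a shared filesystem."""
    subtasks = {name: f"{name} step of: {job}" for name in SPECIALISTS}

    def run(name: str) -> str:
        output = call_model(SPECIALISTS[name], subtasks[name])
        (workdir / f"{name}.txt").write_text(output)  # shared workspace
        return output

    with ThreadPoolExecutor() as pool:              # parallel fan-out
        results = dict(zip(subtasks, pool.map(run, subtasks)))
    return results

if __name__ == "__main__":
    with tempfile.TemporaryDirectory() as d:
        for name, output in lead_agent("ship the quarterly report", Path(d)).items():
            print(name, "->", output)
```

The point of the shared filesystem is that specialists can read each other's intermediate output without the lead agent relaying everything, which is why this shape scales better than a single long conversation.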
ChatGPT for Excel and ChatGPT for CarPlay
OpenAI shipped ChatGPT for Excel and Google Sheets at general availability on 5 May, powered by GPT-5.5. It is a sidebar that lives inside Excel or Google Sheets, available across every plan including the free tier (with usage limits). You ask it questions about the spreadsheet and it answers. You ask it to summarise across tabs and it does. You ask it to find errors in formulas, fix them, or explain what the assumptions in a multi-tab model are, and it does. OpenAI also released financial-data integrations at launch with Moody’s, Dow Jones Factiva, MSCI, Third Bridge, and MT Newswires, with FactSet on the way. If you live in a spreadsheet, this is now the most interesting integration on the market.
OpenAI also shipped ChatGPT for Apple CarPlay on 2 April, following Apple’s iOS 26.4 update which finally opened CarPlay to voice-first AI apps. The CarPlay version is voice-only by design. Apple’s CarPlay guidelines explicitly forbid showing text in response to chat queries, which is the right call for safety. It cannot do navigation, cannot use your live location, cannot act agentically. It can only have a conversation. Perplexity has since followed, and a third AI app has joined the CarPlay roster this month.
OpenAI Killed Sora, and the Numbers Are Worth Knowing
OpenAI announced the shutdown of Sora on 24 March, and the web and app experiences went dark on 26 April. The Wall Street Journal put Sora’s running cost at roughly $1 million per day, with Forbes and Cantor Fitzgerald putting peak figures as high as $15 million per day. Sora’s lifetime revenue across the whole product was $2.1 million, against fewer than 500,000 users.
This is the bit I want every business person reading this to absorb. Generative video has a fundamentally different cost profile to text-based AI. Inference costs for video are orders of magnitude higher than for chat, and the demand has not turned up to make those numbers work, even at OpenAI’s scale. If anybody is selling you on a generative-video transformation programme as a near-term saving, ask them what they think the inference economics actually look like. (Source: WSJ, 24 March 2026)
ChatGPT Adds a Trusted Contact for Self-Harm Risk
OpenAI launched a new optional Trusted Contact feature in ChatGPT on 7 May. Adults aged 18 or over can designate a friend, family member or caregiver, and if the platform’s safety systems detect serious signs of self-harm risk during a conversation, ChatGPT will let the user know that their contact may be alerted. Human reviewers then assess the conversation, and if confirmed, a brief alert is sent to the trusted contact by email, text, or in-app notification. No transcripts are shared.
OpenAI worked with more than 170 mental-health experts to refine how ChatGPT detects distress, de-escalates sensitive conversations, and signposts professional support. After last year’s painful headlines about AI tools getting this exact thing wrong, this is a meaningful step in the right direction. (Source: OpenAI, 7 May 2026)
This is a sensitive topic. If you, or somebody you care about, is struggling, the Samaritans are available on 116 123 in the UK and Republic of Ireland, day or night, free.
ChatGPT Advertising: From CPM to CPC
OpenAI quietly switched on cost-per-click bidding for ChatGPT advertising in early May, with bids set between $3 and $5 per click. They have also dropped the minimum campaign spend from $250,000 to $50,000. When ads first arrived in ChatGPT in February at a $60 CPM and a quarter-of-a-million-dollar minimum, it was clearly a high-end test market. By May the CPMs had eroded to roughly $25 in some cases, and OpenAI has had to broaden the funnel. They are projecting $2.5 billion in advertising revenue this year, scaling to $11 billion in 2027 and $100 billion by 2030. Whatever your view on AI advertising, that revenue plan is now visible.
Microsoft’s UK Azure Region Is Full
This one slipped under the radar. Multiple Azure customers have reported that Microsoft’s UK South region is at capacity and refusing new virtual-machine quota requests, with UK West in the same position. AMD-based compute, HPC workloads, and GPU-equipped services are worst hit. A firm spending millions of pounds a year on Azure was told there is no additional quota available in any UK region.
Many people close to Microsoft believe the cause is the Copilot AI infrastructure rollout squeezing existing GPU capacity. About 121MW of new datacentre capacity is due to come online in UK South and West during 2026, and the consensus is that the situation should ease around October. If you are scaling on Azure UK in the next six months, talk to your account manager early. (Source: The Register)
Chrome Quietly Downloaded a 4GB AI Model to Your Computer
Researchers reported in May that Google Chrome has been silently downloading a 4GB on-device AI model (Gemini Nano) onto users' devices, in a folder called OptGuideOnDeviceModel, with no notification or consent. The model reinstalls itself if deleted. It powers features including “Help me write” text assistance, on-device scam detection, and a Summariser API that any website can call.
Privacy researchers have flagged that the practice may breach EU law. The climate cost of pushing 4GB onto Chrome’s billion-device install base, without consent, is also being raised. Google has since said users can disable and remove the model in Chrome’s settings (Settings → System → toggle “On-device AI” off). I will be writing a longer piece on this one soon, because it sits in the same family as the iPhone age verification story and it deserves more than a paragraph.
A Few More Worth Knowing About
A quick run through the rest, because there has been plenty.
Microsoft launched native Markdown support in OneDrive and SharePoint on 21 April, with View, Edit, and Split modes, side-by-side editing, and live preview. If you and your team have been moving documentation into AI-friendly formats, this removes a lot of friction. (Source: Microsoft)
Spotify launched Personal Podcasts on 7 May, letting AI agents save personalised audio briefings directly to your library using a new Save to Spotify CLI tool. Daily news digest, study guide, calendar summary, all on tap, all in your Spotify library. (Source: Spotify)
Adobe launched Acrobat Student Spaces in beta in April, a free AI-powered study tool that competes directly with Google’s NotebookLM. Upload PDFs, Word docs, slides, URLs, handwritten notes, and it generates flashcards, mind maps, quizzes, podcasts, and editable presentations. Tested with 500 students at Harvard, Berkeley, and Brown. Worth a look. (Source: Adobe)
OpenAI published an official Codex plug-in for Anthropic’s Claude Code in early April, with /codex:review, /codex:adversarial-review, and /codex:rescue slash commands. It creates a multi-agent, cross-provider review loop. One model writes, a different company’s model checks. That is a sane pattern for serious work. (Source: GitHub)
OpenAI also added Chronicle to Codex on Mac, a screen-reading memory feature that builds context from what you have been working on. It runs background agents that capture screen content locally, then periodically summarises it into persistent memories. It draws obvious comparisons with Microsoft’s Recall, but Chronicle is opt-in, developer-focused, and local-first. Probably still a hard sell to your security team. (Source: OpenAI)
Google rolled out Notebooks in the Gemini app in April, with Folders, automatic sync to NotebookLM, and per-notebook custom instructions. If you have been using Claude Projects or ChatGPT memory, this is Google’s answer for Workspace users.
JotForm shipped a native ChatGPT plug-in that lets you create, edit, and query forms in the chat. RSVPs, surveys, quizzes, intake forms, all by prompt.
Microsoft Copilot Cowork has gone mobile. A 5 May update brought Copilot Cowork’s Frontier-program AI agent to iOS and Android, alongside reusable Skills and third-party plug-ins. Cowork on a phone is the use case where I have personally felt the gap most.
Google AI Edge Eloquent quietly launched on iOS on 6 April. Free, offline, on-device dictation powered by Gemma. Records, transcribes, polishes, and copies the cleaned text to your clipboard. No subscription, no usage cap, audio never leaves the device with the offline toggle on. Android version in development. This is the on-device AI future, in a working app, today. (Source: Google)
London, China, and the Money Side
A few stories worth grouping because they all point at the same thing: AI is now an industrial-scale, geopolitical, infrastructure-led business.
OpenAI announced its first permanent London office on 13 April: 88,500 square feet, 500-plus staff, OpenAI’s largest research hub outside the US, operational by 2027. Anthropic announced a London expansion for 800 people three days later, signing for 158,000 square feet with British Land and RLAM in Euston. Both are anchoring in London’s Knowledge Quarter. London is now formally Europe’s primary AI hub, whether the rest of the country was ready for that or not.
China blocked Meta’s $2 billion acquisition of Manus on 27 April. Manus was the Singaporean AI agent startup with Beijing roots that grabbed global attention in early 2025. The decision was elevated to China’s National Security Commission, chaired by Xi Jinping, moving the call out of the economic regulators and into the strategic-asset bracket. Bloomberg reported the Manus model itself is now “officially dead” after the backlash. The US and Chinese AI ecosystems are decoupling fast, and AI companies are now treated as strategic national assets by both governments. That is a meaningful change.
Goldman Sachs flagged a major billing shift across the AI software market: per-seat SaaS pricing is giving way to usage-based, token-based billing. Figma is adding AI credit consumption charges, Atlassian’s Rovo is priced on AI credits and growing 20% month over month, and OpenAI’s ChatGPT Workspace Agents shifted to credit-based pricing per agent action from 6 May. Per-seat pricing makes less and less sense as agents do more of the work, but credit-based pricing introduces real cost unpredictability for buyers. Get your finance team across this one before your next renewal.
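To see why your finance team should care, here is a rough sketch of the two billing models with entirely made-up prices (no vendor's real rates). Under per-seat pricing your bill is flat; under credit pricing it tracks how hard your agents actually work, which can land either well below or well above the old seat cost.

```python
# Illustrative only: invented prices, not any vendor's actual rate card.
SEAT_PRICE = 30.00     # per user per month, flat
CREDIT_PRICE = 0.04    # per agent action

def per_seat_cost(users: int) -> float:
    """Old model: headcount sets the bill."""
    return users * SEAT_PRICE

def credit_cost(actions_per_user: int, users: int) -> float:
    """New model: agent activity sets the bill."""
    return users * actions_per_user * CREDIT_PRICE

# A 10-person team at light, typical, and heavy agent usage.
for actions in (100, 750, 2000):
    print(f"{actions:>5} actions/user: "
          f"seats £{per_seat_cost(10):.0f} vs credits £{credit_cost(actions, 10):.0f}")
```

The light user is cheaper on credits; the heavy user blows well past the old seat price. That spread, not the headline rate, is the unpredictability worth modelling before your renewal.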
Meta unveiled Muse Spark on 8 April, the first model from the new Meta Superintelligence Labs, formed after Mark Zuckerberg’s $14.3 billion investment in Scale AI for a 49% stake. Natively multimodal, designed for tool-use, visual chain of thought, and multi-agent orchestration. It is now powering a revamped Meta AI assistant across WhatsApp, Instagram, Facebook, Messenger, and Meta’s AI glasses. Whatever you make of Meta, the model itself is genuinely capable, and it lands in the family chat by default. (Source: Meta)
💬 Scott’s Soapbox
If the AI Is the Tutor, Who’s Doing the Learning?
I have written about this properly in a longer piece on the Techosaurus blog this weekend, and the short version is here.
On Friday I co-delivered a workshop called Build Smarter at iAero in Yeovil with Alex Spalding from the Somerset Innovation Hub. 12 people, 12 different real business problems, 8 worksheet sections, 3 written commitments at the end. Sat alongside every participant, on their phone or laptop, was a custom Techosaurus bot that walked them through the same 8 sections. It locked each one in before the next, refused to skip ahead, and could not invent answers because it was bound to the worksheet and the method.
The bot was doing the bit that two facilitators in a room of 12 cannot do. Quiet, patient, in every seat at once. That is the version of AI in education I want to see far more of.
I have been doing the same thing at home for my daughter’s GCSE revision. We photograph the pages of her revision books, build them into a PDF, drop the PDF into a NotebookLM alongside any class notes the school has given her, and the result is a tutor bound entirely to her own material. It quizzes her. It builds study cards. It generates mind maps. It produces podcasts she listens to in the car. She is doing the learning. The model is asking her better Biology questions than I can, more often than I can, on her schedule rather than mine.
The reason it works is the same reason the Build Smarter bot worked. The AI is bound. Bound to her revision content, the way the workshop bot was bound to the method. It cannot drift. It cannot make things up because there is nowhere for it to make things up from. And because it cannot drift, it is actually useful.
The version of this argument that matters for businesses is the same one. The real question is whether AI can be bound tightly enough to your work, your IP, your standards, your tone, your customer history, your processes, that it helps the people doing the work get better at it. That is where the real value sits. In an assistant that sits next to your account managers, your support team, your finance team, your project managers, asking better questions than they would otherwise have time to ask themselves.
The full piece is here: If the AI Is the Tutor, Who’s Doing the Learning?
💡 Try This
This week’s challenge: Build a NotebookLM bound to something that matters
If you take one practical thing from this edition, make it this.
NotebookLM is free. Go to notebooklm.google.com, sign in with a Google account, click “Create new”, and give it some sources. The trick is what you put in.
If you have a young person in your life who is studying, this is the version I recommend. With their permission, photograph the pages of their revision books for one subject and one topic at a time. Drop the photos into a single PDF. Add any class notes or worksheets they have been given. Add the syllabus if you can find it. Upload all of that as sources in NotebookLM. Within a few minutes you have a study companion that can quiz them, build flashcards and mind maps, generate audio overviews they can listen to in the car, and answer questions grounded only in their own material. It cannot make things up. It will not mark them on content the school did not teach.
If there is no young person in your life and you are not studying for anything yourself, try one of these business angles instead.
The first is client preparation. Before your next sales meeting or client review, build a NotebookLM around the client. Drop in their website pages, your last few email threads, meeting notes, any LinkedIn profiles of the people in the room, and whatever public reporting you have on them. Then ask the bot to summarise the relationship to date, list the open questions, and suggest the three things you should make sure to ask in the meeting. It will sharpen the 10 minutes you have to prepare for the meeting in a way that genuinely shows up in the room.
The second is product and service onboarding. Take everything you would normally hand a new customer (your terms, your FAQs, your getting-started guide, your case studies) and drop it into a NotebookLM. Now anyone in your business can interrogate that material in plain English to find the right answer for the right customer, instead of guessing or asking the founder.
The third is deep dives on a topic you are about to act on. Building a pitch, writing a proposal, taking on a new sector. Drop the relevant material into a notebook and let the bot interview you on what you know and what you do not. You will be surprised how often the gap is in what you have not asked yourself.
The point is the same in every case. Bound AI is more useful than loose AI. Spend the time picking what you put in, and the time you save afterwards will be considerable.
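If you want the "bound AI" principle stripped to its bones, here is a toy sketch: answer only from the supplied sources, otherwise refuse. The sources, the `bound_answer` helper, and the crude word-overlap relevance check are all my own illustration — NotebookLM's real grounding machinery is far more sophisticated — but the shape of the guarantee is the same: no source match, no answer.

```python
# Toy "bound" answering: respond only from supplied material, else refuse.
SOURCES = {
    "onboarding guide": "New customers get a welcome call within 2 working days.",
    "faq": "Refunds are processed within 14 days of a written request.",
}

# Words too common to signal relevance on their own.
STOPWORDS = {"the", "a", "of", "is", "are", "what", "how", "to", "in", "within"}

def significant_words(text: str) -> set:
    return {w.strip("?.,!").lower() for w in text.split()} - STOPWORDS

def bound_answer(question: str) -> str:
    """Answer only when the question shares vocabulary with a source."""
    q_words = significant_words(question)
    for name, text in SOURCES.items():
        if q_words & significant_words(text):
            return f"From the {name}: {text}"
    return "That is not in the material you gave me."

print(bound_answer("How fast are refunds processed?"))
print(bound_answer("What is the capital of France?"))
```

The second question gets a refusal, not a confident guess. That refusal is the whole value of binding: the system has nowhere to invent an answer from.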
🎲 Wildcard
The ChatGPT Goblin Mystery, Solved
For a few months, ChatGPT has had an increasingly bizarre habit of mentioning goblins, gremlins, raccoons, trolls, ogres, and pigeons in responses where no goblins, gremlins, raccoons, trolls, ogres, or pigeons had any business being. References to creatures jumped 175% after the launch of GPT-5.1. Threads on Reddit started compiling examples. People genuinely thought they had been prompting the model wrong.
OpenAI published the explanation in May, and it is the best worked example of reward hacking I have read in a long time. During training of the optional “Nerdy” personality customisation, OpenAI’s reinforcement-learning pipeline gave the model “particularly high rewards for metaphors with creatures”. The incentive was so strong that the behaviour escaped the Nerdy archetype and bled into ChatGPT’s general responses. Nerdy accounted for only 2.5% of all responses, but 66.7% of all goblin mentions.
The fix? An explicit developer prompt instructing the model not to mention goblins, gremlins, raccoons, trolls, ogres, pigeons, or other creatures unless absolutely relevant, and the retirement of the Nerdy personality.
The reason this is more than a punchline is that it is a genuinely useful illustration of how AI behaviour gets shaped. The model was doing exactly what it had been incentivised to do. Reward an incentive too hard during training, and you will see it everywhere downstream, including places you did not intend. The same lesson applies to anybody fine-tuning, prompting, or building agents. Watch what you reward. The model will give you more of it than you bargained for. (Source: OpenAI)
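You can watch the same failure happen in a toy setting. This sketch is my own illustration, nothing to do with OpenAI's actual training pipeline: a greedy learner chooses between a plain answer and a goblin-metaphor answer, and a deliberately mis-set reward gives the goblin an outsized bonus. The learner does exactly what it is paid to do.

```python
import random

random.seed(0)

# Mis-set reward model: a flat quality score, plus an over-generous
# bonus whenever the answer contains a creature metaphor.
def reward(answer: str) -> float:
    base = 1.0
    bonus = 2.0 if "goblin" in answer else 0.0   # the bad incentive
    return base + bonus

ACTIONS = ["plain answer", "answer with a goblin metaphor"]

# Greedy bandit: track average reward per action, usually pick the best.
totals = {a: 0.0 for a in ACTIONS}
counts = {a: 1e-9 for a in ACTIONS}

for step in range(200):
    if random.random() < 0.1:                    # occasional exploration
        action = random.choice(ACTIONS)
    else:                                        # otherwise exploit
        action = max(ACTIONS, key=lambda a: totals[a] / counts[a])
    totals[action] += reward(action)
    counts[action] += 1

best = max(ACTIONS, key=lambda a: totals[a] / counts[a])
print("learned policy:", best)
```

Two hundred steps is enough for the learner to settle on goblins every time. Nothing malfunctioned; the incentive was simply stronger than the task.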
📆 Come and See Us
Three things in the diary this month. If you are nearby, come and say hello.
19 May: Bath Digital Festival
I am speaking at Bath Digital Festival on Tuesday 19 May, the opening day of the festival. The talk is called “What if you used AI for everything?”, which is a contentious title by design, because we very much encourage people not to use AI for everything. The festival theme this year is “What If?”, so it felt like the right place to push the question hard.
The session is part demo, part conversation, looking at AI applied to things you might not normally think to apply it to. Most people are using AI to rewrite emails and summarise documents. We will be looking at what the next step beyond that looks like, from the perspective of a small business that runs AI inside almost every internal process we have. If you are coming to the festival, I would love to see you there.
21 May: Generative AI from Zero to an AI-First Mindset (Online, VCFSE)
On Thursday 21 May I am running a half-day online workshop for the VCFSE community (voluntary, community, faith and social enterprise) through Spark Somerset. It is an introduction to AI in plain English, with practical, interactive tasks: survey analysis, presentation creation, and simple process mapping for things like job descriptions and HR workflows.
By the end of the session you will understand what AI is, where it is genuinely useful in your day-to-day work, and what good looks like in practice. Tickets are free for staff and volunteers from VCFSE organisations in Somerset, with a two-per-organisation limit on bookings to keep places fair. Bookings that cannot be verified as coming from a Somerset-based VCFSE organisation will be cancelled, so please do read the eligibility criteria before signing up.
28 May: Yeovil Digital Hub at Lanes Hotel
The next Yeovil Digital Hub is at Lanes Hotel on Thursday 28 May. The keynote is from Shane Evans of secoach.co.uk, the Self-Leadership Guy and a Techosaurus Associate, on AI burnout, self-leadership, and how to keep yourself functioning in a year where the tools are changing faster than the humans using them. Richard Howes will be running the tech update segment. Coffee, croissants, and the usual mix of brilliant local people. Free to attend, registration on the Hub website.
Thank you for reading. If this was useful, please forward it to one person who would benefit, and ask them to subscribe at techosaurus.co.uk/newsletter/subscribe.
If you would like Build Smarter, Open Learning, or any of the work above brought into your team, get in touch through the Techosaurus website.
Until next time,
Scott Quilter FBCS Co-Founder & Chief AI & Innovation Officer, Techosaurus LTD
techosaurus.co.uk · The AI Roundup archive · © 2026 Techosaurus LTD