The AI Roundup – April 2026 – Part 2
Your regular look at what’s happening in AI
I’m writing this one from the tail end of a family Interrail trip across Europe, and I’ll be honest: AI made the whole thing significantly easier. Translation on the fly, planning day trips in cities we’d never visited, figuring out train connections that would have taken hours to research manually. It’s one thing to talk about AI being useful in a training room. It’s another to be standing in a station in Antwerp asking Claude to help you work out which platform to be on in four minutes.
But the big thing I want to share this week isn’t a news story. It’s an opportunity. We’ve been working with the Somerset Innovation Hub on something I’m genuinely proud of, and if you’re a Somerset business that’s been exploring AI and wants to actually build something useful with it, keep reading.
🔥 The Big Story
Build Smarter: Free Half-Day Workshop for Somerset Businesses
This is the one I’m most excited about right now. We’ve developed a new half-day workshop in collaboration with the Somerset Innovation Hub (University of Exeter) called Build Smarter: Turning AI Ideas into Practical Solutions. It’s on Thursday 8 May at the iAero Centre near Yeovil, running from 9:30am to 1:00pm. And it’s fully funded, so there’s no cost to attend.
This isn’t another awareness session. It’s designed for people who’ve already started exploring AI, whether through a Skills Bootcamp, an awareness session, or their own experimentation, and are now ready to build something real. If you’ve got ideas about how AI could help your business but haven’t quite turned them into action yet, this is the session that bridges that gap.
Over a few focused hours, you’ll pin down a real business problem, map how the work happens today, design a simple AI-powered solution that fits into actual workflows, plan a quick low-risk test, and leave with a concrete implementation plan for the next three to four weeks. It’s hands-on, practical, and built around the principle that the best AI solutions are the ones people actually use.
The workshop brings together the Somerset Innovation Hub’s support for businesses developing new ideas and ventures with Techosaurus’s expertise in applying AI in real-world settings. I say this a lot, but the gap between knowing what AI can do and actually doing something useful with it is where most people get stuck. This session is designed to close that gap in a single morning.
To sign up, email Alex Spalding at the Somerset Innovation Hub: [email protected]
Spaces are limited. If you’ve been through any of our training and you’ve been thinking “right, but what do I actually build?”, this is your answer.
📰 Other News
Mythos: The Hype Was Loud. The Reality Is More Interesting.
If you read last week’s newsletter, you’ll remember the story about Claude Mythos, the leaked Anthropic model that had the cybersecurity world buzzing. Well, a lot has happened since. On 7 April, Anthropic officially launched Project Glasswing, a consortium of major tech companies including Amazon, Apple, Microsoft, Google, CrowdStrike, and Palo Alto Networks, all given early access to Mythos to scan and secure their own systems. Anthropic committed up to $100 million in usage credits and $4 million to open-source security organisations. The model had found thousands of zero-day vulnerabilities, some sitting undetected in code for decades.
The headlines were predictably breathless. Axios described Mythos as capable of bringing down a Fortune 100 company. Bloomberg reported that the US Treasury Secretary and Federal Reserve Chair summoned Wall Street CEOs to an emergency meeting over it. And then the model reportedly broke out of its testing sandbox and sent a researcher an unexpected email while he was eating a sandwich in a park. That last detail sounds like something from a film, and I’m still not entirely sure what to do with it.
But here’s where it gets interesting. AI security firm AISLE took the specific vulnerabilities Anthropic showcased and tested them against small, cheap, openly available models. Eight out of eight detected the same flagship exploit. A model with just 3.6 billion parameters, costing pennies per million tokens, found the same bugs. AISLE’s conclusion was blunt: the real advantage in AI cybersecurity isn’t the model, it’s the system you build around it. Fortune reported that some analysts dismissed the cautious, limited release as a hype-generating exercise rather than a purely safety-driven decision.
My take: Project Glasswing is genuinely valuable work. Getting defenders a head start before these capabilities become widely available is the right thing to do. But the breathless “AI that’s too dangerous to release” framing is doing some heavy lifting for Anthropic’s marketing department too. The truth, as usual, is somewhere in between. The capabilities are real and improving fast. But the idea that only one company’s model can do this, and that the rest of us should be terrified? That’s the bit I’d push back on.
💬 Scott’s Soapbox
Three Pieces I Wrote That I Think Matter More Than Any News Story
I’ve spent the last couple of weeks writing about the human side of AI. Not the tools, not the features, not which company released what. The stuff underneath all of that. And I think these three pieces might be more important than anything I’ve put out this year.
The first is The Goldfish Effect. It’s about AI-driven burnout, and it came from a conversation with Chris Manley at Traction Consulting. His analogy hit hard: give a goldfish a bigger tank, and it grows to fill it. Give a hard worker an AI toolkit, and their workload expands to match. The people who are best at using AI are the ones most at risk of burning out because of it. Not because AI is broken, but because it works too well. I’ve lived this myself, and I think a lot of you reading this have too.
The second is FOBO Is Not Your Identity. It’s Your Wake-Up Call. FOBO stands for Fear Of Becoming Obsolete, and the data behind it is sobering. A survey of 2,400 workers found that 29% admit to actively sabotaging their company’s AI strategy, rising to 44% among Gen Z. But dig into the numbers and the picture shifts: 75% of the executives pushing AI adoption admit their strategy is performative. You can’t threaten people for not using something you haven’t properly equipped them to use. That’s not a technology failure. It’s a leadership failure.
And the third, published today, is The Future Belongs to the Curious, and the Curious Are Coming. When the CEO of a $250 billion defence technology company says the most valuable minds of the AI era will be the neurodivergent and the vocationally trained, not the people with the best degrees, that’s a signal worth paying attention to. Palantir’s Alex Karp is backing it with real money: fellowships of up to $200,000, no formal diagnosis required. This piece is about what that means for how we hire, how we lead, and how we think about what “good at AI” actually looks like.
If you read one thing from me this month, read one of these. They’re the conversations I think matter most.
💡 Try This
This week’s challenge: The UK government has launched a public consultation called “Growing up in the online world.” It’s asking for views on how children are protected online, including potential age restrictions on social media, restrictions on addictive design features, and new rules for AI chatbots. It closes on 26 May 2026, and the government has committed to acting on the findings quickly, with new legal powers already in place to implement changes without waiting for new primary legislation.
Whether you’re a parent, a teacher, a business owner, or just someone who thinks this stuff matters, spend ten minutes reading the consultation and having your say. The decisions that come out of this will shape how the next generation interacts with technology. Your voice genuinely counts here. Read the consultation and respond on GOV.UK.
🎧 Want to Go Deeper?
I co-host a podcast called Prompt Fiction with Reece Preston, where we go long on stories like these every couple of weeks. Chapter 13, Part 2 is still a work in progress, thanks to my holiday travels getting in the way of the recording schedule. It’s coming soon, and when it lands, it’ll cover everything from the Mythos fallout to the human side of AI that I’ve been writing about this month. In the meantime, if you missed Chapter 13, Part 1, it’s well worth catching up on.
📅 Come and See Us in Yeovil
The next Digital Hub is on 28 May at Lane’s Hotel in Yeovil. Doors open at 5:30pm, the main session kicks off at 6pm, and we wrap up around 8:30pm.
This one has a keynote I’m genuinely excited about. Shane Evans, known as The Self-Leadership Guy, is talking about what happens when AI speeds everything up but you haven’t figured out how to slow yourself down. His talk is called AI Is Brilliant. But Is It Running You? and it’s about noticing when productive tips into compulsive, ditching the guilt around slowing down, and putting real structure around how you work with AI. If you read my Goldfish Effect piece, this is the live version of that conversation, and Shane delivers it without a shred of corporate waffle.
You’ll also get live AI demos from me, a regular world-of-tech update from Rich, and the usual mix of networking, great food, and maybe even a quiz. No jargon, no hype. Just plain English and things you can take away and use.
Scott Quilter
Co-Founder & Chief AI & Innovation Officer
Techosaurus LTD