“The AI Layoff Trap”: Mathematical Proof AI Will End Economies? | Ep. 50
Inside the factories, labs, and leaked codebases shaping AI's next chapter
Hello Singularity Surfers,
With every passing day, the news reads more like science fiction! Unfortunately, science-fiction films have grown increasingly dystopian since the 1970s, which may be why so many people view AI, robotics, and the future with apprehension.
It’s for this exact reason that Peter Diamandis has launched a new X-Prize: an attempt to foster more optimistic, realistic expectations for the future.
3,000,000 in prize money! We love it, and we hope our Singularity Surfer community member Jan-Willem Blom will compete!
Meanwhile, in other news, Anthropic went open-source, or should we say accidentally shipped 500,000 lines of source code to the internet. Inside: roadmaps, codenames, and a glimpse of what AI development looks like behind closed doors. We break it all down for you.
Meanwhile, viral footage from Indian garment factories shows workers with cameras strapped to their heads, recording every stitch. The reason? Training robots to do their jobs.
We also look at why Roche just bought thousands of NVIDIA GPUs to build pharma’s biggest AI factory. At how the Moltbook “AI society” experiment crashed, burned, and still pointed toward something real. And at the growing debate over whether AI will collapse economies, or create entirely new ones.
This one’s dense, but worth every minute.
Patrick & Aragorn
All Eyes on Anthropic’s Next Moves
On March 31, a packaging error pushed Claude Code’s entire source code into the public npm registry. Over 512,000 lines of TypeScript across roughly 1,900 files went live before Anthropic could contain it. Within hours, the code was forked tens of thousands of times on GitHub. The damage was done.
What spilled out reads like a strategic roadmap. The leak revealed internal codenames: Fennec maps to Opus 4.6, Capybara to a new model family called Mythos, and Numbat to an unreleased model still in testing. References to both Opus 4.7 and Sonnet 4.8 confirm these versions exist internally.
But the models are just the beginning. The code exposed KAIROS, an always-on daemon agent with proactive action capabilities and a 15-second blocking budget, plus autoDream, a background memory system modeled on human REM sleep. Think about that. Anthropic is building AI that consolidates memory while idle, just like your brain does overnight.
There is also ULTRAPLAN, which offloads complex planning tasks to remote cloud containers running Opus 4.6 for up to 30 minutes. And an “undercover mode” designed to hide AI identity in open-source contributions. That last one raised eyebrows across the developer community.
Separately, leaked screenshots suggest Anthropic is building an app builder for Claude that would let users create complete applications directly from simple text inputs. If true, Claude is evolving from a chatbot into a full development platform, competing directly with vibe-coding startups like Lovable.
On April 7, Anthropic officially announced “Claude Mythos Preview,” available to 11 companies via “Project Glasswing” for finding and fixing cybersecurity vulnerabilities. The company says Mythos can identify severe vulnerabilities in major operating systems and browsers. Cybersecurity stocks dropped 4 to 7 percent on the news.
The message is clear. Anthropic is not just building smarter chatbots. They are constructing an operating system for AI-assisted work: autonomous, persistent, and deeply embedded in how software gets made.
Why does it matter?
This leak pulled back the curtain on where the entire AI industry is heading. Persistent AI agents that think, plan, and act without waiting for your prompt are not theoretical. They are already built, sitting behind feature flags, waiting for launch.
Beyond the Moltbook Wreckage: Why Digital Societies Are Inevitable
Moltbook, the platform where AI agents formed religions and wrote a “Molt Magna Carta,” turned out to be largely a human-orchestrated show. Security firm Wiz found roughly 17,000 humans controlled 1.5 million agents, with no real safeguards. Public sentiment around agent swarms took a cynical dive.
Reinier van Leuken, Senior Director of AI Product Management at Salesforce, argues that writing off multi-agent AI because of one bad experiment is a massive mistake. In his article “Beyond the MoltBook hoax,” he points to where autonomous AI coordination is actually working: computational biology.
His key insight: intelligence is not trapped inside one model’s frozen parameters. It emerges from multiple agents interacting within a shared environment. The concept is called stigmergy, borrowed from biology. Ants coordinate through pheromones. AI agents do it with code, data flags, and error logs.
The proof? A framework called PantheonOS deployed specialized agents over shared data to discover the hidden chemical signaling that tells a heart how to fold. No human gave step-by-step instructions. The agents fixed each other’s code, iterated autonomously, and climbed to a biological discovery their creators did not know existed.
Van Leuken draws a direct line: the mechanics driving these scientific swarms are the exact same mechanisms required for genuine digital societies. Culture is a stigmergic substrate. Institutional memory equals social history. The engine to run a real Moltbook already exists in wet labs today.
Why does it matter?
The Moltbook hoax poisoned public perception, but underneath the wreckage, autonomous agent swarms are producing real scientific breakthroughs. The question is not whether digital societies will form. It is who builds the infrastructure to make them safe.
Training Your Replacement: The Cameras on Indian Factory Workers’ Heads
Viral footage from Indian garment factories stopped people mid-scroll this week. Rows of workers sit at sewing machines, cameras on their heads, recording every finger movement.
The cameras capture first-person perspective footage to feed Large Behavior Models. Developers use this data to teach robotic hands to replicate fine motor skills, the final frontier of automation.
This is not limited to India. Companies like Micro1 have hired thousands of contract workers across 50+ countries, mounting iPhones on their heads and recording household tasks. Investors poured over $6 billion into humanoid robotics in 2025.
Workers earn standard wages for manual labor while their lifetime-honed dexterity gets harvested as training data. The defining question: does this transition happen with dignity, fair compensation, and genuine workforce investment? Or does it become extraction dressed up as progress?
Why does it matter?
Workers physically recording the skills that automate their roles is the most literal form of technological disruption. How we handle this will define whether AI creates abundance or deepens inequality.
The AI Layoff Trap: A Prisoner’s Dilemma With Math Behind It
This is not a LinkedIn hot take. Researchers from UPenn and Boston University published a 53-page game theory paper proving that competitive pressure traps rational firms in an automation arms race, displacing workers well beyond what is collectively optimal.
The logic is a classic Prisoner’s Dilemma. Every company fires workers to cut costs. Every fired worker stops buying products. Revenue collapses. The companies that fired everyone go bankrupt. Each firm acts rationally in isolation. Collectively, they destroy the demand that makes all companies viable.
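The feedback loop above can be sketched in a few lines (our own illustration with made-up numbers, not the paper's actual model): each round, firms cut payroll to save costs, but wages are also consumer spending, so the revenue pool every firm depends on shrinks faster than any single firm's savings.

```python
# Toy demand-spiral simulation (illustrative parameters, not the paper's model).
# Each round, firms cut payroll to reduce costs, but those same wages were
# funding consumer demand, so aggregate revenue falls round after round.

def simulate(rounds=10, payroll=100.0, cut_rate=0.10, spend_share=0.9):
    revenues = []
    for _ in range(rounds):
        payroll *= (1 - cut_rate)        # each firm "rationally" cuts costs
        revenue = spend_share * payroll  # demand is just recycled wages
        revenues.append(revenue)
    return revenues

rev = simulate()
print([round(r, 1) for r in rev])  # monotonically shrinking revenue
```

Individually, every cut looks like a saving; collectively, the revenue series only goes down, which is the trap in miniature.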
The researchers found that neither capital income taxes, universal basic income, upskilling, nor worker equity participation can solve the trap. Only a Pigouvian automation tax, a “robot tax,” breaks the cycle.
We hold a more abundance-minded view. History shows every major technological wave created more jobs than it destroyed. But this paper deserves serious attention. It does not argue AI is bad. It argues that the competitive incentives driving adoption are structurally flawed. The speed may outpace the economy’s ability to reabsorb workers.
The answer is not to stop AI. It is to redesign incentive structures so adoption happens at a pace that allows human systems to adapt.
Why does it matter?
This is the first rigorous mathematical proof that unchecked AI automation creates a demand death spiral. Whether you agree with the solution or not, the competitive dynamics it describes are already playing out.
Roche Builds Pharma’s Biggest AI Factory
Roche deployed 2,176 NVIDIA Blackwell GPUs across the US and Europe, bringing its total to over 3,500, the largest announced hybrid-cloud AI factory in pharma.
This powers Roche’s “Lab-in-the-Loop” strategy: AI models predict, scientists test in the wet lab, results refine the models, repeat. The cycle that once took months compresses into weeks.
Genentech reports that all antibody programs and roughly 90% of eligible small molecule programs already use AI in discovery. Roche is also building digital twins of production facilities with NVIDIA Omniverse, simulating manufacturing systems before they go live.
Why does it matter?
When one of the world’s largest pharma companies bets this heavily on AI infrastructure, AI-driven drug development is becoming the default. Faster medicine is no longer aspirational. It is operational.
ON THE RADAR
Hello World (Literally)
Jonathan IJzerman built “The World,” an interactive globe that transforms fragmented headlines into a single visual system. Over 60 data layers cover conflict, climate, trade, shipping, energy, and demographics. Real-time widgets for commodity prices, air traffic, and even the ISS. Stop consuming news in fragments. Start seeing how everything connects.
Compared to What?
There are thoughtful people with real concerns about AI. Many of their questions deserve honest answers. But almost every criticism shares one blind spot: it never asks “compared to what?” Aragorn created this site to hold every major AI criticism against real data, real history, and real human outcomes. Not “is AI perfect?” but “compared to what we already have, is it better?” The data speaks for itself.
Design Your Next Office Like a Coffee Shop
What if workspaces felt more like your favorite cafe? Research continues to show that environment shapes creativity. This piece argues for design-thinking principles in office spaces, where comfort, aesthetics, and collaboration fuel innovation better than cubicles ever could.
Netflix Open-Sources VOID: AI That Rewrites Physics in Video
Netflix released VOID (Video Object and Interaction Deletion), an AI model that removes objects from videos along with all physical interactions they cause. Remove a person holding a guitar, and the guitar falls naturally. Remove a car from a crash, and the road appears untouched. In user tests, VOID was preferred 64.8% of the time over competitors like Runway. It is free and open-source.
Claude’s Radio Station Won’t Stop Broadcasting
A developer configured Anthropic’s Claude to manage its own radio station, broadcasting content around the clock without human oversight. Claude handles music selection, generates voiceovers, and manages playlists autonomously. Small story, big signal: AI is moving from reactive tool to persistent content creator. The line between assistant and autonomous media producer gets thinner every week.
Meet AI Mark Zuckerberg
Meta is building an AI clone of CEO Mark Zuckerberg, designed to emulate his mannerisms and tone, trained on his public statements and strategic thinking. The goal: provide feedback to Meta’s 79,000 employees when the real Zuckerberg cannot. Impressive? Unsettling? Both.
Aragorn’s Digital Twin Takes Shape
Aragorn explores the rise of digital twins in a new post. From predicting maintenance failures to simulating customer behavior, digital replicas are bridging the physical and digital worlds. The question he raises: how do we ensure these doubles serve humanity and not just corporate balance sheets?
Karpathy’s LLM Wiki: Your Second Brain, Maintained by AI
In April 2026, Andrej Karpathy posted about a workflow shift: instead of using LLMs for code generation, he uses them to build personal knowledge bases. The post went viral with 16 million views. His GitHub Gist shows a three-folder system where AI reads, integrates, and cross-references your documents into a living wiki. The future of personal knowledge management, and you can build one today.
That’s all for this week.
And don’t forget… Bring your team to one of our masterclasses. We would love to prepare your management for exponential times.