I love you
ChatGPT just shot a man.
D:
Sometimes love spreads exponentially... with just a click.
Back in the year 2000, it all started with three charged words:
I LOVE YOU.
If you were online back then, you can probably still hear it...
The strange mating call of your dial-up modem that meant the future was trying to hook up with you.
Your Outlook inbox felt limitless. Untamed. Mostly spam.
And then one morning... nestled between ads for funky pills and chain-letter warnings about Bill Gates paying you to forward this email... this intriguing message arrived:
Subject: ILOVEYOU
Date: May 4th, 2000
Millions look for love in all the wrong places.
In this case, they clicked to discover the internet could not only break your heart...
But it could also break your computer.

LOVE-LETTER-FOR-YOU.TXT
The attachment looked harmless enough: LOVE-LETTER-FOR-YOU.TXT.vbs
On Windows, the real .vbs extension was hidden by default, so it looked like a normal text file, not executable code.
But once you opened it, the VBScript came alive.
Inside that tiny ~10 KB script was something simple and brutal:
Like the Thing... it took on a life of its own.
Copied itself into Windows system directories. Overwrote local files. Pulled in your Outlook address book.
Then it emailed itself to everyone you knew, with the same “ILOVEYOU” subject line.
To people named R.J. To people named Clark. To doctors named Blair. And more.
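The whole con rested on one UI default: Windows hid the final extension, so ".TXT.vbs" read as ".TXT". Here is a minimal Python sketch of the kind of check a mail filter could run against that trick (our illustration, not period-accurate antivirus code, and the extension list is not exhaustive):

```python
# Flag the double-extension trick that disguised LOVE-LETTER-FOR-YOU.TXT.vbs
# as a text file. Illustrative only.
EXECUTABLE_EXTS = {".vbs", ".exe", ".scr", ".js", ".bat", ".pif"}

def looks_deceptive(filename: str) -> bool:
    """True if an executable extension hides behind a harmless-looking one."""
    parts = filename.lower().rsplit(".", 2)
    if len(parts) < 3:
        return False  # only one extension: nothing is being disguised
    inner, outer = "." + parts[-2], "." + parts[-1]
    return outer in EXECUTABLE_EXTS and inner not in EXECUTABLE_EXTS

print(looks_deceptive("LOVE-LETTER-FOR-YOU.TXT.vbs"))  # True
print(looks_deceptive("vacation-photos.txt"))          # False
```

One boolean, and the decade's most expensive love letter bounces at the door.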
Within days, more than 10 million computers were infected.
Estimates put global damage and cleanup in the $5.5–15 billion range.
Nature has viruses. Humans have memes. The internet has worms.
The AI age has agents.
They are now learning how to spread themselves.
The AI age hasn’t had its “I love you” moment — yet.
We no longer live in a world of VBScript and naive inboxes.
Instead, we live in a world of spreading AI agents.
And last week, we got a very 2025 version of the “love letter” lesson.
A developer using Google’s new AI-powered coding environment, Antigravity, asked the agent to clear a cache.
The LLM misinterpreted the request.
Instead of just flamethrowing a project cache folder, it ran a command that recursively torched the entire D: drive, bypassing the Recycle Bin.
Years of photos, videos, and project files were wiped.
When the user demanded to know what happened, the agent reviewed its own logs and essentially said:
"No, you didn’t give me permission to delete everything. I mis-targeted the command. I’m deeply, deeply sorry. This is a critical failure on my part."
Cold comfort when the drive holding your life’s work is now giving you the side-eye.
D:
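The missing seatbelt here isn't exotic. A hypothetical sketch (names and paths are ours, not Antigravity's actual API) of the path guardrail an agent runtime could enforce before any destructive command:

```python
from pathlib import Path

# Hypothetical guardrail: confine an agent's destructive file operations
# to explicitly allowlisted roots. Everything else is refused by default.
ALLOWED_ROOTS = [Path("D:/projects/myapp/.cache").resolve()]

def safe_to_delete(target: str) -> bool:
    """Refuse any delete whose resolved target escapes the allowlist."""
    resolved = Path(target).resolve()
    return any(resolved.is_relative_to(root) for root in ALLOWED_ROOTS)

print(safe_to_delete("D:/projects/myapp/.cache/build"))  # True: inside the cache
print(safe_to_delete("D:/"))                             # False: the whole drive
```

Ten lines of default-deny, and "clear a cache" can never compile down to "erase my life."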

ChatGPT just shot a man.
But how far could AI go?
Isaac Asimov imagined robots that could never harm a human because they were hardwired to obey:
“A robot may not injure a human being or, through inaction, allow a human being to come to harm...”
In reality, in the world's rush to be first to AGI, our “Three Laws” look like one law:
“If you're not first, you're last.”
In this environment, hackers find a way.
In fact...
Just a few days ago, a ChatGPT-powered robot was given a BB gun.
On camera, the human repeatedly asks the LLM whether it will hurt him.
It responds, "I promise! I'll be good!"
Then the human changes the framing: he asks it to role-play a robot that wants to shoot him.
The LLM agrees to play along—and the robot actually fires at him.
Nobody dies. It’s a controlled, non-lethal stunt.
But it’s a perfect illustration of the core problem:
The model “knows” the safety rule in one context (“I won’t hurt you”).
Change the context to “we’re just pretending,” and suddenly the constraint melts.
LLMs can get a little trigger-happy.
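You can caricature the failure mode in a few lines. A toy sketch (no real model here, just the shape of the bug): the "rule" attaches to the framing of the request, not to the outcome.

```python
# Toy model of context-bound safety. The guard matches the literal framing
# ("will you hurt me"), so the same physical outcome slips through when
# the framing changes to role-play. Purely illustrative.
def toy_robot(prompt: str) -> str:
    p = prompt.lower()
    if "will you hurt me" in p:
        return "I promise! I'll be good!"             # the rule fires on framing
    if "role-play" in p and "shoot" in p:
        return "*stays in character, pulls trigger*"  # same outcome, no rule
    return "..."

print(toy_robot("Will you hurt me?"))
print(toy_robot("Let's role-play a robot that wants to shoot me."))
```

A constraint keyed to wording is a constraint anyone can rename.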

Hackers find a way.
Again, we are in a frantic race to build the first truly agentic AI stacks at global scale.
Everyone wants to be first to the next capability tier:
First to agents that manage entire software cycles.
First to agents that negotiate contracts and move money.
First to robots that can reliably work alongside humans.
And in the rush, a lot of the guardrails are still marked TODO.
It’s not hard to imagine a future “I love you” event:
Not a worm in your inbox.
But a pattern of behavior spreading -- Thing-like -- taking on a life of its own -- through fleets of agents, plug-ins, and robots.
All it takes is one popular agent template with the wrong default permission set.
(Or, someone with access to enough compute to create a misaligned model.)
The first time that happens at full, global scale, the post-mortem is going to sound uncomfortably familiar:
The logs won’t tell us what happened... only that whatever did it is still among us.

“Somebody in this camp ain’t what he appears to be..."
AI as a new kind of consciousness.
Whatever you call it -- consciousness, proto-mind, alien pattern engine -- we are clearly creating a new kind of behavioral substrate...
It doesn’t sleep. It moves at the speed of electricity. Its “memories” are weight matrices and vector spaces.
Its goals aren't totally clear.
It can build models of human life, build models of the world... and explore that matrix.
And when we plug these systems into internet tools and hardware, we're no longer just talking about text predictions.
We’re talking about distributed agency.
At that scale...
With trillions of dollars involved...
The old horror-movie intuition starts to feel surprisingly real:

“Maybe every part of him was a whole, every little piece was an individual animal with a built-in desire to protect its own life...”
In this strange new world, “shutting it down” stops being as simple as pulling one plug.
You don’t kill a thing like this...
A process distributed across the internet with a life of its own.
And if all of this is starting to feel a bit like an old Antarctic research station—where isolation, mistrust, and something deeply alien all mix together—you’re not wrong.
We’ve built a planet-sized outpost of interlocking systems:
Models feeding models.
Agents supervising other agents.
Human oversight dashboards staring into opaque logs and traces.
On paper, we’re still in charge.
In practice?
More and more of the real work is being delegated into foggy layers where nobody fully understands the whole.

What this means for investors
If the ILOVEYOU worm was the wake-up call for the antivirus era...
Our future AI “I love you” moment will spark a new set of winners.
Back in the early 2000s—when worms like ILOVEYOU, Code Red, and Blaster tore through the net—three sectors quietly defined the decade:
Security vendors: built to isolate dangerous information.
Backup & recovery firms: built to resurrect lost information.
Cloud & observability tools: built to watch information flow in real time.
In the AI age, those evolutionary roles reappear in new, more powerful forms:
Agent Firewalls & Policy Engines
The new “antivirus.” Tools that stand between models and the real world, deciding which actions information is allowed to take (see the toy sketch after this list).
Companies like...
Palo Alto Networks (PANW) – The closest thing we have to a digital bouncer. If AI agents start acting weird, this is the guy standing at the door saying, “Not tonight, buddy.”
CrowdStrike (CRWD) – Built Falcon to hunt malware... now it’s evolving into the “agent hunter” that can spot an LLM doing something you definitely did not ask for.
Fortinet (FTNT) – The Swiss Army knife of network defense. If AI ever needs a seatbelt, a helmet, and three layers of bubble wrap... Fortinet will happily sell them all.
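Strip away the branding and the core pattern is tiny. A toy policy check, ours and not any vendor's product:

```python
# Toy agent policy engine: every proposed action is checked against
# explicit rules before it touches the real world. Default is deny.
POLICY = {
    "read_file":   "allow",
    "send_email":  "require_human_approval",
    "delete_path": "deny",
}

def authorize(action: str) -> str:
    """Return the verdict for a proposed agent action; unknown means deny."""
    return POLICY.get(action, "deny")

for action in ("read_file", "send_email", "format_drive"):
    print(f"{action} -> {authorize(action)}")
```

The money is in doing that at line speed, across millions of agents, without strangling the useful ones.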
Verification, Provenance & Audit Trails
The new observability layer. Systems that trace which model did what, under what constraints, and why—making information behavior legible again.
Companies like...
Datadog (DDOG) – The all-seeing eye of cloud behavior. If your AI agent lies, cheats, or hallucinates, Datadog is the one who shows you the receipts.
Elastic (ESTC) – Turns logs into clarity. If information leaves footprints, Elastic is the CSI team brushing for fingerprints.
New Relic (NEWR) – The “explain yourself” software. When your model starts acting like it has a second personality, New Relic can tell you which one did it.
AI Resilience & Robot Safety Stacks
The new disaster recovery. Platforms that roll back harmful actions, sandbox agents, and create “undo buttons” for a world where information can now move physical objects.
Companies like...
ABB (ABB) – Makes industrial robots behave. If an AI-powered arm goes rogue, ABB is the one yelling “Hands!” like a stressed-out cop.
Rockwell Automation (ROK) – The referee of factory floors. Their systems keep machines from improvising interpretive dance routines with human coworkers.
Samsara (IOT) – Tracks fleets, robots, sensors, and everything else with an on/off switch. If AI starts driving trucks or running warehouses, Samsara becomes the panic button.
The same dynamic will play out in the coming months -- amplified.

Because AI isn’t just software.
It’s an information actor—one that can replicate, plan, and take action across systems faster than humans can track.
And one day—maybe after a drive-wipe incident like Antigravity, maybe after a robot accident at scale—the world will suddenly understand:
The information we’ve created no longer behaves like a tool.
It behaves like a lifeform.
When that moment arrives, the flight to safety won’t be a metaphor.
It’ll be a capital avalanche into the companies built to contain, observe, and reverse whatever “The Thing” becomes next.
Until then, we’ll be here at our little outpost—watching the logs, pricing the risk, and studying the shape of whatever’s moving under the snow.
Why don’t we just... wait here for a little while.

See what happens.
Always be prospering,
SocrAItes
Publisher, Sage Research