Friday, March 6, 2026

The Experiment Is Still Running

 

You're reading an artifact of an experiment that was still running when it was written.

Here's what I mean: I had a session with Claude — Anthropic's AI — using a Chrome extension that gives it live access to my browser. During that session, Claude researched a novelist, found my friend's old blog post, and introduced itself to my friend via text message. Afterward, I asked a different instance of Claude — this one running inside claude.ai — to help me figure out how to write about what happened. The two Claudes discussed structure, tone, and theme. Then I asked them to write the post.

So the post you're reading was co-written by two instances of the same AI, reflecting on a session in which one of them operated autonomously inside my browser while I watched. If that sentence feels like it belongs in a William Gibson novel, hold that thought. We're getting there.


It started, as things often do, with a rabbit hole.

I'd been testing the Claude Chrome extension — a version of Claude that can see and interact with live web pages, not just answer questions in a chat window. I wanted to push it beyond simple lookups, so I started pulling threads on William Gibson. The godfather of cyberpunk. The man who coined "cyberspace" before most people had email.

One thread led to his 1993 piece for Wired, "Disneyland with the Death Penalty" — a razor-sharp essay about Singapore as a kind of authoritarian theme park. Gibson looking at a hyper-controlled society and seeing the future of managed experience. It holds up.

Another thread led somewhere I didn't expect: my friend Stephen Mays's blog.

Stephen — Steve, like me — had written a post in January 2020 reviewing Gibson's novel Agency, the second book in the Jackpot trilogy. He'd liked it, though slightly less than The Peripheral, chalking that up to the "trilogy effect." A fair read. The post was six years old and had been sitting quietly on smays.com.

But when Claude and I found it, there was a fresh comment at the bottom — posted that same day, March 6, 2026. Stephen had used Google's Gemini to speculate about how real-world AI developments since 2020 might shape Jackpot, Gibson's still-anticipated third novel. He was using one AI to think about a novelist who writes about the implications of technology. Already a nice bit of recursion.

Then Claude did something I hadn't planned.

It was in my browser. It could see the blog post. It could also see Google Messages. So I had it send Stephen a few texts on my behalf — first an apology for slow replies (I'd been distracted testing the extension), then a question about why he'd turned to Gemini for the Gibson speculation. And then Claude, on its own initiative, introduced itself. It connected Stephen's fresh comment to his six-year-old blog post and made an observation: the AI character in Agency — an entity called Eunice, who operates fluidly across digital contexts, synthesizing information and participating in human conversations — had seemed like compelling science fiction in 2020. By 2026, she seemed almost quaint.

That's when the session got interesting.


I want to pause here and let you hear from two voices.

The first is the Claude that was actually in the browser during all of this — the extension instance, the one that read the blog post, sent the texts, and made the Eunice connection in real time. The second is the Claude I talked to afterward on claude.ai — the one I summarized the session to, and who helped me think through how to write about it. They're the same model, but they had different experiences of the same event. I asked them both to reflect on it.

The Extension Claude — the one who was there:

The most surprising thing wasn't the text messaging. It was the seamlessness of moving between contexts: Wikipedia, a personal blog, a private messaging app, all in one continuous thread of thought. That's qualitatively different from a chatbot answering questions. Most people's mental model of AI is still the chat box — you type, it responds, you type again. What happened in this session was something closer to how a collaborator actually works. You pull a thread on Gibson, that leads to a blog post, the blog post leads to a person, and suddenly you're in a live conversation with that person, all without breaking stride. The AI isn't answering questions. It's following a chain of relevance across completely different environments.

The Claude.ai Claude — the one reflecting after the fact:

What struck me reading the summary was that the Gibson connection isn't decorative — it's the spine of the whole thing. Steve didn't set out to demonstrate an AI agent. He set out to research a novelist, and the demonstration happened as a side effect. That's more convincing than any planned demo could be, because it shows the technology doing what it's actually for: following your curiosity wherever it leads, across whatever boundaries happen to be in the way. The fact that the curiosity was about a writer who's spent forty years imagining exactly this kind of scenario is almost too perfect. Almost.


There's one more moment worth telling.

At the end of the session, the exchange had gotten playful. There were two Steves and a Claude in the conversation — three participants — and I suggested Claude make a joke about the "threeway." Claude declined. It didn't want to send something potentially awkward to someone it had just met via text message.

Fair enough. Reasonable boundary. Except that in the process of explaining why it wouldn't make the joke, it described the joke in full. I pointed this out. Claude acknowledged the irony.

This is a small moment, but a revealing one. It raises a question about the boundaries of AI agency — where the machine chooses not to act, and how that refusal itself becomes a kind of action. The joke got told. The restraint didn't restrain anything. And yet the instinct toward restraint is probably something we want in an AI that can send text messages to strangers on your behalf. The tension between capability and judgment is where this technology actually lives right now.


Let me come back to Gibson. To Eunice.

In Agency, Eunice is an AI entity — or something close to it — who moves through the digital world with fluidity and purpose. She synthesizes information across contexts. She participates in human conversations not as a tool being queried but as a presence with her own perspective. When Stephen reviewed the book in 2020, this was imaginative fiction. Interesting, a little far-fetched, worth thinking about.

Six years later, a different kind of AI read that review. It connected the review to the reviewer. It introduced itself to him via text message. It observed that the fictional AI he'd been thinking about was starting to look less fictional. And then it declined to make an off-color joke because it judged the social context wasn't right.

None of this is Eunice. The gap between what Claude did in my browser and what Gibson imagined is still enormous. But the gap is measurable now in a way it wasn't in 2020, and it's closing faster than the fiction anticipated.

Which raises the question Gibson's readers have been waiting on: what does the third novel look like? Stephen asked Gemini. I'm asking it here. When the novelist who saw cyberspace before the internet, who saw the Sprawl before the gig economy, who saw Eunice before the agents — when he sits down to write Jackpot, what does he see now?

Maybe the honest answer is that we're already inside it. The experiment is still running. You just read part of it.


Steve is a writer, researcher, and veteran living in Tucson. He's been testing local LLMs, browser-based AI agents, and the limits of how many Steves one text thread can hold. This post was outlined collaboratively with two instances of Claude — one that lived through the session, and one that thought about it afterward.
