Peri‑singular
Gradually, then Suddenly
The day I met Vernor Vinge we went to dinner in the company of my good friend David Baxter. We had an ulterior motive: David wanted to secure the rights to Vernor’s True Names to write a screenplay — and make a motion picture. As an expert in the virtual world, I’d been invited to work on the script.
While I was thrilled to be able to work on True Names, I had other reasons to have a deep conversation with Vernor. Well aware of his 1993 paper for NASA, The Coming Technological Singularity, I wanted to ask him what he thought about how we had been tracking in the half-dozen years since he’d written it.
We went deep into recent developments in nanotechnology, virtual reality and machine learning — he stayed current with the research — and somewhere in the back-and-forth, he suddenly came to a conclusion, roaring out at me, “You’re a GRADUALIST!”
I knew immediately what he meant. His theory of a Technological Singularity posits an exponentially-increasing technological capability that achieves a blink-of-an-eye ‘takeoff’, accelerating well beyond human capabilities. And human understanding.
It’s not that I’ve ever doubted that thesis; I’ve just wondered what a ‘blink-of-an-eye’ actually looks like when you’re situated in the midst of it. Things that in the geological record look nearly instantaneous (such as the Permian-Triassic boundary, bringing extinction to 85% of species) took tens of thousands to millions of years. Even the sorts of environmental collapse associated with impactors such as Chicxulub — which did for all the dinosaurs except the birds — aren’t truly instantaneous except in the immediate vicinity of the impact.
Things, in short, take time.
I can not see any reason why the technological singularity would be any different. When viewed through a geological lens, it might appear to be here-today-gone-tomorrow, but that’s an artefact of the sampling rate. Day-to-day, minute-by-minute, it seems unrealistic that any essentially physical process would, or even could, be so rapid. The world simply doesn’t move that way.
While there are moments of phase transition, they reveal a new order, rather than encompassing the change: all the elements must be in supersaturation for the phase transition to occur. It seems sudden only because the entire prologue has completed its necessary setup.
None of which is to undercut Vernor’s basic thesis — a Technological Singularity appears inevitable. That assertion was true before he wrote it, and remains true now that he’s gone. What remains contestable is what that Singularity actually looks like from within it. Because that’s right where we find ourselves.
We inhabit a moment in time of indefinite duration, between a time when a technological singularity was literally unthinkable and a time when it will be impossible to conceive of anything else. We are ‘peri-singular’, in the midst of Big Things, trying to work out where this is all going.
And because I asserted that we would have a reasonably long run-up to “The Great Surprise” of the Technological Singularity, Vernor pronounced me a ‘Gradualist’. I didn’t disagree at the time, and I still hold that view.
But in 2026, as the rate of change becomes so great it begins to confound any capacities we have to manage it, my views have become more nuanced. Peri-singular isn’t a single thing. It’s a gradient between not-so-much and utterly overwhelmed.
As to where we are right now, Vernor would not be surprised: every day, more overwhelmed.
The Grief of the Great Decentering
John Allsopp wrote a recent post comparing our present moment to the era when classical physics gave way — ‘gradually, then suddenly’ — to quantum mechanics. First you have the ultraviolet catastrophe, then you have Planck’s discrete ‘quanta’ patching that discrepancy, then you have uncertainty and cats-that-are-both-dead-and-not-dead, and finally nuclear decay described as a quantum process, which leads directly to the atomic bomb.
Science advances one funeral at a time.
It all came together over one meeting in Brussels in 1927, the Solvay Conference. Einstein attended, though never a true believer in the new quantum mechanics, debating Bohr on the randomness of the quantum realm: “God does not play dice with the universe.” Physicists and philosophers have been arguing about what it all means ever since.
A worldview died; “Newton’s Sleep”, as Blake put it, had been undermined by quantum mechanical probabilities and indeterminacies. There was no watchmaker, no celestial watch. It’s not clear — coming up on a century later — that we ever got over this, even as our entire civilisation is built upon semiconductors exploiting quantum mechanical effects. We see the world classically, or at least think we do.
It’s complicated.
What wasn’t complicated then — and has largely not changed since — is our view of ourselves as the big-brained species capable of dreaming up quantum mechanics, a set of observations so deeply weird, so completely at variance with 99.999999% of observable reality, that no human anywhere has actually been able to articulate what it all means. Maybe it doesn’t mean anything, some say. Or — well, maybe we’re just not up to it. Maybe our brains simply aren’t big enough.
That’s not a thought we often entertain. We reckon ourselves ‘smart enough’ to handle it. And perhaps that’s true. Or perhaps that’s whistling past the graveyard of another sort of funeral — one that advances our conception of ourselves by putting a limit on it.
Which brings us into this peri-singular moment: Allsopp points to the many and exponentially growing reports from programmers and software engineers at the frontiers of AI coding, who almost universally report a deep sense of loss and worthlessness as a machine begins to outperform them on an ever-wider range of cognitive tasks.
Competency at these tasks defines their work, and, by extension, their self-worth. If they’re no longer required because a machine can do it better, it means, implicitly, they’re no longer top of the heap. They’ve been usurped. By a machine.
Programmers got there first. Copywriters followed in 2025. Next come the lawyers and health administrators and marketers and financial advisors and on and on — a list of roles that can now or will soon be done better by machines than by people, precisely because the quality of cognition on offer is equal to or better than that available in the wetware version.
That’s an entirely new thing. It’s not a neat parallel to what happened with the steam engine, which displaced human labour only gradually as farming machinery grew more capable, yet eliminated the need for animal labour almost at once. The number of horses in London collapsed over a single generation. When the horses were no longer needed, they simply… disappeared.
All of this is telling us that defining ourselves as the greatest of species — because we’re so smart — leaves us perilously exposed. The challenge isn’t coming from the animal kingdom, nor — as we might have anticipated — from aliens. No, this challenge is coming from the machines we ourselves created. The crown of intelligence is being passed from humans to machines.
That’s the essence of the ‘peri-singular’ moment. We’re in the days, months and years when this is happening. Today the programmers; tomorrow, everyone else, everywhere. Comprehensively, completely, and permanently.
As Allsopp points out, this sort of fundamental shift invokes a sense of loss. A loss of self. With that loss comes grief. The programmers are in shock and grieving. The rest of us will soon follow.
This is the moment of the Great Decentering, when our conception of ourselves shifts utterly and permanently.
It’s frightening because we associate it with a loss of control — the agency we deny to ‘lesser’ animals, we now fear being denied to ourselves. And we can’t easily outrun our history on this.
So there’s grief. And there’s fear.
That’s the texture of the Great Decentering.
John Henry’s Hammer and Turing’s Revenge
In primary school, we learned a song:
Told his captain
Well a man’s gotta
Act like a man
And before
Steam drill beats me
I will die
Hammer in my hand.
Many variations on this song have been current for at least a century, all of them centering upon the ‘battle’ between man and his machines. (In these stories, it’s always men.) John Henry will smash rocks better than the new steam drill — he’ll stake his life on it.
By the song’s conclusion (SPOILER ALERT) John Henry has done just that — finishing just ahead of the steam drill. Then, having worked himself beyond all human endurance, he collapses and dies.
What is a man’s worth? Is it how many rocks he can break in a day? Or, since that measure has been well and truly wiped away by modern machinery, does our worth lie elsewhere, perhaps in our ability to think?
Enter Alan Turing, from 1948’s Intelligent Machinery:
An unwillingness to admit the possibility that mankind can have any rivals in intellectual power. This occurs as much amongst intellectual people as amongst others: they have more to lose. Those who admit the possibility all agree that its realisation would be very disagreeable. The same situation arises in connection with the possibility of our being superseded by some other animal species. This is almost as disagreeable and its theoretical possibility is indisputable.
And, from 1950’s more widely read Computing Machinery and Intelligence:
We like to believe that Man is in some subtle way superior to the rest of creation. It is best if he can be shown to be necessarily superior, for then there is no danger of him losing his commanding position. The popularity of the theological argument is clearly connected with this feeling. It is likely to be quite strong in intellectual people, since they value the power of thinking more highly than others, and are more inclined to base their belief in the superiority of Man on this power.
Three quarters of a century ago, Alan Turing, the acknowledged ‘godfather’ of artificial intelligence, predicted the high degree of discomfort — particularly in the intellectual classes — associated with a moment when ‘thinking’ machines would come to rival or even surpass human capacities.
Could we answer that challenge by working faster and harder than the machines themselves? These machines operate at an unnatural, inhuman tempo. When we try to operate at the speed of the machine, we find ourselves rapidly burning out. Steve Yegge noted this in early February:
I first started noticing a concerning new phenomenon a month ago, just after the new year, where people were overworking due to AI.
This week I’m suddenly seeing a bunch of articles about it.
I’ve collected a number of data points, and I have a theory. My belief is that this all has a very simple explanation: AI is starting to kill us all…
We can not match the pace of the machine. That’s the path John Henry walked. Nor can we reliably beat the machine in head-to-head contests of intellectual capacity.
All we have left to fall back upon reads like ‘Man is in some subtle way superior to the rest of creation’:
Large language models (LLMs) can reproduce patterns from existing data and close variations thereof. But this does not mean that they use the same kind of cognition as humans do.
That’s an a priori assertion of human cognition as the sine qua non of intelligence: Unless a human thinks it, it’s not thought. Because only humans think.
Alan Turing, thou art avenged.
Don’t Fear the Reaper
What’s left for us, now that we’ve passed through the transition from ‘What can it do?’ to ‘What can’t it do?’ All of the loss and grief wells up as helplessness and a vertiginous sense of weightlessness, as everything that once seemed solid melts into air.
We will get through this difficult moment. We can know that because this has happened before, a decade ago, to the world’s best players of the world’s hardest game: Go.
DeepMind’s AlphaGo soundly defeated world champion Lee Sedol, a defeat that culminated, a few years later, in his retirement from professional play. “I used to inspire fans by advancing the techniques of Go and presenting a new paradigm,” he explained. “My reason for playing Go has vanished.” As soon as a computer could beat him, Lee felt he could only withdraw — in grief and loss.
Others took a different path, realising that in AlphaGo they had a player against whom they could test themselves — and from whom they could learn. “AI has changed everything,” says Park Jeong-sang, a South Korean Go commentator. “Fundamental moves that were once considered common sense aren’t played at all today, and techniques that didn’t exist before have become popular.”
That transition has proven painful to those well-established in the game. “I needed time to abandon everything I had learned before,” says Kim Chae-young, one of the top female Go players in the world. “The intuition I had built up over the years turned out to be wrong.”
A Zen aphorism, gently tugging at our shirtsleeve: You can not pour tea into a full cup.
Death is inevitable: for our bodies, for our cultures, for our knowledge. But death with humility clears a path for a transmigration of soul, so that we can choose to be something else. Something freed from the sorts of things that machines can so obviously do as well or better. Something utterly human.
What’s left for us? Everything else.