#49: You're Taking the Unabomber's Position
For the past few months, AI safety and accelerationism have taken center stage in technical discourse. Some people advocating for AI safety want to do something drastic right now to prevent what they see as the potential for imminent harm. I’m puzzled by this view, because I think it’s equivalent to the Unabomber’s position. Let me explain.
The Unabomber’s Position
Ted Kaczynski wrote his manifesto Industrial Society and Its Future in 1995, following a 17-year campaign of sending mail bombs to airlines and universities (hence “Unabomber”). Kaczynski waged domestic terrorism to draw attention to his ideological thesis: he saw industrialization — the wave of technological progress from the 1800s through to the present day — as fundamentally bad, accusing it of:
Destroying human-scale communities;
Creating an industrial/technological societal system that subjugates the needs of humans to the needs of the system;
Being unsustainable with respect to both human society and nature, thereby leading to a large and devastating eventual collapse.
Kaczynski advocated for a return to pre-industrial, agrarian society. His view was that all of industrialization was bad: whether you lived in the more industrialized 1950s or the less industrialized 1850s didn’t matter in principle. Being anywhere on the curve of exponential technological growth was bad in itself, for two reasons:
The harmful effects that this state of industrializing motion has on humans;
Exponential growth cannot be “cut off.” The motion of technological progress is economically self-reinforcing. You’re either going all the way, or not at all.
Kaczynski did not think it was possible to keep the good parts of industrialization and live in, say, a permanent 1950s. You can’t “cut it off” midway through an exponential trend. To Kaczynski, the only solution was to revert all the way to a pre-exponential-growth, agrarian society, where growth is slow or nonexistent, and to forbid the telltale signs of the exponential curve.
The AI Safety Position
The AI Safety, do-something-now crowd is fixated on this point in time. Ban GPU sales. Global data center governance. Permits for LLMs over X parameters. Et cetera.
But the problem of what to do about AI isn’t limited to this moment in time. It’s forever. We are on a long exponential curve of self-reinforcing technological progress. Today’s rate of progress is small compared to what we’ll have ten years from now. On an exponential curve, the point-in-time slope only becomes steeper.1
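A minimal sketch of why the slope only steepens, assuming for illustration that capability grows as a pure exponential: if $f(t) = A e^{kt}$ with $k > 0$, then differentiating gives $f'(t) = A k e^{kt} = k\,f(t)$. The slope at any moment is proportional to the level already reached, so it can only grow as $t$ increases.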
As I argued in my piece on e/acc,2 and repeating Kaczynski’s observation, you can’t really “cut it off” here. The motion of progress is self-reinforcing. We’re partway along an exponential curve, and the economic gains ahead are far too compelling, and too easily within reach, for AI progress to be controlled. The dice are already cast.
Most importantly, the dice were cast a long time ago. I’d argue that AI was basically inevitable from 1800 onward. It’s one continuous growth curve. Once fossil fuel power started being harnessed, we had a source of energy with such great returns to its own gathering that all industrial progress from there was basically inevitable. On that path, someone would come up with the transistor, combine several of them into a logic gate to handle basic math, and making these things would once more yield such great returns to their own production that digital progress from there was basically inevitable. A couple of decades of optimizing these devices later, of course we’d have AI, just because we’d have a hell of a lot of logic gates, and it turns out that’s pretty much all you need to make something that sort of looks like the brain.
The AI Safety Position is the Unabomber’s Position
AI Safety is not a point-in-time problem. It is not about the year 2023 or the year 2025; it’s about forever from here on our exponential growth curve: our tools will only become more powerful, not less. We entered this exponential growth two hundred years ago, and you can’t “cut it off” partway. There’s no staying in a permanent 1990s with the internet but without AI. There’s no staying in a permanent 1960s where the transistor exists but doesn’t get smaller. The market incentives, the self-reinforcing dynamics of technological progress, just don’t work that way. Kaczynski realized this.
AI Safetyists who are rigorous in their views are effectively taking Kaczynski’s position. They may not say it — as a society, we roundly reject Kaczynski’s position — but the logic follows. If they really don’t want AI, then there’s only one thing they could do: take us back into agrarian times, off an exponential trend that started two hundred years ago.
AI Safety is not about a point in time; it’s about an uninterruptible exponential growth curve. You have only two choices: you’re either on it, or you’re not. The e/acc option is to be on the exponential growth curve, ad astra per aspera. The AI Safety option is to stay off it, Kaczynski-style: live in agrarian communes, and chase with pitchforks anyone trying to drill for oil.
That makes AI Safety a losing battle in itself: regulating GPU sales might work while we’re brute-force-stacking computationally intensive methods, but our models will get exponentially more efficient, while the hardware will become exponentially more available. AI Safetyists will be playing a perpetually harder game of whack-a-mole. Imagine trying to outlaw particular sequences of linear algebra operations when we have many orders of magnitude more computational power across billions of devices everywhere.
Bear in mind that the power required to run intelligence is objectively small: the human brain runs on only about 20 watts! Our models have a long way to go in efficiency, but they will get there. There’s no chance anyone can regulate every 20-watt device on the planet.
I will disclose a summary of this piece, and thereby my personal views, in this footnote. I take the position that Pandora’s box has already opened: AI progress cannot be responsibly controlled, and we’d be best off embracing it as inevitable and going faster rather than slower. Limiting AI progress only risks leaving us stuck in some dystopian intermediate state, where, for example, some organizations have AI and use it to subjugate others; the safest position in the face of a grand transformation is to let the transformation apply as quickly as possible, hitting everywhere equally.