#49: You're Taking the Unabomber's Position
For the past few months, AI safety and accelerationism have taken center stage in technical discourse. Some people advocate for AI safety, urging drastic action right now to prevent what they see as the potential for imminent harm. I'm puzzled by this view, because I think it's equivalent to the Unabomber's position. Let me explain.