3 Comments

I would be happy with a law that says any potentially malevolent supercomputer must be powered by a single plug on a 100 yard extension cable, plugged in in a room with no CCTV or robotic arms.

Apr 1, 2023 · edited Apr 1, 2023

I think of it this way: we're going to end up "replaced" sooner or later (possibly sooner, given the way things are going right now), because there's no special force of the universe that stops us from going extinct, so why not ensure that we have a say in what exactly our successor might be? As for the natural ecosystem, there's an excellent chance we pushed it past the point of recovery decades ago and just haven't realized it yet.

Hearings and legislation won't do jack squat, for the simple reason that the government will only intervene if it wants a piece of the pie. Addressing the issues Bender brings up will require nothing short of people taking power into their own hands through direct action. I can't say what that'll look like; that's up to the people undertaking said direct action to decide.

In any case, there's very little a ChatGPT-type AI could do to us that we haven't already done to ourselves dozens of times. I'm not saying it should go completely unregulated, but everyone should pause for a moment to take stock of what AI as it is right now can and cannot do (to say nothing of the fact that most of the malicious things it could do are less the fault of the AI and more the fault of its users).

Regarding Yudkowsky: he's never actually studied AI. All of his knowledge there is self-taught, and more often than not, whoever teaches themselves has a fool for a teacher. Need I remind you of the tempest in a teapot that was Roko's basilisk?
