5 Comments

“can presumably make lots of copies of itself (and improved copies), each of which can think much, much faster than humans. How could one not be very worried?”

Major assumptions there. Particularly the “improved copies”.

author

The much faster thinking, at least, is apparently indeed less obvious than I assumed. I guess I had in my mind something about electricity travelling much faster than signals in the brain, and also, from Zvi Mowshowitz's weekly AI reporting (which I recommend in the post), something about us appearing to AI as slow as trees appear to us. I found it: he quotes Joscha Bach in AI #21 (July 20th), and it started with Andrew Critch, who claimed on Twitter that over the next decade we should expect a speed advantage of between a hundred and a million times for AI over us, because computer chips “fire” a million times faster than neurons.
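Just to get a rough sense of where that million-times figure comes from, here is a back-of-envelope sketch; the firing and clock rates are ballpark assumptions on my part, not numbers from Critch's thread:

```python
# Rough signalling-rate comparison (ballpark assumptions, not Critch's numbers).
neuron_rate_hz = 1_000          # neurons fire at roughly 100-1000 Hz; take the upper end
chip_clock_hz = 3_000_000_000   # a modern CPU clock is on the order of 3 GHz

speedup = chip_clock_hz / neuron_rate_hz
print(f"raw clock-to-firing-rate ratio: ~{speedup:,.0f}x")  # ~3,000,000x
```

Of course a raw clock-to-firing-rate ratio is not the same thing as an end-to-end thinking-speed advantage, which is presumably why Critch's range goes all the way down to a hundred.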

As for the self-improvement, I thought relegating it to parentheses was actually quite conservative; computer software can be modified, after all ... but perhaps it only felt conservative because the "foom" thing is so famous. And certainly my whole attitude, dismissing the capability part as uninteresting or even trivial when compared to the motivation part, wasn't very conservative.


Computer chips already fire millions of times faster than neurons. This was true in 1972.

What makes LLMs work is massive training on huge amounts of data and fairly well-known algorithms.

ChatGPT runs on multiple servers in the cloud - which means in a data centre. It has no agency or ability to clone itself. (Feel free to ask.) It is 570 GB of data, so it’s not going to be able to put itself on smaller devices, and it would take a day to copy.
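For concreteness, a back-of-envelope on that copy time, taking the 570 GB figure at face value (the link speeds below are illustrative assumptions):

```python
# How long would copying ~570 GB take at a few assumed link speeds?
model_bytes = 570e9  # 570 GB, the figure quoted above

links = [
    ("100 Mbit/s home broadband", 100e6),
    ("1 Gbit/s office LAN", 1e9),
    ("10 Gbit/s data-centre link", 10e9),
]
for label, bits_per_second in links:
    hours = model_bytes * 8 / bits_per_second / 3600
    print(f"{label}: {hours:.1f} hours")
# prints roughly 12.7, 1.3 and 0.1 hours respectively
```

So roughly half a day at consumer broadband speeds, but only minutes over a fast data-centre link.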

It can’t make better versions of itself because that would involve writing functioning code from scratch and it doesn’t know the code that trained it, nor can it write flawless code.

To get smarter it needs to be trained on ever more data, which has taken years. The existing ChatGPT has not had its training data updated for two years now.

Most importantly, it doesn’t have agency, nor any kind of “consciousness” that survives the chat box.

I’m not saying that AI won’t get smarter, based only on humans training it, or that it isn’t perhaps a threat to jobs. But the wild doom scenarios make no sense. Not with LLMs, anyway. My attitude might change if there’s a new, proven technology.

author

Notwithstanding the word "imminent" in the post (I don't think it has to be a "long-term" issue), I didn't mean to claim that existing ChatGPT is extinction-level dangerous. So perhaps we agree anyway.

Not being an expert (as I admit early in the post), I can only say that the technical issues you bring up don't really look like fundamental hurdles to me. If they can't be overcome already now, with different, more agentic architectures, then possibly within a few years? And even with LLMs, if the raw intelligence of those keeps increasing? For example, if it's too big now to be easily copied, that will tend to change, right?


I would just add to your comment that the timeline dispute already happened in the Yudkowsky vs. Hotz debate. Hotz argued that time does matter; Yudkowsky argued that it doesn't if we are doomed anyway.

So if your goal is to preserve humanity AND expand AI capabilities, you should not argue that there will be timelines we could potentially adjust to in case something goes wrong. Rather, you should provide empirical evidence that we are able to control the emergent models (which we are not, as Dario Amodei and others openly admit).
