The much faster thinking, at least, is apparently indeed less obvious than I assumed. I guess I had in mind something about electricity travelling much faster than signals in the brain --- and also, from Zvi Mowshowitz's weekly AI reporting (which I recommend in the post), something about us appearing to AI as slow as trees appear to us. I found it: he quotes Joscha Bach in AI #21 (July 20th), and it started with Andrew Critch, who claimed on Twitter that over the next decade we should expect a speed advantage of between a hundred and a million times for AI over us, because computer chips "fire" a million times faster than neurons.
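For what it's worth, here is a rough back-of-the-envelope version of that ratio; the specific timescales below are illustrative assumptions on my part, not figures taken from Critch or Bach:

```python
# Back-of-the-envelope sketch of the "chips fire ~a million times faster
# than neurons" claim. The timescales are rough illustrative assumptions.

neuron_spike_s = 1e-3   # a neuron's firing/refractory timescale, roughly a millisecond
chip_cycle_s = 1e-9     # a modern chip's clock cycle, roughly a nanosecond

raw_ratio = neuron_spike_s / chip_cycle_s
print(f"raw per-step speed ratio: ~{raw_ratio:,.0f}x")  # ~1,000,000x

# Critch's claimed range discounts this raw ratio, since useful thinking
# also depends on memory, serial depth, algorithms, and so on.
print("claimed effective advantage over the next decade: ~100x to ~1,000,000x")
```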
As for the self-improvement, I thought relegating it to parentheses was actually quite conservative; computer software can be modified, after all ... but perhaps it only felt conservative because the "foom" thing is so famous. And certainly my whole attitude, dismissing the capability part as uninteresting or even trivial compared to the motivation part, wasn't very conservative.
Notwithstanding the word "imminent" in the post (I don't think it has to be a "long-term" issue), I didn't mean to claim that existing ChatGPT is extinction-level dangerous. So perhaps we agree anyway.
Not being an expert (as I admit early in the post), I can only say that the technical issues you bring up don't really look like fundamental hurdles to me. If they can't be overcome already now, with different, more agentic architectures, then possibly within a few years? And even with LLMs, if their raw intelligence keeps increasing? For example, if a model is too big now to be easily copied, that will tend to change, right?
I would just add to your comment that the timeline dispute already happened in the Yudkowsky vs. Hotz debate. Hotz argued that time does matter; Yudkowsky argued that it doesn't, if we are doomed anyway.
So if your goal is to preserve humanity AND expand AI capabilities, you should not argue by saying there will be timelines we could potentially adjust to in case something goes wrong. You should rather provide empirical evidence that we are able to control the emergent models (which we are not, as Dario Amodei and others openly admit).