Erik Hoel wonders if you can navigate the AI apocalypse as a sane person
He's worried about Artificial General Intelligence (AGI), with some good reason. I have reservations.
(Read it after this. It’s quite long.)
I left a comment (from my secret alter-ego lol), which I want to shamelessly boost, as it touches on some of the issues I've been going on about long-windedly here.
Begin comment:
I'm dubious about the (current) prospects of AGI.
I agree with most of your analysis and concur that the long-term prospects, should things continue as they are, make it almost inevitable. (Whether that is a reasonable assumption is to my mind a very serious question, but for another time.)
I am however extremely suspicious of two ideas:
One, that we can be confident that intelligence and agency intersect as neatly as Bostrom, Yudkowsky, etc., hold that they do. The thing might be smart. So what? It's a chatbot responding to human prompts. Even if it were, hypothetically, more cognitively competent than a human in any given reasoning task, that doesn't entail any equivalent to psychological motivations or volitions. The intelligence aspect is crucial, no doubt, but I don't see that we automatically get desire for free out of raw smarts. The machines still derive their purposes from their designers/users. Ascribing desires and other such attitudes to them is pareidolia.
(As an aside, this is the major reason I've always found the "superintelligent paper-clipper" to be incoherent. You're telling me it's that smart, but at the same time both unwilling and unable to think about the goals that these chimps gave it? Even we have the ability to reason about our ends and purposes, within the confines of broader imperatives set by natural selection and local cultural forms. It's not as simple as determine goal -> act on goal.)
Two, related to the above, it has no embodiment. A token-sorting, task-solving algorithm, even one with wide-ranging cognitive abilities, is still bodiless. We might attribute to it superhuman 1337 hack0r powers, or an adeptness at social engineering, as I've seen Bostrom and Yudkowsky do over the years. But I am simply not convinced that the threat level of a disembodied thinking algorithm is there.
We forget how much of our own explicit intelligence depends on tacit, unconscious abilities latent within our embodied form and function. Any would-be Skynet in the wings has no such background to rely on (no doubt one reason that the Conscious Robot With Free Will has been so difficult to realize in practice).
Note that I'm not writing this out of skepticism about the threat itself. I do share your concern there, and likewise believe that it isn't being taken seriously enough. But I am bearish on the timescales, and on the idea that some sort of deep-learning LLM will "wake up" with murder in its heart. The attributes of agency simply aren't there, no matter how intelligent these things may become in the near term.
I've come to think that any AGI will be far less like a malicious person with human-like motives and values, and more like an intrusion into our "cognitive ecosystem" by one or more types of mechanical intelligence that simply out-compete us at complex reasoning tasks. Conative attitudes entirely optional.
If we start talking about adding natural language processing to armed autonomous drone swarms, on the other hand, you have my attention.
End comment.
In my piece the other day I mentioned that my main worry is not intelligence as raw capability, but agency. Intelligent machines aren't moving under their own power, for their own reasons, setting their own intentions. Human designers fix their purposes.
For a variety of reasons — most having to do with these worries and agendas appearing in embryonic form on the hyper-rationalist LessWrong group blog and absorbing that community's unique lens on the world — raw intelligence has become the single variable of interest. Agency follows (waves hands violently) from it.
I don’t buy it.
Agency and original purpose do not follow necessarily from the raw horsepower available for calculating or task-solving. But you might get something like them as unanticipated consequences, should certain intelligent machines happen to have bodies and a facility with language…
-Matt
P.S. You like? Then you share:
TL;DR: everyone's arguing about when and how AI will gain agency through its intelligence. You're arguing that agency doesn't derive from intelligence... it derives from will.
And the hard part for people to wrap their minds around is: does or can will derive from intelligence?