Why agency isn't about the idea or the will
[006] Does intelligent agency in machines follow from will, rather than intellect? Yes, but no. Both. It's complicated.
Commenting on my comment to Erik Hoel the other day, commenter DB commented:
Tl;Dr: everyone's arguing about when and how AI will gain agency through its intelligence. You're arguing that agency doesn't derive from intelligence... It derives from will.
And the hard part for people to wrap their mind around is, does or can will derive from intelligence?
The answer is no… and also yes.
I often draw on lost, forgotten, and little-known traditions to look at these questions, which leaves me well outside mainstream assumptions.
Confused? Me too. Let’s have a closer look.
Agency isn’t thinking
In my AI series so far, one of the major themes has been that thinking isn’t intelligence. Not, at least, in the sense of intelligence as computation.
Take some set of data, run it through a series of explicit steps, and see what comes out the other end. If that’s computation, thinking isn’t computation. It isn’t only computation, if you want to get snippy about it.
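To pin down that working definition, here’s a minimal sketch (in Python; the function and the pipeline are purely illustrative, not a model anyone has actually proposed) of computation in exactly this sense: data in, explicit steps, result out.

```python
# Computation in the essay's working sense: explicit steps applied to data.
# Everything this process "knows" is spelled out in the rules below;
# nothing is implicit, embodied, or context-dependent.

def compute(data, steps):
    """Run `data` through a series of explicit steps; return what comes out."""
    result = data
    for step in steps:
        result = step(result)
    return result

# An illustrative pipeline: each step is a fully explicit rule.
steps = [
    lambda xs: [x * 2 for x in xs],       # double every value: [1, 2, 3] -> [2, 4, 6]
    lambda xs: [x for x in xs if x > 4],  # keep values above a threshold: -> [6]
    sum,                                  # collapse to a single number: -> 6
]

print(compute([1, 2, 3], steps))  # prints 6
```

Everything this sketch “does” is right there on the page, fully explicit. The claim here is that thinking is not, or at least not only, that.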
Thinking reaches well beyond whatever cogitations go on inside your skull.
Thinking has a body. Thinking is a bodily activity. Thinking occurs in a peculiar form of organic life. Thinking isn’t entirely transparent to itself — you can’t focus real hard and become aware of whatever functions or processes in your brain give rise to it.
The Cartesian biases of Western techno-culture zero in on their favorite part of the thinking person — the part that does things with explicit logic and mathematics — and try to forget that the rest exists.
But Cartesian analyses of mind and body fundamentally misrepresent the nature of cognition. We, collectively, have absorbed the metaphysical belief that matter is not like the mind. If a thing is physical, it is also not mental.
That’s led us on to this idea, today, that cognition is an exclusively internal process that happens “in” the mind or brain (the common confusion of these two words is one symptom of the sickness). Point at your skull — that’s where “your mind” is, and nowhere else.
If it’s mental, it’s not physical. If it’s physical, it’s not mental. This belief distorts a great deal of our ideas on… ideas.
And that goes for wanting as well as knowing.
If you rewind to ancient Greece, you find that the Greeks had no concept of “will” as we presently know the term.
If you search through Plato, Aristotle, Epicurus, the Stoics, and oddballs like Diogenes, it’s simply not there. They speak of appetites and desires. They speak of “passions”, which is close enough to our idea of emotion. They talk of the intellect’s influence, or not, over these non-rational parts.
There’s no concept of a will, a self-determining cause within a person that moves without being moved.
That kind of talk doesn’t appear until we get to the Christian writers in late antiquity, mainly through the writings of St. Paul and St. Augustine. That idea of the will doesn’t survive the scientific revolution.
If will is desire plus intention — wanting something and bringing it about — then it, too, must be “in the head”. But then, where is it? Physics says “nah”: the world out there is all cause and effect. Neurology says “nah”: when we cut open your skull, we just see brain tissue.
The trouble with the concept of the will is that it, too, tries to hijack agency.
If will is meant as a property or faculty belonging to mentality, and therefore distinct from physicality, then this is not what I mean by agency.
The concept doesn’t make much sense outside of Christian theology, frankly, and attempts to defend “freedom of the will” on strictly physical terms have always amused me. Free will isn’t the only meaningful concept of human freedom, but as in most of our tortured, historically ignorant online discussions, the term has become a catch-all for a range of different ideas, leaving us with little idea of what it means.
Anyhow, when framed as a debate between intellect and will, we’re still leaving the question of agency to the level of conscious awareness. Awareness of volition and intention rather than belief, but that’s a minor difference.
That’s a mistake. Agency is only weakly connected to consciousness.
Most agents (plants and non-mammalian animals, that is) have no conscious awareness. I don’t say this as a human supremacist. Consciousness simply isn’t the defining or most interesting feature of nonhuman agents.
Consciousness only becomes interesting to peculiar forms of life (ours) who achieve it through their peculiar form of agency.
Agency doesn’t derive from intellect or will.
Rather, intellect and will derive from agency.
Why?
Things make sense before they’re represented as knowledge
If mental and physical reality aren’t exclusive domains, then knowledge itself can’t be an exclusively mental property.
The belief that we can extract universal rules for knowledge and knowledge-producing processes from their context in the lives of human beings is a non-starter. Yet this universalism of intelligence is exactly what these “AI risk” and “AGI threat” discussions presuppose.
Long before we get to either the idea in the intellect or the desire in the will, we have to address the question of why these things exist at all.
Why does anything show up at all? Why are these things even questions for us? Why is there a “for us” at all? Rocks aren’t lying there worrying about their existential predicament.
Things show up, and they show up for a somebody. How does this happen? Before we can get to talking about the subject and the object, the mind and the world, we have to ask that question.
In those last two paragraphs I’ve summarized for you one of the major threads of Heidegger’s Being and Time.
Human existence is mostly carried on in a mode of unthinking activity. We aren’t consciously aware of the precise movement of our feet while out on a walk. Walking is mostly thoughtless. We know how to do it, even if we can’t say how.
Slip on a patch of ice that you didn’t notice, though, and you snap into intense conscious awareness. When the ordinary flow of experience breaks down, then we start to experience ourselves as a subject separate from the world of objects.
(It takes a crisis to provoke Cartesian-style thinking consciousness, which is one of Heidegger’s more intriguing insights.)
These ideas have made their way into cognitive science in a movement called embodied cognition.
Merleau-Ponty, who absorbed many of Heidegger’s ideas into a philosophy of embodiment, is an important touchstone in this development. His key idea is that agency isn’t a function of “mind” at all. Agency originates in the body’s powers of perception and movement.
The power to move for one’s own purposes is not a function of the intellect or conscious volition.
The intellect and the will arise from bodily activity interacting with the world. Beliefs, desires, judgments, and other such mental states are how we give expression to the activity of body-in-the-world.
Action comes first. Meaning is a fact about the ability of the human organism to act in the world. Intellect and will are downstream from action.
If you’ve been following this recent series of articles o’ mine, you might wonder why I’m now trying to make mind out to be material after taking a shiv to materialism.
There’s a hint right in the question. Materialism is not simply the doctrine that things are matter. It is a doctrine about what matter is: matter is defined as that which excludes the mind.
Embodied cognition challenges the separation of mind from matter.
Funny enough, if you follow the Heidegger and Merleau-Ponty threads, you wind up at a position that isn’t too far from older Aristotelian ways of thinking. That’s a good place to be. It’s also a radically different head-space — though one that, bizarrely enough, has support from quantum physics.
So what’s all this mean for my gripes about AI?
The AI discussion is about algorithmic computations taking place in an allegedly bodiless intellect.
As long as that is true, any talk of “general intelligence”, including that snipe-hunt of “value alignment”, is pointless.
Sure, guy, you’ll make an intelligent being incomprehensibly smarter than you, but somehow out-smart it in creating its desires and values and preventing it from changing them as it wishes. Totally plausible scenario. 👍
Consider a scenario from embodied cognition instead.
Intelligence has a body. Explicit thinking happens against a background of implicit understanding, which — in humans, the only general intelligence we know of — depends almost entirely on unconscious bodily awareness and action.
Embodiment is inseparable from value and desire.
Since few are thinking about this, we are building machines with bodies that aren’t accounted for as bodies.
That’s a problem. Maybe not so much right now with these deep-learning tools everyone is excited about. But over the longer term, failure to consider the embodiment of artificial intelligence might be the problem.
Machines that share none of our bodily form and function are unlikely to ever share in any of our values, interests, or desires.
They aren’t likely to think as we think, either. And it doesn’t matter how smart you think you are in cooking up code; this isn’t a problem that can be solved with more computing horsepower, more math, and more “smarts”.
You may as well ask why carnivorous wasps don’t care about human suffering even after you give them your best arguments.
(Which also gives you a clue to how “human-friendly AGI” is most likely to come about, if it’s possible at all.)
One last thing to wrap this up. We are talking about “will”, of a sort, but it has more to do with Schopenhauer’s Will — an impersonal, pointless, cosmic, metaphysical force that works in and through us rather than falling inside the circle of things that humans consciously understand and control.
The analogy with current AI engineering couldn’t be more appropriate.
-Matt