Why not all thinking is thinking (ChatGPT Part 2)
[005] What we think about intelligence and consciousness is all messed up. Here's my small contribution to tidying up the confusion.
This is part 2 of a series on the ugliness of AI and machine intelligence. If you haven't read Part 1, use this link to read it first.
Part 2: Intelligence isn't the whole mind
Consciousness in the characteristically human form can be seen as what we attain when we come to formulate the significance of things for us. We then have an articulate view of our self and world. But things matter to us prior to this formulation.
Charles Taylor, “The Concept of a Person”
Is your AI smarter than a blackbird?
We've got a couple of blackbirds that come by the yard every day. They're funny creatures with weird personalities. As they hunt and dig through the yard, you'll notice they're clever little birds who get up to a lot of different activities.
Descartes, whom we talked about in the first part of this series, thought all animals were machines with no minds. I wonder about that. They're not writing sonnets or solving equations, but they're pretty smart little birds.
Why would I believe that blackbird is "just a machine" when he can do a lot of things that none of our machines can? And for that matter, is intelligence only a matter of being able to do things?
Blackbird acts for his own purposes. He doesn't need anyone to tell him what to do. Machines, even our best AIs, depend on human designers with human intentions.
You might start to wonder if we really have a good handle on behavior and mind at all. In this issue I'm going to unpack that question in these three points:
How the mechanical world-view warped our minds
Why behaviors aren't enough (and the counter-intuitive reason that this confuses us)
Why the difference between intelligence and mind matters more than ever
How the mechanical world-view warped our minds
In Part 1, we talked about the historical origins of the mechanical mind.
The problem with taking thinking as a kind of mechanism, as Descartes and Hobbes thought, is that we end up underselling the value of external behaviors and overstating the importance of inner mental qualities.
That's a strange thing to say about a materialistic point of view where mind is nothing but movement in matter.
On that view, animal behaviors are instinct-driven, little different from the machinery in a spring-wound clock. If thinking really is just motion and behavior, then what am I on about when I say we undersell behaviors?
Behavior here means mere movement. Think of the pebbles falling down a mountain during an avalanche. Lots of behavior. But it's dumb motion. There's no intelligence in it. Even the blackbird is smarter than that.
When the materialist tells us that thinking is behavior, what he's saying is that what looks for all the world like "smart" thinking is really "dumb" inanimate motion. That goes for everything we consider uniquely human.
What I'm suggesting is that treating behaviors in animals as a solved problem creates its own problems.
Thanks to physics and evolutionary biology and such things, we know, allegedly, how living things move around. But then there's this missing X-factor in humans: the "what it is like" of experiencing objective reality from the first-person point of view.
They call this the "hard problem", though we might as well call it the impossible problem. You're not going to squeeze subjective experience from raw matter, no matter how you arrange it and jiggle it around.
If I stopped right here, I'd have said nothing that you wouldn't get in thousands of other articles and books about the mind. This perspective is so common and widespread that you might think it's the only way we've ever talked about the mind.
Allow me to shock you from your slumbers.
The problem is making a problem out of the difference between physical things and consciousness.
When conscious sensation is the only mystery, it's easy to write it off. Sure, go have your fun in philosophy class thinking about your consciousness. We'll be over here engineering minds that think, or act like they are thinking, and who cares either way when we're rolling in billions of that sweet VC money?
Skeptics of ChatGPT and other language-processing systems are right to point out that it has no inner life. But they are deeply mistaken about why this matters to machine intelligence.
And the problem cuts both ways. We humans might be sentimental about consciousness because we're deluded by our own self-deceptions. Machines keep getting smarter and we're over here looking for a human-like consciousness and personality.
Both sides are wrong.
Why behaviors aren't enough (and the counter-intuitive reason that this confuses us)
Philosopher Charles Taylor once pointed out that in the history of ideas there have been two different ways of understanding persons.
We've already covered the first view. On one side of reality, there's moving matter. On the other side, there's thinking mind. There are no options in between. What makes the difference between a human person and other things, like animals, machines, and ocean currents, is consciousness.
This view doesn't have much to say about agents, meaning any being that acts from purposes within itself. Animals and even plants can be described as agents, as they move from causes in their own natures.
The second view takes agents seriously and asks what makes them different from inanimate things.
This position has fallen out of favor in recent history for a variety of reasons. The main two causes seem to be widespread hatred of Hegel, and the opinion that any ideas not part of nature as known to science are bunkum.
This tradition asks what it is that makes agents, like Blackbird, different from things, like falling rocks in an avalanche.
The key difference does not come down to agents having consciousness or conscious experiences.
"What is crucial about agents is that things matter to them," writes Taylor. "We thus cannot simply identify agents by a performance criterion, nor assimilate animals to machines."
Say you do build a robot that walks or a chatbot that mimics superficial language skills. That mechanical intelligence doesn't become an agent just because "it does stuff", even if it does stuff well, like smashing the world-champion Go player.
The computers have no purposes of their own. Their purposes depend on our purposes. An agent has its own original purposes, purposes that belong to it in itself and don't result from outside forces or the derived purposes of human beings.
What makes an agent, what gives it original purposes, is not consciousness, but that things matter for it. Things have significance for us before we ever think about them or experience them in consciousness.
It's worth taking a breather here to appreciate what a major, world-shaking, mind-bending upset this is.
I challenge you to find any article, anywhere, in a mainstream publication discussing AI in the last 20 years that so much as mentions the difference between agents and things as important. The present discussions are so steeped in materialism, and so ignorant of the history of these ideas, that it's never considered. The Big Tech circuit assumes all of this as truth and then gets on with building AI.
Agency has to be more than success in reasoning tasks.
One major reason for this, one that hardly any AI guru will talk about, is that we have a range of feelings and experiences which don't fit the machine model. Writes Taylor:
We can understand this, if we examine more closely the range of human feelings like pride, shame, guilt, sense of worth, love, and so on. When we try to state what is particular to each one of these feelings, we find we can only do so if we describe the situation in which we feel them, and what we are inclined to do in it. Shame is what we feel in a situation of humiliating exposure, and we want to hide ourselves from this; fear what we feel in a situation of danger, and we want to escape it; guilt when we are aware of transgression; and so on.1
These experiences are not simply the having of a state, whether that's a "raw feeling" or some objectively-describable physiological condition inside your skull.
The feeling cannot be cut completely free from our understanding of it.
"Formulating how we feel, or coming to adopt a new formulation, can frequently change how we feel," he continues. For emotions like these, our understanding of them is part of the emotion. They can't be treated as an independent object, which might be present in consciousness.
Conscious experiences are not the problem we're worried about. There's something else going on in agents and in human persons that makes us different from animals and objects.
Why the difference between intelligence and mind matters more than ever
Imagine you're sitting across the table from me in a cafe. We're having a coffee and a nice discussion.
With no warning, mid-sentence, my arm shoots into the air above my head. But I don't say anything about it. I don't even stop talking.
"What are you doing with your arm?" you ask me.
I look up, confused, and find my arm in the air. "Oh. How did that happen?"
You might think something was wrong with me. I might think something is wrong with me. Arms don't just reach over our heads for no reason at all, unless you're in an Evil Dead movie.
Let's pause and rewind. We're sitting at the same table, having the same conversation. Except this time, I say "I need to stretch my shoulder" and then my arm reaches over my head.
Nothing is weird about this even though the same thing happened. In the first case, my arm just moved. In the second, I moved my arm. You know that I moved my arm because I told you I was doing it, and then I did it.
That's the difference between raw physical motion and an intentional action.
An intentional action differs from an involuntary movement in my body because I can tell you what I am doing when I act. You can ask me, "Hey Matt, what are you doing?" and reasonably expect a response. Try doing that with your toaster and see how far it gets you.
The mystery of the first-person "what it is like" is only a mystery because we've already defined agents by leaving out mind.
It's well and good to talk about behavioral responses to stimuli like a good materialist, to opine about "dopamine hits" like a good Twitter guru. But we're screwing up when we confuse mechanical motion with actions done for a purpose.
If you're a good skeptical Westerner, you'll see this argument and say that's just a philosopher playing word games. The Science has shown that your brain (or evolution or whatever) makes decisions for you and you only use your reasoning to justify it later. <smugface>
Sure, whatever.
It's fascinating to me that the people most likely to repeat this dogma are the least likely to think about how they got to the point where they could believe it with a straight face.
That mechanical view is hardly intuitive. That's why we needed a complicated scientific philosophy to show us the "real causes" of our physical and mental behaviors. And that, in turn, raises questions about just why we're so convinced today that these ideas are correct beyond any questioning.
This is neither inevitable, nor is it an entirely authentic view of nature. The baggage of unexamined historical ideas weighs more than the cost of the seat on this flight.
To that I have a heckuva interesting response. But it's going to have to wait until next time.
-Matt
P.S. If you’re digging the groove I’m laying down, do me a favor and share this with a friend who might like it or a nerd who will melt down over it. The more the merrier.
1. Charles Taylor, "The Concept of a Person", in Human Agency and Language, 100.