Why ChatGPT is intelligent (and maybe you aren't)
AI may be the greatest threat the human species has ever faced, even if ChatGPT is still pretty stupid.
Part 1: Intelligence is a fancy calculator
Out of all which we may define, (that is to say determine,) what that is, which is meant by this word Reason, when wee reckon it amongst the Faculties of the mind. For REASON, in this sense, is nothing but Reckoning (that is, Adding and Substracting) of the Consequences of generall names agreed upon, for the marking and signifying of our thoughts; I say marking them, when we reckon by ourselves; and signifying, when we demonstrate, or approve our reckonings to other men.
Thomas Hobbes, Leviathan (1651)
Head over to OpenAI's ChatGPT interface and ask it a question.
You'll get a lot of good answers, if you know what to ask it. Sometimes, even if you don’t. It's a huge help with simple searches, basic factual information, even writing code.
But then it tells you something that feels off. You start to notice patterns in how it talks which aren't quite right. And then you find it's not telling you the truth, even though it sounded so confident.
After half an hour, all the answers seem repetitive and bland. It can't answer some questions, and sometimes it gets stuck in a loop where it doesn't seem to read you at all.
They call it "artificial intelligence", but it doesn't seem very smart. It's too mechanical.
The recent hype around ChatGPT and its cousins like MidJourney and DALL-E has brought the AI problem front and center. That problem has been marinating underground for decades now, though only hardcore nerds were clued in.
Once the machine mind threatened the piggy-banks, it could no longer be avoided.
Will these technologies put writers and artists and pretty much everyone else out of work? Possibly. I have my doubts, though they probably aren't your doubts. Read on to understand why.
I want to touch on a different angle which hasn't had much discussion. What does building real thinking machines mean for us human beings and our future prospects, in the bigger picture?
If you're keen-eyed, you've spotted the problem in that last paragraph. Are we really building thinking machines? Is this stuff really intelligent?
That's the topic of this first article in this series. The thesis I want to chew on is this:
The problem with AI isn't that it's too mechanical, but that it makes human thinking mechanical.
To get into the tasty meat of that discussion means, first of all, getting into some prep work. I'm going to cover three points:
1. Why the mind is already a machine
2. What’s different about ChatGPT?
3. Why the mechanical mind is a good thing, actually
Why the mind is already a machine
Maybe you think of AI as a modern thing which didn't show up until computer science got going around World War II. Maybe you think of it as something that coders do, or computer engineers, or machine-learning experts.
Maybe you don't think of it much at all, so let me fill you in.
What we call AI today got started along with the birth of computer science, which appears in the 1930s as a result of certain innovations in logic and mathematics. The details of that aren't important here.
It didn't take long before the smart fellas realized that the new computing devices might have a lot in common with human thinking.
From the dawn of the computing age, AI projects have overlapped with the sciences of the mind and the brain. Today's cognitive science, the study of how humans reason, can be understood as AI applied to the human mind. (Cognitive science applied to engineering of machines is AI.)
The history of intelligence, which our ancestors once called "reason", dates back many centuries. All the way back in Plato and Aristotle, in the 4th century BC, we find attempts to discover the "laws of thought", rules of logic and probability, that govern human reason.
These ancient projects already contain the embryonic form of today's AI, which also treats intelligence as a kind of computation.
Intelligence can be spelled out as a procedure, or system of procedures, for moving from data inputs to conclusions about that data. And the procedures of thinking can be built into material objects that physically reproduce the "laws of thought".
The passage quoted from Hobbes's Leviathan up at the beginning already spelled out the basics of AI back in 17th century England. Thinking is reckoning, which is how an Englishman in the 1600s would have spoken, less precisely, about an algorithm.
A few years before Hobbes, notorious Frenchman Rene Descartes set up the pitch by tossing out all of the old philosophy, with its occult powers and immaterial beings, and starting fresh. To do that, he needed to explain the physical and mental worlds with a rational approach. The way ahead was not by reasoning from speculation, but through a science based on analysis and counting.
Today we call this "calculation". In human beings, the power of reason is responsible for calculating. A couple of things happen here.
One, the intellect takes a special place among the other parts of the mind. Everything else, like emotions, bodily feelings, intuitive and imaginative experiences, and desires, becomes the second-class citizenry of the mind.
Two, reality itself becomes artificially divided up into whatever pieces can be measured. The actual material stuff in the world is, strictly speaking, of no interest to the intellect.
The warmth of a rock lying in the sun and the feeling of water falling on your shoulders aren't really there. What is there is a quantity that can be measured: the temperature of the rock, the fluid dynamics of the water.
The effects on both mind and matter were, in a real sense, catastrophic. All physical objects are primarily their spatial quantities; all mental processes are simplified into the intellect's calculations.
With the stage set, Hobbes only needed a small leap to claim that thinking is nothing but mechanical processes unfolding in the motion of the material body.
This isn't a pointless history lesson. Modern AI research, reaching as far back as the 1940s, can be understood as continuing the Cartesian and Hobbesian mechanical theories of mind.
Only now it isn't armchair speculation. AI intends to build thinking machines.
What's different about ChatGPT?
The digital computer is logic built into a physical object.
It's not a trite metaphor to call the computer the realization of Descartes's and Hobbes's materialist theory of the mind.
The thing about a classical computer is that, for all it is good at solving well-defined problems with well-defined steps, it's pretty dumb about everything else. As it happens, "everything else" is a lot of stuff, and it's the stuff that matters to humans.
Solving differential equations that would stump the brightest human is trivial for a powerful computer, but the machine's intelligence doesn't generalize. Any problem that isn't well-defined and solvable with explicit steps designed into it by a human is beyond it.
That's the reason why we've never seen a machine "wake up" with human-like consciousness and commit a science fiction atrocity.
There's nothing there to wake up. The machine's intelligence, goals, and purposes all come from its designer. A friendly bumblebee doesn't need a human to tell it where to move, or how. It has its purpose within itself.
That was well and good for several decades.
Enter machine learning.
Without getting into details of the differences, machine learning uses techniques inspired by neural networks (like those in our own brains) and statistical methods to leap beyond the limits of classical computers.
Even AI researchers aren't quite sure how these new tools do what they do. The fact that such devices can learn to beat the best human players of Go is less impressive than the fact that their builders have no idea how they did it. The "black box" of machine learning is an ongoing problem. For now, there are two questions to think about.
1. Is this machine intelligent?
2. Is it intelligent in the way humans are intelligent?
The jury's still out on (2). There's a lot of live controversy about what is going on in the human brain and how (or whether) brain-processes relate to thinking. It might be true, it might not. (My gut says "not".)
Point (1) is a lot easier to answer.
ChatGPT, like its cousin AlphaZero the nerd-slayer, is a kind of intelligence.
But hang on, you say. It isn't a human intelligence, and it lacks other features of mental life that we consider important, such as conscious experience and reflective awareness of itself.
And if that weren’t enough, there are other aspects of thought, besides “feeling” consciousness, which the machine can't explain. These include the intentionality of mental states, which is the "aboutness" or "directedness" of a mental state at an object. When you look at a red thing and experience it as being red, how does your perception of redness become about the color red? That's not easy to answer.
AI also has a problem with rationality, which concerns the truth and logical consistency of thoughts. If a thought is true, then something must make it true, and distinguish it from a false thought. Likewise for relationships between thoughts, since it isn't possible to believe that two contradictory thoughts, p and not-p, are both true. Unthinking mechanical intelligence doesn't have any straightforward way to handle this.1
The problem, though, is deeper than the machine's capacities or what its present capacities leave out.
The trouble lurking ahead is that the machine's lack of real thinking, including conscious awareness, is no longer taken as a flaw or defect in its intelligent behavior.
Why the mechanical mind is a good thing, actually
Some today — some who are taken very seriously indeed by advocates and ideologues in Big Tech and venture capital — argue that getting to the heart of intelligence purified of the messy bodily and emotional baggage of free-range humans is the whole point, and the sooner the better.
We're already hearing people out there saying that they don't care if a graphic, or a movie, or a script or story, is produced by an algorithm rather than a human creator. What matters is the product, not the way it was made.
Whether you agree or not, there's an important truth captured in that attitude.
Almost every discussion about AI starts from the assumption that AI is about building machines that think as humans think.
That's just true enough to miss the most important detail.
Building machines that think like a human person means understanding human thinking as mechanical.
Philosopher Daniel Dennett has gone as far as to claim that AI just is philosophy and psychology. AI, like traditional epistemology, investigates the most general, abstract, top-down question of how knowledge is possible, making AI "a most abstract inquiry into the possibility of intelligence or knowledge".2
With one key difference.
AI is an engineering discipline that, unlike armchair speculation, intends to actually build intelligent machines.
Maybe the AI isn't "thinking like a person", or reasoning according to the laws of thought. So what? It's real, it's out-performing human thinkers at their own games, and the audiences don't care.
AI isn't idle speculation, it's reality.
In our culture, where results are everything, to construct mechanical intelligences is to demonstrate the undeniable fact that there's no more to human thinking than can be found in a deep understanding of its mechanical principles.
Let's summarize.
Western traditions of thought have always understood intelligence as a form of mechanical processing or procedures, long before the computer came on the scene and made AI a live possibility.
When the real-life computer came along, it was a product of this trend, but it also intensified the slow creep of machine words, analogies, and metaphors into common language.
You don't have to look far to find examples of this. Selfish genes, dopamine neurochemistry in the brain, faulty frontal cortex valves, 'evolution made me do it' explanations, and much more abound in the news and the pop culture. This attitude is even more common in the sciences, where scientists rarely reflect on the history or meaning of their ideas. They're interested in the work (or the next grant application).
The more we've come to know the machine, the more we've become like the machine.
Real life is bearing this out. The fact that any "mere machine" is having rudimentary conversations with casual users was straight-up sci-fi just a decade ago.
ChatGPT is pretty dull and obvious now, in February 2023, sure.
That isn't the point. Ask yourself how its skill-set — or the skills of its great-grandkid bots — will measure up in another 5 years. Horizons of 10-20 years may be as impossible to predict as the effects of the smartphone in 1990 or the computer in 1910.
If this tech keeps improving, there will come a threshold beyond which nobody will be able to tell the difference between a human and the bot. That's Problem 1.
So far that's standard tech-guy Singularity-tier forecasting.
Problem 2 is that these machine-learning tools are early stage prototypes that physically realize the philosophical ideas that Hobbes and Descartes set out back in the 17th century.
This is important. It shows us that the threat of machine intelligence is only partly because of its effects on the outer world.
Forget about the lost jobs and economic devastation. What happens when you can no longer reliably tell the difference between a conversation with a real biological human being and a machine designed to talk to you?
What happens when visuals on your screen, with 4k Ultra-HD cinematic quality, can be generated by machines, in real time, to spec?
What does that mean for your sense of reality when the only means you have to tell the real from the fake are fake?
The damage is as much to our sense of self, to our most basic assumptions about ourselves, and to the languages available to talk about ourselves.
The human mind does include all of the non-intellectual things, like emotions and bodily feelings and movements, that we experience in everyday life. Our intelligence is embodied in our ways of living and acting. And our embodiment is in turn strongly shaped by our social natures, which are themselves structured through language, spoken, written, and gestural, based in face to face relationships with other human beings.
These all matter to us. Everything important about being human is in them.
Psychologist Pierre Janet once described the "reality function" as our ability to make contact with reality.
We all have this desire. But the world of the always-on info-outrage cycle has damaged, and for many atrophied, contact with reality. Many today, in particular the younger people who have no memory of life before the omnipresent internet, are adrift in a self-reinforcing cycle of neurosis and withdrawal, victims of their own anemic sense of reality.
Now take this trajectory and set throttle to “light-speed”.
Whether the machine is "intelligent" is, on balance, a relatively uninteresting question.
Whether we take it to be intelligent, and what effect our unthinking acceptance has on us, opens the door to far more frightening prospects than SkyNet's extermination campaign.
Next time, we’ll go on to look at the “intelligence problem” — how our confusion about the intellect and its relationship with the rest of the mind has left us in a weird and frightening place.
-Matt
P.S. I'll be continuing in this series, and adding in some other topics that don’t have much to do with AI. If you liked this post and also have nightmares about 80s sci-fi action films coming true, then do me a favor and share this post with someone who might like it.
See Dennett’s paper “Artificial Intelligence as Philosophy and as Psychology”, which is in his Brainstorms essay collection.