Why SkyNet is a distraction from real AI threats (ChatGPT Part 3)
[007] By looking for the man in the machine, we're missing the machine in the man.
This is part 3 of a series on the ugliness of AI and machine intelligence. If you haven't read Part 1, use this link to read it first, or go here for Part 2.
Part 3: Why intelligence distracts us from the real consequences of AI
Terminator 2 flipped the original movie on its head by turning Arnold's killer T-800 into a sympathetic hero.
By the end of the movie, the soulless cyborg has become "one of the guys", down to the trademark Arnold one-liners delivered with flawless timing.
But one question always lingers over the Terminator.
He — and the pronoun already begs the question — is "just a machine". The only reason the T-800 seems human comes down to the hard facts of its design. The Terminator itself explains that it is so adept at social behavior because that makes it a better hunter of us.
I've always thought that any real, live AI worthy of the name will come out of either language processing or robotics. The Terminator combines both in a single chassis. This raises an interesting and genuinely hard question:
Does it really think, or does it only appear to think by tickling the hyper-sensitive brain-circuits of us hairless talking chimps?
The Terminator is a perfect case of an anthropocentric illusion.
We have no way to tell if it's really thinking in human-like ways, or simply a vastly complicated parrot.
Is there any difference?
Why is this a problem?
The stock reply seems to be this:
"That Terminator can't be thinking since it's just a machine. It's only doing what it is programmed to do. If they flipped its switch back to 'kill', it would smoke John Connor without blinking."
True enough on the facts, maybe, but the argument built on them doesn't survive first contact.
To the point about being programmed, what do you think you're doing?
Every time you so much as take a breath, you're doing what nature programmed you to do. A good portion — not all, but most — of our motivations are grounded in biological needs. Food, water, sex, recognition and status among other humans are among the motivations "programmed" into us out of biological necessity.
It's the "not all" clause that raises the eyebrows.
"But the T-800 is only playing a part. It only looks friendly because it's programmed to hunt humans."
Philosophy happens in classrooms, and this has a way of disconnecting the reasons and arguments from concrete real-life experiences.
For example, have you ever known a two-faced person? Watched a true-crime show about serial killers? Had someone lie to you with total sincerity?
Human beings are the worst offenders at Machiavellian "dark triad" behaviors.
If you're worried that the machine can flip and kill you from hidden programming that it deliberately hides from you, what do you make of Ted Bundy?
Maybe we're so worried about knowing the machine's motives because we see the danger lurking in ourselves.
This argument would be an interesting response if the Terminator were a stupid machine. But it's not stupid at all. It's capable of language on a level that is indistinguishable from a free-range human. And that, reader, is a superpower we've yet to contend with in any mechanical form.
How is it that we can tell an agent from a thing?
At the end of the last issue, I wrote of the difference between mere movement and an intentional action. The difference is that, in the case of an action, there's a reason for doing it. You can ask me why I did whatever I did, and I can give you my answer.
The give-and-ask of reasons can happen between humans in a way that it can't between a human and an electric stove.
But then there's the Terminator. It can give reasons for what it does. You can ask it "Why?" and get an answer back.
The machine has stepped out of the realm of blind cause and effect, and into the world where humans speak to each other about our reasons, purposes, and intentional actions.
That's existential horror, right there.
What does this mean?
So far in this series I've explored the theme that, in the process of building machines that can think on their own, we've also transformed ourselves into mechanisms.
We may think that the point of AI is to build machines that think (etc) like us. But that is an anthropomorphic take on machine intelligence. The reality is much different.
When we talk about voluntary actions, intentional behaviors, or having "purpose" in some of our movements, that's just shorthand for certain mechanical processes that we don't fully appreciate.
We might think we're trying to build a machine that can talk to us. But it's much more likely that a machine that can talk to us will be built to hunt rather than serve. Not literally hunt, I hope. Figuratively speaking, the machine will use language as an instrument in pursuit of its own agenda.
Failure to appreciate this leads to the persistent error that AI is somehow "anthropomorphizing" nature and intelligence. Not so. All the neural networks, the deep learning, the big data: none of it is working to build a human-like intelligence.
We ignore this at our own substantial risk.
Most AI engineers today are perfectly content to build their machines as "stimulus-response" devices that map perceptions to actions, with no explicit mental states in between. The machine doesn't think, and it doesn't matter that it doesn't think.
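To make the "stimulus-response" picture concrete, here is a minimal sketch in Python. It is a hypothetical illustration, not any real system: a pure mapping from percepts to actions, with nothing resembling a mental state in between.

```python
# A minimal "stimulus-response" agent: a fixed policy table maps
# each percept directly to an action. There is no deliberation,
# no memory, no inner state of any kind.
# (Illustrative sketch only; the percept and action names are made up.)

def reflex_agent(percept: str) -> str:
    """Return the action the policy table pairs with a percept."""
    policy = {
        "target_visible": "pursue",
        "obstacle_ahead": "turn",
        "target_lost": "search",
    }
    # Unrecognized percepts get a default action.
    return policy.get(percept, "wait")

# The agent only ever reacts; nothing happens "in between".
for p in ["target_visible", "obstacle_ahead", "never_seen_before"]:
    print(p, "->", reflex_agent(p))
```

Whether the mapping is a three-row table or a billion-parameter network, the structure is the same: percept in, action out.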
It's the hunting that matters.
Whether the machine hunts out of malice or unthinking behavior isn't too important. It's only a small step from a mechanical chess player to a machine playing the game of war, or the game of marketing, or the game of seduction.
The Terminator looks like it is capable of intentional actions, including giving reasons for what it does. But it's a machine, in the pejorative sense of an unthinking mechanism.
What do we make of this?
The conclusion can only be that our intentional actions are illusions. When we believe we have a purpose, even when we use these words to speak about actions that aren't merely physical movements, we're making a systematic mistake.
We talk about purpose and actions the way ancient peoples might have talked about a lightning strike as a bolt from the gods.
When we see Arnold's Terminator as a person, we aren't anthropomorphizing a machine.
We're falling for the illusion that we are anthropomorphic.
How AI might really bring about the Apocalypse
Apocalypse means "revelation". The end of the world is not a literal catastrophe fit for a summer blockbuster, but a metaphor for coming to see things as they really are.
AI might well fit the bill, if the threads we've pulled over the last few weeks have any truth to them.
If we come to see ourselves as we are, and we find that there's only a rattling mechanism where we believed there was a soul?
That would qualify as an apocalypse.
From Samuel Butler's "Book of the Machines" to Ted Kaczynski's anti-modernist warnings to Bill Joy's essay "Why The Future Doesn't Need Us", I'm hardly the first prophet of AI Apocalypse. But I might be one of the few discussing it in these terms. We aren't staring down a genocide at the point of a robot gun. The catastrophe (possibly) at hand is symbolic, and all the more terrifying for that reason.
We're looking at nothing less than the destruction of the idea of a human being.
Not that we will physically cease to exist. Rather, the mechanical inclinations and attitudes of our cultural life will continue to undermine everything we believe about ourselves.
Maybe the Terminator is a machine that only looks like it understands. But is there really a difference between understanding words and appearing to understand them? Can we even make sense of that difference?
If a being speaks to us in such a way that we cannot tell the difference between it and a natural-born human person, is it relevant that we don't strictly know that it has a mind?
The threat here is that AI will show us that there's nothing remarkable inside us that is required for intelligent behavior. Our supposed mental powers would be unmasked as nothing but self-deception and illusion.
What is the result of overthrowing anthropocentrism?
The human being becomes an object of technological design. AI kicks the human mind out of philosophy and into the realm of engineering.
The thing that always gets me about the AI optimists is how they refuse to consider that time exists. No change happens without bringing consequences. We keep hearing about how AI is going to change our lives in the most mundane ways. From going shopping to driving around town, we can expect an AI revolution.
Instead of asking how AI is going to improve these activities, I find myself wondering what happens when the AI revolution makes them redundant.
Imagine telling Sam Goody (look it up) in 1999 that the internet was going to revolutionize music shopping. Hey, great! That's going to do wonders for music sales!
When I hear that AI is going to revolutionize visual art, writing, and who knows what else, I can only think of the institutions it will make redundant.
It’s the unexpected consequences, the system-wide effects, the higher-order outcomes, that fascinate and concern me.
Change one thing, and everything else changes with it.
We thought the boring labor-intensive jobs would go to the machines. That was an optimistic view from a time before a smartphone attached to every optic nerve and social media warped our collective outrage thresholds with its automated world-shaping powers.
The AI can take over creative jobs, and the populace is such a stunted mass of blind appetites craving "content" that they don't care who, how, or what makes their mental gruel.
The Terminator might have been an accurate prophecy of AI doom after all. Not because of nuclear war or ruthless efficiency in the arts of murder, but because of his art in the social dimension of words.
-Matt
P.S. Yes, this is a terribly pessimistic conclusion, but I am terribly pessimistic about Western civilization. I make up for it with a terrifying optimism about the prospects for the human spirit to carry on without the millstone of addictive technology around our necks.