Why Heidegger thought AI cyborgs would go loopy
AI and civilization are like a married couple that hate each other but won't split because they're still in love.
This is the last article I’m going to write about AI for a while.
I’ve still got plenty of material, but I’m getting kind of bored with it, mostly because I’m making all the noise of an ant’s footstep against the crashing asteroid of panic over artificial intelligence. Still, it’s been a handy topic for getting at the philosophical ideas that really interest me.
Next week, it’s on to another theme.
Last year I watched a Netflix series called Altered Carbon.
There's this one scene that rattled me.
The hero Takeshi Kovacs attends a party thrown by his decadent ultra-wealthy client. In this future world, everybody has a chip at the base of their skull with a copy of their mind, so they are immortal as long as they have the cash to buy new bodies.
He meets a degenerate artiste with a snake curled around her neck. We learn that the snake has its own brain-chip holding the mind of a convicted killer and rapist. She explains that, once, she tried to put his mind back into a human body, but all he did was slither and hiss on the ground.
Total irretrievable loss of self. Since the brain-chips don't age and die, he's going to spend the indefinite future like that.
I grew up in the South. I relate to and sympathize with hard attitudes about punishment for violent criminals. But this crossed a line for me. The snake-mind prison went beyond punishment.
The brain-chip technology allowed the justice system [sic] to unmake a person while extending his existence without end. This woman took pride, if not psychopathic glee, in the deviltry of her "art project", while her audience laughed and applauded.
Right now we're talking about artificial intelligence as a calculating and uncaring force, but we aren't thinking nearly enough about the unending fountain of cruelty that is the clever homo sapiens.
We aren't thinking about the viciousness built into our own humanistic beliefs.
Even if you believe the unlikely story that "technology is just a tool", by its nature AI is a different proposition.
Much of the fear around it is due to the ugly parts of ourselves that we see in it.
The discussions happening in the mainstream leave out an important puzzle-piece, and I'm going to bring it out in this article.
AI is the end-game of humanist values
René Descartes and Thomas Hobbes, whom I discussed back in issue 3, were the first moderns to conceive of the mind as a mechanism. Read that 3-part series if you want more background.
The tl;dr is that major threads in Western culture have always thought of the mind as a kind of machinery. Only in the decades since World War 2 have we had the capability to design and build actual devices with any claim to think.
After the war years, the game of thinking machinery got a level-up with a new science that its founder, Norbert Wiener, called “cybernetics”. The word comes from an old Greek term, kybernetes, which means “steersman” — the guy with his hands on the wheel of a ship navigating the water.
Using new concepts from information theory, the then-new digital computer, and the revolutionary leaps in math and logic that made those possible, Wiener aimed at nothing less than an objective science and technology of self-steering machinery.
Possibly the biggest hurdle to truly human-like machines, even today, is giving them purpose. Even a cutting-edge model like GPT-4 can’t do squat without a human typing into the chat-box.
Self-moving agency is something different from brute intelligence.
Cybernetics, as the study of communication and control, meant to explain purpose as complex systems of feedback loops.
God is supposed to have made man in His own image, and the propagation of the race may also be interpreted as a function in which one living being makes another in its own image. In our desire to glorify God with respect to man and Man with respect to matter, it is thus natural to assume that machines cannot make other machines in their own image; that this is something associated with a sharp dichotomy of systems into living and non-living; and that it is moreover associated with the other dichotomy between creator and creature. Is this, however, so?
Norbert Wiener, God and Golem, Inc.
No longer would you need a mysterious consciousness or will hiding away in an invisible spirit. Engineers could build machines that act under their own power, by their own causes, with their own “will”, by adjusting their behavior in response to information from the environment.
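The cybernetic picture of “will” is easy to make concrete. A negative-feedback loop steers itself toward a goal without any inner homunculus doing the wanting. Here is a minimal sketch of the idea; the setpoint, gain, and function names are my own illustration, not anything from Wiener:

```python
# Minimal negative-feedback loop: a heater "wants" the room at 20 degrees.
# It senses the environment, compares the reading to its goal, and acts
# to close the gap -- purpose as a feedback loop, nothing more.

def feedback_step(temperature, setpoint=20.0, gain=0.5):
    """Nudge the temperature toward the setpoint in proportion to the error."""
    error = setpoint - temperature       # information taken in from the environment
    return temperature + gain * error    # action that reduces the error

temp = 10.0
for _ in range(20):
    temp = feedback_step(temp)

# The loop settles at its "goal" of 20 degrees with no one willing it there.
print(round(temp, 3))
```

Each pass halves the remaining error, so after twenty iterations the system has, for all practical purposes, reached what an observer would be tempted to call its desire.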
Today we don’t use the word cybernetics unless we’re talking about The Terminator or Ghost in the Shell. This is not because the program was left behind as one of those quaint fringe movements that soon dates itself. Quite the opposite. The basic concepts and ambitions of cybernetics have bled over into almost every other field.
The field(s) we call AI today are the great-grandchildren of the original cybernetics, building thinking-machines out of the raw stuff of physics and chemistry.
From the opposite direction, cybernetics found its way into the human and social sciences, influencing cognitive science, developmental psychology, neuroscience, and much more.
If you’ve ever heard someone talk about people as brains, and talk about brains as computers, you’ve experienced cybernetics first-hand. It’s part of common sense language now.
We’re talking less and less about minds as minds and persons as persons. Thoughts no longer require a thinker to have them.
Some did notice the threat, though, and the prophecies are worryingly accurate.
AI, the story without a story
Here’s the problem in short form:
If you can build devices that simulate self-moved movement, that’s no different from turning our own human consciousness and will into machinery.
If you get “will” out of feedback loops in silicon chips, you don’t need any different story about how “will” appears in naturally-evolved biological machines.
If machines can simulate agency, then — this story goes — there’s nothing more to say. The simulation is the simulated is the simulation.
German philosopher Martin Heidegger noticed this problem decades ago. He warned that cybernetics had replaced philosophy as the unifying ground of all our science and technology.
That’s a statement that needs elaborating. By philosophy, Heidegger meant the whole arc of metaphysics that began in Western thought with Plato and slammed into a brick wall with Nietzsche.
Metaphysics was the glue holding together the diverse fields of knowledge. Biology studied living things, physics studied matter and movement, anthropology studied human beings, and philosophy held them all together as the study of being.
Philosophy was the frame of reference that gave order to all other knowledge.
Cybernetics has taken over this role in our time, said Heidegger. Instead of “being”, “becoming”, “substance”, “cause”, and “existing”, our basic concepts now come from the cybernetic sciences. We talk about “information”, “communication”, and “feedback” instead.
This is all practical, tangible stuff, too. You can build things with it. Unlike metaphysics, cybernetics aims to be an objective science with immediate usefulness in engineering.
Here’s the clincher.
The problem, says Heidegger, is that philosophy is a humanistic project. The aim of humanism is to gradually replace God, the divine and supernatural, with the will and interests of human beings.
Because humanist values are grounded in the assertion of the will, which seeks mastery over nature, cybernetics becomes the ultimate manifestation of human-centered value systems.
As Jean-Pierre Dupuy points out in The Mechanization of the Mind, Heidegger’s thought went on to shape the thinking of French deconstructionists and “postmodernist” writers, who happily used cybernetics to mount an attack on humanistic pretensions.
There’s a conflict here. Heidegger argues that cybernetics is humanism in its final form. His Francophone students argue that cybernetics destroys humanism.
Which is it?
It’s both, argues Dupuy.
Thinking machinery is the ultimate realization of humanist values, the triumph of mind over nature.
But the mind itself is seen as part of conquered nature. And, like the strip-mined earth and polluted seas, mind stands as a resource for us to tinker and prod and optimize with our gadgetry.
If cybernetics is the height of humanist values, cybernetics reveals the emptiness of humanism.
That’s a real stinger if you still buy the old tale of “enlightenment”, where Europeans got together in tea-rooms and threw off old superstitions with the Power of Facts and Reason.
Science and human progress align as far as humanism is the philosophical defense of the human mastery of nature. The more we can control, the more we can know, and the more we can control.
For a while, you get cool stuff like electric lights and penicillin and better working conditions in lethal factories. The costs of these gains (the dirty, unsanitary cities, the open-pit mines, the unswimmable beaches, the poisoned air) are the price of human flourishing. You don’t want to live in caves, do you?
There comes a point, though, where the human being becomes an object of scientific study. Which is misleading, because the scientific knowledge isn’t the real goal. Knowledge flows from technology’s power to poke and prod nature to give up her secrets; the point of gaining knowledge is what we can do with it. Science presupposes technology.
Meanwhile, the scientific knowledge accumulated through expanding technical capabilities helps us better describe things as they are. At the same time, the new knowledge revises our categories and concepts, even our most basic beliefs about ourselves.
Science and technology leave nothing alone.
Within this world man is just another machine — no surprise there. But in the name of what, or of whom, will man, thus artificialized, exercise his increased power over himself? In the name of this very blind mechanism with which he is identified? In the name of a meaning that he claims is mere appearance or phenomenon? His will and capacity for choice are now left dangling over the abyss. The attempt to restore mind to the natural world that gave birth to it ends up exiling the mind from the world and from nature.
Jean-Pierre Dupuy, “Cybernetics is an Antihumanism”
There soon comes a point where categories once taken for granted become subject to conscious design.
You need look no further than the current battles around the transgender movement, sides lined up for and against, to see the concrete results of cybernetics playing out in real time.
This is but one angle of hundreds.
Should you be able to walk outside without constant surveillance by face-recognition cameras? Do you have a right to walk through your city without a GPS tag keeping tabs?
AI blurs mind into machine. What about biology, which no longer recognizes an original difference between life and the unliving? Matter is life is mind; these are humanist values.
Humanist values are technology, and technology can lead to nothing but the abolition of humanist values.
The contradiction is not a problem because cybernetics is the science of contradictions.
Gregory Bateson explains better than I can in this passage from an interview with Fritjof Capra:
“You see, when you get circular trains of causation, as you always do in the living world, the use of logic will make you walk into paradoxes. Just take the thermostat, a simple sense organ, yes?”
He looked at me, questioning whether I followed and, seeing that I did, he continued.
“If it’s on, it’s off; if it’s off, it’s on. If yes, then no; if no, then yes.”
With that he stopped to let me puzzle about what he had said. His last sentence reminded me of the classical paradoxes of Aristotelian logic, which was, of course, intended. So I risked a jump.
“You mean, do thermostats lie?”
Bateson’s eyes lit up: “Yes-no-yes-no-yes-no. You see, the cybernetic equivalent of logic is oscillation.”
Cybernetics is the physical realization of logical paradox.
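Bateson’s point can be made literal in a few lines. Run the thermostat’s rule as pure logic and it never settles on “yes” or “no”; it oscillates. This is a toy sketch of my own construction, with made-up numbers:

```python
# A bang-bang thermostat: heater ON warms the room, OFF lets it cool.
# Near the setpoint, the rule "if cold then on, if warm then off" never
# stabilizes -- it flips forever, Bateson's "yes-no-yes-no" oscillation.

def simulate(steps, temp=19.0, setpoint=20.0, rate=1.5):
    states = []
    for _ in range(steps):
        heater_on = temp < setpoint           # if it's off, it turns on...
        temp += rate if heater_on else -rate  # ...and if it's on, it turns off
        states.append(heater_on)
    return states

print(simulate(6))  # alternating True/False: yes-no-yes-no-yes-no
```

Ask the system whether the heater is on and the only honest answer is an oscillation, which is exactly what Bateson means by calling oscillation the cybernetic equivalent of logic.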
On a map of world history, there's a giant "YOU ARE HERE" sticker on the present moment.
The real threat of AI isn’t from the machines
Mainstream discussions are still looking at humanistic values as separate from the mere tool of AI.
If AI is a threat, they say, it will be either because it acquires a human-like ability to act on its own or because human beings use it badly.
Both of these anthropomorphize the AI. The danger we see is ourselves; it will be like us. But that "like us" is an image created by certain human beings, in a certain culture, at a particular moment in history.
The intelligent machinery of AI is simply ourselves seen in a mirror, with cybernetics as the end-point of this flattering self-portrait.
Cybernetics tells a story that doesn’t appear as a story.
We’re in a walking, talking contradiction, where humans aren’t quite human and machines aren’t quite mechanical.
The interesting stuff is going to happen at the boundary-zone between the human operator and the AI.
In the feedback loops between human beings and machines echoing back their beliefs and desires.
I am skeptical of any political attempts to save us from AI. Regulations, moratoria, open letters, new laws, “won’t you think of the children” —
All noise.
Whatever damage AI might bring was done decades, if not centuries, ago.
Consider this.
The same bureaucracies charged with regulating AI will have to make use of it. It isn't that I don't trust their motivations—it's that they have no motivations once the feedback-loop of AI enters the dance. Regulatory agencies are already cybernetic systems, their human parts acting according to their own blind rules and incentives which are eerily inhuman. The HR manual is analog GPT-4.
And not only that. The nature of AI technology makes it unlike regulating air travel or cigarettes or firearms.
AI changes what we want and how we act and it uses our motivations as its own.
The brain-chip technology in Altered Carbon wasn't "good or evil" in itself. It enabled new kinds of evil along with new kinds of good.1 But ultimately it was the human beings, their motivations and their actions, that made the moral difference.
I’m not saying that similar issues won’t turn up with AI. No doubt they will; they already are.
What I’m asking is that you consider how these problems arise.
If the AI is cold, pitiless, and psychopathic, where did it pick up those traits? How did they come to be characteristics of the machine?
AI’s intentions are our intentions.
What it wants is nothing more than what we want...
... and what we want is nothing more than what we get from the AI.
The feedback loop between Man and Machine is the danger.
Taking control over ourselves with our machines is not the opposite of "our values". It is the logical conclusion of those values.
To control AI requires AI.
Giving control of AI to government regulators and international bodies is in no way a defense of human liberty and equality against the growth of "thinking" cognitive technology.
State control accelerates the merger of the state, the corporation, and the AI technologies. Omnipresent surveillance, digital currencies tied to your every move, algorithms timing the rhythms of life, enforced by automated drones (armed, naturally), all "for your own good".
Sounds a whole lot like an AI gone wrong.
As a philosophical idea, AI threatens our personal, psychological sense of being-human.
At the level of politics, cybernetic feedback-loops obliterate the difference between public and private power, threatening the very idea of a political subject with rights and liberty.
I am wary of AI out in the wild.
I am positively terrified of the unmasked State taking possession of it.
The end of humanism is not the end of human existence.
Humanism is a philosophical perspective that puts human desires and interests front and center of all values; even science and technology exist to serve human needs and wants.
Cybernetics, which is AI, forces us to rethink what we are without the pretensions of a self-absorbed ego filtering all that we know and experience. This might lead to ugly outcomes, and one doesn’t want to wander too freely into optimism or pessimism, but there is precedent.
The classical world of Greece and Rome lived, for the most part, with this reality. Their world, ruled by gods and fate, is not terribly different from a world where intelligent, mischievous non-human beings exist outside of our control.
We would no longer be master. But that is not the worst possible outcome. It may be that clinging to anthropocentric values and ideals which leave us in charge — or feeling that we are, in spite of our many incapacities — might be the road to ruin.
There is a reason that pride and vanity rank as the greatest of the sins. As I read the hand-wringing intellectuals and smart-boys worrying over the threats of artificial intelligence, the fear I see is less about “X-risk”. That’s the official, logical, rational answer, which means it is cover for the real motivation.
They’re afraid of losing their status as smartest people in the room, the self-appointed sages and guardians of Humanity and Civilization.
AI makes the know-it-all’s smugness impossible.
Heidegger wrote that the real “barbarians” claim to be humanists, acting for human interests, while strip-mining the earth, polluting fished-out seas, and transforming the human population into content-addicted zombies who do what they’re told in exchange for cheap gratification.
Heidegger didn’t say that last part, but he would have had he lived to see it.
The psychopathic lady in Altered Carbon carried a damned soul in a snake’s body. She took pride in the justice of her work. A righteous punishment for admittedly horrific crimes.
She already thinks and acts as a calculating machine, the very embodiment of rational, civilized justice.
Great evils done in good’s name are still evil. The feedback loops between the human and machine intelligence enable evil by presenting evil as mere engineering.
If there is a hope here, for the homo sapiens species if not the pretensions of mastery, it will come from understanding that the story of cybernetics is a story.
It’s a story that tells us that it is not a story. A Liar’s Paradox lies at the heart of everything.
AI is technology, but it is also a narrative built around a set of concepts; one more picture that human beings have drawn of ourselves.
We can be pulled into it, and many of us are, and more will be, but there is no inevitability to that outcome. The difference is that, this time, we’re painting portraits that can paint us back.
Shocking, yes, but not yet a threat to freedom, to your ability to tell a different story, paint a different picture of yourself.
What do we do about it? Stay tuned.
-Matt
P.S. Let me know what you think, or share this article to infuriate a nerd:
1. Maybe. I'm skeptical that the 'consciousness transfer' technology could be what it's made out to be. That’s a rant for another time.