The future hasn't become possible yet
Here's one powerful reason why all predictions about the future of civilization and Homo sapiens are wrong.
One of the more fascinating things about writing a semi-regular newsletter is the near-magical way that ideas beget ideas.
The more I write, the more I find things to write about. As much as popular culture glamorizes the writer as an aloof, remote, not-so-sane sort of word-magician, the activity of writing is tangible, visceral, even physical to a degree that most outsiders and wannabes don’t appreciate. Writing is 99% about your disciplined practice.
The more I think about what to write, the more anxious and doubtful I get. Picking up some random comment, a thought out of the air, or starting a gentle, warm-spirited fight with another blogger, meanwhile, gives shape and energy to a thought. The more time I spend in the trenches getting words onto the page, the less blockage in the idea flow.
Writing begets writing, creating a positive feedback loop.
The reason that positive feedback looks magical from the outside is that you have to make the leap to the other side in order to know where you are.
You don’t get there by sitting here and watching. If ideas create ideas, what blocks you creates more blockage.
The other week I wrote a piece about certainty and its limitations.
I want to pick up a thread raised there, which itself began in a short exchange I had in a comment section.
I am profoundly dubious about “our” ability to predict and control AI, or really any technology.
Not least because I have no idea who this “we” involves. We throw around the first-person plural so casually that you’d think any random blogger could serve as the voice for the whole species always, now, and forever. (See?)
“Humanity”, if that idea has any meaning beyond sloganeering, is defined more by its fragmentation than its unity. (And you wouldn’t want it any other way.)
There’s another reason for this limitation which is far more important. I’ll get to that in a moment. Before that, here’s part of a response I received after expressing my doubts that “we” can foresee and stave off AI’s looming threats by Doing Something.
In response to my putting predictions about AI’s future on par with predictions about the internet made in the year 1900:
This is more like predicting the impact of the internet in 1995, you know, after it was invented. AI is here, and people are using it.
Some of those predictions were spot on, some of them weren't. Entire industries were created, entire industries were lost.
…
Again, these are not pie-in-the-sky predictions, these are things we've seen happen already in history, and are already happening now.
There’s a lot going on in these few lines. Here’s the provocation I’m going to pull out of this haystack:
Has AI really been invented yet?
Yes, I know, I know. I’ve read all the headlines. I’ve been typing to the GPTs and Diffusions and the other whatevers for many months now.
I’m also just old enough to remember the ancient days of the 1990s, when your desktop computer screamed as it touched the internet, when browsers had only one tab (which could crash at any moment while you waited five minutes for a gif to download), and Usenet was the social media platform.
The AI discussion was going on even back then, and that wasn’t nearly the beginning. In its modern form you can trace it back to World War II and the cognitive revolution of the 1960s. Few people today seem to understand how old these problems are, what with all the hype.
My claim at this here newsletter is straightforward:
Our present situation with artificial intelligence may be best understood as painting a picture of ourselves as a machine, and then bending the resources of civilization towards building the portrait.
My comment partner up above seems to be under the impression that AI is a known quantity, a done deal, and all that remains is to look ahead to a repeat of the Industrial Revolution in the “creative sectors”.
That AI ain’t naught but a steam-powered loom what works like a brain, I says.
If that is the best thinking we can bring to the table, we’re cooked worse than Christmas goose. Assuming that tomorrow will be just like yesterday is the difference between Tom Turkey’s high life in August and his prospects come Thanksgiving morning.
About a decade ago, when the original not-Substack blogosphere was still going strong, philosopher and fiction author R. Scott Bakker posted the following on his blog Three Pound Brain:
So for years now I’ve had this pet way of understanding evolution in terms of effect feedback (EF) mechanisms, structures whose functions produce effects that alter the original structure. Morphological effect feedback mechanisms started the show: DNA and reproductive mutation (and other mechanisms) allowed adaptive, informatic reorganization according to the environmental effectiveness of various morphological outputs. Life’s great invention, as they say, was death.
R. Scott Bakker, “The Posthuman as Evolution 3.0”
Later on, intelligent behavior grounded in the nervous system replaces DNA as the site of feedback effects:
At a certain point, however, morphological outputs became sophisticated enough to enable a secondary, intragenerational EF process, what might be called behavioural effect feedback. At this level, the central nervous system, rather than DNA, was the site of adaptive reorganization, producing behavioural outputs that are selected or extinguished according to their effectiveness in situ.
Instead of waiting for that next generation of fruit flies, you get hairless apes that remember and innovate within a single lifetime. Once these feedback loops carry on far enough, something wild happens, leading to
the point where neural adaptive reorganization generates behaviours (in this case, tool-making) such that morphological EF ceases to be a periodic and inflexible physiological generative and performative constraint on behavioural EF. Put differently, the posthuman is the point where morphology becomes circumstantially plastic.
To put that into plain-talk for us po’ folks, technological civilization is leading to a situation where human behaviors (assisted by tools) will be able to alter human bodies, including the brain.
Since brains are essential physical aspects of minds, changes there lead to inevitable changes in behaviors. Changes in behavior then yield further changes in brains… which cause even more behavior changes…
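If you like your feedback loops literal, here’s a toy sketch of that circuit in Python. The model and the numbers are entirely mine, for illustration only; Bakker offers no equations:

```python
# Toy model of the morphology/behavior circuit (my illustrative numbers,
# not Bakker's): each quantity amplifies the other, round after round.

brain = 1.0      # stand-in for morphological capacity
behavior = 1.0   # stand-in for behavioral (tool-making) capacity

for generation in range(1, 11):
    behavior += 0.1 * brain     # better brains enable more capable behavior
    brain += 0.1 * behavior     # behavior (tools) now re-engineers the brain
    print(f"gen {generation:2d}: brain {brain:5.2f}, behavior {behavior:5.2f}")

# Each pass through the loop feeds the next, and the increments themselves
# keep growing. Run it longer and the curve bends skyward; there is no
# stable endpoint to read off.
```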
You can see where this is going.
Or maybe you can’t, to be more accurate.
Scott got sick of the internet, as all sane people do sooner or later, and vanished into what I hope is a happier life in the analog. While he wrote at Three Pound Brain, though, he offered up a frightening and apocalyptic scenario about the future prospects of human life with the coming of artificial intelligence.
We are not so much witnessing the collapse of morphology into behaviour as the acceleration of the circuit between the two approaching some kind of asymptotic limit that we cannot imagine. What happens when the mouth of behaviour after digesting the tail and spine of morphology, finally consumes the head?
What’s at stake, in other words, is nothing other than the fundamental EF structure of life itself.
Pay close attention to that line about the asymptotic limit.
Understand, reader, that neither Scott nor I use the term artificial intelligence to refer to the chat-bots you ask to cheat on your term papers or make weird pictures for likes on Twitter. We’re talking about an altogether larger idea:
The transformation of intelligent behavior into design and engineering.
As he phrased it so neatly:
The posthuman is the point where we put our body on the lathe (with the rest of our tools).
Most of our common ideas about technology rest on a simple relationship:
Mind → World
We human subjects stand over here, in mind-land, devising clever schemes and designing useful objects to enact our wishes.
The world stands over there, a lump of undifferentiated junk that practically begs us to beat it into shape according to our wishes.
We humans stand over dead nature as lords and masters. That’s the Cartesian and humanist biases talking.
Artificial intelligence knocks this faith in human mastery right off its axle.
The master who designs and builds the machines becomes a machine to be designed and built.
If you’re unfamiliar with positive feedback loops, the basic idea is easy to grasp. Change in one direction accelerates change in that direction. What you get, you get more of.
Slowly, then all at once.
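Here’s a minimal sketch of that in Python, with an arbitrary 3 percent gain per step that I picked out of a hat:

```python
# Positive feedback, bare bones: the more x you have, the more x you get.
x = 1.0
RATE = 0.03  # arbitrary 3% gain per step, chosen for illustration

for step in range(1, 301):
    x += RATE * x  # change in one direction feeds further change that way
    if step % 100 == 0:
        print(f"step {step}: x is roughly {x:,.0f}")

# Prints roughly 19 at step 100, 369 at step 200, about 7,100 at step 300.
# The first stretch looks like nothing is happening. Then it isn't nothing.
```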
AI transforms the creator into the created. That picture in the mirror changes us, which inspires further changes in the mirror image, which changes us, which…1
What does this mean for predicting AI’s impacts and consequences?
When I sit at the computer screen in front of a blank document and think, the ideas hide from me. When I start typing and forget myself, I can’t keep them away.
I can’t get there from here. The only way to get there is to make the leap from rest to motion.
Positive feedback in complex systems often involves a change of quality along with the physical quantities. Water changes from a liquid to a gas at 212 degrees Fahrenheit (at sea-level pressure, anyway). More becomes different.
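You can run the water example on the back of an envelope. Here’s a sketch in Python, using the standard textbook constants for water at sea-level pressure:

```python
# Pour heat into one gram of water and watch quantity become quality.
# Standard textbook constants; sea-level pressure assumed throughout.

SPECIFIC_HEAT = 4.18    # joules to warm 1 g of water by 1 degree C
BOILING_C = 100.0       # 212 degrees Fahrenheit
LATENT_HEAT = 2257.0    # joules to vaporize 1 g at the boiling point

temp_c = 20.0           # room-temperature water
absorbed = 0.0          # heat soaked up at the plateau so far

for tick in range(300):
    # add 10 joules per tick
    if temp_c < BOILING_C:
        temp_c = min(BOILING_C, temp_c + 10.0 / SPECIFIC_HEAT)
    else:
        absorbed += 10.0
        if absorbed >= LATENT_HEAT:
            print(f"tick {tick}: phase change, liquid -> gas")
            break

# The temperature climbs for about 34 ticks, then sits at the plateau for
# about 226 more while the heat does invisible work. Same input, new kind
# of thing.
```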
The only way to make the leap is to make the leap.
AI discussions right now assume that the form and function of Homo sapiens will endure unchanged.
They play this game from the unlikely assumption that there is something special about the human mind that will survive unchanged through a radical revolution in the nature of intelligence. We’ll have economies and social structures more or less like today, only with walls that talk to you and cars driving themselves.
Aside from a few outliers like Scott Bakker, almost no one has even recognized this angle, much less explored it in any depth.
When I asked whether we’ve invented AI yet, I wasn’t provoking for the sake of it.
AI exists on the far side of this embryonic positive feedback loop we’re now living through.
What AI can be hasn’t come into existence yet.
Catch your breath and read that line a second time.
Understand this: if Bakker is remotely right, then we are not even at the beginning of a future belonging to artificial intelligence.
The hyped-up AI discourse right now is focused on a narrow slice of concerns, shaped by a narrow slice of venture capitalists, economists, social media influencers, startup founders, research scientists, reporters, entrepreneurs, and other “thought leaders” representing the professional managerial classes.
They ask if we’ll still have jobs. I ask if we’ll still have spinal columns and DNA.
If the person you think you are is nothing but a machine, then you can and will be put on the assembly line as long as technological civilization continues its present course. Whether it will is an open question. There are voices warning that it will not.
Not exactly helpful to the prophet and seer.
To paraphrase the title of an essay by F. A. Hayek: all of the systems we have built are the results of human action but not of human design. Hayek understood that the division of mind and world, with roots in the much older distinction between the natural and the artificial, was not up to the task of explaining social behaviors at scale.
There is an order at work, a direction and even an “intention” (mock quotes strongly emphasized) behind complex behavior, whether in the wider society or a vastly complex object like the human brain.
No one group, organization, plan, or single mind is in charge of such things. What looks like the result of design is spontaneous order created by unconscious forces and tacit knowledge embodied in our constructs.
Wherever we’re going, there’s no captain at the wheel and we passengers have no eyes to see.
AI is a field of players, or, in what I believe is the more apt metaphor, a new ecology and meta-ecology of intelligent agents, playing in the ecosystem we’ve built around Earth’s surface.
Each move on the board changes the rules for the next move.
How that’s going to end is impossible to predict or plan. The other side of the positive feedback curve cannot be foreseen because it hasn’t become possible yet.
If AI is an accelerated artificial form of evolutionary selection processes, then only AI can know where AI is going.
The only way to get there is to get there… and once you get there, you can’t come back home.
Thanks for reading.
-Matt
p.s. If you found this valuable, interesting, funny, or it made you upset that you had to use your mind for activities that don't involve infinite scrolling, I ask that you do me a favor and share it with just one person.
If you ask who that “us” is, I mean the concrete, actually-existing human persons under the influence of technology, including the metaphors we use for understanding ourselves. I don’t think about myself the way my grandfather did, or even could have. My ancestors prior to the 19th century saw themselves in terms that today may strike us as quaint, but more often as opaque and difficult to make sense of. They had no easy metaphors for genes, neurons, micro-processors, or electronic switchboards.