Bad Takes on AI this week
This week, Erik posted a round-up that included several Bad Takes on AI, which got my hackles up. Here are two (okay, 2.5) gripes I want to make for your entertainment.
Steven Pinker is awful, but this is a leap too far
It is Pinker’s academic work that is unmoored from this technology. Same for all of Chomsky’s proposals about language. It turns out that, for the creation of machines that can write college-level essays on every topic under the sun, all of Chomsky’s academic work on language was unnecessary at best, useless and distracting at worst.
While some parts of the cognitive science edifice are true for humans, for artificial minds the frameworks these intellectuals have spent their whole lives promoting are null and void.
I'm no Steven Pinker fan. His silly polemic defending his fantasy of the Enlightenment convinced me that he is a conventional thinker with little sophistication or subtlety in the history of ideas.
But this hard skepticism about cognitive science seems to me far too quick a leap.
We have artificial minds now?
No. We have deep-learning tools that do impressive things, in certain domains, provided i) they have vast sets of data to trawl and ii) enough computing horsepower to trawl it for meaningful patterns.
This is a kind of intelligence. But we aren’t going to get to the serious stuff — strong AI capable of not only human-like thinking, but self-directed agency — with this method.
Pioneers Gary Marcus and Ernest Davis argue this case in their recent book Rebooting AI.
Deep learning is going to run into barriers sooner rather than later, settling into the bland incrementalism that follows every revolution. Their case is that deep learning is hampered by its endless appetite for data: it can simulate thinking only in domains where the necessary data is available. Despite impressive results in certain areas (where brute learning happens to be ideal), the problems of context and true understanding, essential for mind-like intelligence, remain beyond it.
Look, I’ve marked the essays of college students in the not too distant past. Spoofing these essays is not a high bar to clear. That a device as awkward and limited as ChatGPT can beat college-student essays says more about the state of university-level examinations than about the prowess of AI.
So long as the likes of DALL-E and GPT “think” in raw statistical processes without regard for the content of the knowledge they work with, they are going to remain profoundly limited in their abilities.
Erik seems to think that this problem lies in the past, that we’ve solved mind and now we’re just waiting on the inevitable Terminators.
I disagree. The cognitivist program of AI failed for a variety of reasons, but something else is going on here with deep learning. Top-down knowledge-based reasoning systems clearly aren’t up to the task in many areas of intelligence. But neither are the bottom-up neuro-inspired learning systems causing a fuss right now.
It’s telling that in actual human beings, there’s a blend of both learning processes and systems of abstract knowledge. An intelligence lacking one or the other is missing vital ingredients.
This is a fact so obvious that no intellectual could dare notice it.
We have to stop listening to nerds about morality
In response to Scott Aaronson’s remarks on morality, Erik writes:
And even our notions of “good” are implicitly human-centric. We are altruistic group-based primates who birth our young and have to work to keep them alive while they’re helpless. We evolved. AIs did not. An AGI’s “good” might look horrific to us, but to it, it would be doing what it finds moral and necessary.
As much as I agree with this, I could hardly disagree more.
One of the things you learn with a historical and comparative dive into the history of ideas is just how alien different human cultures can be.
To take just one example which I know well, if you were to poll the average Athenian around the time of Plato, in the late 5th and early 4th centuries BC, about their “moral” beliefs, the first thing you’d have to do is explain what you meant.
The ancient Greek world had no concept of “the moral”.
When you open up one of Plato’s dialogues or Aristotle’s Ethics, you will find the word “moral” all over the place. This is a necessity of translating ancient Greek into English. But realize that there is no Greek equivalent to our word “moral” in these works.
When you see “moral” in Plato’s Republic, the original word is usually dikaiosune or one of its relatives, which is closer in sense to our word “justice”. Closer, but not quite. In the dialogues with Socrates, there is considerable ambiguity about whether justice refers to justice in an objective sense, as we can speak of “the justice of society”, or whether it means a virtue of character, as in speaking of “a just person”.
Aristotle opts for the second (usually) throughout the Ethics. But there are minefields here. His key concept is the ethikai aretai. Almost any English translation will call these “moral virtues” or “ethical virtues”, but the least misleading rendering is “excellences of character”.
But hang on, there’s more. In Aristotle’s universe, a character trait is more than a psychological disposition to behave in certain ways. The virtues are also habitual responses to aspects of (moral) reality beyond the person. A virtuous person sees moral standards as giving reasons to do, or not do, certain things.
This becomes clear with justice, which belongs to the category of virtues. Many centuries later, St. Thomas Aquinas will enlarge it to include theological virtues, notably caritas (charity), which is the root of today’s altruistic ideals. To be just or charitable is to respond to standards that aren’t entirely within oneself.
Yet, mysteriously, the Aristotelian category of “moral virtues” also includes the personal qualities of courage (andreia) and temperance or self-discipline (sophrosune).
In today’s frame of mind, that makes no sense. “Morality” in our time refers to other-regarding characteristics or actions, like justice, or the altruism of human-centric morality. I am moral if I give you what I owe you, or what you were promised; I am moral if I am selfless in giving aid to others.
But courage? Self-discipline? We’ve lopped those off from the moral world and lumped them over here in this pile called “psychology”.
The differences here run deep beneath our surface-level commitments to particular moral beliefs and values. The punchline, as Elizabeth Anscombe and Alasdair MacIntyre famously pointed out, is that we moderns no longer occupy the same moral world-view as did ancient peoples. We use the same words, but they lack the context that once gave them meaning.
The other thing the modern mind would notice about the ancient Greeks is how little their standards of virtue and goodness had to do with today’s humanistic ideals.
The humanism that took hold in Europe in the Renaissance is hardly a universal and timeless point of view. Any religious believer understands a notion of goodness that is not relative to human will and interest, being grounded instead in an unconditional ideal of intrinsic worth.
We don’t really get that anymore. Secular “morality” is best mock-quoted, as it breaks down to securing tangible, concrete forms of satisfaction. We’ve shifted from real-deal Goodness to “good”, which is some psychological or sociological condition.
Compared to the real-deal ancient ideals of Good, “our” human-centered moral ideals are the historical anomaly. Which tells us something deep and interesting about how morality does intersect with our motivations and actions. Unfortunately, none of that makes it into these debates.
This kind of ignorance and incuriosity is par for the course in intellectual circles that carry on oblivious to this background.
They don’t think they need the old stuff. They’ve got Logic and Reason and Science, and that’s enough to think out the answers.
What I see, meanwhile, is a cohort of people who may as well be arguing about the properties of four-sided triangles. Like “consciousness” and “free will”, “morality” is another vague, ill-defined concept that modern intellectuals love to debate with no real idea of what they are saying.
Let it suffice to say that I don’t agree (at all) with Aaronson’s claim that a more intelligent being is a more “moral” being.
Plato and Aristotle were both skeptical about the virtues of mere cunning, versus true wisdom. They also lived in an age where the heroism of Hesiod and Homer, displayed with a cleverness often matched by brutality, still held sway over common-sense attitudes. Modern takes on “intelligence” don’t begin to mark that sort of difference.
That said, I also don’t agree with the fanciful claim of the Orthogonality Thesis that there is no connection between intelligence and goals. Erik is closer to the mark than Scott, but the debate as it stands still neglects the most important pieces of the morality puzzle.
BONUS: What’s the real bitter lesson?
Let’s circle back to the failures of top-down cognitivism. The bitter lesson, we’re told, is that brute-forcing search and learning methods with computing horsepower plus vast data-sets out-performs top-down, knowledge-based approaches to AI.
The lesson yields two key points:
We have to learn the bitter lesson that building in how we think we think does not work in the long run.
And:
The second general point to be learned from the bitter lesson is that the actual contents of minds are tremendously, irredeemably complex; we should stop trying to find simple ways to think about the contents of minds.
I agree with both points in the broad brush-strokes. My complaint, as always, is in the qualifications.
The good point here is the attack on the pretensions of cognitivists in the area of mind-design.
From George Miller and Herb Simon and Chomsky to Pinker, what the cognitivists think they know about the mind is vastly, incomparably different from the concrete reality of being a human person. Our conscious, explicit knowledge is but a fragment of the totality of mental being.
So far so good. Where I want to push back is in this belief that you'll get mind (or more properly, something equipped with a robust mind-like skill-set) by opting for the “bottom up” approach, represented by deep learning and its cousins.
The reasoning seems to be, if we don’t get mind from hand-coding knowledge, then the alternative is to drop into the basement and study the gears and levers that make the thing move. A learning tool with sufficient GPU clusters and a vast ecology of data will be able to replicate human knowledge by learning how to do what we do.
Maybe. I am dubious.
The under-the-hood assumption is one that I've discussed many times before: that your thinking and experiencing, reader, is nothing but mechanical processing.
In outline, this assumption is already there in works from the likes of Hobbes, Descartes, Spinoza, Locke, and Hume. We're dealing with a 17th-century take on the mind and calling it progress.
I'm simply not convinced you get the higher-order mental qualities for free — this includes our ill-defined notions of “freedom” and “agency” — with a robust-enough mechanism working at the ground floor of inputs from the senses (or whatever equivalent in deep-learning tools).
I admit that is a controversial claim on my part. However, the reason it is controversial is that almost everyone fascinated by AI is already a committed materialist and naturalist who takes scientific discovery as seriously as the pious Catholic takes Holy Communion. This is a matter of attitude and preference, not cold logic and scientific fact.
Controversial doesn't mean wrong. I come with my own attitudes: I mistrust these people who are alienated from the world, from other people, and often from their own bodies and emotions, who pretend to have special expertise on human nature. I give them no regard at all in matters of the good and the right, which they understand as well as I understand how to win a Super Bowl.
But they're also the ones designing and building these machines, while spending their free time lecturing us on the ethical and political consequences.
If the machines go haywire on us, I suspect it will have a great deal to do with what their human builders bring to them.
If you wonder why I spend the time writing these posts, that is why. We're letting people who are least in-touch with the non-intellectual parts of human existence decide its future.
-Matt