The mostly-forgotten writer T. E. Hulme once wrote that we must separate the subject-matter of philosophy, which is a discipline as technical as mathematics and physics, from the study of satisfaction.
No, Hulme wasn't talking about the satisfaction of biting into a Snickers bar. What he had in mind was whatever it is that makes a theory satisfactory to the reader.
Why do we take some ideas as acceptable, meaningful, and valid? Why do other ideas get the heave-ho like an unruly drunk?
It's tempting to say, "Because the worthy ideas prove themselves." Yes, of course. Every defender of every idea has said that, including the defenders of ideas that turned out to be wrong.
Proof isn't good enough. What counts as evidence? Why? What counted as good evidence for a theory in 1850 would get laughed out of many rooms today. Giving reasons for a theory is more complicated than you thought.
Thinkers in every field (philosophers are hardly the only offenders) accept theories for reasons that transcend the evidence. In the absence of direct evidence, which can never tell us which theory is correct anyway, why are some theories more acceptable than others?[1]
More recently, Harry Frankfurt wrote a well-known essay in which he lamented that philosophers don't address the question of what we care about. Philosophers happily offer theories of what we know, called epistemology, and theories of what's right to do, called ethics.
But they don't talk about what we love, or what is worth loving.
The objects of care and love have been thought off-limits to the technical discipline of philosophy for a long time now.
As far as official history goes, Philosophy with a capital P made its final stand in the late 19th century, culminating in the ideas of Hegel and playing out in the materialized revolutions of Marx and counter-revolutions of the fascists. That momentous fireball collapsed into a vacuum which has since been filled by the sciences and depression.
What's left of philosophy tinkers away in the wreckage, working on boring technical problems of interest to nobody (not even the people working on them).
In the early 20th century G. E. Moore wrote a book in this spirit, called Principia Ethica, which helped close the door on philosophers caring about caring.
There's nothing informative to say about "good", Moore argued, because no property we can point to gives the word its meaning. For any proposed definition of the form "good is X", it's always possible to ask, "But is X good?" And when you go looking for the "good" itself, you don't find it. You can't see it, touch it, or taste it. You can't study it in the lab. You can't reason your way to it with logic or math.
Ergo, "good" is not a predicate. Good is about no-thing. Ever since, a great portion of moral philosophy has taken moral language to mean something quite different from respectable fact-stating declarative sentences.
Words like "good" express a feeling of approval, or a choice I've made, or a recommendation. Moral concepts analyze into descriptions of fact plus a value judgment.
What moral words don't do is describe the facts of reality.
What's surprising about this tale is not that it is almost entirely wrong (though it is). The so-called normative domains of ethics, morals, and aesthetics have proven robust in their stubborn refusal to transform into technical problems for science and logic.
What's surprising is that those most vocal about the Death of Philosophy, today, are most likely to get tangled up in seriously bad philosophizing.
This is acutely obvious in the case of the AI Guru lecturing you about objective values and the urgent need to "align" the AI with "our values".
Because moral beliefs are subjective, they are personal, private, and, because disconnected from the background of objective facts, arbitrary. Morality expresses my convictions, or perhaps our convictions as a society, but it says nothing about how the world is without me. And yet that is the real goal: a morality as objective and absolute as math.
Philosophers, meanwhile, turned their attention to the analysis of concepts, so they have no advice to give about the nature or essence of morals. Moral philosophy turned into a game played with language, analyzing the concepts of good and right while refusing to say anything about the actual values or actions that matter to us.
Even today, the great weight of non-cognitivist and anti-realist theories of morality remains a powerful force.[2] And I don't mean inside the philosophy classroom.
Moral relativism, skepticism, and outright nihilism are fashionable doctrines for people in the street. Ask any 20-year-old about someone else's life choices and you'll find a lot of hemming and hawing, likely with some language about "not judging" others. Radical tolerance is indistinguishable from the absence of firm commitments.
This has all had a tremendous effect on how we see good and bad, right and wrong. In short: we don't see them at all.
You can look for answers in science. Study physiological responses. Study neurological activity. Study psychology and cognitive science. Figure out where there's suffering. Figure out what natural selection and our genes "program us" to do. Call that moral.
But it isn't moral. Morality concerns the normative: the ideal against the real. Science shows us the real and, officially, says nothing about ideals.
It takes an extra step in the argument to make the real identical with the ideal, or to derive the ideal from the real. And if the ought-to-be is no different from the what-is, the whole exercise is mostly pointless.
You can turn to cold reason for answers. This is the basis of liberal ideals in politics and ethics. Set the conditions of fairness and equality and leave the details alone. Immanuel Kant's morality says that the right action is the one whose principle you could will as a universal law for everyone.
There's no substance to the moral law, only form. Why would you care about the abstraction of "rationality" if it never concerns your real desires and motivations?
If science gets reality without ideals, reason gets the ideal without the real.
None of this tells us what is worth caring about.
Worse still, it leaves us with a confused mess sitting somewhere between subjectivized moral ideals and objective ideals forced on you by The Experts.
Here is the water that your AI Guru swims in.
A godless, meaningless, purposeless universe ruled by pitiless laws, which has no room for morality as the ancient world knew it.
This is the result of the Great Disenchantment of European culture since the scientific and industrial revolutions.
The old qualitative distinctions have yielded, slowly but without mercy, to quantitative scientific and mathematical explanations. Morality is not something out there; it is something humans do.
In such a world, there are no ideals. There are no standards. There is only what there is.
You can try to reconstruct something of the old order out of that, but as Alasdair MacIntyre put it in the opening fable of After Virtue, we're like archaeologists trying to recover the meaning of an artifact from the rubble of a long-dead city.
The AI Guru wants a morality for AI, but they are sifting through the wreckage of meaning looking for something that looks just enough like "good" and "right" to pass muster.
What they're looking for is something that won't kill us, metaphorically or literally. The reasonable, respectable inquiry looks an awful lot like a flimsy mask over deep panic.
Question is, what makes these satisfactory?
Why are their ideas worth caring about?
Why do they matter?
I appreciate that there's a need to do something to rein in the automated mind-machinery we're unleashing on the world.
As I've argued in these hallowed digital pages, the damage has long since been done.
We've already become machines to ourselves. Today's living, breathing persons are as likely as not to describe themselves as "brains", as products of "oppression", as un-responsible expressions of situational factors, or as the results of unruly genes.
The actual building of thinking artifacts is the culmination of these self-inflicted wounds.
Want to do something about it?
The place to start is with the question:
What's worth caring about?
If your AI Guru isn't asking these questions, he's another robot. Is it worth listening to him?
-Matt
P.S. Don't make me tap the sign:
1. This is called the underdetermination of theory by evidence. It's a logical problem that results when you make generalizations from particular observations. There are an infinite number of theories that fit any finite set of evidence.
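If you want to see the point in miniature, here's a rough sketch (a made-up illustration with arbitrary numbers, not anything from the literature): treat polynomial curves as stand-in "theories" and a handful of data points as the "evidence". Infinitely many distinct curves pass through exactly the same points and then disagree everywhere else.

```python
# A toy model of underdetermination: polynomial "theories" over a finite "evidence" set.
import numpy as np

# Five observations, generated here from the simple theory y = 2x.
xs = np.array([0.0, 1.0, 2.0, 3.0, 4.0])
ys = 2.0 * xs

def rival_theory(c):
    """Build a theory that agrees with y = 2x at every observed point.

    The added term c * (x - x0)(x - x1)...(x - x4) vanishes at each observed x,
    so every choice of c fits the evidence perfectly, yet each choice makes
    different predictions everywhere else.
    """
    def predict(x):
        correction = c * np.prod([x - xi for xi in xs], axis=0)
        return 2.0 * x + correction
    return predict

for c in [0.0, 0.5, -3.0]:
    theory = rival_theory(c)
    assert np.allclose(theory(xs), ys)   # identical on the evidence...
    print(f"c={c}: prediction at x=2.5 is {theory(2.5):.3f}")   # ...divergent beyond it
```

Every value of c gives a theory the evidence cannot rule out; the data alone never picks one.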
2. Moral non-cognitivism is a family of moral (meta-ethical) theories that interpret moral language as expressing subjective attitudes. Non-cognitivists deny that moral statements express beliefs that can be true or false. Moral anti-realists argue that moral words do not refer to anything, because there is no objective moral reality for them to be about.
If you arrange these essays into a book, so far this is the best introduction.