In “Leaving it Implicit,” Scott Bakker throws down the gauntlet: normativity, our idealistic judgment of good and bad, isn’t what we assume it is in our everyday social dealings, because such judgments could apply only to mechanisms, there being no such things as goodness or badness in the natural world. If you think otherwise, Scott says, you’ve got a lot of explaining to do, given a science-centered ontology. Moreover, naturalism doesn’t imply normativity just because naturalists use terms that can be interpreted normatively, because those terms can also be interpreted in mechanistic ways. Thus, the time of reckoning is nigh and the apocalypse will come not at the hands of some angry parent in the sky but through our advances in objectively understanding the world.
Framing the Issue
I’m going to try to break down Scott’s argument and my response to it with a minimum of jargon, and I aim to chart new territory instead of rehashing our previous discussion. What I noticed when I read “Leaving it Implicit” is that Scott’s conclusions follow in part from his way of framing the issue. He makes certain background assumptions, and if you accept them, you’ll be more favourably inclined to heed his prediction that all folk psychological categories will be eliminated as premodern bits of magical thinking. Three of these assumptions are as follows.
First, he assumes that Western philosophy is a protoscience, that philosophers are after theories that explain the facts, that they employ a second-order, meta-language which is meant to support our first-order, natural one. For example, Scott says, “From the mechanical perspective, in other words, the normative philosopher has only the murkiest idea of what’s going on. They theorize ‘takings as’ and ‘rules’ and ‘commitments’ and ‘entitlements’ and ‘uses’—they develop their theoretical vocabulary—absent any mechanical information…” (my emphases). Notice how normative philosophy is here assumed to be in the business of providing theories, but because philosophical theories are worse than scientific ones, the former are at best murky ideas. Scientists use their methods to test their theories of the external world, whereas normative philosophers rely on intuition, which makes for less reliable theories. Likewise, Scott speaks of philosophies of meaning and normativity as “controversial sketches,” compared to what we know of the brain, the latter being the “most complicated mechanism known.” Finally, Scott says, “My first order use of ‘use’ no more commits me to any second-order interpretation of the ‘meaning of use’ as something essentially normative than uttering the Lord’s name in vain commits me to Christianity.” This distinction between first- and second-order interpretations, which Scott makes in a number of writings, is consistent with the science-centered construal of philosophy as a protoscience. The assumption is that philosophers are trying to reductively explain the phenomena that reveal themselves in ordinary language, such as our talk of what symbols mean or of which actions are morally better than others.
Second, he thinks of mental processes as heuristics and he interprets heuristics not just as naturally selected procedures, but as solutions to what he calls “narrow problem ecologies” (my emphasis). This means that a mental process is a naturally selected and thus flawed shortcut to aid us in our endeavour to survive, because Mother Nature is a blind designer and she had limited resources at her disposal. One flaw of our thought processes is that they’re blind to their mechanical nature: we evolved to be preoccupied with external threats, not with internal truths, which is why our main senses point outward, leaving us with little information indicating the mind’s nature. Scott further assumes that because heuristics are made more efficient in so far as they leave out information, that deficiency limits their optimal areas of application. Scott says, for example, that “They [normative philosophers] have no inkling that they’re relying on any heuristics at all, let alone a variety of them, let alone any clear sense of the narrow problem-ecologies they are adapted to solve…We know that heuristics possess problem ecologies, that they are only effective in parochial contexts” (my emphases). By “parochial,” Scott means that those heuristics have a very narrow scope of effectiveness. Again, he says, “On the mechanical perspective, normative cognition involves the application of specialized heuristics in specialized problem-ecologies—ways we’ve evolved (and learned) to muddle through our own mad complexities” (my emphasis). Notice the connection here between the fact that nature equips us only with ways of muddling through the problem of figuring out our inner nature, and the specialized or narrow range of problems our heuristics are adapted to solve.
Third, Scott employs a mechanistic vocabulary. He speaks of mechanisms, heuristics, and gadgets in the mind. On the surface, then, he assumes the mind is a kind of machine. All of these terms have unfortunate connotations, from a naturalistic perspective, and although Scott may not be committed to those extended meanings, they might inadvertently do some of the work in his rhetorical case against folk psychology. On the naturalistic view, there is no intelligent designer of organisms, so biological systems can’t literally be machines in the ordinary sense. The mechanistic vocabulary must be metaphorical, so we should distinguish between the literal, naturalistic meanings and the extended, commonsense ones that Scott’s explanations have to abandon. For example, Scott says, “Evolution has given me all these great, normative gadgets—I would be an idiot not to use them! But please, if you want to convince me that these gadgets aren’t gadgets at all, that they are something radically different from anything in nature, then you’re going to have to tell me how and why.” Now, a gadget is a mechanical contrivance and a contrivance is something that’s planned with great ingenuity. Nature plans nothing, so Scott must be using “gadget” as a metaphor.
Do you see how these three ways of framing the debate between the mechanist and the transcendentalist or normative philosopher all by themselves cast doubt on normative discourse? If normativity is defended by philosophers, not by scientists (in their professional capacities), and philosophy is only a protoscience, we ought to favour the scientific view of the normative, which means we should stop talking about it since scientists don’t do so. If we philosophically learn about ourselves through intuition and other heuristic processes, and those processes aren’t designed to work well in that context, since our minds are adapted to coping with the external world, we have no reason to trust what we think we discover with those innate modes of access. Finally, if we accept the mechanistic discourse, we lose the ability to conceive of what normativity might be, since good and bad are clearly nowhere intrinsically to be found in something as material and objective as a machine.
Philosophy as the Search for a Wise Way of Life
I have problems with all three of those background assumptions and I’ll take them up in order. To be sure, much ancient and modern Western philosophy is indeed concerned with acquiring empirical or transcendent knowledge, and to this extent premodern philosophy might be looked at as an inchoate attempt to do what scientists now excel at doing, while modern philosophy is clearly influenced by scientific methods. But what Scott’s concept of philosophy leaves out is the old interest in wisdom as opposed to knowledge. Wisdom is the ability to live well. Wisdom may require some knowledge, but wisdom itself is more like the skill of living well than like any set of statements. In fact, while the ancient Greek philosophers did seem to love knowledge regardless of its uses, which is why they followed their often counterintuitive hypotheses to the furthest logical reaches, they also saw the search for knowledge as being in harmony with the searches for beauty and goodness. And so what we might think of as empirical science wasn’t taken to overshadow aesthetics or ethics. In the modern period, though, that overshadowing did take place, especially in academic philosophy and even more specifically in analytic, science-centered circles. To the extent that philosophers cleave to what scientists say, and scientists don’t directly address normative questions, philosophers too ignore the latter or else reduce them to questions that might be answered by protoscientific methods, such as by the use of thought experiments.
So this is why Scott construes philosophy as he does, but philosophy needn’t be thought of as excluding normativity at the outset. The point is that we shouldn’t beg the question one way or the other. After all, the traditional search for wisdom presupposes the reality of the normative, so if we define philosophy in those terms, we beg the question against Scott’s mechanistic conclusions. The best course, then, is to be open-minded about the nature of philosophy. If our independent arguments establish that there’s no such thing as normativity, then to the extent that philosophers are interested in wisdom (in what we ought to do in all situations), philosophy is in danger of being a sham. But those arguments had indeed better be independent of any framing of philosophy as being concerned merely with theoretical matters, meaning with highly general questions of fact. Any argument for or against normative philosophy which assumes either framing begs the question and carries no weight.
The same goes for an argument against normativity which assumes that the first- and second-order language distinction exhausts philosophy’s role, since if philosophy includes the search for wisdom (for a way of living) and not just for knowledge (for a set of statements of fact), philosophy transcends that distinction. Philosophy might be more like a kind of training to turn people into mental athletes, as it were, and to the extent that philosophers speak when they train, that speaking might play some causal role in shaping the philosopher’s skills, so that the content of the statements is relatively unimportant. In a similar way (but with a much different lesson), it doesn’t matter so much what Muslim children in parts of the Middle East are saying when they’re forced to repeat the Koran out loud, over and over again. What matters is that they come to love the Koran, that they’re turned into Muslims. Here, language is part of a practice of personal transformation which a mechanist should be able to appreciate. To take another example, philosophical texts might amount not to theories in the protoscientific sense, but to myths, fictions, or artworks which likewise are meant to affect us and change our way of life. To this extent, philosophy would be closer to religion than to science. And just as treating religious questions as empirical ones about scientifically discoverable facts provides us with only a cheap and irrelevant refutation of theistic religion, since religion is likewise about practice and not just knowledge, so too we might doubt such a science-centered approach to philosophy.
To see the relevance of this, consider Scott’s set-up of a certain rhetorical question: “Normative cognition, in other words, is a biomechanical way of getting around the absence of biomechanical information. What else would it be?” Here’s what else: a step in the process of turning one sort of creature into another sort. Specifically, normativity might be needed to turn animals into people. The fact that this remains a mechanistic possibility leads me to puzzle over why Scott says, “Not only are we blind to the astronomical complexities of what we are and what we do…” (my emphasis). Normally, Scott says only that we’re blind to what we are on the inside, but here he adds that we’re blind to what we do. That’s clearly not so, since our actions are observable along with the rest of the external world. And the relevance of this, of course, is that if philosophers are after a certain way of living, what we are on the inside might not be as relevant as the differences between our apparent actions. Thus, when Scott asks the rhetorical question, “But aside from intuition (or whatever it is that disposes us to affirm certain ‘inferences’ more than others), just what does inform normative theoretical vocabularies?” the answer might be that those vocabularies rest on experience of human behaviour. We can learn about our behaviour in the same ways we learn about that of the animals we hunt or about the weather or other environmental factors with which we have to cope. We learn that some actions lead to failure while others lead to success and some are heroic while others are destructive and counterproductive. And no blind intuition need be instrumental in that experience.
That experiential basis of normative discourse can be entirely causal—and indeed protoscientific! We can ignore the content of words like “good” or “evil,” and just appreciate the impact these concepts have on our behaviour. We can even understand how these conceptions might have evolved: by helping to civilize our prehistoric, animalistic ancestors, notions of meaning, beauty, and goodness helped open up the niche in which we’ve dominated for some millennia. We may survive partly because of the utility of our fictions, including our self-deceptions, and a mechanist need have no quarrel whatsoever with that possibility. This is because that possibility is entirely consistent with the mechanistic view that semantic and normative properties are unreal. I haven’t appealed to the factual basis of any normative statement; instead, I’ve posited some process of enculturation in which normative conceptions are links in a causal chain that needn’t subtract from our evolutionary fitness. No magic, no premodern superstition, no romantic, Luddite or otherwise antiscientific prejudice, but a charitable mechanistic interpretation of the role of normative philosophy. In everyday interactions, we presuppose normativity and part of the philosopher’s job, mechanistically or instrumentally speaking, isn’t to get at the facts, but to explore what we’re doing in everyday experience, to chart the territory, to speculate on how the territory might be expanded or altered, and so on. And the philosophical creativity reinforces or reorients the everyday experience.
Yet another science-centered way of looking at philosophy is to emphasize the lack of consensus among professional philosophers. In many of his writings, Scott bemoans the fact that philosophers can’t agree on how to naturalize meaning and normativity. In the above-cited article, he does the same with regard to mathematics: “In fact, it seems pretty clear that we have no consensus-compelling idea of what mathematics is.” But any such lack of consensus should be of little concern. First, consensus is ideal in science, but who says philosophy or mathematics should be scientific? Who says philosophers or mathematicians are concerned just with objectivity and the facts? Evidently, these are more creative, free-wheeling disciplines. (Indeed, physicists are often surprised by how useful mathematics has turned out to be in explanations of phenomena.) So one of many reasons why philosophers may disagree on the nature of goodness, for example, is that the point of philosophy may not be to get us to agree on the facts; more neutrally, the point may be to relish our freedom to create ideas, to test them, in effect, for exaptive value in terms of their potential to transform us. Also, there’s currently a lack of consensus in physics as to the ultimate nature of matter at the quantum level, but surely Scott doesn’t think this undermines science. This would be because his naturalism is presumably of the methodological rather than the ontological variety, which is to say he’s pragmatic about the benefits of science. Likewise, we might be pragmatic about the benefits (and the weaknesses and drawbacks) of philosophy and math.
Heuristics and the Freedom to Create Ourselves
As for Scott’s talk of heuristics, I doubt his use of that term is standard in cognitive science. We can define our terms as we like, as long as we’re upfront about it, but I don’t see why the fact that a heuristic is an evolved quasi-algorithmic process that skips over various steps so as not to use up precious mental energy entails that a heuristic works best under only limited circumstances. On the contrary, the notion of “specialized heuristics” strikes me as oxymoronic (unless “heuristic” is taken more broadly to mean any procedure that helps in learning, which would include the algorithm). It’s the algorithm that’s limited because it can be over-specialized, not the heuristic. A heuristic isn’t like a giraffe’s neck, for example. The giraffe has all its stock in one company, as it were, and its long neck makes certain tasks very awkward for that animal. Likewise, the more steps you pile into an algorithm to prevent any possibility of error due to the system’s improvisation, the more narrowly you define the conditions under which the program can succeed. For example, take a recipe for baking a cake. If this recipe is an algorithm, the recipe must list all of the steps to be followed, leaving nothing to chance. This means you must have on hand all of the ingredients to complete the steps. If you lack an ingredient, the recipe won’t work! The algorithm will grind to a halt and you’ll be spinning your wheels, unable to complete the process. But suppose the recipe is a heuristic, so that instead of specifying exactly what you have to do, such that even a robot could follow the procedure, the recipe says something like, “Add ingredients X, Y, or Z in whatever measurements you like; it’s up to you, since this part is just a matter of taste.” In this case, the recipe has more domains of application, since now the recipe will work if you have Y but not X or Z, or Z but not X or Y, and so on.
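The contrast between a brittle algorithm and a flexible heuristic can be sketched in code. This is only a toy illustration of the cake-recipe analogy, not anything drawn from Scott’s writing; the function names and ingredient lists are invented for the example.

```python
def bake_algorithmic(pantry):
    """Algorithm: every ingredient is mandatory, so one missing item halts the process."""
    required = ["flour", "eggs", "butter", "sugar"]
    for item in required:
        if item not in pantry:
            # The algorithm grinds to a halt here; no improvisation allowed.
            raise KeyError(f"missing {item}")
    return "cake"

def bake_heuristic(pantry):
    """Heuristic: a few core steps plus a 'use whatever fat you like' rule of thumb."""
    core = ["flour", "eggs"]
    substitutable_fats = ["butter", "margarine", "oil"]  # X, Y, or Z: a matter of taste
    if not all(item in pantry for item in core):
        return None  # even a heuristic offers no guarantee of success
    if any(fat in pantry for fat in substitutable_fats):
        return "cake (some variant)"
    return None

pantry = {"flour", "eggs", "oil", "sugar"}  # no butter on hand
# bake_algorithmic(pantry) would raise KeyError: it demands butter specifically.
print(bake_heuristic(pantry))  # the heuristic succeeds with a substitute fat
```

The point of the sketch is the asymmetry: the algorithm fails outright in any pantry missing one listed ingredient, while the heuristic succeeds across a wider range of pantries precisely because it leaves some steps underspecified.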
So as I understand the distinction between algorithms and heuristics, it’s the algorithm that’s in danger of being overspecialized, while the heuristic is more flexible and has a greater range of potential applications. The heuristic ignores certain information as inessential, which means the heuristic is open to being tried in different contexts. You try the simplified procedure and see if it works with this or that ingredient, but unlike with an algorithm, there’s no guarantee of success. What you get instead of that guarantee is precisely greater freedom of application. In fact, “heuristic” is often synonymous with “rule of thumb” or with “trial and error process,” meaning a process that’s trotted out as a last resort in many different situations because of its flexibility. So this whole business of saying we shouldn’t trust our intuitions, because they’re heuristics and therefore they weren’t adapted to informing us about our inner nature has little merit, as far as I can see. It’s true that with any heuristic, if you apply it here rather than there, there’s no guarantee of success. Still, unlike with an algorithm, you have at least a chance of success with the heuristic even under strange or unforeseen circumstances, whereas an overspecialized algorithm (thought process) would land you flat on your face when you’re out of your element. This is why computers, for example, look amazing when doing math, but foolish when trying to understand emotions or the history that makes for cultural meaning. Algorithms aren’t flexible enough to deal with such subjective matters. For those, you might need intuitions and rules of thumb.
The fact that we didn’t evolve a reliable way of processing inner information doesn’t preclude the possibility of hitting upon some truths with intuitions. True, there’s little reason to think we’ll learn about neural mechanisms just through introspection, but this assumes we’re identical with those mechanisms. As I suggested above, the point of normative discourse may be not to inform us about any such mechanistic fact, but to transform us from an animal, which is indeed identical with its bodily mechanisms, into a civilized person who extends his or her body in the form of social and technological systems. And those latter systems are observable by our reliable outer senses, so they can’t be so easily gainsaid.
What’s the relevance of technological extensions of the mind, for example? Well, as I write elsewhere, the modern philosophical discussion of meaning and value takes for granted only the relatively recent use of symbols, which assumes symbols are supposed to add up to statements that correspond with facts. Our ancient ancestors apparently had a rather alien mindset, the so-called mythopoeic mode of cognition, as expressed by their bizarre myths and religious practices. What were symbols to the ancient Egyptians or Babylonians, let alone to their ancestors, who believed in magic and who had no thoroughgoing distinction between subject and object? On the science-centered reading, the ancients were simply deluding themselves with panpsychist fantasies of an enchanted world. On a more charitable interpretation, the ancients were using normative conceptions in an early phase of converting themselves from animals into people. It doesn’t matter so much what they were saying; what matters is the effect of that step in the processes of personalizing ourselves and of creating civilization. For example, the empirical falsehood of ancient myths is irrelevant if those myths aren’t theories. What they are instead are phenomenological journals, poetic records of what it felt like subjectively to be a newly evolved person living in a particular time and practicing the linguistic powers to express not just reason, but imagination, emotion, and willpower. If we look at everything through science-tinted lenses, we miss that forest for the trees. Of course mythical and normative discourses are likely not factual. There’s no such thing as goodness. What there is instead is a creative, natural process of evolution, which transformed certain hairy primates into people who tell stories and who obsessively turn our more threatening, natural environments into technological, functional ones that fulfill our myths about our elevated status, by serving us as if we were gods. 
Ironically, I speculate, the consolation of technology is that far from furthering scientific disenchantment of nature, our machines re-enchant the world by making something like the mythopoeic mindset viable once more.
This is where exaptation comes together with heuristics. Both are matters of flexibility, deriving from the fact that there’s no mind dictating what has to evolve. A mechanist has reason to be open-minded at this point. I’m not talking about the evolution of anything supernatural or magical. What I’m saying is that if a species has its survival taken care of by certain reliable means, such as by its knowledge of natural mechanisms and thus its mastery of weapons and other tools, that species is free to play, to develop heuristics, tinker with its onboard faculties, and see what becomes of that experimentation. That’s apparently what our ancestors did. They used language to gain control of their thoughts, which gave them ways of organizing and regimenting their mental states. That was how certain animals learned to personify themselves. Lots of animals have weapons; where we differed was the magnitude of our curiosity, creativity, and self-control, which exploded once our skills at surviving together (by using fire and farming and building shelters, and so on) gave us the luxury of free time. We told stories (myths and philosophical speculations), which broadened the mind and tamed our behaviour. The truth status of those stories is irrelevant from the mechanistic perspective, but that doesn’t mean the mechanist can afford to dismiss them, because those stories and delusions may be instrumental in an evolutionary process with which we must contend. Refuting our myths from a science-centered perspective will have zero effect if they operate on a nonrational level. The Age of Reason hasn’t come close to ending superstitions, because it’s not enough to understand our cognitive biases; we need a practice, a nascent posthuman lifestyle to develop the form of life that matches our postmodern ideals.
Metaphors and the Mechanist’s Neutrality
Finally, “mechanism,” “gadget,” and even “heuristic” all derive from commonsense experience of our artifacts, which presuppose normativity. “Mechanism” became a popular description of a natural system during the Enlightenment, when scientists struck a deistic compromise with the theistic masses. Early modern scientists affirmed that there is a God, but maintained that his creation runs more or less by itself—like the machines we create. This was a metaphor, and if we leave behind deism for atheism, the metaphor loses its rationale. Maybe the word still has some use when applied to natural systems like the brain. Words are free to change their meaning if we find the new meaning useful. But this apparently vestigial use of “mechanism” is suspicious. Naturalists should avoid confusion by coining words that express the radical, one-sided philosophical implications of naturalism: no meaning, purpose, goodness, God, and so on.
Notice that “natural process” is likewise metaphorical (and technically oxymoronic), since “process” is again a teleological notion, having to do originally with a series of actions directed towards some end, as in the process of building a fire. The notion that the brain or a cognitive capacity is a “gadget” is obviously metaphorical, as I’ve said. The point of this metaphor, I take it, is that, to the extent that mental processes are like gadgets, we shouldn’t assume they’re equally useful in all contexts, to say the least. You can’t tell time with just a chair, for example. And indeed, worldviews, or thoughts we deliberately put together, might be compared to gadgets, but the metaphor is stretched when we speak of evolved gadgets, as Scott does. He says, “Evolution has given me all these great, normative gadgets.” Here, you don’t have to go far to see what’s implicit in this metaphor, namely the connotation that the gadget’s function derives from the designer’s normative thought about which effects are good, as it does in the case of a human-made gadget. But in evolution there’s no such designer, so the metaphor is misleading. Again, in the cognitive scientific context, “heuristic” derives from computing. Computers implement algorithms or heuristics, because we interpret their internal changes as steps that follow the rules we program into them. Applying that anthropocentric discourse to products of natural selection leaves us with connotations to which the “mechanist” or naturalist isn’t entitled. Nature doesn’t program anything into us, our neurons don’t follow rules (unless we’re consciously programming ourselves), and there’s no intended end of our behaviour as far as natural selection is concerned. And let’s not even get into “progressive naturalism.”
What’s the upshot of this point about metaphors? Well, once we strip away the anthropocentric meanings of the naturalist’s terms, we’re left with a more neutral viewpoint, I think, which should be open to the utility of normative and semantic concepts. Where Scott and I should agree is that there may be a big transformation afoot. Normative concepts were instrumental in adapting our animalistic ancestors to the niche in which they’d have to function as people, as creatures that transcended what they used to be. Myths, delusions, and technology play roles in that transformation. Perhaps we’re losing faith in that way of life, because of technoscientific progress, and so we’re searching for a new way. Perhaps we’ll have to give up the old ways of thinking, to turn us into creatures that can survive in some new domain, such as cyberspace or outer space. My point is that we should be charitable and thoroughly “mechanistic” in our interpretation of philosophical and religious speculations. Let’s not dismiss them on empirical grounds, by presupposing an ultrarationalistic worldview that’s preoccupied with knowledge and with actual facts, because we might then miss transformations that lead to future facts. Normativity may have a causal role to play in such transformations, as may philosophy as the search for a certain way of life.
In fact, Scott’s metaphor of normativity as an array of “gadgets” or functional heuristics is consistent with what I’ve said here about the evolutionary role of fictions and delusions. We both suspect that the manifest image, the ordinary conception of the self, corresponds to no reality, that that self doesn’t factually exist. But we seem to differ on the implications of this naturalism and on what to make of philosophy. Scott thinks philosophy is in big trouble to the extent that it takes the folk picture of the self more seriously than it takes the scientific one. But this science-centered framing of the problem is insufficiently mechanistic, since it credits scientific theories more than philosophical “sketches,” whereas mechanistically, which is to say instrumentally speaking, all symbols are equally meaningless, there being no such thing as meaning. If Scott’s point is that science is more efficacious or useful in evolutionary terms, compared to philosophy or the folk conception of the self, his point is far from obvious. You see, if we accept the radical implications of naturalism, we must be more open-minded than before, not less so. A true pragmatist will accept whatever works. With no ideology to take seriously anymore, with no commitment to ideas as brainchildren, the inchoate posthuman has less reason to judge or to exclude. We must be neutral in considering technoscience, naturalism, theistic religion, and normative philosophy all as processes, mechanisms, global developments, and the like. A radical naturalist has no basis for saying that religion is bad or false, for example, if this naturalist thinks only in terms of context-dependent transformations of systems. Now, I don’t think this necessarily lands us in postmodern relativism, because I think aesthetic standards remain. My question is whether the “mechanist” has some other standards to license the devaluation of philosophy compared to science. 
If philosophy isn’t after the facts, we should watch what it does and see how it fits into the bigger picture.
Authentic philosophy, as distinct from science or religion, trains us to be a type of person. As I say in a reply to another of Scott's articles, this is complicated by esoteric and exoteric, or elite and mass social functions. Here’s how the evolutionary transformation might work, in a nutshell. The masses personify themselves by trusting in myths that function as self-reinforcing delusions. That keeps civilization running; in particular, it maintains our luxury and our freedom to create ways of life. The philosophical elite stand apart from this process, not as godlike controllers, but as marginalized observers who see the tragedy at work. Philosophers are trained to be skeptics, to ask endless questions and to take nothing on faith. They know there are no gods or moral properties, as matters of fact, but they also suspect that these notions are part of some larger turn of events. They’re awestruck by nature’s audacity, as it were: our self-deceptions may be instrumental, in which case, to borrow the inadequate and potentially misleading metaphor, we’re cogs in a machine. Philosophers are specially equipped to know this, and wisdom is something like the ability to live well in spite of those alienating doubts. What can it be to live well if there’s no such thing as goodness and normativity isn’t factual? Whatever it is, it must be equal to a natural turn of events. Maybe the wise person sees how events are largely going and realizes there’s some role for skeptical outsiders, so that even they can be part of the greater whole.