Mechanists and Transcendentalists, Kant we all just get along?

Scott Bakker’s article, Necessary Magic, is a trenchant rejoinder to my article, Scientism and the Artistic Side of Knowledge (SASK). In the following response, I’ll try to clarify some of the relevant issues in our discussion, and then I’ll address the central points of disagreement. As indicated by this article’s title, I think that, a scientistic interpretation of cognitive science notwithstanding, BBT’s mechanistic self-image is consistent with a transcendental interpretation of how we appear to ourselves through introspection. We are not factually what our intuitions say we are, but that matters most to those who assume a scientistic conception of knowledge. If we act as good mechanists and ask what the intuitive self-image is efficient at doing, we should be led to agree with Scott when he says that that self-image is a lie. So we’re good at lying to ourselves, and indeed we’re naturally built to do just that, perhaps because we can’t stomach the natural facts. We retreat to the matrix of illusions, as it were, and because scientists are bent on discovering the underlying facts, we could use a strategy for heroically dealing with both perspectives, since both seem inevitable for machines like us. That’s where aesthetic, ethical, and existential standards can come into play, and so I think Scott’s project and the philosophy I call existential cosmicism are largely harmonious.

Scientism and Transcendentalism

Now, then, to the preliminaries. SASK was motivated by the debate between Scott and Terence Blake. Blake contrasted scientism with pluralism, and I was interested in how far the scientistic line can be pushed, which is why I wrote about scientism in the context of BBT. But is BBT scientistic or not? “Scientism” has a nonpejorative core meaning, but also pejorative connotations. According to the core definition, scientism is the belief that the sciences are the only disciplines that supply us with knowledge. Scott says that “humans are theoretically incompetent, and that science is the one institutional prosthetic that clearly affords them some competence.” This seems scientistic in the core sense, although he also says that true claims can “drift about” in nonscientific philosophy. So if “scientism” is tweaked to mean that science is the only reliable source of knowledge, Scott’s view is scientistic, for whatever that nonpejorative characterization is worth.

The reason the word is usually read as pejorative, though, is that philosophers have reached some consensus that scientism refutes itself. After all, scientism is a philosophical rather than a scientific proposition. Just ask yourself, then, whether the claim that science is the only reliable source of knowledge is itself reliable. If not, we needn’t trust that all knowledge comes from the sciences, and if so, we have the paradox of knowledge that comes reliably from a nonscientific discipline (philosophy). Either way, scientism is unstable. So is BBT scientistic in this pejorative sense? This raises the issue of presuppositions, which is perhaps my main point of disagreement with BBT.

Scott argues for the exclusive reliability of scientific knowledge by way of induction: scientific methods have been the only ones to lead to cognitive progress, whereas many intuition-based myths and prejudices have fallen under the scientific blitzkrieg. But as the philosopher David Hume showed, this kind of simple inductive reasoning, which generalizes on the basis of particulars, isn’t entirely rational, or strictly logical. This reasoning rests on faith that the future will be like the past. That assumption goes beyond the data. As Hume put it, we form the “habit” of mentally connecting sensations, in something like the way a dog or a mouse is trained to think in restricted ways, to project patterns onto the stimuli. Immanuel Kant went further when he said that the mind has innate ways of thinking, so that self-knowledge can be nontrivially indubitable (or “synthetic a priori,” as he put it). If we instinctively trust that the future will be like the past, for example, we’ll tend to reason inductively, just as, if we’re prone to numerous cognitive biases (as cognitive scientists have shown we are), those biases will colour our conception of the world. I’ll call this thesis, that we have knowably innate ways of thinking, “transcendentalism.”

Now, at first glance, BBT is transcendental since BBT says the mind is made up of mechanisms that aren’t (yet) controlled by us. We think in the way they cause us to think, although scientists have found ways to get around some of those mechanisms, or at least to add some methods to them. But where Scott differs from the transcendentalist is that the latter bestows the title of “knowledge” on some of the fruits of innate mental processes, whereas Scott maintains that we’re theoretically incompetent, that intuitive self-knowledge is unreliable because our innate ways of thinking are bound to mischaracterize what’s going on, to work with illusions rather than reality, with mere artifacts of the limitations of those mental processes rather than with the mechanistic underpinnings. This is a crucial question, to which I’ll return soon: When our innate mental processes--which Scott and cognitive scientists generally think of as mechanically implemented heuristics or neurofunctions--are turned on themselves, without the benefit of scientific testing, is their output in any way useful?

It’s worth pointing out, I think, that strictly speaking, Kant would agree that outputs of innate mental processes are artifacts of those processes and even that they’re “subreptive” (misleading, deceptive). That’s why he distinguished between appearances and unknowable reality (phenomena and noumena). We can resort to the Matrix metaphor and say that we live in a world of superficial appearances that’s partly constructed by the way we’re built to think, by our modes of perception, primitive concepts, cognitive biases, intuitions, irrational leaps of faith, and so forth. Even when science investigates the real world, we’re bound to apply our innate thought forms to it to some extent and thus to humanize what’s alien to us, turning the noumena into phenomena. Scott says, though, that scientific knowledge of natural mechanisms isn’t significantly tainted by any such dubious, nonscientific biases, and so scientific self-knowledge ought to replace our more native floundering. The question, then, of whether our nonscientific attempts to know ourselves are at all competent, reliable, or otherwise worthwhile depends on what’s going to count as knowledge.

Is BBT Scientific or Philosophical?

Before I pursue those key questions, I want to clarify a couple more preliminary issues. Scott chastises me for engaging in mere philosophical speculation, whereas what the critic of BBT needs, he thinks, is scientific evidence “that accurate metacognition is not only computationally possible” but “also probable.” This raises the question of whether BBT itself is scientific or philosophical. I think the answer is that it’s both. The scientific part of BBT is the hypothesis of how certain heuristics work in the mind. In his summary article, The Crux, he lists four theses that make up BBT. I’d say that the first two, which he calls trivial, are the scientific ones. They are that cognition is thoroughly heuristic and that metacognition and cognition are continuous, which is to say that the difference between the inner and the outer environments isn’t Cartesian or metaphysically substantial, and therefore that we can know about either only in similar ways. But BBT also has a philosophical side, which has to do with its interpretation of those scientific facts. So the other two theses, which he says are more controversial, are the philosophical ones. These are that “Metacognitive intuitions are the artifact of severe informatic and heuristic constraints. Metacognitive accuracy is impossible” and that “Metacognitive intuitions only loosely constrain neural fact. There are far more ways for neural facts to contradict our metacognitive intuitions than otherwise.” These philosophical propositions and their implications are the ones that are consistent with scientism, that imply that folk psychology is full of errors, and so forth.

Now, I defer to Scott and to cognitive science in general when it comes to showing how our mental processes are implemented by neural mechanisms. A disagreement on those grounds would indeed call for scientific evidence, but my article doesn’t take issue with that side of BBT. The disagreement is on the philosophical issues, and here scientific evidence isn’t likely to be decisive one way or the other. The question of how we should interpret intuitive self-knowledge, given that the mind is a natural entity, is going to turn on analysis of concepts like “knowledge” and on whether new concepts can be developed partly by philosophical artistry (speculation) to wrap our minds around the kind of conflicting evidence that makes for a philosophical problem in the first place. So the fact that BBT has a scientific side doesn’t mean that a legitimate criticism of BBT must be scientific. When philosophy’s on the table, we play by philosophy’s rules. Scott will say there are no such rules, according to BBT, because in so far as philosophy is nonscientific, philosophers appeal to intuitions, and intuitions are systematically misleading. But if I’m right that half of BBT is philosophical, we’re back to the questions of self-refuting scientism and of transcendentalism.

Mechanisms and Biofunctions

There’s one last preliminary issue I want to address. Scott talks a lot about mechanisms, but instead of defining the word he wants to leave the matter open to encompass future scientific work, since he derives the term from cognitive science. A mechanism isn’t just a string of concatenated events or even just any causal relation; rather, a mechanism is a series of systematically coordinated causal relations that form something like a machine. In biology, the machines are built by natural selection. A machine has parts that work in tandem to produce the whole system. Scott tells me he endorses the philosophical work on mechanisms by William Bechtel and others called the New Mechanists, and they define “mechanism” in terms of a hierarchy of capacities deriving from the structural features of a system’s components, which capacities together produce some phenomenon that’s explained in such mechanistic terms. So a biological machine is really just an assemblage of mechanisms within mechanisms, whose capacities or functions are carried out because of the physical properties of the parts and sub-parts of the system and because of how those parts are organized.

If this is the kind of mechanism that’s relevant to BBT, though, I think there’s a problem with Scott’s rhetorical question, “And if it’s mechanisms doing all the work, then what work, if any, does normativity qua normativity do?” The problem is that if semantics, normativity, and the entirety of our intuitive self-image derive from neuromechanisms doing all the work, there’s no mechanistic basis for saying that these results of their work are bogus or mere artifacts in any pejorative sense. Consider a printed page containing text but also smudges or other extra marks which are artifacts of flaws somewhere in the copier’s mechanisms. The distinction here between the mechanistic function and the artifact, or between the function and the malfunction, is based on the presumed intention which causes the copier to be built in the first place. We want text, not the smudges.

And all we have in the case of biological mechanisms are natural selection, genetic drift, and the like. So the question becomes one of whether the so-called illusions, subreptions, or artifacts of the manifest image, namely semantic, normative, and other such judgments, have an evolutionary role. If so, they’re legitimate functions of our neural mechanisms, not malfunctions, whatever some of their accidental harms to us might be. Even if these functions are exaptations, or the results of trial-and-error tinkerings by our snooping forebears, there’s no mechanistic basis for favouring some biological effects over others, as long as the effects are mechanically produced. (This is just Terence Blake’s point about pluralism.) Certainly, a semantic (truth-centered) or a normative reason for saying we should approve of how the brain processes external stimuli, but disapprove of the brain’s processing of itself, is off the table for anyone who thinks those judgments are bunk.

Now, Scott maintains that BBT shows how our neural mechanisms break down when applied to themselves. The brain is blind to itself and that’s why we shouldn’t trust what the brain says about itself, by itself. This is why intuitions are neural malfunctions, outputs which lead us astray from the reality of human nature. The reality is that, contrary to popular opinion, we’re machines (mechanical systems), not people. Our intuitive self-image is a distortion caused by the overreach of our cognitive faculties. These faculties evolved to process sensory information, which is why the sensory connections that take up so much space in the brain, especially the visual ones, end in sense organs that point outward, at the part of the environment that contains food and potential mates and threats, leaving us in the dark about the self behind the curtain. At best, though, this means that the intuitive self-image isn’t likely factual, or as Scott says, “accurate.” Still, that image can have other uses, in which case I see no mechanistic reason for casting aspersions on this product of neural mechanisms. And the transcendentalist may be able to live with that.

Postmodern Relativism

What I’m saying is that a mechanistic view of the self doesn’t preclude a transcendental interpretation of the manifest image. A transcendentalist would be happy to concede that our innate mental faculties are naturally produced. We evolved to think as we generally do about ourselves and about everything else. That past is full of accidents, including genetic mutations which are instrumental to natural selection. So if the mind is a hierarchy of heuristics running on neural modules and those heuristics mechanically, reliably produce the manifest image when turned on themselves without the benefit of scientific oversight, so be it, says the transcendentalist. That’s just a scientific confirmation of what we already know, which is that the naïve view of ourselves is stable.

But is this intuitive self-image True? Are we really as free, rational, conscious, or precious as we like to think we are? This is like asking a slave of the matrix why she doesn’t give up the world of illusion and start living in the real world. The reason she doesn’t is that her brain is plugged into one world and not the other. If you pulled that plug, inserted it into a different world-generating machine, and filled her head with a qualitatively different set of experiences, she might come to think that her time in the matrix was indeed shallow. But as long as we’re bound by our present hardware limitations, including the fact that the brain is natively blind to its mechanical nature, we’re going to treat the so-called illusion of our intuitive self-image as real enough for practical purposes, just as the people who are hardwired to perceive the matrix will do their best to live in the terms set by that program. This is all just standard transcendental philosophy.

However, Scott will say that science is in the business of unplugging us from the world of illusions, of showing us what the movie calls “the desert of the real” (taking that phrase from the postmodern philosopher Jean Baudrillard). We bounce back and forth between two worlds, switching between the commonsense and the scientific views of the self. The latter shows us the mechanisms hard at work producing the intuitions we take for granted, and also tells us why our intuitions are mere caricatures compared to the scientific masterpiece. When we personalize our mechanical systems, we sketch a low-resolution caricature that distorts our inner reality, at best. Science alone tells us what we really are, and that threatens the naïve viewpoint, because we can no longer hide from science. The AI archons are coming to drag us out of the matrix, to horrify us with a vision that contradicts our feel-good myths, so that we’ll have to choose either to accept science’s dehumanizing theories or to lie to ourselves. The existential apocalypse is coming, because the folks with white coats and pocket protectors are coming for our ego, and when they get hold of it, undoing with their quantificational incantations the enchantment we’ve cast, we’ll be saddled with the Buddhist’s quandary but without the Buddhist’s training. That is, we’ll be nowhere, detached from our naïve image of ourselves, forced to keep telling the lies but unable to take them seriously because of what our ancestors long ago condemned us to see, when they cursed us by taking reason too far.

I think a postmodern transcendentalist would respond by wondering why we should surrender so readily to the scientific worldview. That worldview too is mechanically produced, the result of certain rational methods that socially build on our evolved heuristics (our curiosity, creativity, caution, and so on). The notion that a scientific theory is semantically True or normatively excellent, according to epistemic, aesthetic, or pragmatic ideals, is quite irrelevant from BBT’s mechanistic viewpoint. So why should we believe that we’re mechanisms and not people? If we’re interested in just the facts, we’re begging the question, since the bare facts are relevant only from the scientific perspective. Likewise, if we’re interested in what feels right, in what’s good for society or in what preserves our sanity, we’re presupposing judgments of relevance that are made with the intuitive self-image already in mind, thus begging the question against the mechanistic worldview (see this article for more on the issue of the choice between worldviews). And so we wind up with postmodern relativism and antirealism. There’s no reality, but only constructed worldviews and our job is to play by the rules of each, depending on which game we choose to play or on which questions we prefer to ask.

I’m not an antirealist, but before I explain how I think we should look at the matter, I think we can use this opportunity to reframe the point about scientism. Scientism becomes the contention that “Why?” questions can be replaced by “How?” ones, that the mechanistic, naturalistic worldview eclipses the intuitive, philosophical and religious ones, that there’s really only one game in town. In SASK, I say that the only way to coherently express this point is to turn scientism into a value-neutral prediction about the relevant probabilities. For one reason or another, we may indeed end up no longer asking “Why?” questions. But as to whether philosophical, religious, aesthetic, or moral questions are really mechanistic ones, this might be like asking whether the rules of Angry Birds reduce to those of Monopoly. 

Are Intuitions Epistemically Competent?

This brings me to the analysis of “knowledge.” Whether the manifest image counts as knowledge depends on what we mean by that word. Is knowledge the ability to map the world, to mentally represent natural mechanisms with the equivalent of blueprints, thus allowing the knower to reengineer the mechanisms, to have power over the world? This is a stereotype of the technoscientific conception of knowledge. It assumes pragmatic ideals as well as a semantic view of truth, so not even this conception is available from the mechanistic perspective. In addition, we could add social, aesthetic, or even existential ideals to our epistemology, which I’ll say a little more about in a moment. At any rate, a mechanist would have to tell just an impersonal, evolutionary story about certain neural functions that have strictly adaptive value in that they enable the replication of genes from one generation to the next. These functions wouldn’t map the world in any magical way, but would receive information from the environment, process the signals, and respond in ways that protect the genes. That would be the main mechanical role of knowledge.

But now the mechanist faces an awkward question: What if the intuitive self-image is needed for the fulfillment of that mechanical role? In particular, what if most people have to lie to themselves, personifying their mechanical identity, to stand being alive in nature? What if our ancestors embarked on the project of knowing themselves, hitting upon the kludge of the intuitive self-image, on the basis of paltry evidence, because they got too smart for their own good and needed myths to delay, at least, the existential apocalypse that afflicts those who step out of the matrix to behold the world’s horrible undeadness? The point here is that although a strictly rational conception of knowledge may do for certain purposes, in the big picture knowledge has a nonrational side. Intuitively, a belief that is known must be true but also justified, meaning that we must have reasons to show others that we’re entitled to that belief, that it wasn’t a lucky guess and that we’ve fulfilled our social and philosophical obligations as truth-seekers. In mechanistic terms, your processing of the environmental signals includes your sending of signals to other information-processors (i.e., truth-seeking people), to reassure them that your channels for processing the data are functional. Either way, when it comes to evaluating the intuitive and the mechanistic self-images, we may find that these images are themselves instrumentally related. To fulfill our evolutionary function, which is what’s mainly relevant to someone with the mechanistic mindset, we may need our myths, intuitions, and speculations to mitigate the damage done to us by our relatively high intelligence.

Scott says I seem to want “intentionality to be both necessary and magic, to belong to this family of things that for reasons never made clear simply cannot be mechanically explained--or in other words, natural.” This isn’t so at all. Presumably, everything can be mechanically explained and thus naturalized. But this doesn’t mean “How?” questions replace “Why?” ones or that science is the only game in town. To be the sort of creatures BBT and cognitive science say we are, we may have to invent a counterfactual world, a new game in which we’re obliged to tell ourselves noble lies to survive. The manifest image is our matrix. In its scientific capacity, BBT explains how the intuitive self-image is mechanically produced, but the philosophy of BBT is scientistic, and so Scott downplays the potential for our intuitions to have advantages as well as drawbacks. He talks about how Western philosophy has been muddled for centuries because philosophers have been led astray by intuitions. But perhaps philosophy is like the American structure of government: strategically divided to disempower the masses and the demagogues who control them, to prevent tyranny. Perhaps, as Leo Strauss said, philosophy has esoteric as well as exoteric functions, the latter being to tell noble lies to those who prefer to be happy rather than eternally skeptical, to reserve enlightenment for the tragic heroes who can withstand the angst that’s the air breathed outside the matrix. Perhaps much postmodern Western philosophy functions now as a brake on science, to obscure the naturalistic worldview and to reassure the masses that it’s all just fun and games so they can go back to being happy, productive citizens.

So science tells us the facts, but if you’re aware only of the facts, you don’t know what’s going on, even given just a mechanistic conception of knowledge, because such a cursed machine that doesn’t entertain any commonsense or politically correct delusions will more than likely be unable to fall in love, have children, or hold down a job. That’s why scientists usually leave their mechanistic worldview at the office. The intuitions, myths, speculations, and cognitive biases are needed for the clever mammals that we are to function properly in evolutionary terms, which are precisely the mechanistic terms taken to be fundamental by the naturalist. So the naturalist can’t afford to dismiss semantics, normativity, and the rest of the intuitive self-image: the latter is needed as one of the causes included in the mechanist’s quasi-blueprint of the cognitive mechanism. Only if we’re largely irrational in our estimation of what we are will we act as predicted by cognitive science. Our mechanisms will function properly only if we often vegetate, turning to the matrix, which is just the world of inner hallucinations we inevitably perceive when we direct our mental processes back onto themselves. When we introspect, we don’t find the neural mechanisms, but we’re skilled at socialization, so we easily personify our inner life, interpreting the way our thoughts hang together in folk-psychological terms.

I say that the majority may need the illusions to survive, but tragic heroes may also need to cope with their precarious position in the limbo between the intuitive and mechanistic worldviews. These brave or foolish few appreciate that there’s no magic and that our intuitions about our inner nature are fanciful. But they’re also transcendentalists, meaning that they appreciate the absurdity and the horror of undead nature (of a world that mindlessly creates machines), and thus also the potential of certain mechanisms to behave strangely, say, by producing fictions to escape from that reality.

Where does BBT stand in this context? Again, I question only its philosophical interpretations and indeed only some of them. I agree that scientific knowledge of our mechanical identity may generate such cognitive dissonance that not even the slumbering masses can hold onto their illusions for long. In that case, we’ll enter a posthuman world, psychologically speaking, which is more or less beyond the event horizon. I agree also that the intuitive self-image, as it mesmerizes the more unreflective folks, is an uninspiring and indeed pathetic lie. As I say, the lie may be deemed noble if the alternative is apocalypse, but the reason I think ordinary and theological folk psychologies are rather pathetic differs from Scott’s. Scott compares the intuitive self-image to the mechanistic one and finds the former wanting on scientistic grounds, whereas I condemn existentially inauthentic self-images in contrast to authentic ones. The trick is not to lie so completely to yourself that you get carried away with your fictions, lose all humility, and start a wildly irrational religion based on embarrassing conceits. Instead, the respectable way to con yourself is to heroically occupy the space between the intuitive and the naturalistic self-images, to have them both in your mind at the same time, using the impersonal one to check your delusions of grandeur, but feeding off of the speculative one to sublimate your horror and angst, thus getting by in the existential game, which is yet a third perspective that synthesizes the other two in the way I’ve just outlined.

So are intuitions theoretically competent? If this question is about whether intuitions compete well on scientific grounds, producing knowledge of the facts, of natural mechanisms and so forth, the answer is surely no. But again, this is like asking why someone who’s playing Angry Birds isn’t simultaneously doing well at playing Monopoly. Moreover, in so far as scientific theories are ideal, and so “theoretical competence” means just “the ability to do what science does,” the question is loaded in this context, which is why I speak more generally of “epistemic competence” in this section’s title. The transcendentalist says only that we have knowably innate ways of thinking, not that when we think in those ways, that thinking puts us in touch with the facts; this is to say that some of our ways of thinking, or some aspects of knowledge, may not be empirical or scientific. We have a nonrational side that makes us mammals rather than just fact-recording computers. If we know that introspection is like a funhouse mirror that distorts our mechanical nature, presenting us only with an illusion of a unified, personal self, this still leaves us with epistemically relevant common ground. The point isn’t that our intuitive self-image is accurate or factual, but that it’s universal and stable precisely because it’s mechanically built into us, like a niche that’s bound to be filled.

The real questions, then, are whether intuitions are competent at doing something and if so, what that might be. Mechanically speaking, what can intuitions do? And what does science do, for that matter, once science itself is naturalized? As I said, the best answer to the latter question is an evolutionary story about how science allows us to dominate the planet and thus to preserve our genes. And as I said, intuitions may have their instrumental role in that very mechanical relationship between information-processors like us and the hostile environment. We need a matrix to vegetate, to distract ourselves so that we can efficiently perform our natural functions. But I think there’s another natural function of our intuitive self-image: the existential one of helping us to overcome natural horrors by more or less ascetically rebelling against them. The rules of that existential game would be largely aesthetic.

And now Scott might say: “These nonscientific games are irrelevant and foolish; they’re out of touch with the facts, and all that’s worth talking about and knowing are the mechanisms. There are no aesthetic ideals, and existential or religious goals of rebellion against nature are as preposterous as the premodern myths disposed of by naturalistic science.” That’s how things look initially from the mechanistic mindset, but my point has been that this mindset does undermine itself if it’s interpreted scientistically, on the philosophical level, since then the mechanist has to start thinking like a transcendentalist, asking whether the matrix of illusions and follies is needed instrumentally for the sake of our evolutionary function. Well, if the mechanist must entertain a conformist instrument, why not the existentialist’s rebellious one? If the intuitive self-image is needed to fulfill our mechanical role in nature, what’s needed to break free of the matrix and to survive as posthumans in the desert of the real? If we’re advocating a mechanistic worldview, we’ve got to think instrumentally rather than semantically or normatively, meaning that we’ve got to think of how to engineer efficient machines and mechanical relationships, based on the physical capacities of the available parts. Instrumentalism is thus the bridge between the mechanist and the transcendentalist, and once we start thinking instrumentally about our capacities, we can naturalize semantics, normativity, and existential aesthetics in BBT’s manner, by seeing them as mechanically produced fictions, but as necessary or perhaps useful ones.
