Source: D. Little, "Causal explanation in the social sciences" (link)
How can we distinguish between causal mechanisms and extended causal processes? Is the difference merely a pragmatic one, or is there some reason to expect that mechanisms should be compact and unitary in their workings? Is the children's story leading from the "want of a nail" to the loss of the kingdom a description of an extended mechanism or a contingent causal process?
A causal mechanism is (i) a particular configuration of conditions and processes that (ii) always or normally leads from one set of conditions to an outcome (iii) through the properties and powers of the events and entities in the domain of concern. This captures the core idea presented in the Machamer-Darden-Craver (MDC) definition of a causal mechanism (link):

Mechanisms are entities and activities organized such that they are productive of regular changes from start or set-up to finish or termination conditions. (3)

There is also an ontological side of the concept of a mechanism -- the idea that there is a substrate that makes the mechanism work. By referring to a nexus between I and O as a "mechanism" we presume that there is some underlying ontology that makes the observed regularity a "necessary" one: given how the world works, the input I brings about events that lead to output O. In evolutionary biology it is the specifics of an ecology conjoined with natural selection. In the social world it is the empirical situation of the actor and the social and natural environment in which he/she acts.
So mechanisms reflect regularities of input and output. In this respect they correspond to pocket-sized social regularities: observed and sometimes theoretically grounded conveyances from one set of circumstances to another set of circumstances. Take free riding as a mechanism arising within circumstances of collective action:
When a group of individuals confronts a potential gain in public goods that can be attained only through effective and non-enforceable collective action, enough individuals will choose to be free riders to ensure that the good is not achieved at the level desired by all members of the group.

This states a regularity (conditioned by ceteris paribus clauses): groups of independent individuals are commonly incapable of effective collective action. And it is grounded in a theory of the actor: rational individuals who pay attention to private costs and benefits but not public costs and benefits can be predicted to engage in free riding.

Now consider the mechanism described in social psychology as "stereotype threat" (link):

When subjects are exposed to signs of negative stereotypes of their group with respect to a given kind of performance, the average performance of the group declines.

This is a mechanism that can be identified in a number of different settings, both observational and experimental; and it can be combined with other mechanisms to bring about complex results. The substrate here is a set of hypothesized cognitive structures through which individuals process tasks and influence each other.
Now consider an instance of concatenation. Suppose we are interested in military mistakes -- weighty decisions that look in hindsight to be surprisingly poor given the facts available to the decision makers at the time. Our theory of the case may involve three separate mechanisms that interfere with good reasoning: stereotype threat, inordinate hierarchicalism, and the effects of agenda setting. These are independent social-cognitive mechanisms that impair group decision making. And our theory of the case may attempt to document the workings of each on the eventual outcome and the ways in which they aggregated to produce the observed decision.
Are mechanisms thought to be simple, or can we consider composite mechanisms -- mechanisms composed of two or more simpler mechanisms? Our definition above required that a mechanism link I to O with a probability high enough to count as "likely". This puts a practical limit on the degree to which simple mechanisms can be composed into composite mechanisms. Take the sequential case: mechanism M leads from i to j with probability 0.90, and mechanism N leads from j to k with probability 0.90. Now let V be the sequential composite mechanism "M then N". Assuming the two mechanisms operate independently, V leads from i to k with probability 0.90 × 0.90 = 0.81. The probability of the final end state given the initial starting condition drops with each additional mechanism we insert into the composite. So eventually concatenation will bring the probability of an antecedent leading to its consequent below the threshold of likelihood required by the definition of a mechanism.
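To make the arithmetic concrete, here is a minimal sketch in Python. The 0.75 threshold is a hypothetical cutoff (the text does not fix a number for "likely"), and the links are assumed to fire independently:

```python
# Reliability of a sequential composite mechanism: if each link fires
# with probability p, a chain of n independent links fires with p ** n.

LIKELY_THRESHOLD = 0.75  # hypothetical cutoff for counting as "likely"

def chain_reliability(link_probs):
    """Probability that every mechanism in the chain fires,
    assuming the links operate independently."""
    prob = 1.0
    for p in link_probs:
        prob *= p
    return prob

# The example from the text: M (i -> j) and N (j -> k), each at 0.90.
print(chain_reliability([0.90, 0.90]))  # 0.81

# Each added link erodes reliability; count how many 0.90 links it
# takes to fall below the threshold.
links = []
while chain_reliability(links) >= LIKELY_THRESHOLD:
    links.append(0.90)
print(len(links), round(chain_reliability(links), 3))
# -> 3 0.729: three links already drop the composite below 0.75,
#    so it no longer counts as a mechanism under this cutoff.
```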
McAdam, Tarrow and Tilly refer to a concatenation of mechanisms in a concrete instance as a process, not a higher-level mechanism. The reason for this, it would seem, is that processes are highly contingent in their workings precisely because they incorporate multiple mechanisms in series and parallel, all of whose causal properties are probabilistic. So there is no reason to expect that processes describe reliable associations between beginnings and endings.
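A small extension of the sketch above, again with made-up probabilities, shows why such a process need not yield any reliable start-to-finish association: composing mechanisms in series and in parallel quickly produces end-to-end probabilities well below mechanism-level reliability.

```python
def series(*probs):
    """All mechanisms in the sequence must fire (independent links)."""
    out = 1.0
    for p in probs:
        out *= p
    return out

def parallel(*probs):
    """At least one of several alternative mechanisms fires,
    assuming independence."""
    fail = 1.0
    for p in probs:
        fail *= 1.0 - p
    return 1.0 - fail

# A hypothetical process: two mechanisms in series feeding a stage where
# either of two further mechanisms can carry the effect forward.
p_process = series(0.90, 0.85, parallel(0.60, 0.50))
print(round(p_process, 3))  # 0.612 -- well below mechanism-level reliability
```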
This implies that mechanisms should be conceived at a fairly low level of compositionality: to preserve the likelihood of the association between antecedent and consequent, we need to identify fairly proximate mechanisms with predictable effects. This doesn't mean that a mechanism has little or no internal structure; rather, it implies that the internal structure of a mechanism fits together in such a way as to bring about a strong correlation between cause and effect. The mechanism of stereotype threat mentioned above presumably corresponds to a complex set of processes within the human cognitive system. The net effect, however, is a strong correlation between cause (expressing a stereotype about performance to an individual) and effect (suppressing the level of performance of that individual).