Introduction
Some philosophers search for the mark of the cognitive (MOC): a set of individually necessary and jointly sufficient conditions defining cognition (Adams, 2019; Adams & Aizawa, 2001; Rowlands, 2009, 2010). They claim the MOC is necessary to allow cognitive science to develop correctly. Should cognitive science investigate distributed brain-body-world systems, as argued by the extended mind thesis (Walter, 2010; Wheeler, 2010, 2019)? The answer depends on whether such brain-body-world systems qualify as cognitive systems. Are botany and microbiology parts of cognitive science? Again, the answer depends on whether plants and bacteria qualify as cognitive systems (cf. Adams, 2010, 2018). And to know whether these systems qualify as cognitive, we need to know the MOC.
Here, I claim that these philosophers search in vain: at least at present, no MOC can be provided. In §2, I examine the literature concerning the MOC. I identify (some of) the reasons motivating the search - and thus (some of) the desiderata the MOC should satisfy (§2.1) - and highlight an important tension in the literature (§2.2). To anticipate, the tension is that whereas the reasons motivating the search suggest the MOC should capture a scientific (or even natural) kind, the role naive intuitions play suggests philosophers are actually after our intuitive notion of cognition. I then tease apart these two projects, and argue that, at least as things stand now, both projects are bound to fail. In §3, I claim that our intuitive notion of cognition (if it exists) cannot be captured by a MOC; and, even if it were to be captured by a MOC, it wouldn’t satisfy the desiderata motivating the search. In §4, I claim that, as things stand, we cannot identify a MOC convincingly capturing cognition as a scientific kind. This is due to the way in which cognitive science is fragmented into numerous research traditions, each suggesting (at least implicitly) a MOC. Since the MOCs thus suggested are often mutually exclusive, we must choose one. Yet, since all these research traditions seem equally worthy of pursuit, we lack any principled reason to privilege one MOC over the others. §5 considers some objections to, and consequences of, my claim. §6 briefly closes the paper.
Searching for the mark of the cognitive
Some desiderata…
We can all tell apart paradigmatic instances of cognition (e.g. remembering) from paradigmatically non-cognitive processes (e.g. sneezing, cf. Adams, 2019). Why, then, should we seek the MOC? This is an important question. Answering it in a clear manner makes explicit what we want the MOC to do, thereby identifying the desiderata it must satisfy. Following Akagi (2016, 2018), I identify three motivations fueling the search, and thus three desiderata the MOC should satisfy. I focus on these three only because they’re sufficient for my arguments in §3-§4 to work. I don’t want to suggest my list is complete. Likely, there are other reasons to seek the MOC (and so, other desiderata in addition to the ones I will consider here).
Motivation #1: As a whole, cognitive science has expanded, and partially shifted, its focus away from “higher thought”, towards skilled sensorimotor interactions (Clark, 2001; Dennett, 1987). Early in its development, cognitive science was mainly interested in “high-level”, perhaps exclusively human, phenomena. Early AI researchers, for example, were interested in making computers able to play checkers (Samuel, 1967). They were sure that their procedures held the key to thought: indeed, they thought that computers in the ’60s, whilst unable to move and perceive appropriately, were able to think (Selfridge & Neisser, 1960). The zeitgeist now seems inverted: AI researchers focus on sensorimotor interactions (e.g. Tani, 2016), and the consensus seems to be that whilst computers might perceive like us, they definitely don’t think like us (Mitchell, 2019). And whilst “higher thought” is still an explanandum of cognitive science, the emphasis is now often placed on its sensorimotor roots - for instance, highlighting the number of ways in which the cortical structures for “higher thought” depend on the ones in charge of our sensorimotor couplings (Anderson, 2014; Barsalou, 1999; Cisek & Hayden, 2022). Sensorimotor interactions precede “higher cognition” not just in phylogeny and ontogeny, but now also in the order of explanation.
Note how such a shift in focus generates worries concerning the distribution of cognition. Only humans (and some computers) play checkers. Only humans (and perhaps some computers) understand natural languages. If these are the central cases of cognition, then cognizers are relatively few: some mammals, maybe some computers. In contrast, if the central cases of cognition consist in some sensorimotor interaction, the number of cognizers is much higher, including all multicellular animals, and arguably simple robots (Braitenberg, 1984), plants (Calvo Garzon, 2007) and single-celled organisms (Lyon, 2015). Maybe even some planetary-scale processes could be construed as cognitive processes (Frank et al., 2022). So, who’s in? Which systems should cognitive science study? The MOC should enable us to answer. It should give us an extensionally adequate definition of cognition. Hence the first desideratum.
Desideratum #1: The MOC should be an extensionally adequate definition of cognition: i.e. a set of individually necessary and jointly sufficient conditions the satisfaction of which identifies all and only cognitive systems (or states, or processes)
This seems an important desideratum, whose centrality is greatly emphasized in the literature concerning extended cognition (Adams, 2010; Adams & Aizawa, 2001, 2008; Rowlands, 2009, 2010).
Motivation #2: Cognitive science is extremely fragmented. Not only have the paradigmatic explananda of cognitive science changed; the explanantia have changed too, and dramatically so. Yet, “change” might not be the right word - it might suggest a gradual maturation. But that’s not what one sees when looking at the history of cognitive science. Rather, one sees the splintering of a (relatively well defined) research tradition into a myriad of different and competing research traditions, each rhetorically presenting itself as a “Kuhnian revolution” replacing all other research traditions and letting cognitive science run free from the shackles of ignorance.1 Notice that here I am using “research tradition” technically, to name a (fairly well-defined) set of theoretical assumptions, modeling tools, experimental procedures and other research practices a group of scientists uses to investigate a set of phenomena of interest (Laudan, 1977, p. 81).
Here’s a (simplified, popular and whiggish) history of cognitive science.2 It all began in the ’50s with the cognitive revolution: a multidisciplinary enterprise guided by an operative definition of cognition as symbolic (digital) computation (Newell & Simon, 1976). Then came the connectionist revolution. Connectionists proposed new computational models loosely inspired by the cerebral cortex (Rumelhart & PDP group, 1986), inadvertently redefining cognition as subsymbolic computation (Churchland, 1992). As these models grew in complexity, cognitive scientists discovered they were often better off using a different branch of mathematics to deal with them; namely, dynamical systems theory. Hence the dynamicist revolution: computation faded into the background while cognition became the swirl of activity of a self-organizing system (Thelen & Smith, 1994; Van Gelder, 1995). This system might, but need not, be identical with the brain, as the “4E” revolution quickly claimed (Clark, 1997). The discovery that such a swirl of activity is a form of Bayesian inference (Parr et al., 2022) caused another revolution, accompanied by an appropriate redefinition of cognition as inferential prediction (Corcoran et al., 2020; Kiverstein & Sims, 2021). In parallel, “old” computational ideas have been revamped by the cognitive neuroscience revolution (Boone & Piccinini, 2016).
The above are all different and competing research traditions equally worthy of pursuit. They are different, for they all endorse different sets of theoretical assumptions, use different models and modeling techniques, and resort to different explanatory strategies abiding by different explanatory standards (Lamb & Chemero, 2018; Piccinini, 2020). They compete, for they aim (or, at least, publicly declare to aim) at explaining the same thing - namely cognitive processes.3 And they are all equally pursuit-worthy, at least to the extent that none of them is obviously false and they are all able to generate results counting as genuine progress within the boundaries of the tradition. But which is right? The MOC should help us answer. By telling us what cognition is, it should identify a scientific (perhaps natural) kind supporting relevant scientific generalizations and principles (Adams & Aizawa, 2001, 2008; Buckner, 2015; Newen, 2015). And by so doing it will point us towards the research tradition(s) to pursue:
Desideratum #2: The MOC should allow us to identify which research tradition(s) are worth pursuing in the study of cognition
Motivation #3: Disciplinary boundary disputes. All the research traditions mentioned above agree in construing cognitive science as a multidisciplinary enterprise. But the agreement stops here; for, which disciplines should be allowed to take part in the enterprise is a hotly debated matter. Sure, “classic” cognitive science had some clear ideas - clearly represented by the “cognitive hexagon”. Cognitive science was construed as a multidisciplinary enterprise animated by philosophy, psychology, neuroscience, linguistics, anthropology and computer science/A.I. (Howard, 1987). Yet these ideas hardly translated into practice (Núñez et al., 2019): the contribution of anthropology was modest, and “classic” cognitive science was not exactly keen on neuroscience - indeed, “classicism” is often mocked as the view that the best way to study the mind is to systematically ignore the brain (Fodor, 1999).
As “classic” cognitive science splintered (as sketched above), new disciplines were put in contact with, and included in, the forming research traditions. These include: engineering (Pfeifer & Bongard, 2008), material science (McGivern, 2019; Tripaldi, 2022), physics and complex systems science (Ernst, 1978; Kelso, 1995), plant biology (Calvo Garzon, 2007), microbiology (Yakura, 2018), archeology (Malafouris, 2013) and more. Are all of them rightful contributors to cognitive science? This question is important to answer for at least two reasons. First, the regularities and generalizations about cognition that cognitive science will discover depend largely on which individual disciplines constitute it. As the number of disciplines constituting cognitive science grows, so does the number of systems cognitive science studies, pushing us towards minimalistic, behavior-based principles and generalizations (cf. Lyon, 2005; Sims, 2021). Conversely, a cognitive science largely dominated by human psychology will yield demanding, concept-oriented principles and generalizations (Adams, 2016; Adams & Aizawa, 2008). Secondly, it is important to determine the disciplinary boundaries of cognitive science to use our material and intellectual resources correctly. To make the point bluntly: if microbiology were “in”, we should create reliable informational channels connecting microbiologists to, say, psychologists and linguists/psycholinguists, allowing them to share ideas, models, methods of inquiry and results. This isn’t easily done, nor is it something that can be done for free. It will require intellectual elaboration and monetary funds. These are limited resources, which we shouldn't waste. Hence the third desideratum:
Desideratum #3: The MOC should determine the disciplinary boundaries of cognitive science, allowing us to allot our intellectual and non-intellectual resources in an appropriate manner4
A few words about these desiderata. First, as said above, I don’t presume my list is complete. There may be other desiderata in addition to these. Secondly, I don’t assume their satisfaction is an all-or-nothing affair: a proposed MOC A may satisfy one desideratum better than another proposed MOC B. Thus, these three desiderata (and others, if the list gets expanded) may function as a metric to determine which proposed MOC to accept (and when we should “drop” a proposed MOC for a competitor). Lastly, notice that all these desiderata indicate that the MOC should capture a genuine scientific kind; that is, a kind supporting the genuine generalizations and principles of a science of cognition. In fact, extensional adequacy, explanatory power and the capacity to define the boundaries of a scientific endeavor all seem to be properties of a theoretical term naming a genuine scientific kind (see also Adams & Aizawa, 2001; Wheeler, 2010, 2019). The MOC should thus define a theoretical term, used in a theoretical/scientific context (like “energy” in physics), rather than a folk term, used in everyday discourse (like “energy” when we say we woke up full of energy).
… and a tension
And yet, the shadow of the folk looms large over the MOC, generating a tension. To feel it, consider the following three features of the search for the MOC.
Feature #1: The appeal to (more or less commonsensical) intuitions5 is rampant (Elpidorou, 2013). Examples abound. Bermúdez (2014, p. 415) and Shapiro (2013, p. 363) simply assert that cognition must involve representations, stating they cannot see how it could be otherwise. Adams & Garrison (2013) do the exact same thing when they state that personal-level reasons are necessary for cognition. Similarly, Aizawa (2017, p. 16) claims - basically without argument - that cognition must in a sense be centrally unified; that is, that a cognitive agent cannot be built out of the interaction of special purpose mechanisms.
In all these cases, philosophers rely on their intuitions to indicate an individually necessary condition constituting the MOC. This typically isn’t how we go about searching for scientific kinds. Indeed, our intuitions have often stood in the way of our discovering genuine scientific kinds. Our intuitions clumped together jadeite and nephrite as jade. Dante’s intuition told him that the sun is a planet (Inferno, canto I) and that to move upwards from the center of the earth one must turn 180° (Inferno, canto XXXIV). Alchemists found it compelling to think that nitric acid and hydrochloric acid were species of water (called Aqua Fortis and Aqua Regia; Cleland, 2012). Yet, when it comes to cognition, the care these examples invite seems to get thrown out of the window.
Now, one could perhaps adopt a broadly hermeneutical standpoint, arguing naive intuitions (and all sorts of biases) are always informing our scientific practice. They lie in the background, silently skewing our research in certain directions. True. Yet notice the intuitions above do not lurk in the background. They are stated in the main text of the papers. Their influence is upfront and direct. For they play an essential role in the philosophical literature on the MOC, which brings us to the second feature.
Feature #2: These intuitions fly in the face of (and are often intended to counteract) well-established and pursuit-worthy research traditions. To continue with the examples above: Bermúdez and Shapiro deem representation necessary despite the presence and successes of anti-representationalist research traditions in cognitive science (Beer, 1995, 2000). Aizawa takes a “central processor” to be necessary, despite the successes of the massive modularity research tradition (Carruthers, 2006). Adams and Garrison’s case is even more puzzling: it seems to me that no research tradition in cognitive science even mentions personal-level reasons! Examples proliferate easily: as Chemero (2009, ch. 1) notices, arguments of that sort are fairly common in cognitive science, and indeed pre-date the whole debate on the MOC. Thus Fodor & Pylyshyn (1988), finding it intuitive that all cognition must be systematic, claimed that artificial neural networks are how-possibly models depicting the implementation of (independently studied and characterized) cognitive capacities. Earlier still, Searle (1980) purported to show the untenability of an entire research tradition with a thought experiment; that is, appealing to our intuitive reactions to an imaginary scenario. Closer to us, the “dark room” argument against predictive processing views of cognition is based on the intuitive idea that, if all our brains try to do is predict incoming inputs as accurately as possible, our brains (and thus, we) should crave very predictable and boring environments. But we clearly don’t crave them, so predictive processing must be wrong (Sims, 2017; Smith et al., 2022).
Notice that such a pervasive appeal to intuition is quite unusual when measured against scientific practice at large. Typically, science silences our pretheoretical intuitions. We wouldn’t, for example, trust our intuitions when it comes to discussing matter and energy. And, typically, we would not leverage our pretheoretical intuitions about matter and energy to attack a research tradition in physics.
Feature #3: The pertinence of certain scientific findings is openly contested. Notice: the findings themselves are uncontested. No one claims that a certain experiment never happened, or that such-and-such an observation was not really made, or that certain data have been “rigged” to favor a specific research tradition. To the contrary, the purely factual aspect of discoveries and findings is (typically) left uncontested. What is contested is that such discoveries and findings give us insight about what cognition is and/or how it operates. Bluntly: it is contested whether such findings matter for understanding cognition.
Some examples to clarify. If Fodor & Pylyshyn (1988) are right, then the study of cognition can ignore artificial neural networks and other neurocomputational models. For, strictly speaking, these models stay silent on what cognition is and how cognitive capacities operate. They only illuminate how various cognitive processes may be implemented. Similarly, if, inspired by Searle (1980, 1984), one concludes non-derived content is a necessary ingredient of cognition, then one must conclude that various lines of inquiry concerning human-artifact interactions do not really illuminate cognition. Sensory substitution devices (Bach-y-Rita & Kercel, 2003; Eriksson, 2018) may be clinically relevant, and studying how tools are used in problem solving may have interesting anthropological or pedagogical implications (Bocanegra et al., 2019; Risko & Gilbert, 2016). Yet, if non-derived content is necessary, neither of these two lines of inquiry sheds light on cognition. They shed light on something else - perhaps in the immediate vicinity of cognition.
Now, unless one thinks that our intuition is tailored to capture scientific kinds (a highly unconvincing position, see Akins, 1996; Churchland, 1995), we should regard these features as generating an important tension in the search for the MOC. On the one hand, the motivation for the search, and thus the desiderata the MOC is called to satisfy, suggest that the MOC should define a technical term capturing a scientific kind. On the other hand, the widespread appeal to intuition, and the fact that intuitions are taken to have the same epistemic standing as scientific results - to the point that they can challenge the latter’s evidential status - suggest the MOC is meant to capture something different; namely, what we’d normally call “cognition” in our everyday lives. The MOC would thus elaborate upon, and make explicit, an important piece of our “manifest image”.6
Whilst both legitimate, the two projects are clearly regulated by different epistemic norms and standards.7 I thus propose to disentangle them, and consider them separately. So, what are the prospects of these two projects?
Folksy cognition and its MOC
Consider first the project of providing a MOC capturing the “everyday” notion of cognition. Such a MOC aims to define a folk notion - i.e., it aims to define what the layperson thinks cognition is. The prospects of this project appear extremely grim.
First: “the layperson” is an abstraction. People are different, and intuitions vary. Intuitions about cognition vary across cultures: Trovato & Eyssel (2017), for example, provide data indicating that Italian and Japanese high school students attribute mental properties to androids differently - including some paradigmatically cognitive properties, such as the capacity to plan and act accordingly. Indeed, it appears that Italian high school students are much more prone to ascribe mental and cognitive properties to artifacts than their Japanese counterparts. Now, on the fairly uncontroversial assumption that high school students' ascriptions are “folk” ascriptions, the data suggest that the folk concept of cognition differs cross-culturally. Intuitions about cognition also vary within cultural groups. Well-educated westerners provide all sorts of definitions of cognition (cf. Bayne et al., 2019, for a sample). Swiss people are very divided on what counts as cognitive: 44% of Swiss respondents think robots are genuinely intelligent and 56% think they are not (Arras & Cerqui, 2005). Almost a 50/50 split.
Perhaps one could argue that a MOC could be found by looking at smaller cultural subdivisions. Whereas Swiss people in general have diverging intuitions, perhaps the Swiss of a single Canton (or of a single city) have more uniform intuitions. This may be the case, and, as far as I can see, there are no data suggesting otherwise.8 But even if this were the case, there would still be reasons to think such a folk conception of cognition will not be nicely captured by a MOC. As hinted at above, MOCs are sets of individually necessary and jointly sufficient conditions. And the appeal to intuition is often used to impose necessary conditions. Yet, most likely, our folk conception(s) of cognition will not provide us with individually necessary conditions. For one thing, attempts at capturing ordinary concepts by sets of individually necessary and jointly sufficient conditions have traditionally been met by a volley of counterexamples (Fodor, 1981). And whilst some concepts can be spelled out in that way (e.g. x is a triangle iff x has exactly three sides and exactly three angles), it is typically easy to do so, and the definitions provided are uncontested (Machery, 2011). Surely this isn’t the case with cognition (cf. §2).
Further, our psychological theories of concepts suggest that “folksy and intuitive” concepts can hardly be adequately captured by sets of individually necessary and jointly sufficient conditions, as they do not seem to include any individually necessary condition (cf. Machery, 2009, Ch. 4). According to prototype theory (Rosch & Mervis, 1975), concepts are representations of statistically typical features of a class of items. According to exemplar theory (Medin & Schaffer, 1978), they instead represent individual members of that class. In both cases, no feature of the concept is individually necessary to categorize an item in a class; what matters is instead the overall similarity between item and concept. Other views of concepts are surely possible (e.g. Barsalou, 1999; Murphy & Medin, 1985), but these views do not mention individually necessary conditions either. Thus, our currently most accredited theories of concepts collectively suggest that our concepts are not constituted by individually necessary features. This gives us solid grounds to think that our “folksy” concept of cognition cannot be adequately captured by a MOC consisting of individually necessary features.
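To make the contrast with a classical definition vivid, here is a minimal sketch of prototype-style categorization (the feature names, weights and threshold are my own illustrative assumptions, not drawn from Rosch & Mervis): membership is fixed by weighted overall similarity to the prototype, so no single feature has to be present.

```python
from dataclasses import dataclass

@dataclass
class Prototype:
    """A category represented by statistically typical features and their weights."""
    name: str
    typical_features: dict[str, float]  # feature -> typicality weight
    threshold: float                    # minimal weighted overlap for membership

    def similarity(self, item_features: set[str]) -> float:
        # Overall similarity = summed weight of the typical features the item shares.
        return sum(w for f, w in self.typical_features.items() if f in item_features)

    def categorize(self, item_features: set[str]) -> bool:
        # No single feature is individually necessary: any combination of
        # features whose summed weight clears the threshold counts as a member.
        return self.similarity(item_features) >= self.threshold


bird = Prototype("bird", {"flies": 0.4, "has_feathers": 0.3, "lays_eggs": 0.2, "sings": 0.1}, 0.5)

print(bird.categorize({"has_feathers", "lays_eggs", "sings"}))  # True, despite not flying
print(bird.categorize({"flies", "lays_eggs"}))                  # True, despite lacking feathers
print(bird.categorize({"lays_eggs"}))                           # False: too dissimilar overall
```

In this toy model the heaviest-weighted feature can be absent from an item that is nonetheless categorized as a bird - exactly the pattern a definition in terms of individually necessary conditions cannot reproduce.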
Worse still, even if such a MOC were provided, it would most likely not satisfy the desiderata listed above. The search for the MOC is driven by the proliferation of research traditions in cognitive science and the consequent uncertainty about the scientific kind cognition (Akagi, 2018; Rupert, 2013). Yet, our folksy intuitions are not tailored to the discovery of scientific kinds and the definition of technical terms. Sure, our folk notion of the cognitive does pick up a clump of interesting phenomena (Ramsey, 2015).9 But these are the explananda, not the explanantia, of the cognitive sciences (and perhaps only some of the explananda). And the relevant explanantia actually populating contemporary cognitive science are far removed from our “folk” conception of the mind. For example, we intuitively conceive of vision as a single process, while it most likely consists of at least two different sets of processes (Goodale & Milner, 1992). Our folksy kind “memory” has been subdivided in a myriad of ways (working memory, semantic memory, procedural memory, long-term memory, etc.). In general, it seems false that our folk psychological categories identify the relevant explanatory building blocks of the mind sciences (Buzsáki, 2019, ch. 1; Churchland, 1981; Pessoa et al., 2021). Indeed, if our “folksy intuitive” conceptions about the mind provided us with the right explanantia, it would be very hard to make sense of the history of psychology - why did it take so long to become a real science?
Perhaps I have been unfair. Of course providing a MOC capturing our commonsensical notion of cognition would do little to aid cognitive science - but it does not need to. Haven’t I conceded that much at the end of §2.2, when I suggested disentangling two different projects; the first aimed at a MOC spelling out a scientific kind, the second aimed at a MOC spelling out our folksy intuitions? Not quite. Whilst I have distinguished these two projects, I have not suggested that they should satisfy different desiderata. Indeed, appeals to intuitive MOCs are typically responsive to the desiderata listed in §2.1. For example, Adams and Garrison aim at saving cognitive science from the embarrassment of not knowing what cognition is (Adams & Garrison, 2013, p. 340), Searle’s (1980) Chinese Room wants to identify the right research tradition for artificial intelligence (i.e. weak AI), Fodor & Pylyshyn (1988) were interested in determining the role of neurocomputational models, and Aizawa (2017) aimed at evaluating “4E” cognitive science. So, the desiderata these proposed MOCs (or parts thereof) are called to satisfy are still the ones highlighted in §2.1, and pointing out that they fail to satisfy them is a fair piece of criticism.
Summarizing: it is not clear whether there is a single folk notion of cognition for the MOC to capture. And even if there were, our best theories of concepts suggest it would not be accurately captured by a set of individually necessary and jointly sufficient conditions. And even if it were so captured, it would not satisfy the relevant desiderata motivating the search. So, at present, the prospects of finding a MOC capturing our folk notion of cognition aren’t rosy. But what about a MOC capturing a scientific kind?
Scientific cognition and its MOC
Suppose the MOC should now define the technical/theoretical term “cognition”. Surely the definition cannot be stipulative. If cognition really is a scientific kind, we want to discover - rather than to decide - its extension (Desideratum #1). And we also want to discover (rather than decide) which methodologies and disciplines probe cognition best (Desideratum #3). Thus, we should reject stipulative definitions of cognition, or calls to substitute “cognition” with some other ad hoc, crisply defined term (e.g. Keijzer, 2020).10
Now, a great way to discover what cognition is would be via a dedicated scientific endeavor; namely, cognitive science. And here lies the rub: we’d like the relevant MOC to come out of cognitive science, but cognitive science is fragmented into many competing research traditions, each at least implicitly suggesting a different MOC. Indeed, it is precisely because cognitive science is so fragmented that some feel the need for the MOC in the first place (§2).
Notice: crucially, at least some research traditions into which cognitive science is currently splintered implicitly define mutually exclusive MOCs. This prevents us from adopting a form of “happy” pluralism according to which cognition itself is so multifaceted and complex that each of these candidate MOCs is partially correct.11 Cognition, complex as it may be, cannot have contradictory properties. Yet some MOCs point precisely towards such contradictory properties. For example, methodological solipsists take cognition to be, in an important sense, environment-independent (Chomsky, 1995; Fodor, 1980). If the solipsist is right, then ecological psychologists (Chemero, 2009), enactivists (e.g. Hurley, 2001) and even externalistically minded connectionists (Clark, 1993) must be wrong, for they all take cognition to be essentially environment-dependent.
One might try to save that form of “happy” pluralism via inclusive disjunctions: cognition is as the solipsist describes, or as the enactivist describes, or as the connectionist describes, etc. But this falls short of the relevant desiderata. Maybe this procedure could yield the true extension of cognition as desideratum #1 wishes (though this is actually extremely doubtful). But it surely won’t reveal which research tradition(s) is (are) worth pursuing (desideratum #2). And, arguably, it tells us little (if anything) about the disciplinary boundaries of cognitive science (desideratum #3). It’s hard to see how it could be used to determine, say, whether microbiology or hematology are parts of cognitive science.12
One could contend that a MOC created via inclusive disjunctions as hinted above actually satisfies the desiderata. It satisfies desideratum #1 because it gives us the true extension of cognition: everything cognitive scientists study. It satisfies desideratum #2 because it tells us which research traditions to pursue: namely, all of them. And it satisfies desideratum #3 because it tells us the disciplinary boundaries of cognitive science: these boundaries include all disciplines one might use to study cognition. Yet, it seems to me that arguing in this way leads the MOC searcher to a Pyrrhic victory (at best). After all, the MOC thus provided makes no difference to the current state of cognitive science. The boons the MOC should deliver are brought about in name only.
Since that sort of “happy” pluralism is not an option, we must choose. How? Choosing arbitrarily would amount to stipulating a MOC. So, we need some principled reason to choose a MOC (or at least a few mutually consistent ones) over the others.
Perhaps our choice could be based on our best pieces of empirical evidence. That’s how Einstein prevailed over Newton. Why can’t Gibson prevail over Gregory the same way? Yet, it is hard to see how empirical evidence could decide for one of the many research traditions (and associated MOCs) over any other. For, as highlighted in §2.2, what counts as evidence concerning cognition is itself contested. Further, it seems that (almost) any piece of evidence can be used to support any MOC. Consider one of the coarsest divisions in cognitive science; namely, the one between representationalist and anti-representationalist research traditions. The former claim cognition requires representations; the latter claim it doesn’t. Whilst some think the debate is solved just by noticing that we cannot explain every interesting piece of behavior only in stimulus-response terms (Churchland, 2002), things are in no way that simple. For, clearly, anti-representationalists are not mad: they do hold that internal states of all sorts matter in the explanation of behaviors, and they do hold that nomically relevant tracking relations hold between these states and external targets. Yet they deny that the former represent the latter in any relevant sense (Orlandi, 2014). And, contra Thomson & Piccinini (2018), we cannot simply “take a peek” inside cognitive systems to see whether representations are tokened in there. For, it is easy to interpret (bona fide) non-representational states in representational terms (Bechtel, 1998; Shapiro, 2013), as well as to “deflate” (bona fide) genuine representations as mere causal mediators (Facchin, 2021; Ramsey, 2007) - even when the cognitive system we’re looking at is the brain (cf. Gessell et al., 2021; Kriegeskorte & Kievit, 2013; Ritchie et al., 2019).
Maybe, then, clever reasoning will succeed where the appeal to evidence fails. We could design sophisticated arguments showing that one, or more, research tradition(s) ought to be abandoned. Chomsky managed to identify one such argument against behaviorism, and there seems to be no reason why, say, enactivism should be immune to such arguments. So we could search for the MOC by elimination: narrowing the set of research traditions (and thus candidate MOCs) down to one, or a few mutually consistent ones.
Whilst viable in principle, this way of proceeding likely won’t be viable in practice. Even Chomsky’s famous arguments against behaviorism failed to force a wholesale abandonment of behaviorism (Staddon, 2021). Minsky’s and Papert’s (1969) analysis, whilst rigorous and on point, (thankfully!) failed to force a wholesale abandonment of connectionism. And the arguments offered by Chomsky, Minsky and Papert are not just strong and well constructed: they are (and have been) persuasive. They impacted the day-to-day research practice of numerous cognitive scientists, and had a sizable impact on cognitive science. Most other arguments aimed at motivating the abandonment of a specific research tradition are neither as strong nor as persuasive as these ones (Chemero, 2009, ch. 1). This suggests such a process of elimination is very hard, if not impossible, to translate into practice.
And even if it were translated into practice, it might not be translated successfully. Even a single research tradition can generate multiple MOCs, for the individual disciplines within that tradition would still pull the MOC in different directions. For example, microbiology and plant science often focus on the way in which (comparatively simple) biological systems cope with their immediate environments, emphasizing relatively small-scale sensorimotor interactions (Baluška & Levin, 2016; Lyon, 2015). These disciplines - actually, their philosophical spokespeople - push for fairly minimal and liberal MOCs, which can be easily applied to the systems they are interested in studying (Duijn et al., 2006; Lyon, 2005). But robotics and AI push for more restrictive and demanding MOCs (cf. Webb’s piece in Bayne et al., 2019; Nolfi, 2002; Tani, 2007, 2016; Webb, 2006). It’s not hard to understand why: they know that comically simple systems can skillfully interact with the environment (Braitenberg, 1984). So they favor demanding MOCs justifying their claim that (certain) robots and computers really cognize.13
The general point is nicely exemplified by the exchange between Corcoran et al. (2020) and Kiverstein & Sims (2021). Both candidate MOCs “came out of” the same research tradition; namely Active Inference. According to this tradition, cognition is best studied deploying a complex set of modeling tools allowing us to construe cognitive activity as a self-organizing process whereby a system brings about sensory states consistent with (and confirming) its own prolonged existence through time (Parr et al., 2022). Whilst Corcoran and colleagues and Kiverstein and Sims agree on that much, they still propose different MOCs. And these differences matter given the desiderata highlighted above. In short, Corcoran and colleagues suggest that cognition is a rather sophisticated form of counterfactual inference which is not universally possessed by living systems. Conversely, Kiverstein and Sims suggest that cognition is a form of anticipation all (or almost all) living systems exhibit. So, they disagree concerning the extension of “cognition” (Desideratum #1). They also disagree on the disciplinary boundaries of cognitive science (Desideratum #3): whereas Kiverstein and Sims suggest that biological sciences are en masse part of cognitive science, Corcoran and colleagues resist the suggestion.
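For concreteness, the modeling tools in question revolve around the minimization of variational free energy; in its standard form (my gloss on the shared formalism, not a formula taken from either paper):

$$ F[q] \;=\; \mathbb{E}_{q(s)}\big[\ln q(s) - \ln p(o,s)\big] \;=\; D_{\mathrm{KL}}\big[q(s)\,\big\|\,p(s\mid o)\big] - \ln p(o), $$

where o stands for the system's sensory observations, s for the hidden states it tracks, p for its generative model and q for its approximate posterior. Both proposals agree that cognitive systems minimize this quantity over time; they disagree over how much of the machinery built around it (e.g. counterfactually rich, temporally deep inference) a system must deploy in order to count as cognitive.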
So, appeals to empirical evidence and “pure” reasoning will not yield a MOC. Why, then, don’t we ask history? Let the research traditions develop and compete. Some will blossom, some won’t, and at the end they will deliver a coherent picture of cognition. Whilst I am generally sympathetic to this suggestion, I also have some reservations about this sort of “wait and see” strategy. One important reason to be wary of this “wait and see” approach, I think, is that it presupposes that cognitive science will develop in a way that will lead us towards a single MOC. This may happen in various ways. At one extreme, it might happen in a “selectionist” fashion: the best research tradition will (eventually) win the day, driving its competitors to extinction and imposing a single MOC. At the other extreme, different research traditions may “fuse” by downplaying their differences and/or developing towards a common position, ending up providing a single MOC. Between these two extremes, all sorts of intermediate developments are possible (e.g. maybe two research traditions ta and tb will “fuse”, generating tab, which will then prevail in a purely “selectionist” manner). Be that as it may, the presupposition that somehow cognitive science will develop so as to provide a single MOC does not seem to be particularly well-justified.
Consider first the prospects of a “selectionist” development. Some potent reasons why such a development is unlikely have been reviewed at length just a few paragraphs above: neither empirical evidence nor arguments seem able, whether in principle or in practice, to cause any research tradition to “go extinct”. Further, even if a single research tradition were “selected over” its competitors, it could still fail to articulate a single MOC: indeed, many individual research traditions currently provide more than one MOC. So, it seems unlikely that one single MOC will be established through these “selectionist” means.
What, then, about the other extreme, the “fusion” of various different research traditions? This development seems unlikely too. Different research traditions make mutually exclusive claims. For example, whereas classical cognitivists think that cognition is computational (Fodor, 1975), enactivists think that our biological nature prevents cognition from being computational in any sense (Di Paolo & Barandiaran, 2016). Bayesian psychology casts perception as a form of inference (Rescorla, 2013), but ecological psychologists claim that perception cannot be understood as a kind of inference (Gibson, 1979). It’s hard, to say the least, to see how these different research traditions may “fuse” in a coherent manner. A research tradition cannot be both computationalist and anti-computationalist, inferentialist and anti-inferentialist. These are mutually exclusive theoretical stances, which cannot coexist or be “fused” together. When it comes to these matters, no “fusion” seems possible: something has to go. To be clear, this is not to deny that some research traditions might “fuse” in a coherent manner, perhaps because, in the grand scheme of things, their differences are relatively minor.14 But such “fusion friendly” research traditions seem to be the exception, rather than the rule. Hence the likelihood of cognitive science marching towards a single MOC by progressive “fusions” is extremely low.
One might perhaps object that I am overemphasizing the differences between research traditions. The objector makes a fair point: there is some important intellectual work suggesting that it might be possible to coherently “fuse” prima facie mutually inconsistent research traditions. For instance, Villalobos and Dewhurst (2017, 2018) have tried to build some bridges connecting computationalism and enactivism.15 Weinberger & Allen (2022) have recently argued that dynamical models of cognition may be less inimical to computational models than initially supposed. Whilst “syncretic” works of this sort are still, to my knowledge, few and far between, it is important to explicitly acknowledge their existence here, as they seem to be counterexamples to my claim that certain research traditions might be too theoretically different to fuse coherently.
The counterexample is on point. And I don’t want to pose as a fortune teller: it could be the case that some day all the different research traditions of current cognitive science will fuse into one single research tradition yielding a single MOC. Further, it could be that a single research tradition will be “selected over” all of its competitors. Neither of these two developments is impossible. So, it is possible to adopt a “wait and see” approach to the MOC.
At this point, however, it is worth highlighting the tension between the “wait and see” strategy and the current project of searching for the MOC. For, a “wait and see” approach is attractive only if one is willing to assume that cognitive science not only can, but also will in fact, develop correctly in the absence of a MOC (cf. Allen, 2017).16 Indeed, a “wait and see” approach makes sense only if one thinks that (a) worries about which research traditions should be pursued (desideratum #2) and (b) worries about the disciplinary boundaries of cognitive science (desideratum #3) will eventually take care of themselves if given enough time. But if one thinks that (a) and (b) will eventually take care of themselves, then one won’t be motivated to search for a MOC in the first place - or, at least, not by the worries discussed in §2.1. Simplifying to the extreme: if “wait and see” approaches are right, then we have little reason to search for the MOC right now. And, if one is motivated to search for the MOC right now, one cannot do so by waiting for cognitive science to take care of itself. So, even conceding that prima facie mutually exclusive research traditions can coherently and productively “fuse” (or that a single research tradition might eventually triumph), there would still be a pragmatic contradiction in searching for the MOC and adopting a “wait and see” approach.
Importantly, no argument in this whole section entails that a MOC will not be provided. My arguments are, for the most part, based on the current splintered state of cognitive science. So, as I noticed above, if cognitive science can be re-unified (and the pull of various individual disciplines for different MOCs is somehow dealt with), then perhaps a MOC defining the scientific kind cognition may be provided. Ultimately, then, time will have the last word; we should wait and see what it will say. And, as I have argued just above, adopting this “wait and see” attitude forces us to at least pro tempore abandon our search for the MOC.
Where from here?
Tying things up: §2 highlighted some of the motivations behind the search for the MOC, as well as the desiderata a MOC should satisfy (§2.1). It also highlighted a tension in the current search for the MOC, which is due to the massive role played by intuitions (§2.2). Hence the proposal of distinguishing two different projects: (i) that of providing a MOC capturing the “folksy” notion of cognition and (ii) that of providing a MOC capturing a scientific kind. I then offered a variety of considerations to claim that, at least as things stand now, neither of these two projects can be carried out successfully (§3-§4). Am I right? And if so, what would follow?
On the “am I right” point, I want to address a foreseeable objection. The objection is this: asking for individually necessary and jointly sufficient conditions is setting an unreasonably high bar. That sort of definitionism is dead for good (Taylor & Vickers, 2016). And currently popular accounts of scientific concepts in no way focus on individually necessary and jointly sufficient conditions. According to a first popular account, scientific terms capturing genuine natural kinds pick out homeostatic property clusters; that is, clusters of relevant properties (in this case, cognitive ones) that hang together in virtue of some underlying mechanism (Boyd, 1991). Buckner (2015) proposed a MOC of this kind, and his account is safe from many of the problems I raised. According to a second popular account, scientific concepts are patchworks - they consist in a richly interconnected series of uses of a term, each tied to specific domains and specific empirical techniques (Haueis, 2021). While no “patchwork MOC” has been provided yet, one might be provided, and it may be a serious alternative to more “definitionist” MOCs.
In reply, notice that the request for individually necessary and jointly sufficient conditions does not come from me (Adams & Aizawa, 2001; Rowlands, 2009, 2010; Walter & Kästner, 2012). So, while I agree that the bar may be too high, it is not a bar I am setting. Notice, importantly, that the request for individually necessary and jointly sufficient conditions is non-trivially related to the reasons motivating the search for the MOC and the desiderata I have examined in §2.1. Only classic definitions of cognition (i.e. a set of individually necessary and jointly sufficient conditions) identify all and only the instances of cognition (desideratum #1). A cluster-based, or even prototype-based (Newen, 2015), approach would leave a “gray area” of uncertain cases. But precisely for this reason, such proposed MOCs will not crisply determine which research traditions are worth pursuing and which individual disciplines will be relevant to our cognitive scientific endeavor (desiderata #2 and #3). Similarly, a patchwork account would be a descriptive account capturing how the word “cognition” is used in various different scientific contexts. Being descriptive, it won’t tell us what we should do to do good cognitive science: hence it will be silent on which research tradition we ought to follow, and on which individual disciplines we ought to practice (desiderata #2 and #3). Similarly, it will tell us how “cognition” is used in current cognitive science, rather than whether its current usage is correct (desideratum #1). Thus, it seems that cluster- and patchwork-based approaches are not viable alternatives to a “definitionist” MOC - at least, if desiderata #1 to #3 set the goals of one’s search.
This means that if one does not take meeting these desiderata as the endpoint of one’s own search, one is free to go for a patchwork- or cluster-based MOC. Importantly, however, since these desiderata are non-trivially connected with one’s motivation to search for the MOC, this means that one’s reasons to search for the MOC will have to be different too. Given the difference in motivation, then, it might be smart to construe that kind of project as an altogether different project - to construe it as the search for the MOC*17 rather than the MOC. Importantly, this paper is silent on the prospects of the search for a MOC*. So, as far as I am concerned, searching for MOCs* might be an important and valuable endeavor. My only recommendation when it comes to MOCs* is to keep them as distinct as possible from the MOC, clarifying that MOCs* are supposed to accomplish different epistemic tasks and thus that they are responsive to different desiderata.
One could further object that my proposal of separating the search for the MOC from the search for the MOC* is not really consistent with an observation I made in §2.1; namely, that the reasons to search for the MOC which motivate the adoption of desiderata #1 to #3 are not exhaustive. I have claimed that other reasons could motivate the search for the MOC too. Doesn’t this assertion run counter to my proposal of teasing apart the search for the MOC from the search for the MOC* based on the different reasons motivating the two searches? No, it doesn’t - or at least, not necessarily: there’s a reading of §2.1 according to which no contradiction arises (and that reading, of course, is the intended one, as indicated in §2). In §2.1 I conceded that there may be other reasons to search for the MOC in addition to the ones I examined. Hence, I conceded that desiderata #1 to #3 may be supplemented by desiderata #4 to #n, each corresponding to a reason that motivates one to search for the MOC in addition to the ones §2.1 focused on. Notice that, in this case, the reasons corresponding to desiderata #1 to #3 would continue to motivate the search for the MOC. Yet, clearly, the very same reasons cannot motivate the search for a MOC*: else, a MOC* would still be required to satisfy desiderata #1 to #3, and so patchwork- and cluster-based approaches would still not be viable. Hence, the different reasons that motivate the search for the MOC* (whatever they may be) must be conceived of as an alternative, rather than an addition, to the reasons examined in §2.1.
On the “what follows” point, opinions vary. Some think the absence of a MOC does not matter (Clark, 2008). Others, instead, paint apocalyptic scenarios. Adams and Aizawa (2008, pp. 79–83; Aizawa, 2017), in particular, contend that, absent a MOC, we are all drawn towards a nasty form of operationalism. And that is undesirable for a number of reasons. First, it allows us to identify cognition only in reference to some paradigmatic cognitive processing, without knowing what it really is. Secondly, it leads us to over-attribute cognition, since many outcomes of cognitive processes can be brought about by non-cognitive means. Lastly, operationalism leaves the door open to the return of behaviorism, and surely no one wants behaviorism to return, right?18
This pessimism is unjustified. Behaviorism is not returning, partially because it never left, and partially because it does not seem to be gaining popularity. Anti-representationalism might be gaining popularity, but anti-representationalism is not behaviorism. Reading Anderson (2014), Beer (2000), Chemero (2009), Kelso (1995) and others, one does not find any reference to classical or operant conditioning, stimulus-response chains, or Skinner boxes. Moreover, the absence of a MOC does nothing, as far as I can see, to support behaviorism. For one thing, the absence of a MOC in no way encourages us to try and explain behavior exclusively in terms of stimulus-response chains. Indeed, the absence of a MOC is entirely compatible with an adoption of computationalism for purely pragmatic reasons (Von Neumann, 1958). Whilst the truth (or appropriateness) of a behavioristic MOC would entail the truth (or appropriateness) of behaviorism, the absence of a MOC doesn’t. Indeed, it's entirely unclear how the absence of a MOC would support a research tradition over any other research tradition.
Moreover, the charge of operationalism is surely overblown (see also Rupert, 2013). Operationalism is typically understood as the view that the meaning of theoretical terms consists in observations/measurement outputs. According to operationalism, a statement such as “the temperature of the substance a is x” means roughly “you will read x if you probe a with a thermometer”.19 Operationalism is a view on the semantics of theoretical terms. Such a semantic hypothesis is surely not entailed by our inability to define cognition.
One may worry that my claim is pushing for mysterianism about cognition. And since mysterianism is bad because it amounts to giving up on our explanatory attempts (Dennett, 1991), my claim ought to be rejected. Now, mysterianism is the claim that although our phenomenology is physical, our cognitive architecture is wired in such a way that we just cannot understand how phenomenology metaphysically depends on the brain (McGinn, 1989). This thesis can be easily applied to cognition. According to the mysterianist about cognition, cognition is a physical phenomenon, but, due to some feature of our cognitive architecture, we cannot figure out how it metaphysically depends on the brain. But I am not pushing for mysterianism in any way. For one thing, nothing in my argument entails that we cannot explain how various cognitive processes relate to various physical systems and processes. Understanding how a connectionist model works20, for instance, allows us to understand how a neural network may systematically relate inputs and outputs, and how connections can collectively store a system’s memories. There is nothing mysterious or reason-defying in the workings of connectionist models. Sure, the inner workings of complex artificial neural networks with billions of parameters might not be as “intuitively graspable” as the simple, three-layered feedforward networks of the ’80s - but there are techniques to track the inner goings-on of such networks (Olah et al., 2018), and, at any rate, there is no special mystery concerning how billions of parameters may “store” the complex statistical models governing the functioning of these networks. And in fact, notice that anti-connectionists do not claim that connectionist models fail to make intelligible how cognition “pops out” of the firing of a bunch of interconnected (artificial) neurons. They may object that the models are too static, simple and biologically implausible, or that they are models of implementation rather than of cognition properly understood. But, at least to my knowledge, no one denies that such models provide at least a how-possibly explanation of how matter and cognition relate. This gives rise to an important disanalogy between consciousness and cognition21, which significantly deflates the charge of mysterianism. Further, my arguments about our inability to define cognition do not depend in any way on the contingent features of our cognitive architecture.22 Indeed, if my arguments are correct, in order to find a MOC we do not need a genetic mutation altering our cognitive architectures - we only need to more or less significantly alter the way in which we practice cognitive science. So, I am in no way suggesting giving up on our explanatory endeavors. On the contrary, my claim suggests we should keep trying to ameliorate our scientific practices (at least, if we care about providing the MOC).
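To see how unmysterious this is, here is a minimal sketch of the point about memories being collectively stored in connections (a toy Hopfield-style network of my own devising, not a model drawn from the works cited above): a handful of patterns are superimposed on a single weight matrix, and each can later be retrieved from a degraded cue.

```python
# A toy illustration (my own, for this point only): connection weights collectively
# "storing" memories in a Hopfield-style associative network.
import numpy as np

def store(patterns):
    """Hebbian learning: superimpose every pattern onto one shared weight matrix."""
    n = patterns.shape[1]
    w = np.zeros((n, n))
    for p in patterns:
        w += np.outer(p, p)
    np.fill_diagonal(w, 0)          # no self-connections
    return w / patterns.shape[0]

def recall(w, cue, steps=10):
    """Retrieve a stored pattern from a degraded cue by repeatedly updating units."""
    state = cue.astype(float)
    for _ in range(steps):
        field = w @ state
        # Units with zero net input keep their current state.
        state = np.where(field > 0, 1.0, np.where(field < 0, -1.0, state))
    return state.astype(int)

# Two "memories", encoded as +1/-1 activation patterns.
memories = np.array([[1, -1, 1, -1, 1, -1],
                     [1, 1, 1, -1, -1, -1]])
w = store(memories)

degraded = np.array([1, -1, -1, -1, 1, -1])   # first memory with one unit flipped
print(recall(w, degraded))                     # settles back onto the first memory
```

Nothing here is opaque: the “memories” are just the stable states that the superimposed weights jointly carve out, and the retrieval dynamics are fully intelligible.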
Lastly, I wish to point out that my conclusion is compatible with the view that “cognition” (and the mentalistic lexicon more generally, see Dennett, 1991; Schwitzgebel, 2021) is vague. To be clear: I will not argue that cognition (or the mentalistic lexicon more generally) is vague - at least, not here. Nor do I claim that my conclusion provides an argument in favor of cognition (or the broader mentalistic lexicon) being vague. I am just claiming that my conclusion is compatible with these forms of vagueness. My aim here is just that of highlighting something interesting for future use.
That being said, my conclusion seems immediately compatible with an epistemicist stance on vagueness (Williamson, 1994). Take a vague term t. The epistemicist claims that: (a) everything is either determinately within the extension of t or not; (b) there’s a sharp and clear-cut division between ts and not-ts; and (c) we’ve no idea where such a division is, and so we’re unable to say, for every x, whether x is t or not t. Note the problem is epistemic: there’s a real clear-cut division out there - we simply do not know where. It’s intuitive to think my argument licenses a straightforwardly epistemicist conclusion. If I am right, we can’t - at least for the time being - say what cognition is because of our epistemic standing: cognitive science is just too fragmented to allow us to define cognition. I also suspect such an epistemicist conclusion will be appealing to many: indeed, it seems to me that the MOC searcher must be “contingently epistemicist” about cognition. Searching for the MOC seems to presuppose (a*) that every state, process or system is definitely either cognitive or not cognitive, (b*) that there’s a sharp division between the two, (c*) that, by ignoring what the MOC is, we ignore where that division is, and (d*) that our ignorance is contingent and can be dispelled by finding the MOC. Endorsing (a*) to (c*) makes the MOC seeker an epistemicist about cognition. The addition of (d*) clarifies why this epistemicism is contingent: we can dispel the vagueness around “cognition” by finding the MOC.
But does my argument support an epistemicism about cognition (or the mentalistic lexicon more generally)? I doubt it. For my arguments here have been blissfully neutral on (a*) - nothing of what I have argued presupposes or entails that every state, process or system is definitely either cognitive or non-cognitive. Nor, to be extremely clear, does anything I have argued for here presuppose or entail the falsity of (a*). Thus, in order for the arguments provided here to support an epistemicist stance about cognition, one must supplement them with a compelling argument for (a*). As things stand, I know of no such argument - further, I must confess I have some trouble even imagining what such an argument might look like. But of course, my ignorance and my lack of imagination are not arguments against (a*). Here I want to leave the truth value of (a*) entirely undetermined. For what I want to highlight is something quite different; namely, that my arguments here support an epistemicist stance about cognition only if they’re supplemented with some compelling and independent reasons to accept (a*). This is important to notice, for it seems to place an important argumentative burden on the shoulders of philosophers interested in searching for the MOC. For, if, as I have argued, philosophers searching for the MOC must be “contingently epistemicist” about cognition, then they owe us some reasons to accept (a*). And, as I have already noticed, these reasons can hardly be found in the current literature on the MOC.
Importantly, since my arguments here are neutral on (a*), they’re also compatible with its negation (and so, a fortiori, they don’t support epistemicism about cognition). Notice that, if the negation of (a*) were true, there would be at least one state, process or system that is neither definitely cognitive nor definitely non-cognitive. Vagueness would thus be a feature of cognition itself, rather than a by-product of our epistemic standing. I think such a view is attractive for several reasons. The falsity of (a*) would provide a plausible explanation of the failures of the proposed MOCs. The falsity of (a*) could also neatly explain why many of our scientific concepts end up identifying clusters of properties, or “patchworks” of connected uses that can always be extended to novel contexts (Waisman, 1968). Similarly, the falsity of (a*) would allow us to make sense of the fact that we seem to find cognitive - or at least cognitive-like - processes, systems and properties everywhere we look (Levin, 2022; Tripaldi, 2022; Yakura, 2018). Thus, I think there are several reasons to want (a*) to be false. And whilst I know of no direct argument to the effect that (a*) is false, I can at least imagine one.23 But, to repeat myself for the sake of clarity, here I don’t want to argue that (a*) is false. Notice, moreover, that my arguments here do not lend any inductive support to the claim that (a*) is false. Sure, the falsity of (a*) would neatly explain why every attempt at providing a MOC has thus far failed. Thus, one might be tempted to invoke an inference to the best explanation and conclude that (a*) is false. Yet the fact that every attempt to provide a MOC has thus far failed is not uniquely explained by (a*) being false - it could be explained equally well by other factors (such as the complexity of cognition itself, the relative youth of psychology, or the current fragmentation of the mind sciences). Moreover, the falsity of (a*) licenses a conclusion far stronger than the one I have reached here. If (a*) is false, then there is no MOC to be spelled out. But here I have not claimed that there is no MOC to be spelled out.24 I have only claimed that the MOC cannot be spelled out given the current fragmentation of the mind sciences. For these reasons, my conclusion does not license any inference to the best explanation concluding that (a*) is false.
Summing up: the claim I have defended here is compatible with cognition being vague in either of the two ways seen above - yet my conclusion does not support the claim that cognition is vague in either of those two senses.
Conclusions
In this paper, I have examined the current literature on the MOC (§2). I have highlighted the desiderata that the MOC should satisfy (§2.1) as well as an important tension that pervades that literature (§2.2). To put it bluntly, it is not clear whether the MOC is supposed to capture the folk notion of cognition or the scientific one. I have claimed that no MOC capturing our folk notion of cognition can be provided (§3). For there is likely no single, culturally stable notion of cognition. Further, even if there were such a notion, it could hardly be captured in a MOC, given that most of our folk notions cannot be captured by sets of individually necessary and jointly sufficient conditions. And, even if such a MOC were to be provided, it would fail to satisfy the desiderata motivating the search. Thus, I have concluded that a MOC capturing our folk notion of cognition is useless at best. I have also claimed that a MOC capturing the scientific notion of cognition cannot be provided (§4). This is because, at present, cognitive science is splintered into many conflicting research traditions. Since all these research traditions are equally worth pursuing, we are unable to identify a single notion of cognition for the MOC to capture. Lastly, I have defended my claims against a number of objections (§5).
In closing, I want to indicate some directions for future research. As I noted at the end of §5, the quest for the MOC is importantly tied to vagueness. MOC searchers must be “contingently epistemicist” about cognition. Adversaries of MOC searchers may attack their endeavors by claiming that there are systems, states and/or processes that are neither definitely cognitive nor definitely non-cognitive. If this is correct, then vagueness is definitely a new battleground for the “cognition wars”.
Acknowledgments
This paper has been presented at the 5th SILFS Postgraduate Conference in Milan, the AISC Midterm Conference 2022 in Parma, and the Joint ESPP-SPP Conference in Milan. I wish to thank the audiences of all these conferences for their observations, which really improved the paper. A special thanks goes to (in random order) Marco Viola, Giacomo Zanotti, Bruno Cortesi and Arianna Beghetto for having read and commented upon various previous iterations of this manuscript. Lastly, I wish to thank the two anonymous reviewers for their kind and insightful comments.
Notes
As Steiner (2019) convincingly argues, the “Kuhnian rhetoric” does not capture the relevant conceptual changes in cognitive science. In his view, cognitive science (or, at least, the transition from “classic” to “embodied” cognitive science) is best described as a shift in the balance of power between two long-standing and competing research traditions. I agree, and I think the analysis should be extended to all the supposed “revolutions” in cognitive science.↩︎
For a real history of cognitive science, see Boden (2008).↩︎
Admittedly, there have been some calls for integration (e.g. Eliasmith, 2013; Miłkowski et al., 2018) and/or statements that different approaches may be complementary (Kaplan & Bechtel, 2011). But not only are these few and far between; they also exhibit a bias towards a privileged research tradition and/or model of explanation. Thus, for example, Eliasmith suggests a sub-symbolic cognitive level below the symbolic one, and Kaplan and Bechtel think that the explanatory power of dynamical models depends on their being “mappable” onto mechanistic explanations.↩︎
This might be partially redundant with respect to the second desideratum: knowing which research tradition is right will most likely tell us which disciplines constitute cognitive science and how to allocate resources. Yet the point was worth making explicitly.↩︎
I will adopt a very unsophisticated view of intuitions: they are judgments we’re prone to make and report (if asked). Thus, for example, most westerners today share the intuition that the earth revolves around the sun, but very few westerners shared that intuition before the “scientific revolution”.↩︎
And, in fact, calls to intuition are not evenly distributed in the literature on the MOC: they are more frequently made by authors favoring “conservative”, human-centric views of cognition.↩︎
Scarantino (2012) seems to note that two similar projects are also often entangled in affective science.↩︎
However, I suspect this is due to the fact that this hypothesis has never been tested, and so data are entirely lacking.↩︎
Which (as noted at the beginning of §2) we typically identify in extension, by listing them.↩︎
Importantly, if Lyon (2019) is right, “minimal cognition”, being a stipulative term, would be in trouble too.↩︎
Importantly, however, this does not altogether exclude that we should be pluralists and let a thousand research traditions bloom (cf. Allen, 2017; Chemero, 2009). It only excludes (a) that we could “glue together” all these research traditions to obtain a single MOC and (b) that all the implicitly suggested MOCs can be accepted at the same time.↩︎
Notice that the MOC proposed by Akagi (2022) is something of this sort: it consists of a rigid structure of interconnected variables, each of which can assume a range of values. Each variable represents a locus of contention regarding the definition of cognition, and each value represents a position actively engaged in the dispute. But, as Akagi notes, such a MOC does not yield the extension of “cognition”, nor does it suggest which scientific endeavors are worth pursuing. What such a MOC does is capture in an orderly manner the extent of the disagreement concerning cognition. And that is Akagi’s primary purpose.↩︎
Note that here I have considered two disciplines that, in the current landscape of cognitive science, are often quite close and willing to cooperate with each other (Beer et al., 1997; Keijzer, 2001). Indeed, these disciplines form the backbone of strongly embodied, enactive approaches to cognition. And yet, they do not seem to agree on the MOC.↩︎
This seems to be happening right now to ecological psychology and enactivism (Baggs & Chemero, 2021; Chemero, 2009).↩︎
Yet, notably, these connections would require enactivists to abandon their anti-computational stance. As noted above, when it comes to such fusions, something must go.↩︎
It is important to notice that such an approach might underplay a host of “sociological” factors that can impact the development of the sciences for better or worse (e.g. the distribution of funds).↩︎
What is the MOC*? I am leaving this issue purposefully undetermined. For now, we can simply think of the MOC* as a variable, ready to take on a variety of values depending on the reasons motivating one’s search.↩︎
These last two worries are more evident in (Adams & Aizawa, 2001).↩︎
However, this interpretation of operationalism might not capture what psychologists do when they operationalize a term (i.e. provide a working definition; see Feest, 2005).↩︎
Here, I am using connectionism as an example. Analogous considerations hold for different research traditions.↩︎
Indeed, that “disanalogy” is precisely what distinguishes easy problems from the hard problem (Chalmers, 1997).↩︎
Not even the point about our folksy conception of cognition not specifying individually necessary conditions. For that point was intended to hold regardless of the relevant conceptual format deployed, and thus regardless of the specific architecture relying on it.↩︎
If you want to imagine it too, read Schwitzgebel (2021) and substitute every occurrence of “consciousness” with “cognition”.↩︎
Which need not be a problem for cognitive science - just like the absence of a definition of life (Cleland, 2012; Machery, 2011) is not a problem for biology.↩︎