Functionalism
Functionalism is the claim that abstract descriptions, such as the algorithmic description in Marr's Levels of Analysis, fully determine the consciousness of a system. Functionalism is closely related to the claim that consciousness is substrate-independent.
QRI rejects functionalism. However, functionalism is a plurality view among philosophers[1] and likely more popular still among neuroscientists and AI researchers, although no analogous survey has been conducted in those fields.
Despite being commonly treated as a singular position, functionalism is compatible with both Illusionism and Consciousness Realism. The illusionist position typically treats the term "consciousness" as synonymous with the observable computational process in the brain (which differs from the definition used in this wiki). If one takes this view, functionalism becomes the claim that a computational process is entirely defined by its abstract description, which can be considered vacuously true or, at any rate, a matter of definition rather than fact. Thus, an illusionist understanding of functionalism is little different from regular illusionism. Conversely, a realist would typically consider consciousness conceptually distinct from the underlying biological process. In this case, functionalism becomes an additional, nontrivial postulate.
Alternative Definitions
The Stanford Encyclopedia of Philosophy defines functionalism as "[...] the doctrine that what makes something a thought, desire, pain (or any other type of mental state) depends not on its internal constitution, but solely on its function, or the role it plays, in the cognitive system of which it is a part". While this definition is more commonly used than the one above, it has severe problems:
- It relies on the identification of mental objects. Since the definition is applied to parts of a system rather than the system as a whole, one must know what those parts are to use it. Conversely, if the definition were applied to an entire system, it would imply that the system's inner workings are irrelevant, yielding an analysis on Marr's input/output level corresponding to a literal form of logical behaviorism that most functionalists would reject.
- It is ambiguous with respect to the magnitude of difference that matters. Even if one has identified a relevant part of a system, it is unlikely that changing the internal constitution of the part will have zero physical effect on the remaining system. Conversely, it is plausible that such a change has no effect on an abstract description. E.g., replacing a transistor in a CPU with a functionally equivalent (but physically different) model will have a (small) effect on the physical level but no effect on the logical values it represents, i.e., on the abstract description modeling bits (see the sketch below).
Furthermore, if one has identified such objects and a threshold for change, this would likely yield an abstract description that captures the function but not the internal structure of these objects. Thus, the common definition may be considered a special case of the definition in terms of abstractions that is preferred in this wiki. That said, defining functionalism in terms of abstractions has its own problems. As of today, QRI is not aware of a definition that is both fully rigorous and generally applicable.
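To make the transistor example in the second point concrete, here is a minimal Python sketch (the adder implementations are our own illustrative inventions): two adders whose internal constitutions differ completely, yet whose abstract, bit-level descriptions are identical. Swapping one for the other is a zero-magnitude change under the abstract description, even though it is a large change at the implementation level.

```python
def add_builtin(a: int, b: int) -> int:
    """Adder 1: delegates to the host CPU's native addition."""
    return a + b


def add_ripple_carry(a: int, b: int) -> int:
    """Adder 2: emulates a ripple-carry circuit bit by bit.

    Internally this is a very different process (loops, masks, explicit
    carry propagation), yet its abstract description (the mapping from
    input bits to output bits) is identical to add_builtin's.
    """
    result, carry, bit = 0, 0, 0
    while a or b or carry:
        x, y = a & 1, b & 1
        result |= (x ^ y ^ carry) << bit              # sum bit
        carry = (x & y) | (x & carry) | (y & carry)   # carry-out
        a, b, bit = a >> 1, b >> 1, bit + 1
    return result


# Under the abstract (bit-level) description, swapping one adder for the
# other changes nothing, even though the implementation-level difference
# is large:
assert all(add_builtin(a, b) == add_ripple_carry(a, b)
           for a in range(64) for b in range(64))
```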
The Positive Case
The arguments in favor of functionalism are too numerous to list here, but many of them can be viewed as variations on the same idea: Dual-Aspect Monism seems suggestive of functionalism due to their shared emphasis on causality. Simply put, if consciousness has causal effect but is also an aspect of physical structures, then it stands to reason that consciousness is defined in terms of its causal effects. Conversely, if a change to the internal structure of a system has a small effect on its behavior but a strong effect on its consciousness, this calls into question whether consciousness had causal efficacy in the first place.
The following sections present two arguments that elaborate on this intuition. Even though the first is logically flawed, it is included here due to its frequent usage.
The Simulated Brain Argument
The first argument purports to show that dual-aspect monism (A), physicalism (B), and the claim that the laws of physics are at least approximately computable (C) together imply substrate-independence and functionalism. (It's worth noting that both (A) and (B) are endorsed by QRI and are frequently assumed throughout this wiki.) It can be stated in three steps as follows (where qualifiers like "almost" and "approximately" are omitted for simplicity):
1. Due to (B) and (C), a human brain can be perfectly simulated on a computer.
2. Since the simulation is perfect, it is functionally identical to the original brain.
3. Due to (A) and #2, the simulation has identical consciousness to the original brain.
Since the same argument can be applied to any substrate capable of running simulations, #3 may be considered proof of substrate-independence (since the change to any such substrate has no effect) and functionalism (since corresponding changes to the implementation-level description have no effect).
The primary flaw of the argument is the sleight of hand in step #2, i.e., the assertion that a brain and its simulation are "functionally identical". This claim is either unproven or ill-defined (since it is unclear whether the computations of a brain are amenable to an abstracted description at all). As a simple example, compare a physical soap bubble to a mathematical simulation of a soap bubble running on a digital computer. In the physical bubble, the spherical shape results as the equilibrium state of numerous forces acting on every particle on the bubble's surface. While the results of these forces are computed in the simulation, the forces themselves are absent, so on a physical level, the causal profiles of the bubble and the simulation are vastly different. To the extent that analogous processes exist in the human brain, the same difference will hold between human brains and their simulations.
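As a toy version of this comparison, the sketch below (a deliberately crude 2D model; all constants and update rules are invented for illustration) computes the equilibrium shape of a "bubble" by iterating arithmetic update rules. The program arrives at the circular equilibrium, but nothing in the hardware executing it exerts surface tension or pressure; only the results of those forces are computed.

```python
import math

# Toy 2D "soap bubble": N particles on a closed loop, pulled together by
# a surface-tension-like term (each point moves toward its neighbors'
# midpoint) and pushed apart by a fixed internal pressure. All constants
# are invented for illustration.
N, TENSION, PRESSURE, DT, STEPS = 32, 1.0, 0.02, 0.05, 10_000

# Start from a deliberately non-circular (elliptical) shape.
pts = [(2.0 * math.cos(2 * math.pi * i / N),
        0.5 * math.sin(2 * math.pi * i / N)) for i in range(N)]

for _ in range(STEPS):
    new_pts = []
    for i, (x, y) in enumerate(pts):
        (xl, yl), (xr, yr) = pts[i - 1], pts[(i + 1) % N]
        # "Surface tension": pull toward the neighbors' midpoint.
        fx = TENSION * ((xl + xr) / 2 - x)
        fy = TENSION * ((yl + yr) / 2 - y)
        # "Pressure": push outward along the radial direction.
        r = math.hypot(x, y) or 1e-9
        fx += PRESSURE * x / r
        fy += PRESSURE * y / r
        new_pts.append((x + DT * fx, y + DT * fy))
    pts = new_pts

# The computed equilibrium is (approximately) a circle: the *result* of
# the force balance is obtained, but no physical surface tension exists
# anywhere in the hardware running this loop.
radii = [math.hypot(x, y) for x, y in pts]
print(f"radius spread at equilibrium: {max(radii) - min(radii):.4f}")
```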
It's also worth noting that a non-conscious simulation of a brain would not be a philosophical zombie since a philosophical zombie is commonly defined as a system physically identical to a human. Thus, philosophical zombies require an equivalence on the implementation level, whereas a simulation merely provides equivalence on the input/output level.
The Neuron Swapping Argument
A variant of the simulated brain argument has been proposed by Eliezer Yudkowsky as part of his essays on rationality. Rather than simulating a brain, Yudkowsky suggested swapping out the neurons of a biological brain (the quote is spoken by a fictional character in a Socratic dialogue):
Albert: "Suppose I replaced all the neurons in your head with tiny robotic artificial neurons that had the same connections, the same local input-output behavior, and analogous internal state and learning rules."
Formally, let (C') be the assumption (which is also mentioned in the essay) that the brain's physical causality is approximately captured by the sum of the local behaviors of its neurons. Now, the quote suggests a variation to the simulated brain argument that's based on (C') rather than (C):
1. Due to (B) and (C'), a brain with swapped neurons behaves identically to the original brain (there's no physical difference due to (C'), and no non-physical component due to (B)).
2. Since only the internal structure of neurons has been changed, the modified brain is identical under the abstract description that treats neurons as computational units and models their interactions (essentially the human connectome).
3. Due to (A) and #2, the modified brain has identical consciousness to the original brain (this step is the same as before).
Since the modified argument no longer has the flaw discussed in the previous section, it becomes logically valid. In particular, it relies on gradual changes to an existing system that leave an explicitly identified abstract description unchanged, rather than on constructing a new system with a potentially vastly different causal profile.
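A minimal sketch of the invariance asserted in step #2 (the classes and the threshold rule are invented for illustration and are not a model of real neurons): two neuron implementations with the same local input-output behavior yield identical behavior under the connectome-level description, which records only nodes, connections, and local rules. QRI's objection, discussed next, is precisely that real neurons may have non-local physical effects that no such local rule captures.

```python
import itertools

class BioNeuron:
    """'Biological' internals: analog summation against a threshold."""
    def __init__(self, threshold):
        self.threshold = threshold
    def fire(self, inputs):
        return int(sum(inputs) >= self.threshold)

class SiliconNeuron:
    """'Silicon' internals: an explicit counting loop. Different internal
    constitution, identical local input-output rule."""
    def __init__(self, threshold):
        self.threshold = threshold
    def fire(self, inputs):
        count = 0
        for signal in inputs:
            if signal:
                count += 1
        return int(count >= self.threshold)

# Connectome-level description: nodes, connections, and local thresholds.
CONNECTOME = {"h1": ["x1", "x2"], "h2": ["x2", "x3"], "out": ["h1", "h2"]}
THRESHOLDS = {"h1": 2, "h2": 1, "out": 1}

def run(neuron_cls, x1, x2, x3):
    state = {"x1": x1, "x2": x2, "x3": x3}
    for name in ("h1", "h2", "out"):        # feedforward evaluation order
        inputs = [state[src] for src in CONNECTOME[name]]
        state[name] = neuron_cls(THRESHOLDS[name]).fire(inputs)
    return state["out"]

# Swapping every node's internal constitution leaves behavior under the
# connectome-level description unchanged:
for bits in itertools.product([0, 1], repeat=3):
    assert run(BioNeuron, *bits) == run(SiliconNeuron, *bits)
```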
QRI's response to the neuron swapping argument is to dispute (C'). Thus, QRI predicts that replacing neurons with silicon counterparts that imitate local behavior will change the person's consciousness due to the change in the neurons' non-local causality. In particular, if the EM hypothesis is true, such a transformation would greatly affect the brain's electromagnetic field, thus degrading or erasing the conscious part of the brain, even while the unconscious part remains unaffected. The likely result of the experiment would therefore be that the human dies in both the behavioral and the ontological sense, which preserves the continuity between physical effects and effects on consciousness. Alternatively, if the new neurons preserve non-local effects as well, the argument goes through but doesn't imply functionalism, since the substrate of consciousness has not been affected.
The Negative Case
As with the positive case, the arguments against realist functionalism are too numerous to list here. Furthermore, both of QRI's founders have written blog posts about the topic (see the first two links in the Resources section), so the following will only hint at the arguments that can be made.
1. The emphasis on causality is not unique to functionalism. Functionalism's emphasis on causality (i.e., on the "function" or "behavior" of objects) is often considered a strength of the principle. This consideration is legitimate for many alternatives functionalism competes with, especially those that appeal to the intrinsic nature of objects in a way that isn't reducible to causality. (Epiphenomenalism is an example.) In such cases, framing the debate as one about causality can be appropriate.

   However, any proposal that is genuinely consistent with dual-aspect monism must deeply care about causality. The response of such a proposal to an apparent causal contradiction is not to deny or downplay causal effects, but to make an argument within the constraints imposed by causality. (QRI's response to the neuron swapping argument is an example.) Thus, when such a theory is compared to functionalism, the emphasis on causality cannot be a deciding factor since both proposals value it equally.

   Instead, the difference is the spatial scale at which causality is examined. Functionalism tends to "round" causal effects at small spatial scales to discrete categories, e.g., when grouping voltage levels of a processor into logical 1s and 0s (see the first sketch after this list). Conversely, if no such rounding is permitted and causality has to be analyzed on the smallest scales, the claim that an object's contribution to consciousness is determined by its function becomes logically equivalent to the claim that it's determined by its physical structure since even adding a single electron changes an object's function. This is the most fundamental reason for the definition of functionalism given in this article.

2. Functions are not ontological primitives. If consciousness is to be fundamental (as implied by a realist view), it is a priori suspect to base it on high-level concepts like algorithms, functions, or computations, rather than on primitive concepts like the fields in quantum field theory. Rather than an independent argument, this point can also be viewed as the principle underlying the subsequent arguments in this list.

3. Computation seems frame-dependent. If consciousness is to be frame-invariant, its physical substrate likely must be frame-invariant as well. However, computation (especially standard computation) appears to be frame-dependent since it relies on manipulating abstract codes (i.e., bit patterns) that require interpretation (see the second sketch after this list). Conversely, a frame-invariant view of computation (as is taken by Integrated Information Theory (IIT)) will likely have to identify consciousness everywhere (which is what IIT does).

4. Counterfactuals. Any theory of consciousness must hold either that counterfactual behavior (i.e., what a system would do in response to inputs that are not present) matters or that it does not. Either option leads to a (largely distinct) set of problems. If counterfactuals don't matter, a frame-invariant view of computation becomes even harder to achieve since a single computation generally provides insufficient information to determine its meaning (see the third sketch after this list). On the other hand, if counterfactuals do matter (as is the case under IIT), it becomes more difficult to imagine a close coupling between the laws of consciousness, which take counterfactuals into account, and the laws of physics, which do not. However, such a coupling is mandated by dual-aspect monism.

5. Boundaries. One of the problems a theory of consciousness has to solve is drawing a boundary around the parts of a substrate that constitute a unified system. This is a difficult and perhaps impossible task in a functionalist ontology since there are no natural boundaries in abstract descriptions of arbitrary substrates.

   While accepting vs. rejecting functionalism is usually viewed as a choice about how to interpret conscious systems, it can also affect which systems are conscious at all. Functionalism dictates that every substrate supports consciousness, whereas a non-functionalist theory could hold that some or most substrates cannot support consciousness at all. Consequently, the boundary problem may be easier for non-functionalist approaches.
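A minimal sketch of the "rounding" described in the first point (the threshold and voltages are invented for illustration): physically distinct states collapse into the same abstract state, and removing the rounding collapses "same function" into "same physical microstate".

```python
def to_bit(voltage: float, threshold: float = 1.5) -> int:
    """Round a continuous voltage to a discrete logical level."""
    return 1 if voltage >= threshold else 0

# Physically distinct states collapse into the same abstract state:
assert to_bit(2.9) == to_bit(3.3) == 1    # two voltages, one logical "1"
assert to_bit(0.0) == to_bit(0.4) == 0    # two voltages, one logical "0"

# Without the rounding step, 2.9 V and 3.3 V count as different states,
# so "determined by function" collapses into "determined by exact
# physical structure": any physical change changes the function.
```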
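The second sketch illustrates the interpretation-dependence mentioned in the third point: one and the same bit pattern reads as three different values under three different, equally valid decodings. Which computation the pattern "is" depends on the frame of interpretation.

```python
import struct

# One and the same 32-bit pattern under three interpretations (the
# pattern is arbitrary; chosen only for illustration):
raw = struct.pack("<I", 0x42C80000)

as_unsigned = struct.unpack("<I", raw)[0]   # 1120403456
as_float    = struct.unpack("<f", raw)[0]   # 100.0 (IEEE 754)
as_bytes    = list(raw)                     # [0, 0, 200, 66]

# The physical state (the bits) underdetermines the computation: which
# value the pattern "is" depends on the frame of interpretation.
print(as_unsigned, as_float, as_bytes)
```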
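The third sketch makes the counterfactual point concrete (the two functions are invented for illustration): on the one input actually presented, the two systems perform indistinguishable computations; only inputs that were never presented tell them apart.

```python
def and_gate(a: int, b: int) -> int:
    return a & b

def stuck_at_zero(a: int, b: int) -> int:
    return 0                 # ignores its inputs entirely

# On the input actually presented, the two runs are indistinguishable:
assert and_gate(0, 1) == stuck_at_zero(0, 1) == 0

# Only counterfactual inputs (ones that were never presented)
# distinguish the AND computation from the constant function:
assert and_gate(1, 1) != stuck_at_zero(1, 1)
```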
It's worth pointing out that none of points #2-#5 apply to an illusionist understanding of functionalism, which claims neither that consciousness is fundamental nor that it is frame-invariant, and hence doesn't require sharp boundaries.
Resources
- Against functionalism: why I think the Foundational Research Institute should rethink its approach – a blog post by QRI co-founder Michael Edward Johnson.
- Digital Computers Will Remain Unconscious Until They Recruit Physical Fields for Holistic Computing Using Well-Defined Topological Boundaries – a blog post by QRI president and fellow co-founder Andrés Gómez Emilsson.
- Digital Sentience: Can Digital Computers Ever "Wake Up"? – a video from Andrés on his YouTube channel.
- Taking Monism Seriously – another blog post by Michael on dual-aspect monism, which underlies much of the discussion in this article.
References
- ↑ Bourget, D., & Chalmers, D. J. (2023). Philosophers on philosophy: The 2020 PhilPapers Survey. Philosophers' Imprint, 23(1).