Integrated Information Theory

Integrated Information Theory (IIT) is a highly formalized theory of consciousness primarily developed by Giulio Tononi. IIT was first proposed in 2004[1] but has undergone continuous development since. This article will cover the most recent version, titled IIT 4.0, which was published in 2023[2] and will be referred to as "the paper" throughout.

IIT has been included in the QRI lineages in honor of being "the first full-stack paradigm for formalizing consciousness". A recurring theme in QRI's appreciation of IIT is that it refers primarily to what IIT attempts to do, whereas how IIT does it is often treated as a negative example. The central instance of this is IIT's goal of constructing a mathematical object that precisely corresponds to a system's qualia, a goal that QRI has embraced fully, even though the approaches chosen by QRI and IIT in pursuit of this goal have little overlap.

Layman Explanation

You may have heard that the brain "computes with neurons". Each neuron can take on a different level of electrical charge, and they communicate with each other primarily through explicit physical connections. Thus, one can view the entire brain as a graph in which each node represents a neuron. In such a graph, the physical structure of a neuron is abstracted away; each node remembers only a single number corresponding to the charge of the neuron it represents.

Integrated Information Theory (IIT) is based on a generalization of this concept. Given any system, it identifies the relevant units and models their possible states and the interactions between them. Since the elements themselves are usually very small, IIT primarily studies groups of such elements. For such a group to be conscious, it must

  • contain information, i.e., knowing about the current state of the group must tell you something about its past and future; and
  • be integrated, i.e., all of its components must affect each other.

For groups that meet both conditions (plus some more complicated ones), IIT offers precise mathematical formulas to calculate different objects and quantities. These calculations result in something called a Φ-structure ("Big Phi"-structure), which is a highly complex object whose structure is supposed to correspond precisely to the consciousness of that group.

It's worth emphasizing that IIT applies this model to every system, including systems that aren't intelligent or even sentient. For example, an iron bar could be modeled as a graph of atoms, a company could be modeled as a graph of people, and so on. Since IIT holds that a system's degree of consciousness increases with connectivity, the theory may have an implicit bias toward ascribing consciousness to intelligent systems, but the formulas used to compute consciousness are the same in every case. Furthermore, while there is a rule called the Exclusion Postulate that rules out overlapping conscious systems (e.g., if individual ants in an anthill are conscious, then the anthill itself cannot also be conscious, and vice-versa), this rule does not prevent consciousness from occurring in non-sentient systems. (See the "Implications" section for further discussion.)

IIT has often been criticized for a lack of justification for its math. While some of the quantities have similarities to measures from standard Information Theory, the Φ-structure as a whole is an entirely novel mathematical object with no established significance. In the paper, this approach is justified primarily by reference to the properties of consciousness (see next section) and the fact that they are implemented by the math. Conversely, IIT says very little about how the Φ-structure corresponds to the consciousness of a system. If viewed through the lens of the taxonomy of consciousness problems proposed by QRI co-founder Michael Edward Johnson, IIT proposes solutions for all three problems in the Math section but doesn't address the Translation Problem.

Themes and Principles

IIT is based on a set of explicit assumptions and guidelines, and there are other principles implicit in its approach. The following will list a selection; page numbers indicate the relevant sections in the paper.

  • Existence. IIT holds that consciousness exists and is fundamental (p. 3). It thus proposes a realist view of consciousness.
  • Information. IIT asserts that consciousness is about information, and that information is about "tak[ing] and mak[ing] a difference" (p. 4).
  • Integration. IIT emphasizes the importance of binding, noting that "Experience is unitary: it is a whole, irreducible to separate experiences" (p. 3). This axiom is reflected in the math by computing the degree to which the system loses information when decomposed into subsystems.
  • Abstraction. As mentioned in the Layman section, IIT takes as the basis of its analysis a description of a system in which physical details are hidden (p. 12).
  • Intrinsicality. IIT holds that "existence should be evaluated from the intrinsic perspective of an entity—what exists for the entity itself, not from the perspective of an external observer" (p. 7). As a consequence, measures of information concern themselves with the effect of each subset of components on itself rather than on the surrounding system.
  • Complete Formalization. IIT provides the formulas to compute a mathematical object called the Φ-structure, and it claims that "an experience is identical to the Φ-structure of an intrinsic entity: every property of the experience should be accounted for by a corresponding property of the Φ-structure, with no additional ingredients" (p. 28). It thus endorses (and in fact, was the primary inspiration for) Qualia Formalism.
  • Exclusion. If several overlapping systems seem to have integrated information, then only the one with the largest amount of integrated information exists. This principle is called the "Exclusion Postulate" (pp. 18-19).

The paper makes frequent references to these and other principles throughout its presentation of the formulas. Thus, the math can be viewed as an implementation of these ideas, which is also a framing explicitly suggested in the paper (p. 8).

Technical Explanation

The following sections will attempt to explain IIT in a way that is faithful to its qualitative aspects (including the principles listed above) despite omitting a sizeable portion of the math. Rather than presenting the information in a bottom-up manner as is done in the paper, the sections will follow an irregular order that includes frequent references to future sections. This is done to give the reader an immediate understanding of what the concepts mean, without having to grok the details first.

Setting

A toy example of a system that could be analyzed by IIT.

The starting point for the analysis of IIT is a system composed of a finite number of components, each of which has a set of possible states. The state space of the entire system is given by the set of all combinations of states from its components.

The figure to the right depicts a sample system of several components, each of which has a handful of possible states symbolized by colors. (The state space of the entire system consists of all combinations of these component states, among them the one shown in the picture.) Explicit connections are not shown, but spatial distance is meant to act as a proxy, with close elements influencing each other more than distant elements.

The entire system is assumed to obey a transition probability matrix. Such a matrix determines, for any two states s and s' of the system, the probability that the system will transition from s to s' in any one step. Note that a deterministic system corresponds to a special kind of matrix in which all transition probabilities are either 0 or 1.
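To make the setting concrete, here is a minimal sketch in Python of such a state space and transition probability matrix. The three binary components, their labels, and the copy-your-neighbour transition rule are illustrative assumptions, not taken from the paper.

```python
import itertools
import numpy as np

# A hypothetical toy system: three binary components A, B, C.
components = {"A": (0, 1), "B": (0, 1), "C": (0, 1)}

# The state space of the whole system is every combination of component states.
states = list(itertools.product(*components.values()))   # 2**3 = 8 states

# A deterministic transition rule (made up for illustration):
# each component copies the previous state of its neighbour.
def step(state):
    a, b, c = state
    return (c, a, b)

# Transition probability matrix: tpm[i, j] = P(next state = j | current state = i).
# Because the rule is deterministic, every entry is either 0 or 1.
tpm = np.zeros((len(states), len(states)))
for i, s in enumerate(states):
    tpm[i, states.index(step(s))] = 1.0

assert np.allclose(tpm.sum(axis=1), 1.0)   # each row is a probability distribution
```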

From Candidate Systems to Complexes

Four possible candidate systems with their respective values of φ_s. Note that real values of φ_s are very difficult to compute even for toy systems, which is why the values shown here are simply made up and meant only for illustrative purposes.

Given a subset S of components in a system, IIT defines a quantity called its system integrated information, denoted φ_s. This quantity is a measure of the degree to which the subset has information above and beyond its parts (see the next section for details). Any subset that does have nonzero system integrated information (i.e., any S with φ_s > 0) is called a candidate system.

In general, a system can have exponentially many such candidate systems. If all of them were conscious, a large system could have an exponential number of overlapping subsystems with ontological existence. To avoid this conclusion, IIT has a rule called the Exclusion Postulate, which defines an additional criterion for candidate systems to be conscious, at which point they're called complexes. Put simply, the Exclusion Postulate states that a candidate system is a complex only if there exists no overlapping complex with higher φ_s. Thus, the candidate system with the highest φ_s is a complex, and no candidate system that overlaps it is one. Then, the candidate system with the highest φ_s among those that have no overlap with the first complex is a complex, and so on. This construction guarantees that every system decomposes into a set of non-overlapping complexes.

The figure to the right shows four candidate systems and their respective system integrated information. (Note that one of them contains more information than another, but that information is less integrated due to the weaker connections between its components, resulting in a lower φ_s.) Three of the candidate systems overlap; among these, only the one with maximal φ_s can be a complex, so the other two fail to be complexes. Conversely, the remaining candidate system, despite having the lowest system integrated information of the four, might be a complex. Whether it really is one depends on the additional candidate systems that aren't shown in the figure.
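The selection procedure described above can be sketched as a simple greedy loop over candidate systems sorted by φ_s. The data representation (frozensets of component labels paired with made-up φ_s values) is an assumption for illustration; computing real φ_s values is the hard part and is omitted here.

```python
def find_complexes(candidates):
    """Greedy application of the Exclusion Postulate as described above.

    `candidates` is a list of (subset, phi_s) pairs with phi_s > 0,
    where each subset is a frozenset of component labels.
    """
    complexes = []
    # Visit candidates in order of decreasing system integrated information.
    for subset, phi_s in sorted(candidates, key=lambda c: c[1], reverse=True):
        # A candidate is a complex only if it overlaps no complex with higher phi_s,
        # i.e., none of the complexes already accepted.
        if all(subset.isdisjoint(existing) for existing, _ in complexes):
            complexes.append((subset, phi_s))
    return complexes

# Illustrative, made-up candidates (the phi_s values are not real computations):
candidates = [
    (frozenset("ABC"), 0.9),
    (frozenset("AB"), 0.7),    # overlaps ABC -> barred from being a complex
    (frozenset("BC"), 0.5),    # overlaps ABC -> barred from being a complex
    (frozenset("DE"), 0.2),    # disjoint from ABC -> still a complex
]
print(find_complexes(candidates))   # keeps the ABC and DE candidates only
```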

Computing the Integration in Integrated Information

As a first reduction, system integrated information is defined as the minimum of the integrated cause information and the integrated effect information (i.e., φ_s = min(φ_c, φ_e)). Here, φ_c measures how much information the state of a subset contains about the system's prior state, and φ_e measures the same for its future state. This separate treatment of cause and effect is a recurring theme in IIT, and it is the reason why IIT implies that a purely feed-forward neural network exhibits no qualia at all. However, both φ_c and φ_e already measure integrated information.

Since their treatment is roughly analogous, we will consider integrated effect information φ_e only. The full formula that computes φ_e for a given subset is complex, but its key component is the term

ii_e − ii_e^θ

Here, ii_e is a measure of the effect information, which is computed with respect to a particular follow-up state s'_e (see the next section for details). Conversely, ii_e^θ computes the identical quantity except for a decomposed version of the system, as indicated by the decomposition θ. Thus, a decomposition that doesn't lose any information has ii_e^θ = ii_e, so the full term becomes 0 since ii_e − ii_e = 0.

In the calculations above, φ_e depends heavily on the choice of the decomposition θ. The criterion for choosing the decomposition that counts is complex, but the idea is to choose the one that leads to the smallest information loss in general, i.e., not relative to the system's current state. (See formula (22) in the paper for the precise formula and pp. 16-17 for the full treatment of integration.)
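A minimal sketch of the relationships described in this section, assuming the effect information of the whole and of the decomposed system are already available as a number and a callable. Taking a plain minimum over decompositions is a simplification of the paper's actual selection criterion (formula 22), which evaluates decompositions "in general" rather than for the current state.

```python
def integrated_effect_information(ii_e_whole, ii_e_decomposed, decompositions):
    """phi_e ~ ii_e - ii_e^theta, using the decomposition that loses the least
    information. `ii_e_whole` is a number; `ii_e_decomposed(theta)` returns the
    effect information of the system decomposed according to theta."""
    return min(ii_e_whole - ii_e_decomposed(theta) for theta in decompositions)

def system_integrated_information(phi_c, phi_e):
    """phi_s is the minimum of the cause and effect sides."""
    return min(phi_c, phi_e)
```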

Quantifying Information

The concepts discussed in the previous section rely on having a measure of information. This measure is operationalized as the extent to which knowing the state of the subset in the present restricts its state in the past and future. In line with the principle of intrinsicality, only the effect on itself is considered; the information that the subset's state reveals about the remaining system does not play a role in the formulas. Consequently, the formal quantities are called intrinsic cause information ii_c and intrinsic effect information ii_e. As before, they are computed separately for past and future, but we will consider only intrinsic effect information. Formula (4) in the paper describes how ii_e is computed for a current state s and a "possible effect state" s'_e:

ii_e(s, s'_e) = p_e(s'_e | s) · log₂( p_e(s'_e | s) / p_e(s'_e) )

Here, p_e(s'_e | s) is the conditional probability that s'_e results as the next state, given that the current state is s. Thus, ii_e increases as the transition probability increases. Conversely, the term p_e(s'_e) is the generic probability that the system would transition to state s'_e.

The state s'_e from the previous section is defined as the effect state that maximizes intrinsic effect information, i.e., s'_e = argmax_{s'} ii_e(s, s'). In fact, the value of ii_e itself does not occur in subsequent calculations, so it matters only as the selection criterion for s'_e.
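The following sketch implements formula (4) as paraphrased above and picks the maximizing effect state, restricted for simplicity to full system states (ignoring the possibility, discussed below, that effect states specify only a subset of components). The encoding of states as indices and the names of the inputs are assumptions.

```python
import numpy as np

def intrinsic_effect_information(p_next, p_generic):
    """ii_e(s, s') = p_e(s'|s) * log2(p_e(s'|s) / p_e(s')) for every candidate
    effect state s', plus the maximizing state s'_e.

    p_next[j]    -- probability of transitioning from the current state s to state j
    p_generic[j] -- 'generic' probability of the system transitioning to state j
    """
    with np.errstate(divide="ignore", invalid="ignore"):
        ii = np.where(p_next > 0, p_next * np.log2(p_next / p_generic), 0.0)
    s_e = int(np.argmax(ii))          # s'_e: the effect state that maximizes ii_e
    return s_e, float(ii[s_e])
```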

The possible effect states s'_e are allowed to specify only a subset of the components in S. This fact introduces a tradeoff between the factor p_e(s'_e | s), which decreases exponentially as s'_e specifies a larger number of components, and the factor log₂(p_e(s'_e | s) / p_e(s'_e)), which increases additively as s'_e specifies a larger number of components. This tradeoff makes it so that the effect state that maximizes ii_e is likely to be a compromise between specifying only one component and specifying all of them.

It's worth noting that the math in this particular section implements several choices that may be surprising to readers. For example, intrinsic information depends only on the transition probability to the most likely state, which means that it doesn't matter how far the probability is spread out across the remaining states. E.g., if two states s₁ and s₂ have the same transition probability to their respective most likely successor states, but s₁ puts probability 0.4 on its second most likely successor vs. 0.1 for s₂, this fact wouldn't have any impact on the ii_e of s₁ and s₂. It also does not matter whether the most likely successor state is the one that is, in fact, reached next in the physical system.
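Continuing the sketch above, the following toy numbers (assumed, not taken from the paper) illustrate this point: two current states that agree on their most likely successor but spread the remaining probability differently receive the same ii_e.

```python
p_generic = np.full(4, 0.25)                   # assumed uniform generic distribution
p_from_s1 = np.array([0.5, 0.4, 0.1, 0.0])     # spread concentrated on one runner-up
p_from_s2 = np.array([0.5, 0.1, 0.2, 0.2])     # spread more evenly

print(intrinsic_effect_information(p_from_s1, p_generic))   # (0, 0.5)
print(intrinsic_effect_information(p_from_s2, p_generic))   # (0, 0.5) -- identical
```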

Unfolding the Causal Structure of Complexes

While complexes are the entities that exhibit consciousness according to IIT, there is a great deal more math involved in computing the mathematical object that precisely corresponds to the qualia of such complexes (pp. 19-28 in the paper). This object is called a cause-effect structure, or Φ-structure ("Big Phi"-structure), and is considerably more complex than a scalar. Since the computation of Φ-structures is less integral to an understanding of IIT than the math covered so far, only a highly abbreviated description will be presented here.

Given a complex, the subsequent computations resemble those already performed to identify it as a complex. As before, the formulas relate to the causal effects of the complex. To do this, one considers mechanisms (subsets of the complex) and their cause and effect purviews, and the degree to which the state of a mechanism constrains the states of its purviews in the past and future, respectively. Once again, a mechanism ultimately selects specific cause and effect purviews, and once again, one can quantify the irreducibility of mechanisms by considering disintegrating partitions, analogous to the decompositions discussed earlier. Next, one quantifies the overlap between different mechanisms and their corresponding purviews through concepts called relations and face purviews. Finally, the Φ-structure is defined as the set of all such relations (subject to certain conditions) combined with the mechanisms and their respective purviews.

As before, these formulas concern themselves only with the effects that the complex has on itself rather than on its environment. This principle is stated explicitly in the following quote (p. 28), which also makes explicit that IIT's conception of consciousness is frame-invariant:

From the intrinsic perspective, what truly exists is a complex with all its causal powers unfolded—an intrinsic entity that exists for itself, absolutely, rather than relative to an external observer.

Summary

In summary, computing the complexes of a system and their corresponding Φ-structures requires the following steps (a schematic sketch in code follows the list):

  • For each subset S, compute the states s'_c and s'_e that maximize intrinsic cause and effect information ii_c and ii_e, respectively.
  • Compute the degree of integration by considering all possible decompositions θ of S and computing the loss of information for each. The decomposition for which information loss is minimal in general determines the values of integrated cause information φ_c and integrated effect information φ_e.
  • Compute the system integrated information as the minimum of cause and effect integrated information, φ_s = min(φ_c, φ_e). Each subset with φ_s > 0 is called a candidate system.
  • Given a list of all candidate systems and their values of φ_s, identify those candidate systems as complexes for which there exists no overlapping complex with higher system integrated information. (There might exist an overlapping candidate system with higher φ_s if that candidate system is itself barred from being a complex.)
  • For all complexes, unfold the cause-effect structure through a series of steps that roughly resemble the ones above. The union of all the relevant objects computed in this step becomes the Φ-structure.
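These steps can be strung together as a schematic pipeline. The callables passed in stand in for the paper's actual formulas and for the Φ-structure construction, and `find_complexes` refers to the Exclusion Postulate sketch given earlier; none of this is a real IIT API.

```python
from itertools import combinations

def unfold_system(components, phi_c, phi_e, build_phi_structure):
    """Schematic orchestration of the steps above.

    phi_c(subset), phi_e(subset)  -- integrated cause/effect information (stand-ins)
    build_phi_structure(subset)   -- unfolds the cause-effect structure (stand-in)
    """
    # Steps 1-3: every subset with phi_s > 0 is a candidate system.
    candidates = []
    for r in range(1, len(components) + 1):
        for subset in map(frozenset, combinations(components, r)):
            phi_s = min(phi_c(subset), phi_e(subset))
            if phi_s > 0:
                candidates.append((subset, phi_s))

    # Step 4: apply the Exclusion Postulate (see the earlier sketch).
    complexes = find_complexes(candidates)

    # Step 5: unfold the Phi-structure of each complex.
    return [(subset, phi_s, build_phi_structure(subset)) for subset, phi_s in complexes]
```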

Implications of IIT

Due to its high degree of formalization, IIT demonstrates many of the implications of taking functionalism to its logical conclusion. These implications generally do not become apparent in less formalized proposals. Four of them are discussed in the following:

  1. IIT solves the frame-dependence problem of functionalism by asserting that its formulas apply on every spatial scale. A result of this solution is that consciousness appears everywhere, including in inanimate objects like rocks.
  2. IIT's Exclusion Postulate implies that entities can pop in and out of existence based on potentially small changes in the underlying substrate. A country or company could, in principle, attain consciousness, at which point the Exclusion Postulate holds that the people comprising it cease to be conscious themselves. In practice, this particular example might not be possible since it would require the integration between humans to exceed that of neurons within a brain, but analogous effects occur at smaller scales whenever the level of connectedness between sub-components of a larger system is increased. Formally, if systems A and B are initially such that φ_s(A), φ_s(B) > φ_s(A ∪ B), and one gradually adds connections between A and B, there will come a point when φ_s(A ∪ B) exceeds φ_s(A) and φ_s(B), leading to a sudden discontinuity in the exhibited qualia wherein A and B cease to be conscious and A ∪ B becomes conscious instead.
  3. Due to its inclusive formalism, IIT postulates no inherent link between consciousness and intelligence.
  4. According to Tononi himself, IIT ascribes very little consciousness to digital computers.[3] Tononi and Koch justify this claim as follows:

[...] Of course, the physical computer that is running the simulation is just as real as the brain. However, according to the principles of IIT, one should analyse its real physical components—identify elements, say transistors, define their cause–effect repertoires, find concepts, complexes and determine the spatio-temporal scale at which Φ reaches a maximum. In that case, we suspect that the computer would likely not form a large complex of high Φ^max, but break down into many mini-complexes of low Φ^max. This is due to the small fan-in and fan-out of digital circuitry (figure 5c), which is likely to yield maximum cause–effect power at the fast temporal scale of the computer clock.

From the perspective of QRI, the second implication is the most problematic one as it violates Dual-Aspect Monism. Conversely, the third and fourth implications would likely be considered problematic by other functionalists. QRI co-founder Michael Edward Johnson has written the following in Principia Qualia (written before the release of IIT 4.0):[4]

IIT is an odd hybrid which sits near the half-way mark between physicalism and computationalism: computationalists hold their nose at it since they see it as too physicalist & realist about consciousness, whereas physicalists also hold their nose as they see it as too computationalist. However, it is a Schelling Point for discussion as the most mature theory of consciousness we have, and I believe it suffers from the same core flaws as any computational theory of consciousness would, so we use its example to critique computationalism by proxy.

Indeed, the classification of IIT as functionalist is nontrivial, and Tononi himself may not view it that way. However, it is functionalist according to the precise definition used in this wiki since its formalism is, in general, based on an abstracted model of physics.

A complete assessment of the extent to which these implications are due to IIT in particular vs. functionalism in general is outside the scope of this article. However, it is worth noting that at least the Exclusion Postulate is difficult to replace without accepting an exponential number of overlapping entities. The Exclusion Postulate also has not changed since IIT 3.0.

Finally, it should be noted that the basic approach of modeling the brain as a set of communicating neurons can be called into question. In particular, it is disputed by the Electromagnetic Hypothesis, which states that the substrate of consciousness is a part of the electromagnetic field.

References

  1. Tononi, G. (2004). An information integration theory of consciousness. BMC Neuroscience, 5, 42. https://doi.org/10.1186/1471-2202-5-42
  2. Albantakis, L., Barbosa, L., Findlay, G., Grasso, M., Haun, A. M., Marshall, W., Mayner, W. G. P., Zaeemzadeh, A., Boly, M., Juel, B. E., Sasai, S., Fujii, K., David, I., Hendren, J., Lang, J. P., & Tononi, G. (2023). Integrated information theory (IIT) 4.0: Formulating the properties of phenomenal existence in physical terms. PLOS Computational Biology, 19(10), e1011465. https://doi.org/10.1371/journal.pcbi.1011465
  3. Tononi, G., & Koch, C. (2015). Consciousness: here, there and everywhere? Philosophical Transactions of the Royal Society B: Biological Sciences, 370(1668). https://doi.org/10.1098/rstb.2014.0167
  4. Johnson, M. E. (2016). Principia Qualia. Retrieved from https://opentheory.net/2016/11/principia-qualia, p.62.