Wolfram cellular automaton #110 and its asymmetry graph lead to complexity. Complexity is a delicate balance between order and disorder. Starting from random inputs, cellular automaton #110 quickly organizes itself into a complex pattern, resulting in a decrease in entropy.
Subjective consciousness may be related to an analogy between corticothalamic information flow in the brain and information flow in the asymmetry graph of Wolfram cellular automaton #110. Both result in a decrease in entropy and in emergent complexity. This model entails dynamic ultra-small-world network connectivity and is consistent with Tononi's Integrated Information Theory of Consciousness.
The Cellular Automaton Interpretation of Consciousness
Asymmetry graphs (A-graphs) (1, 2, 3) can be used to classify and illustrate information flow at all scales in Wolfram cellular automata (4). Rule #110 (class 4) produces complexity. Rule #30 (class 3) produces randomness (5). Figure 1 illustrates the hill-valley-hill configuration of the A-graph of rule #110 and the single-hill A-graph of rule #30. The purpose of this paper is to show that information flow in the corticothalamic system is analogous to the information flow in the asymmetry graph of rule #110.
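For concreteness, the following minimal sketch (in Python; the row width, step count, and random seed are arbitrary illustrative choices) evolves rules #110 and #30 from random initial rows using Wolfram's standard rule encoding. The A-graph construction itself is described in references (1, 2, 3) and is not reproduced here.

```python
# Minimal illustrative sketch: evolve elementary cellular automata #110 and #30
# from random initial rows and print the space-time patterns as text.
# The rule numbers use Wolfram's standard encoding; the row width, step count,
# and random seed below are arbitrary choices made only for illustration.
import random

def step(row, rule):
    """Apply one update of an elementary CA rule with periodic boundaries."""
    n = len(row)
    new_row = []
    for i in range(n):
        left, center, right = row[(i - 1) % n], row[i], row[(i + 1) % n]
        neighborhood = (left << 2) | (center << 1) | right   # value 0..7
        new_row.append((rule >> neighborhood) & 1)            # look up rule bit
    return new_row

def run(rule, width=64, steps=32, seed=0):
    random.seed(seed)
    row = [random.randint(0, 1) for _ in range(width)]        # random input
    history = [row]
    for _ in range(steps):
        row = step(row, rule)
        history.append(row)
    return history

for rule in (110, 30):
    print(f"--- rule {rule} ---")
    for row in run(rule):
        print("".join("#" if cell else "." for cell in row))
```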
Figure 1 [illustration not included in this excerpt]
Information flow beneath the single 'hill' of the A-graph of rule #30 is uncontrolled: information at all distances flows together into a jumble, producing intrinsic randomness. Beneath the first hill (H1) of rule #110, information at shorter distances likewise flows together into a jumble, but the valley (V) between the two hills indicates reduced information flow at intermediate distances, forming a 'membrane,' or partial barrier, around the structures formed at shorter distances beneath the first hill. Because information is prevented from flowing outward from these structures, they are preserved and stand out against the background.
The second hill (H2) of the A-graph of rule #110 allows information flow at greater distances among these preserved structures, so that one sees random collisions of these structures. Thus, complexity arises from random collisions, under the second hill, among random structures formed under the first hill, these structures having been maintained by the valley of low information flow between the hills. Note in Figure 1 that if 'V' moves upward, the restriction of information flow at intermediate distances is gradually abolished, and the A-graph begins to resemble that of rule #30, a rule producing randomness in which information flow is similar at all scales.
Complexity can be viewed as macroscopically partitioned randomness: the partitioning is what allows our perception and analysis to recognize what we call complexity. In a general sense, complexity is a form of randomness. Complexity is an emergent phenomenon in systems that have short- and long-range interconnections but relatively few medium-range interconnections.
It is important to note that, given random initial values, cellular automaton #110 quickly organizes itself into a complex pattern, i.e., its entropy is decreased. Complexity represents a decrease in entropy.
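One simple way to examine this claim is to estimate the Shannon entropy of short blocks in each row as the automaton evolves. The sketch below uses length-4 blocks; the block length and the estimator are assumptions made for illustration, since the paper does not specify an entropy measure. Typically the block entropy of rule #110 falls as its regular background pattern emerges, while that of rule #30 stays near its maximum.

```python
# Minimal illustrative sketch: track the Shannon entropy of length-4 blocks in
# each row over time for rules #110 and #30, starting from random inputs.
# The block length, row width, step count, and seed are assumptions made for
# illustration; the paper does not specify a particular entropy estimator.
import math
import random
from collections import Counter

def step(row, rule):
    """One update of an elementary CA rule with periodic boundaries."""
    n = len(row)
    return [(rule >> ((row[(i - 1) % n] << 2) | (row[i] << 1) | row[(i + 1) % n])) & 1
            for i in range(n)]

def block_entropy(row, k=4):
    """Shannon entropy (in bits) of the distribution of length-k blocks (cyclic)."""
    n = len(row)
    counts = Counter(tuple(row[(i + j) % n] for j in range(k)) for i in range(n))
    return -sum((c / n) * math.log2(c / n) for c in counts.values())

random.seed(1)
width, steps = 400, 200
for rule in (110, 30):
    row = [random.randint(0, 1) for _ in range(width)]
    for t in range(steps + 1):
        if t % 50 == 0:
            print(f"rule {rule:3d}  t = {t:3d}  block entropy ~ {block_entropy(row):.3f} bits")
        row = step(row, rule)
```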
Tononi’s integrated information theory of consciousness (6) proposes that subjective consciousness can be measured by Φ (Phi), the quantity of information integration. But, how does the brain integrate information, and how does information integration explain subjective consciousness—the ‘mind-body’ problem?
The key idea in the 'mind-body' problem is that subjective phenomena are essentially different from objective processes. David Chalmers (7) believes there is an intrinsic 'explanatory gap' between the objective computations of the brain and subjective consciousness. John Searle (8) maintains (the Chinese Room argument) that one cannot derive a subjective ontology from an epistemically objective process; that is, semantics cannot be obtained from the syntax of a computer program. At best, a computer is a zombie, devoid of any internal subjective experience even though it might pass the Turing test.
Corticothalamic information flow is thought to be related to conscious states (9, 10, 11). Figure 2 illustrates the idea that corticothalamic information flow may be analogous (isomorphic) to the information flow seen in the A-graph of rule #110. If this is the case, then both systems produce emergent complexity by similar means. In graph theory, an isomorphism of graphs G and H is a bijection between their vertex sets, f : V(G) → V(H), such that any two vertices u and v of G are adjacent in G if and only if f(u) and f(v) are adjacent in H. Such a bijection is commonly described as an 'edge-preserving bijection,' in accordance with the general notion of an isomorphism as a structure-preserving bijection.
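To make this definition concrete, the sketch below searches by brute force for an edge-preserving bijection between two small example graphs (the graphs are arbitrary; practical libraries such as networkx provide far more efficient tests, e.g., networkx.is_isomorphic):

```python
# Minimal sketch of the definition above: a brute-force search for an
# edge-preserving bijection f: V(G) -> V(H).  The two four-vertex example
# graphs are arbitrary illustrative choices.
from itertools import permutations

def find_isomorphism(vertices_g, edges_g, vertices_h, edges_h):
    """Return a bijection f: V(G) -> V(H) as a dict if one preserves adjacency, else None."""
    if len(vertices_g) != len(vertices_h) or len(edges_g) != len(edges_h):
        return None
    eh = {frozenset(e) for e in edges_h}                     # H's edges, order-free
    for perm in permutations(vertices_h):
        f = dict(zip(vertices_g, perm))                      # candidate bijection
        if {frozenset((f[u], f[v])) for (u, v) in edges_g} == eh:
            return f                                         # u~v in G iff f(u)~f(v) in H
    return None

# Two 4-cycles with different vertex labels: isomorphic.
G = ([1, 2, 3, 4], [(1, 2), (2, 3), (3, 4), (4, 1)])
H = (["a", "b", "c", "d"], [("a", "c"), ("c", "b"), ("b", "d"), ("d", "a")])
print(find_isomorphism(*G, *H))
```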
As shown in Figure 1, the A-graph of cellular automaton #110 is a model of a dynamical system which produces complexity through random collisions in H2 of structures formed in H1, these structures maintained by restriction of information flow at intermediate distances (V).
Analogously (Figure 2), computational complexity in the brain may arise through random collisions in the thalamus (H2) of computational structures, i.e., neuronal assemblies, formed in the cortex (H1); these cortical structures are limited in size and maintained by a restriction of information flow at intermediate distances imposed by inhibitory synapses in the cortex (V). This suggests a dynamical small-world computational system.
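The paper does not specify a particular network model, but the standard Watts-Strogatz construction illustrates the small-world property being invoked: a ring lattice of short-range links, with a small fraction rewired into long-range links, keeps clustering high while its average path length collapses. A minimal sketch using the networkx library (all parameters are illustrative assumptions):

```python
# Illustrative sketch of the small-world property, using the standard
# Watts-Strogatz construction (an assumption; the paper does not specify a
# network model).  A ring lattice has only short-range links; rewiring a small
# fraction into long-range links keeps clustering high while sharply reducing
# the average path length.
import networkx as nx

n, k = 1000, 10
lattice = nx.watts_strogatz_graph(n, k, p=0.0, seed=0)                  # short-range links only
small_world = nx.connected_watts_strogatz_graph(n, k, p=0.05, seed=0)   # a few long-range links

for name, g in [("ring lattice", lattice), ("small-world", small_world)]:
    print(f"{name:12s}  clustering = {nx.average_clustering(g):.3f}  "
          f"average shortest path = {nx.average_shortest_path_length(g):.2f}")
```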
Figure 2 [illustration not included in this excerpt]
In Figure 2, neuronal assemblies (fractal computational structures, shown as circles) are formed in the cortex as a result of external and internal inputs. The size of these assemblies is controlled by inhibitory synapses ('V'). These fractal assemblies are sent to the thalamus (H2), where they are mapped and collide randomly (shown as squares). The mapped thalamic computational structures are sent back to the cortex (the corticothalamic loop), where they stimulate additional fractal neuronal assemblies which, in turn, are sent back to the thalamus, completing the corticothalamic loop. Repeated loops (dynamic hierarchical mapping) result in self-referential fractal computational structures that entail a vast amount of mapped information.
Fractals can exhibit exact or statistical similarity at all scales. The formation rule of a fractal references the entire fractal. Consequently, fractals admit of a brief description and are self-referential. Let us define the term 'brief descriptor' to indicate a description of a fractal using a relatively small amount of information. If the brain processes a fractal computational structure, producing a 'brief descriptor,' then we say that the fractal computational structure has been mapped.
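As a hypothetical toy illustration of a 'brief descriptor' (not an example taken from the paper), the middle-thirds Cantor set is fully specified by a formation rule only a few lines long, yet that rule references the entire, arbitrarily detailed structure:

```python
# Hypothetical toy illustration of a 'brief descriptor': the middle-thirds
# Cantor set is specified by a formation rule only a few lines long, yet the
# rule references the entire structure at every scale.  The recursion depth
# is an arbitrary illustrative choice.
def cantor(intervals, depth):
    """Recursively remove the open middle third of each interval."""
    if depth == 0:
        return intervals
    kept = []
    for a, b in intervals:
        third = (b - a) / 3.0
        kept += [(a, a + third), (b - third, b)]   # keep the two outer thirds
    return cantor(kept, depth - 1)                 # the rule refers to itself

for depth in range(4):
    print(depth, cantor([(0.0, 1.0)], depth))
```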
In terms of these definitions, we can restate the operation of the corticothalamic loop:
- Dynamic fractal neuronal assemblies, formed throughout the cortex (H1) as the result of external and internal inputs, are limited in size by cortical inhibitory synapses (V).
- These assemblies are sent to the thalamus (H2), where they are mapped and collide randomly with one another.
Repeated corticothalamic loops result in dynamic hierarchical mapping, thereby forming a fractal dynamic computational structure which:
- has integrated information;
- has fewer bits than the information it represents;
- can be represented as a shape in multidimensional space;
- is self-referential; and
- is complex—has lower entropy than the information it represents.
Mapping in the thalamus can be considered a process that computes a brief description of the most current neuronal assembly, and any previously-attached, mapped neuronal assemblies.
The overall structure resulting from this repeating thalamic loop is hierarchical, and it represents a mapping of the random collisions of myriad neuronal assemblies from the cortex. It is an overarching, minimal description of massive amounts of information, a bundling into a whole that is self-referential, integrated, and complex, and that has fewer bits of information and lower entropy than the information it represents.
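A purely illustrative toy sketch of this bundling (not the author's mechanism; treating a fixed-size hash digest as a stand-in for the 'brief descriptor' is an assumption made only for illustration) shows how a repeated loop can fold an arbitrarily long stream of 'assemblies' into a descriptor with far fewer bits than the information it references:

```python
# Purely illustrative toy, not the author's mechanism: model each cortical
# 'assembly' as a random bit string and the thalamic 'mapping' as a fixed-size
# digest of (previous descriptor + new assembly).  After many loops the current
# descriptor is a small, fixed number of bits that hierarchically references
# every assembly folded into it so far.
import hashlib
import random

random.seed(0)
descriptor = b""                                       # state carried around the loop
total_input_bits = 0
for loop in range(1000):
    assembly = bytes(random.getrandbits(8) for _ in range(128))   # ~1 kbit 'assembly'
    total_input_bits += len(assembly) * 8
    descriptor = hashlib.sha256(descriptor + assembly).digest()   # new 'brief descriptor'

print(f"bits represented by the chain: {total_input_bits}")
print(f"bits in the current descriptor: {len(descriptor) * 8}")
```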
A moment’s reflection suggests that this system might entail our notion of the conscious ‘I.’ Moreover, it can be seen that the conscious ‘I’ may be the result rather than a primary cause of these processes, suggesting, therefore, that the subjective ‘I’ is a posteriori. Consciousness is not something standing outside the brain, looking on. It is, instead, a useful illusion allowing a quick and reasonably accurate ‘summary’ assessment of the external world at any given instant and over longer periods of time. Consciousness probably appeared early in evolution.
The principle of computational equivalence (4) (PCE) states that past a fairly low level of complexity, all complex systems are equivalent; that is, beyond a certain base level of complexity, there is no hierarchy of complexity; all things are equally complex. Additionally, the principle of computational irreducibility (4) (PCI) indicates that a complex cellular automaton such as #110 represents its simplest expression, namely that there is no 'shortcut' or equation that can exactly predict the future state of the cellular automaton merely by 'plugging in' a future time t. Instead, one has to 'run' the automaton to observe its output. Thus, even though a complex cellular automaton is deterministic, the outcome is a priori indeterminate.
If the proposals in this paper are correct, then the PCI might also govern free will, which may be determinate and yet a priori indeterminate. The moral and legal ramifications are obvious.
Several questions are suggested: Can these mapped structures be represented by a multi-dimensional 'consciousness field,' i.e., a tensor field? Do inputs to the brain distort this tensor field in a way analogous to the way mass distorts the gravitational field? Is the tensor field invariant under appropriate transformations? Is it possible to transform one consciousness into another? Can we know what it is like to experience another consciousness?
The formation of complexity is a key process in nature, important in the development of galaxies, life, intelligence, and consciousness.
If, because of the Principle of Computational Equivalence (PCE) (4), cellular automaton #110 and its A-graph are universal models of the way complexity forms, then the formation of complexity throughout the universe is not only common, but inevitable.
References
1. Goldberg, M. 'Complexity.' Telicom XY.19, September 2002, pp. 44-48. Journal of the International Society for Philosophical Enquiry, ISSN 1087-6456.
2. Goldberg, M. 'Complexity and Randomness.' Telicom XVI.3, June/July 2003, pp. 65-67. Journal of the International Society for Philosophical Enquiry, ISSN 1087-6456.
3. Goldberg, M. Classification of Cellular Automata Using Asymmetry Graphs. GRIN v341659, 2016.
4. Wolfram, S. A New Kind of Science. Wolfram Media, Inc., 2002, pp. 488-489. ISBN 1-57955-008-8.
5. Goldberg, M. 'Complexity Arising from Entropy Acting on Asymmetric Substrates.' Telicom XII.22, June/July 1998, pp. 38-40. Journal of the International Society for Philosophical Enquiry, ISSN 1087-6456.
6. Tononi, G. 'From the Phenomenology to the Mechanisms of Consciousness: Integrated Information Theory 3.0.' PLOS Computational Biology, May 8, 2014. DOI: 10.1371/journal.pcbi.1003588.
7. Levine, J. 'Materialism and Qualia: The Explanatory Gap.' Pacific Philosophical Quarterly, 64 (1983), pp. 354-361. Chalmers, D. 'Facing Up to the Problem of Consciousness.' Journal of Consciousness Studies, 2(3), 1995, pp. 200-219.
8. Searle, J. Talks at Google (YouTube), December 3, 2015. A discussion of consciousness, subjective ontology, epistemic objectivity, semantics, and syntax. See also 'Chinese Room' (a thought experiment), Wikipedia.
9. Llinás, R. 'Consciousness and the Corticothalamic Loop.' International Congress Series, Volume 1250, October 2003, pp. 409-416. Department of Physiology and Neuroscience, New York University Medical School, New York, NY, USA.
10. Alkire, M., Miller, J. 'General Anesthesia and the Neural Correlates of Consciousness.' Progress in Brain Research, 2005; 150: 229-244.