Logic of Representation and Information

Classical reality is described in terms of objects and things and their mutual relationships. In quantum reality, on the other hand, the collapse of the state in an interaction assigns a unique position to the observer. These two disparate views are based on different logics of representation. In this paper, we first summarize the early evolution of these ideas and then go beyond the implicit dependence of the quantum theory framework on the mathematical apparatus of calculus and vector spaces, by delving one layer deeper to an information-theoretic understanding of symbol representation. We examine some epistemic implications of the fact that, mathematically, e-symbol representation is optimal and 3 symbols are more efficient than 2 symbols, and this optimality leads to the idea that space itself is e-dimensional, and not 3-dimensional. We also discuss the principle of veiled nonlocality as a way to understand the split between the observer and the physical process.


Introduction
Physics is the search for a mathematical basis for the observed phenomena. But the mathematical structure often comes with implicit entities and unstated assumptions about reality. For example, in classical mechanics, the observer's sentience lies outside the explanatory power of the theory and space and time are considered absolute. In general, abstract entities of a theory may be mapped to intuitive notions in different ways, leading to divergent interpretations. Broadly speaking, these interpretations fall into the epistemic and the ontic categories. In the epistemic view, one is speaking of the knowledge obtained from the experiment without going into the ultimate nature of reality, whereas in the ontic view one is describing reality as a particular assemblage of objects, together with their mutual relationships, and evolution in time. These two represent fundamentally different intuitions in physics and they have a prehistory that goes back to ancient thought.
The Copenhagen Interpretation (CI) (Bohr, 1958; von Neumann, 1955) of quantum mechanics is an epistemic understanding in which the observer remains outside the experiment, thus dividing the world into the domains of physical objects and that of observers. Although it has long been the dominant view amongst quantum theorists, many have felt disquiet that it brings in consciousness as an element of the interpretation, and, consequently, several alternative interpretations have been advanced. Of these, the ontic MWI (Many Worlds Interpretation) (Zurek, 2003) seeks to explain the observer as a separate physical system also governed by the laws of quantum mechanics.
The ontic understanding of reality becomes problematic when one speaks of information, fundamental to many theories in physics, since it involves properties of ensembles or wholes that cannot be reduced to a field, which is essential for a theory based on local variables. Although neuroscience accepts the doctrine of the identity of brain and mind, and sees the mind emerge from the complexity of the interconnections, others assert that there is no logical reason why complexity should lead to such a new phenomenon. The situation is complicated by the fact that there is no specific neural correlate of consciousness (Zeki, 2003). Furthermore, seeing the universe as a machine leaves out participants who have agency.
In the epistemic Copenhagen Interpretation, the physical universe is separated into two parts: the first part is the system being observed, and the second part is the human observing agent, together with the instruments. The agent is therefore an extended entity described in mental terms that includes not only his apparatus but also instructions to colleagues on how to set up the instruments and report on their observations. The orthodox version of CI excludes causal interaction between mind and body; in this view, mental and physical phenomena are two aspects of the same reality like two sides of a coin (von Neumann, 1955; Kafatos and Nadeau, 2000; Kak, 2016; Kak, 2018a).
The Heisenberg cut (also called the von Neumann cut) is the hypothetical interface between quantum events and the observer's information, knowledge, or awareness. Below the cut everything is governed by the state or wave function, whereas above the cut one must use classical description. Operationally, it is a dualist position, where there is a fundamental divide between observers and objects. The placement of the cut between the subject and the object depends on the nature of the interaction between the two.
The arbitrariness of the cut has come in for criticism and spurred the development of other interpretations, but it is a device for aggregating the effects of the mind or minds associated with the observational regime and it appears to be a reasonable way to separate the inanimate from the animate especially since the brain itself may be viewed as a machine.
In the ontic view of the wave function there is no collapse of the wave function, and the interaction is described in terms of decoherence, which occurs when states interact with the environment (Penrose, 1994; Zurek, 2003). By the process of decoherence, the system makes a transition from a pure state to a mixture of states that the observer ends up measuring. The problem of collapse of the wave function is sidestepped by speaking of interaction between different subsystems. But since the entire universe is also a quantum system, the question of how this whole system splits into independent subsystems arises (Kak, 2015). Operationally, the split into subsystems is to an extent the choice of the observer, and it serves about the same function as the Heisenberg cut of CI. Note that the ontic view has no place for minds, which can at best be taken as traces of mathematical operations, thus ruling out agency.
Other less popular interpretations include various hidden variable theories (e.g. Bohm, 1980), consistent histories, modified dynamics, and the transactional interpretation (Penrose, 2004; Jaeger, 2009; Kastner, 2013). Another idea is to assume that quantum mechanics is incomplete and there is a yet-to-be-discovered deeper theory that agrees with it in the microscopic world, and with the classical theory in the macroscopic world.
The principle of least action (PLA), according to which Nature is optimal and thrifty, has played a key role in the axiomatization of physical theories (Stöltzner, 2003; Feynman and Hibbs, 1965). Using this principle, a true dynamical trajectory of a system is found by computing all trajectories that the system could possibly take, computing the action for each of these trajectories, and selecting the one with the least action as the true trajectory (Feynman, 1985). PLA is about dynamics and its complementary aspect is that of representation, which comes in even before dynamics is considered. The representation can be concerning structure or data or both and it comes into physical reasoning both in logic and also counting.
When it comes to dynamics, some see the logic of choosing between two alternatives, made famous in Wheeler's memorable phrase it from bit (Wheeler, 1990), as the bedrock upon which physical reasoning rests. There are many proposals that information in terms of bits (one-in-two) is the fundamental basis of physical reality, and that the universe is a digital machine (e.g. Zurek, 2009), but others argue that this claim is not consistent with quantum mechanics (Penrose, 1989). Complicating this picture is the reality that counting is at the basis of observations, but binary choices are not optimal (Kak, 2018), and space itself may be viewed as being e-dimensional (Kak, 2020d; Kak, 2020e).
In this paper, we first present the historical antecedents to the rival ideas of epistemic and ontic nature of reality. Then we review the argument that shows that three-way logic is superior to the two-way one and then examine its implications for the understanding of experimental results related to the three-slit experiment. The principle of veiled nonlocality is discussed as a way to understand the split between the observer and the physical process, and it is shown how an ontic scheme cannot be the basis of effective evolution. By considering how information is obtained from a quantum system, we argue that consciousness is not computable, which means that it cannot be generated by machines.

A brief historical excursus
The first person to use the term "physics" in the sense it is understood now was Aristotle (384-322 BCE). Motivated perhaps by biology, he conflated change in the biological and physical domains. He defined motion as the actuality of a potentiality, which is strictly true only for the "motion" of a living organism. Motion defined this way requires the assumption of an absolute frame and other imaginary schema. He gave examples of four types of change, namely change in substance, quality, quantity, and place.
Aristotle's ontic system was extremely influential for nearly 2,000 years in the intellectual life of the Western world for he was embraced by the orthodoxy in both Christianity and Islam. As physics is fundamental to cosmology, his thoughts had a profound impact on the history of Western science (Ackrill, 1981).
According to Aristotle, the sun, the moon, planets, and stars are embedded in perfectly concentric crystal spheres that rotate at fixed rates. The celestial spheres are made up of the element ether, which supports uniform circular motion. He took the terrestrial objects to be composed of four other elements that rise or fall. Earth, the heaviest element, and water fall toward the center of the universe; hence the earth and the oceans constitute our planet. At the opposite end, the lightest elements, air and fire, rise up and away from the center, and this led to the geocentric model of the solar system. As we know, this set the stage for a challenge to this model by Copernicus and Galileo and the subsequent rise of classical physics. But the system that replaced Aristotle's was also an ontic one.
A few centuries before Aristotle, around 600 BCE, Kaṇāda in India anticipated two of the three laws of motion, and he attempted to create a formal system that includes space, time, matter, as well as observers (Dasgupta, 1992). Kaṇāda's impressive assertion that "all that is knowable is based on motion" gives centrality to physics in the understanding of the universe.
In the text called the Vaiśeṣika Sūtra (Kak, 2016b), Kaṇāda defines categories for space-time-matter and for attributes related to perception by sentient agents. He asserts that these six categories are sufficient to describe everything in the universe from concrete matter to the abstract atom.
The six categories are: substance, quality, motion, universal, particular, and inherence (which relates to observation). The first three of these have objective existence and the last three are a product of the observer's interaction. Universals are recurrent generic properties in substances, qualities, and motions. The mind is not an empty slate; the very constitution of the mind provides some knowledge of the nature of the world.
Of the six categories, the basic one is that of substance and the other five categories are the ones that the mind associates with the substance. Thus observers belong to the system in an integral fashion. If there were no sentient beings in the universe then there would be no need for these categories.
There are nine classes of substances, some of which are non-atomic, some atomic, and others all-pervasive. The non-atomic ground is provided by the three substances of ether, space, and time, which are unitary and indestructible; a further four, earth (P), water (Ap), fire (T), and air (V), are atomic, composed of indivisible and indestructible atoms; the eighth is the self or consciousness, which is omnipresent and eternal; and, lastly, the ninth is the mind, which has atomic dimensions. The atoms are abstractions that should not be confused with the elements of the same name.
Why are there only four kinds of basic atoms? Consider gold in its solid form; its mass derives principally from the P atoms. When it is heated, it becomes a liquid and therefore there should be another kind of atom already in gold which makes it possible for it to take the liquid form and this is Ap. When heated further it burns and this is when the T atoms get manifested. When heated further, it loses its mass ever so slightly, and this is due to the loss of the V atoms.
It is significant that consciousness is listed before mind, suggesting that it is the medium through which mind's apprehensions are received. By including consciousness and mind in his scheme where all the properties of matter are to be obtained by the study of motion and qualities of matter, Kaṇāda put unique emphasis on the observation process and thus the scheme is fundamentally epistemic. A related tradition called Vedanta, in which the observer plays a fundamental role, influenced Schrödinger in his development of wave mechanics (Moore, 1989).

Three-way choice and counting
Since coding of coordinates is fundamental to physical theory and a part of what the experimenter chooses to do, let's consider numerical data or coordinate locations, to an arbitrary base r. Given that the probabilities of use of the r symbols may rightly be taken to be the same and equal to 1/r, the information associated with each symbol is ln r. The efficiency of the coding scheme per symbol, E(r), is (Kak, 2020a):

E(r) = (ln r)/r

To find the value of r for which the above expression is maximal we take its derivative with respect to r and equate it to zero. This yields the condition ln r = 1, from which we conclude that the optimal base is e, with E(e) = 1/e = 0.368 nats or 0.531 bits. Table 1 below provides E(r) for some of the values of r that range from 2 to 10. The efficiency is quite close to the maximum for r = 3 (a value of 0.528 bits as compared to the optimal value for e, which is 0.531 bits), with the next best value coming at the bases 2 and 4 (where it is 0.500 bits). The efficiency at r = 3 is superior to that at r = 2 by 5.6%. After this the values decline monotonically, as shown in Figure 1.
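These values are easy to verify numerically; the following is a minimal sketch (the function name efficiency_bits is ours):

```python
import math

def efficiency_bits(r):
    """Per-symbol coding efficiency E(r) = ln(r)/r, converted to bits."""
    return math.log(r) / r / math.log(2)

# The maximum is at r = e; among integer bases, 3 comes closest.
print(round(efficiency_bits(math.e), 3))  # 0.531
print(round(efficiency_bits(3), 3))       # 0.528
print(round(efficiency_bits(2), 3))       # 0.5
```

The ratio efficiency_bits(3)/efficiency_bits(2) is about 1.056, reproducing the 5.6% advantage of base 3 over base 2 cited above.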
The optimal value e is not convenient for representation of integers, and since conceptual schemes use integers, this may be a reason why these schemes appear to fall short. The basis used in physics may be related to the choices made in terms of alternatives, in which, assuming linearity, the influence of one is added to that of another. Examples of this are the two-body problem of classical physics and the two-slit experiment of quantum theory. In classical physics, the general version of the two-body problem can be reduced to a pair of one-body problems, allowing it to be solved completely. By contrast, the three-body problem (and, more generally, the n-body problem) cannot be solved in terms of first integrals, except in special cases. Paralleling this, one could ask if the three-slit variant of the double-slit experiment provides new understanding, since the three paths may be taken to correspond to 3 digits of the ternary system.
It is noteworthy that the fundamental constants of physics and a host of other natural data follow the Newcomb-Benford law on first digits (Hill, 1998), according to which the leading digit d (d ∈ {1, ..., r−1}) for numbers to the base r, r ≥ 2, occurs with the logarithmic probability:

P(d) = log_r (1 + 1/d)

This may be derived by summing together many uniform random variables with varying support (Hill, 1995). The first row of Table 2 lists these frequencies for base 10; the rest of the data on the first digits related to different phenomena is adapted from (Sambridge, 2010).
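The law is easy to tabulate for base 10; a minimal sketch (the function name benford is ours):

```python
import math

def benford(d, base=10):
    """Newcomb-Benford probability that the leading digit is d."""
    return math.log(1 + 1 / d, base)

# Percentages for d = 1..9 in base 10
probs = [round(100 * benford(d), 1) for d in range(1, 10)]
print(probs)  # [30.1, 17.6, 12.5, 9.7, 7.9, 6.7, 5.8, 5.1, 4.6]
```

The leading digit 1 thus occurs about 30% of the time, rather than the naive 1/9 ≈ 11%.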
The mathematical explanation is in terms of counting processes that stop at different counts, with the probability distribution obtained by summing over all such counts. This notion may also be seen as the basis of self-similarity in natural phenomena.
Since the first digit phenomenon is true for a variety of natural phenomena including fundamental physical constants, one may conclude that counting is a foundational attribute of reality. But unlike the large difference of information that was found between the use of bases 2 and 3, the corresponding difference in optimality for counting for these two bases is not particularly large (Hayes, 2001).

Table 2 (excerpt): first-digit frequencies (%)
Geomagnetic field:      28.9 17.7 13.3 9.4 8.1 6.9 6.1 5.1 4.5
Geomagnetic reversals:  32.3 19.4 13.9 11.8 ...

The three-slit experiment
Let's consider particles with the wave function ψ(x, t). According to the Born rule (e.g. Penrose, 1994; Sinha et al, 2015), which is a fundamental axiom of quantum mechanics, the probability density to find a particle at position x and at time t is given by

P(x, t) = |ψ(x, t)|²

In the two-slit experiment, if ψ_A(x) and ψ_B(x) represent the wavefunctions associated with slits A and B at the location x, then the wave function for both slits open is a superposition of the different paths given by ψ_AB(x) = ψ_A(x) + ψ_B(x). Let P_A = |ψ_A|² and P_B = |ψ_B|² be the probabilities with the slits A and B open.
The probability of detection, P_AB, is:

P_AB = P_A + P_B + I_AB

where I_AB is the interference term, which may be written as

I_AB = 2 Re(ψ_A* ψ_B)

For three slits, A, B, and C, if we only consider pair-wise interference, the probability P_ABC is given by:

P_ABC = P_A + P_B + P_C + I_AB + I_BC + I_CA

The interference term is:

I_AB + I_BC + I_CA = (P_AB − P_A − P_B) + (P_BC − P_B − P_C) + (P_CA − P_C − P_A)

This expression is entirely described by probabilities involving only one and two slits. Any contribution from higher-order interference terms (i.e., a path involving the three slits) is quantified by the Sorkin parameter (Sorkin, 1994):

ε = P_ABC − P_AB − P_BC − P_CA + P_A + P_B + P_C

which should be identically zero if only the direct paths through the three individual slits are considered. Sinha et al. (2010) evaluated ε experimentally in a three-slit experiment with photons and found the magnitude of three-path interference to be less than 10⁻² of the expected two-path interference, thus ruling out third- and higher-order interference at that precision. But recent experiments have found (Sawant et al, 2014) that the Sorkin parameter is a nonzero quantity, which indicates that the three-way behavior cannot be directly obtained by adding different two-way results.
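That ε vanishes identically for Born-rule probabilities built from straight-line path amplitudes can be checked numerically; a minimal sketch, with arbitrarily chosen complex amplitudes:

```python
import random

def prob(*amps):
    """Born-rule detection probability for a superposition of path amplitudes."""
    return abs(sum(amps)) ** 2

random.seed(0)
a, b, c = [complex(random.uniform(-1, 1), random.uniform(-1, 1))
           for _ in range(3)]

# Sorkin parameter: eps = P_ABC - P_AB - P_BC - P_CA + P_A + P_B + P_C
eps = (prob(a, b, c) - prob(a, b) - prob(b, c) - prob(c, a)
       + prob(a) + prob(b) + prob(c))
print(abs(eps) < 1e-12)  # True
```

A measured nonzero ε therefore signals contributions beyond simple pairwise superposition of direct paths.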
This nonzero value has been seen as a consequence of non-classical paths, such as loops through the slits, but it can also be ascribed to the use of a 3-way representation that cannot be obtained by simply adding together various 2-way representations (Quach, 2017).
Considering the significance of this problem, further investigation is needed to determine whether its origin is in the limitations of the underlying logic.

Dimensionality of space
The dimension of a mathematical space is the minimum number of coordinates, or bins, needed to specify any point within it. Mathematicians have defined dimension of a space to be the least integer n for which every point has arbitrarily small neighborhoods whose boundaries have dimensions less than n. Space is taken to be three-dimensional in physics for that seems the most obvious thing to do, and the idea of the four dimensions of spacetime is that events are not absolutely defined spatially and temporally, but rather are relative to the motion of the observer.
The three-dimensionality of the common intuition in terms of length, breadth, and height is a consequence of our experience, so it is imperative to put it to test. We know that under certain conditions, it becomes necessary to go beyond three to noninteger dimensions, which are used in various branches of science to explain confinement in certain systems, the emergence of scale invariant and fractal phenomena such as turbulence, human physiology, medicine, and neuroscience, and engineered networks. The most commonly used measures are the box-counting, information, packing, and Hausdorff dimensions. These measures are useful indices to describe sets, and it is reassuring that for ordinary geometric shapes they equal the familiar Euclidean or topological dimension. In general, these dimensions measure the apparent density of the given set, and in conformity with common sense a noninteger space is less dense than one with the next higher integer dimension (Kak, 2020b).
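The box-counting measure can be illustrated on the middle-thirds Cantor set, whose dimension is the noninteger ln 2/ln 3 ≈ 0.631; a sketch with helper names of our own choosing:

```python
import math

def cantor_intervals(depth):
    """Intervals covering the middle-thirds Cantor set after `depth` subdivisions."""
    intervals = [(0.0, 1.0)]
    for _ in range(depth):
        # Each interval keeps its outer thirds and loses the middle third.
        intervals = [piece
                     for (a, b) in intervals
                     for piece in ((a, a + (b - a) / 3), (b - (b - a) / 3, b))]
    return intervals

def box_dimension(depth):
    """log N(s) / log(1/s) with boxes of side s = 3**-depth."""
    n = len(cantor_intervals(depth))   # N(s) = 2**depth boxes are occupied
    return math.log(n) / math.log(3 ** depth)

print(round(box_dimension(8), 3))  # 0.631
```

For this exactly self-similar set the estimate equals ln 2/ln 3 at every depth; for empirical data one instead fits the slope of log N(s) against log(1/s).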
In some theories, noninteger dimensions are introduced through dimensional regularization. Since Gaussian integrals may be written as a d-dimensional product of a single dimensional Gaussian integral, one can generalize them to noninteger dimensions. The continuous squeezing of a three-dimensional space into a two-dimensional space also generates intermediate scenarios with noninteger dimensions. They are introduced elsewhere to address the origins of self-similar structures, where generally the Hausdorff dimension is used to fit the experimental data (Kak, 2020c).
In a system with the subject/object divide, the two sides may have descriptions that are formally incompatible with each other. From an epistemological perspective, the complementarity principle provides a way to declare both such descriptions as two sides of the same reality.
Since the notion of dimension applies to physical reality at all conceivable scales, one must consider both the objective view in terms of the physical location of things within the space, and the subjective view in terms of the motions of the objects as defined by an underlying dynamic that leads to the locations.
The complementarity that we consider here is between two views: (i) the noninteger space sits within the container of the ceiling integer space (e.g. e-dimensional space sits within the 3-dimensional space), and (ii) noninteger space is characterized by a continual shrinking of the metrical relationships between objects. As an application of this idea the discrepancy between the early and late universe estimates of the Hubble constant may be explained away if one recognizes that the early universe model provides the true e-dimensional estimate whereas the late universe model imposes a 3-dimensional gloss on the measurement (Kak, 2020d).

The observation process
Einstein, Podolsky, and Rosen (EPR) (1935) argued from the philosophical position of realism that quantum mechanics could not be considered a complete theory since for an entangled pair of particles that are far apart from each other, the measurement on one causes the second to change its wavefunction. This influence projected nonlocally across what could be a vast distance is against our commonsense expectation of a physical process. Although EPR did not use the phrase "nonlocality" in their paper, it is clear that this feature of quantum mechanics was behind their assertion that it could not be a complete theory. As a realist one cannot accept that a measurement at one location, which in principle is arbitrarily far from the second location, can influence an object at this second location.
Bell considered the question of whether the property being measured was fixed at the time the pair of entangled particles was produced or whether the random collapse took place at the time of measurement, after the particles had separated. He showed that if measurements were made independently in three different directions, the constraints on probability for the quantum case were different from those for the classical case (Bell, 1988). Bell's theorem is taken to mean that quantum theory cannot be mimicked by introducing a set of objective local "hidden" variables. It follows that any classical theory advanced in place of quantum mechanics will be nonlocal.
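The quantum-classical gap is easiest to exhibit in the CHSH form of Bell's inequality: with the singlet-state correlation E(a, b) = −cos(a − b) and the standard choice of measurement angles (which we assume here), the quantum value is 2√2, exceeding the local-hidden-variable bound of 2. A minimal sketch:

```python
import math

def E(a, b):
    """Quantum correlation for singlet-state spin measurements along angles a, b."""
    return -math.cos(a - b)

# Standard CHSH settings giving maximal quantum violation
a, a2 = 0.0, math.pi / 2
b, b2 = math.pi / 4, 3 * math.pi / 4

S = abs(E(a, b) - E(a, b2) + E(a2, b) + E(a2, b2))
print(S > 2)  # True: quantum value 2*sqrt(2) exceeds the classical bound of 2
```

Any local hidden-variable assignment of outcomes constrains |S| ≤ 2, which is what the quantum correlations violate.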
Those who approach the subject from the position of realism offer the possibility that the collapse of the state function of two remotely situated entangled objects will leave some trace in terms of local process explanations, which would call for a theory that goes beyond current quantum theory; for if a signal travels faster than the speed of light, then there is a frame in which it travels backwards in time.
The complementarity of aspects, such as wave and particle, may be seen as a consequence of the ability of the observer to make different kinds of choices. The particle view is the one imposed on reality by the mind governed by a classical mode (Kafatos and Nadeau, 2000). Nonlocality is a significant issue when one takes the particle picture, together with local interaction, to be the underlying reality.
Language underpins our constructions of reality both in the apprehension of our intuitions and the interpretation of mathematical theories. The issue of knowability is complex: even though terms are defined precisely in a mathematical theory, one still needs to interpret the variables in an experiment to connect them to theory.
Evolution in quantum theory proceeds in two different ways: first, by the unitary evolution of the Schrödinger equation; and second, by the non-unitary reduction of the wave function by the process of measurement. The results of the measurement depend on the questions posed by the measurement apparatus. The outcomes of the measurement are classical variables for they are well-defined values quite unlike the superpositions of the quantum variables (Zeh, 1970).
Another fundamental issue related to knowability is the limitation on what can be inferred about the system. According to CI, quantum mechanics only provides probabilities of observing values of measurement variables (Bell, 1988). From a philosophical view, its framework represents a computational procedure that allows us to obtain information from beneath a veil that covers reality (D'Espagnat, 2003).

Veiled nonlocality
The multiplicity of the interpretations of quantum mechanics is a consequence of the difficulty of conceiving of the same "object" in terms of the mutually contradictory intuitions of particle and wave in different situations. Since single objects also produce interference patterns in the double-slit experiment (Tonomura et al, 1989), either some kind of wave phenomenon is taken to be at the basis of particles, masked by the mathematical formalism that requires complex probability amplitudes to be associated with individual objects, or it is assumed there is a pilot wave accompanying the particles; but both these, as well as other approaches, lead to a variety of difficulties (Zeh, 2010). Last but not least, in consideration of measurement, the observer (by himself, or through the measurement equipment that may be considered his extension) is both a part of the experimental system and outside of it (Wigner, 1967).
The idea that reality is veiled (d'Espagnat, 2003) is equivalent to the conception of a deeper implicate order (Bohm, 1980). I have modified this to a Principle of Veiled Nonlocality (PVN) (Kak, 2014) that explains why we don't have loophole-free experimental evidence in support of nonlocality in spite of much research effort. Although quantum mechanics is a nonlocal theory, as expressed, for example, in the property of entanglement between remote particles, it is also consistent with the no-signaling theorem, according to which no useful information can be sent to remote places. According to PVN, which is a stronger assertion than the no-signaling theorem, the loopholes will never be closed and experimental verification of nonlocality that excludes local realistic explanations will not be found.
Veiled nonlocality does not imply a local realistic basis to quantum theory. Our mind operates by the classical picture, and it uses whatever artifacts are necessary to keep the classical picture of reality consistent. But this does not exclude the possibility that nonlocality can leave traces that can be measured.
Consider a known wavefunction that has been prepared in the laboratory. It has extension, that is, it is described across space and time. Since this extension collapses into a local attribute, what is the process by which this collapse takes place, and might it be apprehended by our instruments? For an entangled pair of particles that are far apart, we would have to postulate processes that travel faster than the speed of light, contradicting relativity. Even though instantaneous information transfer cannot take place, we can still see correlations that can only be explained by means of nonlocal processes. PVN ensures that we will always be able to advance local explanations for the process or ascribe the results to coincidence.
PVN cannot be negated even when the wavefunction of the objects is known, for even in that case there are imponderables. Although one can prepare a quantum state precisely, in reality there exist problems related to initialization, lack of knowledge of the environment, and decoherence, which affect the state of the system in uncertain and uncontrollable ways. This means that uncertainty relations will continue to be central in the analysis of the evolution of the system.
In principle, macroscopic systems may also be quantum, and we can't rule out that our conception of the universe and its laws, and our analysis of them, is a result of the limitations of the capacities of our cognitive systems (Gautam and Kak, 2013). Even if the classical agents of our mind are based on quantum collectives (Kak, 2016a), we cannot negate the fact that the elementary perceptions at the basis of our knowledge of this world have classical form. Some see the observer's consciousness playing a role in the transition from the quantum to the classical world (Wigner, 1967; Stapp, 2007), while others find that problematic since consciousness cannot be defined in an objective sense.

CI versus MWI
Veiled nonlocality facilitates choice between the two main interpretations of the wavefunction: (i) it represents objective reality in a mathematical form, and (ii) although sometimes the wavefunction may describe the system fully, in the general case it only encodes information about the potential outcomes of the experiment together with their probabilities. If the wavefunction has ontological reality, then it is conceivable that an experiment will show the nonlocality, but if the wavefunction only encodes information about outcomes then it is unlikely that nonlocality will be revealed.
Within the mathematical framework, observable quantities are represented by Hermitian operators, and their possible values are the eigenvalues of these operators. The act of observation reduces the wavefunction to one of its component states, and the outcome is the eigenvalue associated with the application of the measurement operator on the state. From the philosophical perspective, the reduction of the wavefunction in a random fashion is a feature very different from that of its unitary evolution, although this jump may be viewed as not taking place in the physical world but rather in our knowledge of the system.
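For a single qubit the two evolutions described above look as follows; a minimal illustration, with arbitrarily chosen state and seed:

```python
import random

# Qubit state a|0> + b|1>, assumed normalized
a, b = complex(0.6), complex(0.8)

# Observable sigma_z has eigenvalues +1 (eigenstate |0>) and -1 (eigenstate |1>)
p_plus, p_minus = abs(a) ** 2, abs(b) ** 2   # Born-rule probabilities

random.seed(2)
outcome = +1 if random.random() < p_plus else -1   # random reduction on measurement
post_state = (1, 0) if outcome == +1 else (0, 1)   # collapsed eigenstate

print(round(p_plus, 2), round(p_minus, 2))  # 0.36 0.64
```

The superposition yields only classical outcome values (+1 or −1), with the state after measurement being the corresponding eigenstate rather than the original superposition.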
If CI is the view of the universe with the observer in the privileged position, then MWI is the outside realistic view. In CI the wavefunction represents the knowledge of the experimenter, whereas in MWI the wavefunction is the complete reality. If CI is the subjective view, MWI is the objective view.
The inside-out and the outside-in are like the complementary wave and particle viewpoints already considered in CI. The outside-in view of MWI might present a consistent picture but it means that observers are zombies and in this it is similar to a conception of reality as nothing more than a collection of things. In such a picture, there is no room for minds with agency and the whole universe operates as a giant computer with the choices solely determined by the environment.
The inside-out view of CI admits the possibility that somehow "free will" plays a role in the choice that emerges during a measurement (of course this narrative is relevant only in a limited set of possibilities). Since the cut between the classical and the quantum may be made at different points, it doesn't exclude the explanation that the outcomes of the experiment are a result of the decoherence caused by the environment.
The problem of measurement is linked with the idea of "free will". If the brain is viewed as a neural machine, its response to a stimulus is determined by its current internal state and therefore it cannot act acausally and exercise free will. Consciousness, if taken to be an emergent phenomenon, provides a false sense of agency by assigning its intervention an earlier time than is correct, as indicated by experiments (Libet et al, 1983).

Many physicists, perhaps a majority, have an intuitive realistic worldview and consider a quantum state as a physical entity. Its value may not be known, but they assume that in principle the quantum state of a physical system is well defined. This view, as mentioned earlier in the paper, leads to various paradoxes. When correctly used, quantum theory does not yield contradictory answers to a well-posed question; it is only the misuse of quantum concepts, guided by a pseudorealistic philosophy, that leads to paradoxical results.
If MWI is the machine view of the universe, in CI the observer's subjective experience is evidence that agency exists and other sentient beings also possess it. The various differences in CI and MWI are summarized in Table 3.
Many points of difference between CI and MWI are of a philosophical nature without implications as far as physical experiments are concerned. In a universe only of "things" which excludes "being", the problem of "free will" is unsolvable and in a theoretical framework based on causality, "free will" is paradoxical.
From the perspective of sentient beings, objects are not primary; what is primary are mental feelings and perceptions. Newer theories of physics consider information as primary, but there can be no information unless there is consciousness that can recognize this information. This consciousness (or being) may exist in a plane different from regular spacetime.
Accepting mental states as primary building-blocks of reality leads to questions such as: Are the worlds of mind (consciousness) and matter separate? In the Vedantic model, consciousness is viewed as interpenetrating the material world, and guiding its evolution indirectly (Moore, 1989) in a manner similar to the Quantum Zeno Effect (Misra and Sudarshan, 1977).
In the ontic view of the wave function, decoherence causes the system to make a transition from a pure state to a mixture of states that the observer is able to measure (Zurek, 2003; Kak, 2015). The process of decoherence in no way negates the CI picture, for it merely shifts the cut so that the system under observation and the measurement apparatus fall on the same side. The experimenter is not describing reality ontologically; rather, he is obtaining knowledge about it, and this knowledge depends on the nature of his interaction with the system. The knowledge informs his mind, and consideration of this information creates a sense of overarching knowledge.
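The transition from a pure state to a mixture can be illustrated numerically. The sketch below is only a toy model: the dephasing factor exp(-γ) is a stand-in for the environmental interaction, and the function names are our own. It shows the purity Tr(ρ²) of a single qubit falling from 1 (pure superposition) to 1/2 (classical mixture) as the off-diagonal coherences are suppressed:

```python
import math

def purity(rho):
    """Tr(rho^2) for a 2x2 density matrix given as nested lists."""
    sq = [[sum(rho[i][k] * rho[k][j] for k in range(2)) for j in range(2)]
          for i in range(2)]
    return sq[0][0] + sq[1][1]

def dephase(rho, gamma):
    """Toy decoherence: damp the off-diagonal coherences by exp(-gamma)."""
    d = math.exp(-gamma)
    return [[rho[0][0], rho[0][1] * d],
            [rho[1][0] * d, rho[1][1]]]

plus = [[0.5, 0.5], [0.5, 0.5]]    # pure superposition |+><+|
mixed = dephase(plus, gamma=50.0)  # effectively fully decohered
# purity(plus) is 1.0; purity(mixed) approaches 0.5, a classical mixture
```

The diagonal populations are untouched throughout, which is the sense in which decoherence selects the classical pointer basis without changing the measurement statistics in that basis.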
The idea of a computation determined by the environment is not, in itself, surprising. Consider ants in random motion on a mountainside so that some are able to reach the mountaintop. The fact of having reached the top is a consequence of the nature of the environment and it cannot be ascribed to the intention of the ant. It is in that sense that the environment reduces the dynamics of the system. But if the environment itself is random, no such discernible "computation" can occur.

Mind and computability
To the extent that machines used in experiments are physical systems, they are extensions of human agents. The agent is further associated with internal states and a cultural context. Although some physicists believe that all phenomena must be reducible to physics, others have suggested that one needs different kinds of descriptions. Popper and Eccles (1984) believed one needs three different worlds, each with its own language, to understand reality. In their classification, World 1 is the world of physical objects and states, World 2 is the world of subjective states such as thoughts, memories, and emotional states, and World 3 is knowledge in an objective sense. These worlds deal with outer sense, inner sense, and culture and spirit (or self), respectively. Eccles even argued that the self controls the brain.
Information requires an observer for whom the prior expectations are changed once a message has been received. The central idea here is that of computation of global states and an underlying structure of data within which the message must be examined. An observer cannot analyze data without models and expectations on measurements. The observer, therefore, must belong not only to Eccles' Worlds 1 and 2, but also to World 3. The observer must have complex stable structures that are entangled not only within its subsystems but also with the environment. This implies that the assumption, made by Zurek, that the environmental systems are randomly phased is not valid. What is true of the external environment is also true of the internal environment (structure) of the observer. One may assert that whatever is observed is predicated on the capacity of the cognitive systems to make such observation. One may also take particular characteristics of reality to be a consequence of the nature of the cognitive system of the human agent.
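The observer-dependence of information can be made concrete with Shannon's notion of surprisal, -log₂ p: the same message carries a different amount of information for observers holding different prior expectations. A minimal sketch (the two priors here are purely hypothetical):

```python
import math

def surprisal_bits(p):
    """Information gained on receiving a message assigned prior probability p."""
    return -math.log2(p)

# Two observers receive the identical message; what each gains depends
# entirely on the model and expectations each brought to the measurement.
informed = surprisal_bits(0.5)        # expected the message: 1 bit
surprised = surprisal_bits(1 / 1024)  # thought it very unlikely: 10 bits
```

The measure is well defined only once a prior distribution exists, which is the formal counterpart of the claim that data cannot be analyzed without models and expectations.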
Although there are many ways to look at information, it is common to view scientific information or knowledge within the context of specific areas that allow us to understand relationships between variables, find laws, and make predictions, where some of the phenomena are deterministic and others are random. Another perspective is to consider the nature of the language in which knowledge is expressed. Different areas of science have their own specialized languages, and across fields the languages are not always consistent. This inconsistency points to the gaps that exist in the understanding of reality.
The classical system can be observed since it can be isolated from other systems, and from the observer, due to the local nature of interactions. In contrast, since interactions in quantum mechanics are nonlocal, subsystems cannot be isolated within a system. Such isolation is essential for the observer to recognize his or her own identity, which requires some knowledge of the environment in advance.
If systems cannot be completely separated before the observation begins, observation should be impossible. From another perspective, the need for global states (related to World 3 aspects of observation) in the observation process implies that the measurement of one system by another by unitarity alone cannot be reduced to the Turing machine formalism.
Let us consider the implications of viewing the brain as a machine consistent with the physicalist position that consciousness is either an emergent property or an epiphenomenon (Baars, 1997). One may further assume that computers can capture the abstract causal organization of other systems and thus of the brain (Gamez, 2008). Panpsychism, the view that mind is to be found everywhere, is another position that has recently become popular in academic circles (Koch, 2014), but it is too extravagant in associating mind even with a straw or a rock. Broadly speaking, both physicalism and panpsychism associate mind with matter, although they do it in different ways.
From a computational perspective, it is counter-intuitive that a small subset of activity in the brain, in terms of abstract thought, is able to capture the workings of the physical world. This happens even though brain function is accompanied by the reorganization of its very structures during learning and use (Kak, 2016a), which goes beyond the Turing machine model on which the von Neumann architecture of digital computers is broadly based (Penrose, 1989).
Organisms do not create representations of visual or other stimuli and store them in short-term and long-term memories (Chemero, 2009). If experiences cannot be accessed, they cannot be described as data (whether digital or chemical), which is a requirement for processing by a computer. The processing done by the brain must thus be non-computable by a machine, and the emulation of cognition by a computer must therefore differ from how the brain does it.
The American psychologist James J. Gibson showed years ago (Gibson, 1979) that brain processing is not passive like that of a computer, but an active process between the subject and the environment. When perceiving an object, we attend to its properties to see what we can do with it. The mind actively explores the environment to find elements that guide the performance of one action or another. This information, called ecological information, is unique to the subject, and it does not lend itself to the information processing model.
Another perspective on this is that the most mundane memory tasks involve multiple and very large areas of the brain (Gilboa et al., 2004). Skill learning and expertise involve reorganization and plastic changes (Chang, 2014), which cannot be mapped into a computer with fixed architecture.
Although learning cognitive tasks may require attention and concentration, once the learning is complete they can be performed virtually automatically. The learning apparently leads to training of neural networks that convert the computational problem into one of classification and recognition. It should thus be possible to replicate in a machine the processing of tasks that involve computations in the neural circuitry of the brain. But raw awareness appears to be a phenomenon that is different from cognitive processing (A. Kak et al., 2016).
Attempts at mimicking the workings of the brain in the design of conscious machines face formidable scientific and engineering obstacles. Architectures that copy models of brain function have been investigated (Haikonen, 2012). These architectures include distributive agents and the global workspace theory (GWT) (Baars, 1988; Shanahan, 2006). In the GWT, separate parallel processes compete to place their information in the global workspace, whose contents are broadcast to a multitude of receiving processes. Since globally broadcast messages can evoke actions in receiving processes throughout the network, the global workspace may be used to exercise executive control to perform voluntary actions. This roughly mimics some brain functions without addressing the nature of consciousness.
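The competition-and-broadcast cycle of the GWT can be sketched in a few lines. The class and attribute names below are hypothetical illustrations, not taken from any published GWT implementation; they show only the bare mechanism: parallel processes compete by salience, and the winner's content is broadcast to every receiver.

```python
class Process:
    """A parallel specialist process holding content it wants to publish."""
    def __init__(self, content, salience):
        self.content = content
        self.salience = salience
        self.inbox = []          # broadcasts received from the workspace

    def receive(self, msg):
        self.inbox.append(msg)

class GlobalWorkspace:
    """Competition for the workspace; the winner's content goes to all."""
    def __init__(self):
        self.processes = []

    def register(self, p):
        self.processes.append(p)

    def cycle(self):
        # Competition: the most salient proposal wins the workspace.
        winner = max(self.processes, key=lambda p: p.salience)
        for p in self.processes:         # global broadcast
            p.receive(winner.content)
        return winner.content

gw = GlobalWorkspace()
for content, salience in [("visual: red light", 0.9), ("audio: hum", 0.3)]:
    gw.register(Process(content, salience))
broadcast = gw.cycle()   # the most salient content wins and is broadcast
```

Even in this toy form the point made in the text is visible: the mechanism routes and coordinates information, but nothing in it speaks to the nature of consciousness.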
Postulating a physical substrate of consciousness to be at the basis of complex patterns of activity has been called the integrated information theory (IIT) (Tononi et al., 2016). But it amounts to correlations in physical processes that cannot by themselves be the origin of awareness. Complex causally connected behavioral patterns can also be seen in social networks, but they are properly analyzed within the framework of an ecological system without recourse to the concept of consciousness.
In the standard neuroscience view, mind emerges from the interoperation of the various modules in the brain and its behavior must be completely described by the corresponding brain function, even though there is no evidence for any specific neural correlate of consciousness. Counterintuitive characteristics of the mind may be ascribable to underlying quantum processes, but although quantum mechanics might indeed play a role in brain processes, there is no compelling reason to assume that quantum-ness is associated with the phenomenon of consciousness.
Reality is comprehended in consciousness, and not directly in terms of space, time and matter.
Consciousness is the doorway that shows us the world and makes self-knowledge possible and it is the source of creativity although it is constrained by the habits and limitations of the mind (Freeman, 1999). Although we are unable to define consciousness as a mathematical property of a system, the quality of its manifestation in a natural system depends on structure and different modes of processing. Consciousness and the material world complement each other and consciousness may influence material evolution as in the Quantum Zeno effect.
The statement that consciousness is the ground on which experience is evoked variously may appear to be a principle that is no different from that of gravity that works on objects differently depending upon their location. But this does not rule out the possibility that the structure of the mind makes it impossible to know reality completely.
We think of ourselves as being outside of the physical world. Even our conceptions of the universe are as if we are not a part of it, and in the words of Schrödinger: "We do not belong to this material world that science constructs for us. We are not in it; we are outside. We are only spectators. The reason why we believe that we are in it, that we belong to the picture, is that our bodies are in the picture. Our bodies belong to it." (Schrödinger, 1967) If this sense of being outside of the physical world is true, it would be impossible to emulate it by hardware and processing that is within the world. It also follows that it will not be a computational property of the physical elements that comprise the system. Since consciousness is global, it must be non-algorithmic.
Can consciousness be defined as the capacity to know with certainty that one is conscious? This appears to be a circular definition, and it hinges on hard-to-define concepts such as "knowledge" and "belief". Consider the patient HM, who lost all capacity to form new memories after bilateral resection of the medial temporal lobe; his ability to hold beliefs and knowledge was greatly impaired (Squire, 2009), yet he undoubtedly had all the hallmarks of consciousness.
We can argue for the non-computability of consciousness from its parallel with the unsolvability of the halting problem. Let us define "consciousness" as some privileged state of the mind that makes its processes halt (we do not specify it beyond this description) and its contents registered (which is what we imply by awareness). Humans can enter the state of "awareness" at any time, which means that the earlier computation has halted, irrespective of the initial state of the immediately preceding process. The exceptions are when a person is asleep or unconscious, as in a coma. But such halting on arbitrary input is impossible from a computability point of view. Therefore, it follows that consciousness is not computable (Kak, 2019).
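The impossibility claim rests on the classical diagonal argument: given any claimed halting-decider, one can construct a program that does the opposite of whatever the decider predicts about it. The sketch below (function names are our own, purely illustrative) refutes one particular naive decider; the full proof shows that every candidate decider fails on its own diagonal program in the same way.

```python
def make_diagonal(halts):
    """Given a claimed halting-decider, build a program it must misjudge."""
    def diagonal():
        if halts(diagonal):
            while True:       # loop forever, contradicting "it halts"
                pass
        # otherwise halt at once, contradicting "it never halts"
    return diagonal

def pessimistic_halts(prog):
    """A (wrong) decider that declares every program non-halting."""
    return False

diag = make_diagonal(pessimistic_halts)
# pessimistic_halts predicts diag never halts, yet calling diag() returns
# immediately: the decider is refuted on its own diagonal program.
diag()
```

No total computable procedure can decide halting for arbitrary input, which is the formal fact the argument in the text leans on.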
The consideration of information (or entropy) in physical theory, which is commonly done in many branches of physics, implies an unstated postulation of consciousness. Information cannot be reduced to local operations by any reductionist program. It requires the use of signs derived from global properties and the capacity to make choices which, in turn, implies agency. Such agency will be consistent with physical law only if it does not involve the expenditure of energy. There are different ways the interaction between matter and consciousness may be considered, and these include the questions of information and of dynamics.
As quantum mechanics, unlike classical physics, involves nonlocal correlations, the premise may be advanced that the workings of consciousness associated with a brain must also have a nonlocal component (Kafatos and Kak, 2015). To seek the physical trace induced by consciousness associated with humans and other sentient subjects, one would need to look for probabilities that do not conform to local expectations.

Concluding remarks
In classical physics, the future of a system is completely known in principle, given the initial conditions and the assumption that the computations are made by an observer who is located outside of the system. For classical mechanics to be universally true, the behavior of the observer must also be viewed as being completely predictable, which negates the idea of free will. Since the very act of observation requires making a choice, observers and the observation process lie outside of classical physics.
The situation is essentially the same in quantum mechanics for the unobserved system, which evolves deterministically by the Schrödinger equation. The consideration of observation as unfolding of the entanglement between the system and the apparatus (or the observer) into a statistical mixture of classical pointer states has circularity built into it, since we expect the state to evolve preferentially into the classical states. Nor does it address the question of why the system and the apparatus were separate at the beginning of the measurement, given that the universe itself has an evolving state function that overarches the evolution of its subsystems.
We argued that while collapse of the state function in quantum mechanics is best described as a nonlocal process, this nonlocality is veiled. This has important implications for quantum information processing and the development of quantum computers. In practical terms, it compels us to view the wavefunction as representing our lack of knowledge rather than reality itself.
If the case is made that entanglement or nonlocality is at the basis of the superiority of quantum computing algorithms, then the effective veiling of this nonlocality will preclude its exploitation in any practical implementation scheme. This veiling will manifest itself as noise and errors that are impossible to control, thus making effective computation unattainable.
Nonlocality lies outside the framework of classical science and, therefore, its actual occurrence is problematic in making sense of the world. But veiled nonlocality, which appears to be consistent with the Copenhagen Interpretation, saves us from this difficulty. It creates the space in which human agency sees its actions as consistent with classical explanations.
Considering the question of logical categories, we showed that absolute optimality is unachievable, since the optimal radix is e, a non-integer, irrational value. Since ternary categories are more efficient than binary ones, one may need to pose certain physical questions not in terms of "yes" or "no" but with an additional category. This third "indeterminate" category may be seen as the origin of non-classical aspects of reality.
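The efficiency claim is the classical radix-economy argument: the cost of representing a large number N in base b scales as b·log_b N ∝ b/ln b, a quantity minimized at the non-integer value b = e, with base 3 the most economical integer radix. A quick numerical check:

```python
import math

def radix_economy(b):
    """Relative cost of a positional representation in base b: b / ln b."""
    return b / math.log(b)

# b / ln b: base 2 -> 2.885..., base 3 -> 2.731..., base 4 -> 2.885...,
# and the continuous minimum sits at b = e -> 2.718...
ternary, binary = radix_economy(3), radix_economy(2)
optimum = radix_economy(math.e)   # ternary beats binary; e beats both
```

Note that bases 2 and 4 tie exactly, since 4/ln 4 = 4/(2 ln 2) = 2/ln 2; among integer radices, 3 is the unique minimum.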
Consciousness cannot intervene in physical law, but it can change the probabilities in the evolution of quantum processes without changing the dynamics, and this provides an explanation of how consciousness can be reconciled with physical law. It appears to be a privileged state of the mind that makes brain processes halt and their contents registered at arbitrary time instants. Since such halting on arbitrary input is impossible from a computability point of view, we conclude that consciousness is not computable.