Orchestrated Objective Reduction ("ORCH OR")

Orchestrated Objective Reduction ("ORCH OR")

Postby d023n » Sun Mar 25, 2018 11:32 pm

Here are a couple of excerpts from the following paper by Sir Roger Penrose and Stuart Hameroff regarding this theory. The first explains the motivations behind the OR part, and the second explains the motivations behind the ORCH part, but I highly recommend reading the entire paper. At the end is a link to a 43-minute YouTube video of Stuart Hameroff summarizing the theory along with a few related ideas, including his complementary "Conscious Pilot" model, which is also quite intriguing.

CONSCIOUSNESS IN THE UNIVERSE: AN UPDATED REVIEW OF THE "ORCH OR" THEORY

Stuart R. Hameroff and Roger Penrose

14.4.3. The measurement problem and OR

The issue of why we don't directly perceive quantum superpositions is a manifestation of the measurement problem mentioned above. Put more precisely, the measurement problem is the conflict between the two fundamental procedures of quantum mechanics. One of these procedures, referred to as unitary evolution, denoted here by U, is the continuous deterministic evolution of the quantum state (i.e. of the wavefunction of the entire system) according to the fundamental Schrödinger equation. The other is the procedure that is adopted whenever a measurement of the system—or observation—is deemed to have taken place, where the quantum state is discontinuously and probabilistically replaced by another quantum state (referred to, technically, as an eigenstate of a mathematical operator that is taken to describe the measurement). This discontinuous jumping of the state is referred to as the reduction of the state (or the "collapse of the wavefunction"), and will be denoted here by the letter R. This conflict between U and R is what is encapsulated by the term "measurement problem" (but perhaps more accurately it may be referred to as "the measurement paradox") and its problematic nature is made manifest when we consider the measuring apparatus itself as a quantum entity, which is part of the entire quantum system consisting of the original system under observation together with this measuring apparatus. The apparatus is, after all, constructed out of the same type of quantum ingredients (electrons, photons, protons, neutrons etc.—or quarks and gluons etc.) as is the system under observation, so it ought to be subject also to the same quantum laws, these being described in terms of the continuous and deterministic U. How, then, can the discontinuous and probabilistic R come about as a result of the interaction (measurement) between two parts of the quantum system? This is the paradox faced by the measurement problem.
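
For readers who want the two procedures side by side, they can be written compactly in standard notation (this summary is mine, not part of the quoted paper): U is the continuous Schrödinger evolution of the state, and R is the discontinuous, probabilistic jump on measurement.

% U: continuous, deterministic evolution of the state |psi(t)> under the Hamiltonian H
i\hbar \frac{d}{dt}\lvert\psi(t)\rangle = \hat{H}\,\lvert\psi(t)\rangle

% R: on measuring an observable A with eigenstates |a_k>, the state jumps
% discontinuously to one eigenstate, with Born-rule probability
\lvert\psi\rangle \;\longrightarrow\; \lvert a_k\rangle
\quad\text{with probability}\quad
P(a_k) = \bigl|\langle a_k \mid \psi \rangle\bigr|^2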

There are many ways that quantum physicists have attempted to come to terms with this conflict (Bell, 1966; Bohm, 1983; Rae, 2002; Polkinghorne, 2002; Penrose, 2004). In the early 20th century, the Danish physicist Niels Bohr, together with Werner Heisenberg, proposed the pragmatic "Copenhagen interpretation," according to which the wavefunction of a quantum system, evolving according to U, is not assigned any actual physical "reality," but is taken as basically providing the needed "book-keeping" so that eventually probability values can be assigned to the various possible outcomes of a quantum measurement. The measuring device itself is explicitly taken to behave classically and no account is taken of the fact that the device is ultimately built from quantum-level constituents. The probabilities are calculated, once the nature of the measuring device is known, from the state that the wavefunction has U-evolved to at the time of the measurement. The discontinuous "jump" that the wavefunction makes upon measurement, according to R, is attributed to the change in "knowledge" that the result of the measurement has on the observer. Since the wavefunction is not assigned physical reality, but is considered to refer merely to the observer's knowledge of the quantum system, the jumping is considered simply to reflect the jump in the observer's knowledge state, rather than in the quantum system under consideration.

Many physicists remain unhappy with such a point of view, however, and regard it largely as a "stop-gap," in order that progress can be made in applying the quantum formalism, without this progress being held up by a lack of a serious quantum ontology, which might provide a more complete picture of what is actually going on. One may ask, in particular, what it is about a measuring device that allows one to ignore the fact that it is itself made from quantum constituents and is permitted to be treated entirely classically. A good many proponents of the Copenhagen standpoint would take the view that while the physical measuring apparatus ought actually to be treated as a quantum system, and therefore part of an over-riding wavefunction evolving according to U, it would be the conscious observer, examining the readings on that device, who actually reduces the state, according to R, thereby assigning a physical reality to the particular observed alternative resulting from the measurement. Accordingly, before the intervention of the observer's consciousness, the various alternatives of the result of the measurement including the different states of the measuring apparatus would, in effect, still have to be treated as coexisting in superposition, in accordance with what would be the usual evolution according to U. In this way, the Copenhagen viewpoint puts consciousness outside science, and does not seriously address the ontological nature or physical role of superposition itself nor the question of how large quantum superpositions like Schrödinger's superposed alive and dead cat (see below) might actually become one thing or another.

A more extreme variant of this approach is the "multiple worlds hypothesis" of Everett (1957) in which each possibility in a superposition evolves to form its own universe, resulting in an infinite multitude of coexisting "parallel" worlds. The stream of consciousness of the observer is supposed somehow to "split," so that there is one in each of the worlds—at least in those worlds for which the observer remains alive and conscious. Each instance of the observer's consciousness experiences a separate independent world, and is not directly aware of any of the other worlds.

A more "down-to-earth" viewpoint is that of environmental decoherence, in which interaction of a superposition with its environment "erodes" quantum states, so that instead of a single wavefunction being used to describe the state, a more complicated entity is used, referred to as a density matrix. However, decoherence does not provide a consistent ontology for the reality of the world, in relation to the density matrix (see, for example, Penrose (1994), Secs. 29.3–29.6), and provides merely a pragmatic procedure. Moreover, it does not address the issue of how R might arise in isolated systems, nor the nature of isolation, in which an external "environment" would not be involved, nor does it tell us which part of a system is to be regarded as the 'environment' part, and it provides no limit to the size of that part which can remain subject to quantum superposition.

Still other approaches include various types of OR in which a specific objective threshold is proposed to cause quantum state reduction (Percival, 1994; Moroz et al., 1998; Ghirardi et al., 1986). The specific OR scheme that is used in Orch OR will be described below.

The quantum pioneer Erwin Schrödinger took pains to point out the difficulties that confront the U-evolution of a quantum system with his still-famous thought experiment called "Schrödinger's cat" (Schrödinger, 1935). Here, the fate of a cat in a box is determined by magnifying a quantum event (say the decay of a radioactive atom, within a specific time period that would provide a 50% probability of decay) to a macroscopic action which would kill the cat, so that according to Schrödinger's own U-evolution the cat would be in a quantum superposition of being both dead and alive at the same time. According to this perspective on the Copenhagen interpretation, if this U-evolution is maintained until the box is opened and the cat observed, then it would have to be the conscious human observing the cat that results in the cat becoming either dead or alive (unless, of course, the cat's own consciousness could be considered to have already served this purpose). Schrödinger intended to illustrate the absurdity of the direct applicability of the rules of quantum mechanics (including his own U-evolution) when applied at the level of a cat. Like Einstein, he regarded quantum mechanics as an incomplete theory, and his 'cat' provided an excellent example for emphasizing this incompleteness. There is a need for something to be done about quantum mechanics, irrespective of the issue of its relevance to consciousness.


14.5.1. Orch OR quantum computing in the brain

Penrose (1989, 1994) suggested that consciousness depends in some way on processes of the general nature of quantum computations occurring in the brain, these being terminated by some form of OR. Here the term "quantum computation" is being used in a loose sense, in which information is encoded in some discrete (not necessarily binary) physical form, and where the evolution is determined according to the U process (Schrödinger's equation). In the standard picture of quantum computers (Benioff, 1982; Deutsch, 1985; Feynman, 1986), information is represented not just as bits of either 1 or 0, but during the U process, also as quantum superposition of both 1 and 0 together (quantum bits or "qubits") where, moreover, large-scale entanglements among many qubits would also be involved. These entangled qubits would compute, in accordance with the Schrödinger equation, in order to enable complex and highly efficient potential parallel processing. As originally conceived, quantum computers would indeed act strictly in accordance with U, but at some point a measurement is made causing a quantum state reduction R (with some randomness normally introduced). Accordingly, the output is in the form of a definite state in terms of classical bits.
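
As a concrete illustration of that standard qubit picture (and nothing more; this is a textbook toy of mine, not a model of anything in the brain), the following Python/NumPy sketch prepares an entangled two-qubit superposition by unitary evolution (U) and then simulates the reduction (R) to definite classical bits on measurement.

import numpy as np

# Single-qubit basis state |0> as a column vector.
ket0 = np.array([1.0, 0.0])

# Hadamard gate: takes a definite bit into an equal superposition of 0 and 1.
H = np.array([[1, 1],
              [1, -1]]) / np.sqrt(2)

# CNOT gate on two qubits: entangles the target qubit with the control qubit.
CNOT = np.array([[1, 0, 0, 0],
                 [0, 1, 0, 0],
                 [0, 0, 0, 1],
                 [0, 0, 1, 0]])

# U process: deterministic unitary evolution of the joint state.
state = np.kron(ket0, ket0)             # start in |00>
state = np.kron(H, np.eye(2)) @ state   # superpose the first qubit
state = CNOT @ state                    # entangle the two qubits
# state is now (|00> + |11>)/sqrt(2): neither qubit has a definite bit value.

# R process: measurement in the computational basis. The Born rule gives the
# probability of each classical outcome; the state "jumps" to just one of them.
probs = np.abs(state) ** 2
outcome = np.random.choice(4, p=probs)
print("measured classical bits:", format(int(outcome), "02b"))  # '00' or '11', ~50% each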

A proposal was made in Penrose (1989) that something analogous to quantum computing, proceeding by the Schrödinger equation without decoherence, could well be acting in the brain, but where, for conscious processes, this would have to terminate in accordance with some threshold for self-collapse by a form of non-computable OR. A quantum computation terminating by OR could thus be associated with consciousness. However, no plausible biological candidate for quantum computing in the brain had been available to him, as he was then unfamiliar with microtubules. Penrose and Hameroff teamed up in the early 1990s when, fortunately, the DP form of OR mechanism was then at hand to be applied in extending the microtubule-automata models for consciousness as had been developed by Hameroff and colleagues.

As described in Sec. 2.3, the most logical strategic site for coherent microtubule Orch OR and consciousness is in post-synaptic dendrites and soma (in which microtubules are uniquely arrayed and stabilized) during integration phases in integrate-and-fire brain neurons. Synaptic inputs could "orchestrate" tubulin states governed by quantum dipoles, leading to tubulin superposition in vast numbers of microtubules all involved quantum-coherently together in a large-scale quantum state, where entanglement and quantum computation take place during integration. The termination, by OR, of this orchestrated quantum computation at the end of integration phases would select microtubule states which could then influence and regulate axonal firings, thus controlling conscious behavior. Quantum states in dendrites and soma of a particular neuron could entangle with microtubules in the dendritic tree of that neuron, and also in neighboring neurons via dendritic–dendritic (or dendritic–interneuron–dendritic) gap junctions, enabling quantum entanglement of superposed microtubule tubulins among many neurons (Fig. 1). This allows unity and binding of conscious content, and a large E_G which reaches threshold (by τ ≈ ℏ/E_G) quickly, such as at end-integration in EEG-relevant periods of time, e.g., τ = 0.5 s to τ = 10^−2 s. In the Orch OR "beat frequency" proposal, we envisage that τ could be far briefer, e.g., 10^−7 s, a time interval already shown by Bandyopadhyay’s group to sustain apparent quantum coherence in microtubules. In either case, or a mixture of both, Orch OR provides a possible way to account for frequent moments of conscious awareness and choices governing conscious behavior.
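
As a rough sanity check on the numbers quoted above, the short Python sketch below simply inverts τ ≈ ℏ/E_G to show the gravitational self-energy E_G implied by each proposed timescale (the labels are mine):

# Back-of-envelope use of tau ≈ hbar / E_G: for each proposed timescale tau,
# print the self-energy E_G that would have to be reached at threshold.
hbar = 1.054571817e-34  # reduced Planck constant, in J*s
eV = 1.602176634e-19    # joules per electronvolt

for label, tau in [("end-integration", 0.5),
                   ("EEG-relevant", 1e-2),
                   ("'beat frequency' proposal", 1e-7)]:
    E_G = hbar / tau
    print(f"tau = {tau:g} s ({label}): E_G ≈ {E_G:.2e} J ≈ {E_G / eV:.2e} eV")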

Section 3 described microtubule automata, in which tubulins represent distinct information states interacting with neighbor states according to rules based on dipole couplings which can apply to either London force electric dipoles, or electron spin magnetic dipoles. These dipoles move atomic nuclei slightly (femtometers), and become quantum superpositioned (along with superpositioned atomic nuclei), entangled and perform quantum computation in a U process. In dendrites and soma of brain neurons, synaptic inputs could encode memory in alternating classical phases, thereby avoiding random environmental decoherence to "orchestrate" U quantum processes, enabling them to reach threshold at time τ for orchestrated objective reduction "Orch OR" by τ ≈ ℏ/E_G. At that time, according to this proposal, a moment of conscious experience occurs, and tubulin states are selected which influence axonal firing, encode memory and regulate synaptic plasticity.
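
To make the "automaton" part of that description concrete, here is a deliberately classical Python toy: tubulins as +1/−1 dipole states on a small lattice, each step aligning with the net dipole of its neighbours. The lattice size and update rule are illustrative assumptions of mine; they say nothing about the quantum (U) dynamics or the actual couplings the authors propose.

import numpy as np

# Classical toy "microtubule automaton": dipole states on an 8 x 13 lattice
# (13 columns loosely echoing the 13 protofilaments of a microtubule).
rng = np.random.default_rng(0)
lattice = rng.choice([-1, 1], size=(8, 13))

def step(grid):
    # Sum the dipoles of the four nearest neighbours (with wrap-around) and
    # align each tubulin with the sign of that local field.
    field = (np.roll(grid, 1, axis=0) + np.roll(grid, -1, axis=0)
             + np.roll(grid, 1, axis=1) + np.roll(grid, -1, axis=1))
    return np.where(field >= 0, 1, -1)

for _ in range(5):
    lattice = step(lattice)
print(lattice)  # the random initial pattern settles into ordered dipole domains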

An Orch OR moment is shown schematically in Fig. 10. The top panel shows microtubule automata with (gray) superposition E_G increasing over a period up to time τ, evolving deterministically and algorithmically by the Schrödinger equation (U) until the threshold for OR by τ ≈ ℏ/E_G is reached, at which time Orch OR occurs, accompanied by a moment of conscious experience. In the "beat frequency" modification of this proposal, these Orch OR events could occur on a faster timescale, for example in the megahertz range. Their far slower beat frequencies might then constitute conscious moments. The particular selection of conscious perceptions and choices would, according to standard quantum theory, involve an entirely random process, but according to Orch OR, the (objective) reduction could act to select specific states in accordance with some non-computational new physics (in line with suggestions made in Penrose (1989, 1994)).

Figure 10 (middle) depicts alternative superposed space–time curvatures (Figs. 8 and 9) corresponding to the superpositions portrayed in MTs in the top of the figure, reaching threshold at the moment of OR and selecting one space–time. Figure 10 (bottom) shows a schematic of the same process.

The idea is that consciousness is associated with this (gravitational) OR process, but (see Sec. 4.5) occurs significantly only when (1) the alternatives are part of some highly organized cognitive structure capable of information processing, so that OR occurs in an extremely orchestrated form, with vast numbers of microtubules acting coherently, in order that there is sufficient mass displacement overall for the τ ≈ ℏ/E_G criterion to be satisfied. (2) Interaction with the environment must be avoided long enough during the U-process evolution so that the strictly orchestrated components of the superposition reach the OR threshold without too much randomness, and reflect a significant non-computable influence. Only then does a recognizably conscious Orch OR event take place. On the other hand, we may consider that any individual occurrence of OR without orchestration would be a moment of random proto-consciousness lacking cognition and meaningful content.

We shall be seeing orchestrated OR in more detail shortly, together with its particular relevance to microtubules. In any case, we recognize that the experiential elements of proto-consciousness would be intimately tied in with the most primitive Planck-level ingredients of space–time geometry, these presumed "ingredients" being taken to be at the absurdly tiny level of 10^−35 m and 10^−43 s, a distance and a time some 20 orders of magnitude smaller than those of normal particle-physics scales and their most rapid processes, and smaller by far than biological scales and processes. These scales refer only to the normally extremely tiny differences in space–time geometry between different states in superposition, the separated states themselves being enormously larger. OR is deemed to take place when such tiny space–time differences reach the Planck level (roughly speaking). Owing to the extreme weakness of gravitational forces as compared with those of the chemical and electric forces of biology, the energy E_G is liable to be far smaller than any energy that arises directly from biological processes.

OR acts effectively instantaneously as a choice between dynamical alternatives (a choice that is an integral part of the relevant quantum dynamics) and E_G is not to be thought of as being in direct competition with any of the usual biological energies, as it plays a completely different role, supplying a needed energy uncertainty that then allows a choice to be made between the separated space–time geometries, rather than providing an actual energy that enters into any considerations of energy balance that would be of direct relevance to chemical or normal physical processes. This energy uncertainty is the key ingredient of the computation of the reduction time τ, and it is appropriate that this energy uncertainty is indeed far smaller than the energies that are normally under consideration with regard to chemical energy balance, etc. If it were not so, then there would be a danger of conflict with normal considerations of energy balance.

Nevertheless, the extreme weakness of gravity tells us there must be a considerable amount of material involved in the coherent mass displacement between superposed structures in order that τ can be small enough to be playing its necessary role in the relevant OR processes in the brain. These superposed structures should also process information and regulate neuronal physiology. According to Orch OR, microtubules are central to these structures, and some form of biological quantum computation in microtubules (perhaps in the more symmetrical A-lattice microtubules) would have to be involved to provide a subtle yet direct connection to Planck-scale geometry, leading eventually to discrete moments of actual conscious experience and choice. As described above, these are presumed to occur primarily in dendritic–somatic microtubules during integration phases in integrate-and-fire brain neurons, resulting in sequences of Orch OR conscious moments occurring within brain physiology, and able to regulate neuronal firings and behavior.


Dr. Stuart Hameroff - Quantum Consciousness and its Nature ... In Microtubules ? - Brief History.

Re: Orchestrated Objective Reduction ("ORCH OR")

Postby d023n » Tue Mar 27, 2018 12:58 pm

If anyone is interested in this so far but also likes to mix philosophy in with their physics, Marcus Arvan has an extremely fascinating take on the idea that our universe is a simulated reality, which happens to fit quite well with Penrose's Objective Reduction theory. It is called the Peer-to-Peer Simulation Hypothesis.

The P2P hypothesis holds that we are living in a peer-to-peer networked computer simulation. Some computer simulations have a "dedicated" central server (a single computer running the simulation that all other computers access). However, peer-to-peer networked simulations have no central server. The "simulated reality" is simply a vast network of different computers (a "cloud") running the simulation in parallel.


The Peer-to-Peer Simulation Hypothesis explains features of our world that otherwise have no known explanation. Physicists, to this very date, do not have any deep theory of why our world is quantum mechanical or relativistic. The equations of quantum mechanics and relativity merely reflect the fact that our world has these strange features. The Peer-to-Peer Simulation Hypothesis provides the first unified explanation of why our world is quantum mechanical and relativistic. It shows that "quantum mechanics" and "relativity" emerge naturally and inevitably from the purely computational structure of a peer-to-peer simulation.


Here is an excerpt from one of Arvan's papers.

A Unified Explanation of Quantum Phenomena? The Case for the Peer-to-Peer Simulation Hypothesis as an Interdisciplinary Research Program

Online computer simulations are by now familiar parts of our world. Computer scientists and videogame companies have created sophisticated simulated environments in which “players” can navigate and interact with one another online. These simulated environments often have, within them, functional analogues of the kinds of ordinary objects we interact with in our world: they have simulated rocks, simulated cars, simulated guns, simulated bullets etc. There are, however, two distinct types of online simulations: (1) “dedicated-server” simulations, and (2) peer-to-peer (P2P) simulations. Allow me to explain the difference. A dedicated server online simulation is one in which one computer on the network (the “dedicated server”) represents where objects are in the simulated environment (see Figure 1). Every object in a dedicated server simulation thus has determinate properties within the simulation, including determinate positions and velocities. Moreover, provided the other computers hooked up to the simulation interact with the dedicated server properly, each computer on the network will take the same measurements, measuring objects in the simulated environment as having precisely the properties (e.g. location, velocity, etc.) represented on the server.


A peer-to-peer (P2P) networked simulation, however, is very different. In a P2P network, no single computer anywhere on the network encodes where objects in the simulated environment "objectively" are. Rather, the simulated environment is comprised of the entire network of computers, each of which takes independent measurements at every instant, measurements which, in turn, at every successive instant, alter the measurements that other computers on the network will make (see Figure 2). In other words, a P2P simulation simply is an array of computers networked together where (A) each computer simulates the environment in parallel to every other computer on the network, and (B) the totality of individual measurements of each machine on the network at any given instant represents "the simulated environment" that all computers on the network "experience in common."
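
To make the contrast concrete, here is a very small Python sketch of the two architectures Arvan describes; all of the class names and the drift rule are invented for illustration and are not taken from his paper.

import random

class DedicatedServer:
    """One authoritative copy of an object's state; every client that queries
    the server measures exactly the same value."""
    def __init__(self, position):
        self.position = position

    def measure(self):
        return self.position

class Peer:
    """Each peer runs its own copy of the simulation, so its local estimate of
    the object's state can drift away from the other peers' estimates."""
    def __init__(self, position):
        self.position = position

    def tick(self):
        # Independent local update: a small random drift per simulated instant.
        self.position += random.gauss(0.0, 0.1)

    def measure(self):
        return self.position

server = DedicatedServer(position=0.0)
peers = [Peer(position=0.0) for _ in range(5)]
for _ in range(100):
    for p in peers:
        p.tick()

print("dedicated server:", [server.measure() for _ in range(3)])   # identical readings
print("peer-to-peer:    ", [round(p.measure(), 2) for p in peers])  # readings diverge

The dedicated-server case always returns the single authoritative value, while the peers' independent updates let their local copies drift apart; that divergence is what the discussion below maps onto superposition.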


The following is something I wrote on reddit a couple of weeks ago about the possible connection between Penrose's Objective Reduction theory and Arvan's P2P Simulation Hypothesis.

I have bolded the sentence that describes an effect that looks similar to Sir Roger Penrose's Objective Reduction idea, but from the outside, so to speak. The idea is that the sum of the instances of an object, such as an electron, that are spread around the peer-to-peer network running the simulation is what we on the inside would describe as a superposition. When the degree of divergence among these instances reaches a critical threshold, a single instance is selected to update all of the peers, an event that we on the inside would describe as a collapse, reduction, or measurement. Furthermore, individual instances of an object might interact differently with other objects before reaching the threshold, something that we on the inside would describe as entanglement, thereby accelerating the process of divergence and so hastening the moment of reduction.

Objective Reduction explains that this threshold is achieved, and so reduction occurs, when the product of the age of the superposition and its "self-energy" (a quantity that Penrose explains in his paper linked above) reaches the reduced Planck constant; or, equivalently, the time until reduction is roughly the reduced Planck constant divided by the "self-energy" (τ ≈ ℏ/E_G). For example, immediately after the measurement of an electron, it once again begins to smear out into a superposition, and, because an electron has such a small mass, its superposition can spread extremely far across spacetime before its "self-energy" becomes great enough to reach the reduction threshold. However, if the electron superposition were to become entangled with another superposition, its time until reduction would then depend upon the new superposition of both together. In fact, the new "self-energy" of the larger superposition might be enough to satisfy the threshold, causing everything involved to instantly reduce, and so selecting a definite state for the original electron from which it would begin the process all over again.
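
The following Python toy is just one way of picturing that mapping: each peer holds its own instance of an "electron" coordinate, the spread of those instances stands in for the self-energy E_G, and once (elapsed time) × (spread) crosses a constant standing in for ℏ, one instance is chosen and synced to every peer. All numbers and rules here are invented for illustration; the point is only that a heavier object diverges faster and therefore "reduces" sooner, mirroring the τ ≈ ℏ/E_G scaling.

import random

HBAR_TOY = 1.0           # threshold constant standing in for hbar (arbitrary units)
DRIFT_PER_MASS = 0.001   # heavier objects drift apart faster in this toy

def ticks_until_reduction(mass, n_peers=8, seed=1):
    rng = random.Random(seed)
    instances = [0.0] * n_peers     # every peer agrees right after a "measurement"
    t = 0
    while True:
        t += 1
        # Each peer updates its own instance independently.
        instances = [x + rng.gauss(0.0, DRIFT_PER_MASS * mass) for x in instances]
        spread = max(instances) - min(instances)   # stands in for E_G
        if t * spread >= HBAR_TOY:                 # analogue of tau * E_G ~ hbar
            chosen = rng.choice(instances)         # one instance is selected...
            instances = [chosen] * n_peers         # ...and synced to every peer
            return t

print("light object: reduction after", ticks_until_reduction(mass=1.0), "ticks")
print("heavy object: reduction after", ticks_until_reduction(mass=100.0), "ticks")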

Of course, why the threshold behaves this way is almost certainly related to the fundamental programming and computational limitations of the peer-to-peer network itself. It seems sensible to say that the more information is being processed, the slower the processing occurs, which would manifest on the inside as time dilation effects, offering an interesting way to explain why time runs more slowly for massive objects and implying that black holes are areas of the simulation that have frozen entirely. If superpositions were allowed to grow without limit, the network would quickly become overwhelmed with keeping track of all the increasingly divergent paths, and everything would inevitably freeze. The only way around this problem would be if there were simply more and more computational resources available for the peers, although this would seem to have problematic consequences for the appearance of time dilation effects. Effectively unlimited resources would actually be the situation that the "many worlds" interpretation of quantum mechanics describes and would not be a peer-to-peer type simulation; it would just be a huge collection of "dedicated server" type simulations spawning more and more "dedicated server" type simulations for every interaction. This could still work, but it would be insanely more complicated than the peer-to-peer setup.

All of this being said, positive experimental evidence for Objective Reduction still would not be "proof" that our universe is a simulation. To be quite honest, I think that the whole simulation idea isn't a matter of proof at all and is instead a matter of axiomatic construction. However, it does really look like the P2P idea could be a way for us to more efficiently simulate our own physics for whatever reasons we might want, and, if we did start to think about physics in the context of the simulation idea, we might be able to more effectively figure out things we otherwise wouldn't necessarily think to look for, things like how to manipulate the computational architecture in unintended ways along the same lines as the recently discovered Spectre and Meltdown vulnerabilities. Who knows, maybe seeing the idea of "existence" as necessarily computational or "simulated" in nature is the first step in actually understanding presently pointless philosophical topics like where everything comes from, what death might really mean, and things like that. Heck, we might even figure out a way to get out of this universe.

d023n.

Re: Orchestrated Objective Reduction ("ORCH OR")

Postby d023n » Fri Nov 30, 2018 11:49 pm

If the brain starts acting before the perception of conscious choice, does that mean that consciousness is just an illusion, or is retrocausality somehow a thing? Sir Roger Penrose has a fascinating and surprisingly simple idea about how it works. The TL;DR is at the bottom down there, but first, here is the relevant context.

So, the last 2 posts were back in March, but in April, the University of Arizona Center for Consciousness Studies held its biennial Tucson Science of Consciousness Conference (YouTube channel link), which "is an interdisciplinary conference aimed at rigorous and leading edge approaches to all aspects of the study of conscious experience. These include neuroscience, psychology, philosophy, cognitive science, artificial intelligence, molecular biology, medicine, quantum physics, and cosmology, as well as art, technology, and experiential and contemplative approaches. The conference is the largest and longest-running interdisciplinary gathering probing fundamental questions related to conscious experience." Yada yada yada..

I should have posted this here at the end of July when the YouTube channel fiiinally added the Plenary session where Sir Roger Penrose spoke about Orchestrated Objective Reduction, but here it is now: Why Algorithmic Systems Possess No Understanding (~35 minute talk, starting at the already queued up 01:04:38 mark).

However, the part about apparent retrocausality starts a little after the 1 hour 35 minute mark, and he pulls out his handy transparencies (he loves his transparencies) after about 60 seconds and then explains his idea until just after the 1 hour 41 minute mark.

---

Anyway, the TL;DR is this: (1) a wavefunction in the brain begins to spread out in an orchestrated manner by way of the isolated environment within neuronal microtubules, but still taking many possible, superposed paths; (2) the wavefunction reaches its threshold of objective reduction and reduces, as a moment of meaningful, human-level conscious experience (e.g. a choice); (3) and now, because only the single path which led to that choice remains in the actual history of the universe, it appears as if the universe made the choice way back at moment (1) instead of at moment (2).

In other words, a wavefunction collapses at the end of its evolution in a way shaped by all of the paths it represents, but it leaves behind a single path that appears, in retrospect, to have been selected from the start.

I highly suggest listening to how Penrose explains it though, if you haven't already.