Reality is a chaotic flux of becoming that we turn into something intelligible by imposing categories such as thing, substance, permanence, object, cause, and attribute. The world is a mere fiction, constructed of fictitious entities. There is no perspective-independent "real world," and there cannot be any objective or absolute truths in the sense of beliefs or statements that correspond to "the way things really are," independent of human perceptions and categorizations. There are no facts, only interpretations.
How do fields express their principles? Physicists use terms like photons, electrons, quarks, quantum wave functions, relativity, and energy conservation. Astronomers use terms like planets, stars, galaxies, Hubble shift, and black holes. Thermodynamicists use terms like entropy, first law, second law, and Carnot cycle. Biologists use terms like phylogeny, ontogeny, DNA, and enzymes. Each of these terms can be considered to be the thread of a story. The principles of a field are actually a set of interwoven stories about the structure and behavior of field elements, the fabric of the multiverse.
Peter J. Denning
On most days I read physics books while Linda knits beside me in our sunroom. I learn how the universe was once constructed from a few simple rules as she constructs beautiful, multi-dimensional things from a hank of yarn and a few simple linear instructions. Then, last weekend, I joined forces with a friend to try to explain to another, somewhat deterministic, friend how we conscious beings continue to knit the fabric of our reality by collapsing quantum probability states as we live our day-to-day lives. In using the word "determinism" I refer to the principle in classical mechanics that the values of the dynamic variables of a system, and of the forces acting on the system at a given time, completely determine the values of the variables at any later time. In other words, some creative force, whether it was God or Isaac Newton, started our universal clock and then placed us on its big stage to observe the rule-driven processes and potentially learn to understand them. As children we were all tempted into this simple view of reality, and academically reinforced in it.
Quantum mechanics differs from Newtonian determinism in a number of striking (and, to those brought up on Newtonian physics, baffling) ways. One is its characterization of things as waves, and of waves as probability distributions. In Newton’s world, every individual body occupies a determinate place. It is somewhere or other, and when we inquire as to its whereabouts, we are looking to see where it is. No body, moreover, can be in two places at once; this follows from the determinateness of its location, which follows from its individual identity as this body rather than that one.
Quantum mechanics upsets all of these assumptions. In quantum mechanical terms, a body, strictly speaking, is not at a point in space, but is spread through the whole of space, though not with the same probability. On this view, it is wrong to say that a body is here, not there. What one should say instead is that it is here with some probability, and there with some probability. When we look to see where it is, these probabilities are “converted to” an actuality. They “collapse” and become something definite -- a body in a place. But before we fix the body’s location with one measuring tool or another, it is, strictly speaking, everywhere, though not with the same probability.
The irrepressible rise of the Tea Party movement in the USA with its deterministic worldview threatens to invade and overtake all those areas of human activity that we associate with literature, culture, history, religion and the rest. Even now I remember the comfort of the feeling that a very learned person could know everything that was known and this delusion propelled me through many, many difficult books as I struggled to catch up as quickly as possible. I was well into my twenties before I discovered that there are more publications appearing every day than anyone could read in a lifetime and that there are more than 600,000 species of beetle. After a long recovery from this shock wave, I tried to develop a more discriminating idea of what should count as being known. By known, I now meant understood.
Understanding does not depend on knowing a lot of facts as such, but on having the right concepts, explanations and theories. One comparatively simple and comprehensible theory can cover an infinity of indigestible facts. In writing a paper called "The Ratification of Relativity" for an Introduction to Physics class in college, I learned that our best theory of planetary motion is Einstein's general theory of relativity, which early in the twentieth century superseded Newton's theories of gravity and motion. What makes the general theory of relativity so important is not that it can predict planetary motions a shade more accurately than Newton's can, but that it reveals and explains previously unsuspected aspects of reality, such as the curvature of space and time. This is typical of scientific explanation. Scientific theories explain the objects and phenomena of our experience in terms of an underlying reality which we do not experience directly. But the ability of a theory to explain what we experience is not its most valuable attribute. Its most valuable attribute is that it explains the fabric of reality itself.
In researching my paper on relativity I had learned that explanation is a strange form of food -- a larger portion is not necessarily harder to swallow. A theory may be superseded by a new theory which explains more, and is more accurate, but is also easier to understand, in which case the old theory becomes redundant, and we gain more understanding while needing to learn less than before. That is what happened when Nicolaus Copernicus's theory of the earth traveling around the sun superseded the complex Ptolemaic system which had placed the earth at the center of the universe. Or the new theory may be a unification of two old ones, giving us more of an understanding than using the old ones side by side, as happened when Michael Faraday and James Clerk Maxwell unified the theories of electricity and magnetism into a single theory of electromagnetism. More indirectly, better explanations in any subject tend to improve the techniques, concepts and language with which we are trying to understand other subjects, and so our knowledge as a whole, while increasing, can become structurally more amenable to being understood.
To be fair I must try to distinguish understanding from mere knowing. We always know when we do not understand something, even when we can accurately describe and predict it (for instance the course of a known disease of unknown origin), and we know when an explanation helps us understand it better. Roughly speaking, knowing is about 'what' and understanding is about 'why'; about the inner workings of things; about how things really are, not just how they appear to be; about what must be so, rather than what merely happens to be so; about laws of nature rather than rules of thumb. Understanding is also about coherence, elegance, and simplicity, as opposed to arbitrariness and complexity, though none of these things is really easy to define. But in any case, understanding is one of the higher functions of the human mind and brain, and a unique one. Many other physical systems, such as animals' brains, computers and other machines, can assimilate facts and act upon them. But at present we know of nothing that is capable of understanding an explanation -- or of wanting one in the first place -- other than the human mind. Every discovery of a new explanation depends on the uniquely human function of creative thought.
Although I no longer think that it is possible to understand everything that is, I do think that it is possible for more of us to understand everything that is understood because that depends upon the structure of our knowledge and its wonderfully increasing simplicity. In turn, the structure of our knowledge -- whether it is expressible in theories that fit together as a comprehensible whole -- does depend on what the fabric of reality is like. If knowledge itself is to continue its open-ended growth, and if we are nevertheless heading toward a state in which one person could understand everything that is understood, then the depth of our theories must continue to grow fast enough to make this possible. That can happen only if the fabric of reality itself is highly unified, so that more and more of it can be understood as our knowledge grows. If that happens, then eventually our theories will become so general that they will become a single theory of a unified fabric of reality. This theory will still not explain every aspect of reality; that is unobtainable by the current evolution of humankind. But it will encompass all known explanations, and will apply to the whole fabric of reality in so far as it is understood.
I have written about how I stumbled into my understanding of one of the two deepest theories in physics during my sophomore year in college. The second, quantum theory, is even deeper, and I am still in the process of assimilating many of its components. These two theories provide the detailed explanatory and formal framework within which all other theories in modern physics are expressed, and they contain overarching physical principles to which all other theories conform. A unification of general relativity and quantum theory -- to give a quantum theory of gravity -- has been a major quest of theoretical physicists for several decades. Quantum theory, like relativity, provides a revolutionary new mode of explanation of physical reality. The reason why quantum theory is the deeper of the two lies more outside physics than within it, for its ramifications are very wide, extending far beyond physics -- and even beyond science itself as it is normally conceived. Quantum theory is one of the four main strands from which the fabric of the universe is knitted, and I will try to write about it first because I know even less about the other three strands, which are Epistemology, Evolution, and Information Theory.
Niels Bohr said, "Anyone who is not shocked by quantum theory has not understood it." For reasons that I hope to cover under the heading of Evolution, our eye is one of our most important observational instruments for perceiving reality. The thread of information perceived by the human retina is light. Light seems easy enough to understand, but let’s imagine a flashlight in an otherwise dark room whose distant walls are made of totally absorbent, matte black material. One of the most straightforward properties of light is that a beam passing near us is invisible, so we would see only total blackness until the flashlight is pointed directly into our eyes. If the flashlight were then slowly pulled backward away from our eyes, the reflector would appear ever smaller. At a distance of approximately ten thousand kilometers the human observer would see nothing. If the observer were a frog, and the flashlight kept moving farther away, the moment at which the frog lost sight of the light would never occur. Instead the frog would see the light begin to flicker. The flickers would come at irregular intervals that would become longer as the frog moved away. At a distance of one hundred million kilometers from the flashlight, the frog would see on average only one flicker of light per day, but that flicker would be as bright as any observed at any distance. A frog's retinal cells are sensitive to individual particles of light, while ours respond only to a machine-gun spray of many particles arriving within a threshold of time.
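The frog's flicker rate falls off with the inverse square of distance, while each individual flicker stays equally bright. A minimal sketch of that scaling; the emission rate and pupil area are assumed, illustrative numbers, not measurements:

```python
import math

# Illustrative numbers only: a source emitting 1e20 photons per second
# uniformly into a hemisphere, observed through a pupil of ~0.5 cm^2.
EMISSION_RATE = 1e20   # photons per second (assumed)
PUPIL_AREA = 5e-5      # m^2 (assumed)

def photons_per_second(distance_m):
    """Inverse-square law: photons per second entering the pupil."""
    hemisphere_area = 2 * math.pi * distance_m ** 2
    return EMISSION_RATE * PUPIL_AREA / hemisphere_area

# Moving 100x farther away cuts the arrival rate by a factor of 10,000,
# but says nothing about the brightness of any photon that does arrive.
for d_km in (1e4, 1e6, 1e8):
    print(f"{d_km:9.0e} km -> {photons_per_second(d_km * 1e3):.2e} photons/s")
```

At large distances the rate drops below one photon per human integration time, which is exactly the regime where the frog sees sparse, full-brightness flickers and we see nothing.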
This property of appearing only in lumps of discrete sizes is called quantization. An individual lump, such as a photon, is called a quantum (plural quanta). Quantum theory gets its name from this property, which it attributes to all measurable physical quantities -- not just to things like the amount of light, or the mass of gold, which are quantized because, while they appear to be continuous, they are really made of particles. Our flashlight beam looks continuous only because every second it pours about a hundred trillion photons into an eye that looks into it. Next we repeat our experiment with the flashlight, intercepting and narrowing its beam by passing it through successively tinier pinholes onto a screen. What we are trying to determine is how ductile the light is -- how fine a thread can it be drawn into? It turns out that gold can be drawn into threads one ten-thousandth of an inch thick, but long before the pinholes become as small as a millimeter or so in diameter, the light begins to spread and 'fray'. Instead of passing through the holes in straight lines, it refuses to be confined and spreads out after each hole into the diffraction patterns associated with waves interfering with themselves. What matters for our present purposes is that light does bend. This means that shadows in general need not look like silhouettes of the objects that cast them. Suppose that we now adjust our flashlight so that instead of producing a hundred trillion photons per second, it shoots only one photon at a time. What would our frog, observing from the screen, see? If it is true that what interferes with each photon is other photons, then shouldn't the interference be lessened when the photons are very sparse? Should it not cease altogether when there is only one photon passing through the apparatus at any one time?
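The "hundred trillion photons per second" figure can be sanity-checked from the energy of a single photon, E = hc/λ. A sketch with assumed values; the optical power actually entering the eye (a few tens of microwatts) is an illustrative guess, not a cited measurement:

```python
# Sanity check on the photon count: divide the optical power entering
# the eye by the energy of a single photon, E = h*c/wavelength.
h = 6.626e-34          # Planck constant, J*s
c = 2.998e8            # speed of light, m/s
wavelength = 550e-9    # green light, m
power_into_eye = 4e-5  # watts; an assumed, illustrative figure

energy_per_photon = h * c / wavelength          # ~3.6e-19 J
photons_per_second = power_into_eye / energy_per_photon
print(f"{photons_per_second:.1e} photons per second")  # on the order of 1e14
```

With individual photon energies this small, any everyday beam delivers so many quanta per second that its granularity is completely hidden from our sluggish retinas.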
We might still expect penumbras, since a photon might be capable of changing course when passing through a slit (perhaps by striking a glancing blow at the edge). But what we surely could not expect is a place on the screen, such as X, that receives photons when two slits are open, but which goes dark when two more are opened.
In the first figure to the right you can see a beam of photons passing through a single slit and the resulting pattern on a screen. In the second figure, photons are passing through a second slit, revealing an interference pattern that is more than the sum of the two slits' photon intensities. The pattern arises because the amplitudes of the two photon waves add before being squared, so that their phase difference creates the observed interference. So we have the state of a photon described by a wave function giving its amplitude as a function of position and time. This wave function gives us the particle's amplitude, not its intensity: to find the intensity, we must square the amplitude's magnitude. The intensity of the wave is equal to the probability that the particle will be found at that position at that time. That is how quantum theory converts questions of position and momentum into probabilities: by using a wave function whose squared magnitude tells us the probability density that a particle will occupy a particular position or have a particular momentum. Furthermore, the Heisenberg uncertainty principle says that there is an inherent trade-off between these two quantities: the more accurately we know the position of a particle, the less accurately we know its momentum, and vice versa. Quantum physics, unlike classical physics, is not deterministic. You can never know the precise position and momentum of a particle at any one time. You can only give the probabilities of these linked quantities.
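The rule that amplitudes add first and only then get squared is what produces the fringes. A minimal numerical sketch; the slit separation, wavelength, and screen distance are assumed values chosen for convenience:

```python
import numpy as np

# Two-slit sketch: each slit contributes a complex amplitude whose phase
# comes from its path length to the screen; the amplitudes ADD, and only
# then is the sum squared to get an intensity (probability density).
lam, d, L = 500e-9, 1e-4, 1.0          # wavelength, slit gap, screen (m)
x = np.linspace(-0.02, 0.02, 2001)     # positions across the screen (m)

path1 = np.hypot(L, x - d / 2)         # distance from slit 1 to each point
path2 = np.hypot(L, x + d / 2)         # distance from slit 2 to each point
amp = np.exp(2j * np.pi * path1 / lam) + np.exp(2j * np.pi * path2 / lam)
intensity = np.abs(amp) ** 2

# One slit alone gives a flat intensity of 1 on this screen; two slits
# oscillate between ~0 and ~4, not a uniform 2 -- the cross term of the
# squared sum is the interference.
print(intensity.min(), intensity.max())
```

If intensities added instead of amplitudes, every point would read 2; the dark fringes near zero are the signature that something wave-like went through both slits.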
"The Universe is Splitting, Every Planck Time (10^-43 seconds), into Billions of Parallel Universes"
When a photon passes through one of the slits of our apparatus, something interferes with it, deflecting it in a way that depends on what other slits are open. These mysterious interfering entities have passed through some of the other slits; they behave exactly like photons, but they cannot be detected except through their interference with the photons we can see. If you will allow me for a moment to label these two types of photon tangible and shadow, we can infer that each tangible photon has an accompanying retinue of shadow photons, and that when a photon passes through one of the slits, some of its shadow counterparts pass through the other slits. Since different interference patterns appear when we cut slits at other places in the screen, provided that they are within the beam, shadow photons must be arriving all over the illuminated part of the screen whenever a tangible photon arrives. Therefore there are many more shadow photons than tangible ones. How many? Experiments cannot put an upper bound on the number, but they do set a lower bound. In a laboratory the largest area that has been illuminated with a laser might be a square meter, and the smallest manageable size for the pinholes might be a thousandth of a millimeter. So there are about one trillion possible pinhole locations on the screen. Therefore there must be at least a trillion shadow photons accompanying each tangible one.
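The lower bound at the end of that paragraph is simple arithmetic on the two experimental figures given:

```python
# Lower bound on shadow photons per tangible photon, using the figures
# from the text: one square metre of illuminated screen, and pinholes
# about a thousandth of a millimetre (1e-6 m) across.
illuminated_area = 1.0    # m^2
pinhole_width = 1e-6      # m

# Every distinct pinhole location can alter the interference pattern,
# so shadow photons must be arriving at each of them.
pinhole_locations = illuminated_area / pinhole_width ** 2
print(f"{pinhole_locations:.0e} locations")  # 1e+12, about a trillion
```

Finer pinholes or a larger illuminated area would push the bound higher; nothing in the experiment caps it from above.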
What this means is that we can infer the existence of a seething, prodigiously complicated, hidden world of shadow photons. They travel at the speed of light, bounce off mirrors, are refracted by lenses, and are stopped by opaque barriers or filters of the wrong color. Yet they do not trigger even the most sensitive detectors. The only thing in the universe that a shadow photon can be observed to affect is the tangible photon that it accompanies. That is the phenomenon of interference. Interference is not a special property of photons alone. Quantum theory predicts, and experiment confirms, that it occurs for every sort of particle. So there must be hosts of shadow neutrons accompanying every tangible neutron, hosts of shadow electrons accompanying every electron, and so on. Each of these shadow particles is detectable only indirectly, through its interference with the motion of its tangible counterpart.
In the post-quantum world we might think of calling the shadow particles, collectively, a parallel universe, for they too are affected by tangible particles only through interference phenomena. It turns out that shadow particles are partitioned among themselves in exactly the same way as the universe of tangible particles is partitioned from them. In other words, they do not form a single, homogeneous parallel universe vastly larger than the tangible one, but rather a huge number of parallel universes, each similar in composition to the tangible one, and each obeying the same laws of physics, but differing in that the particles are in different positions in each universe. Physicists prefer to carry on using the word ‘universe’ to denote the realm of tangible particles that it has always denoted, even though that entity now turns out to be only a small part of physical reality. A new word, ‘multiverse’, has been coined to denote physical reality as a whole.
Single particle interference experiments like those described above show us that the multiverse exists and that it contains many counterparts for each particle in the tangible universe. To reach the further conclusion that the multiverse is roughly partitioned into parallel universes, we must consider interference phenomena involving more than one particle. The simplest way of doing this is to ask, by way of a ‘thought experiment’ what must be happening at a microscopic level when shadow photons strike an opaque object. They are stopped, of course: we know that because interference ceases when an opaque barrier is placed in the paths of shadow photons. But why? What stops them? We can rule out the straightforward answer – that they are absorbed, like tangible photons would be, by the tangible atoms in the barrier. For one thing, we know that shadow photons do not interact with tangible atoms. For another, we can verify by measuring the atoms in the barrier (or more precisely, by replacing the barrier with a detector) that they neither absorb energy nor change their state in any way unless they are struck by tangible photons. Shadow photons have no effect on tangible atoms.
Therefore there is some sort of shadow barrier at the same location as the tangible barrier. It takes no great leap of imagination to conclude that this shadow barrier is made up of shadow atoms that we already know must be present as counterparts of the tangible atoms in the barrier. There are very many of them present for each tangible atom. Indeed, the total density of shadow atoms in even the thinnest fog would be more than sufficient to stop a tank, let alone a photon, if they could all affect it. Since we find that partially transparent barriers have the same degree of transparency for shadow photons as for tangible ones, it follows that not all the shadow atoms in the path of a particular shadow photon can be involved in blocking its passage. Each shadow photon encounters much the same sort of barrier consisting of only a tiny proportion of all the shadow atoms that are present.
For the same reason, each shadow atom in the barrier can be interacting with only a small portion of the atoms in its vicinity, and the ones that it does interact with form a barrier much like the tangible one. And so on. All matter, and all physical processes, have this structure. If the tangible barrier is a frog’s retina, then there must be many shadow retinas, each capable of stopping only one of the shadow counterparts of each photon. Each shadow retina only interacts strongly with the corresponding shadow photons, and with the corresponding shadow frog, and so on. In other words, particles are grouped into parallel universes. They are ‘parallel’ in the sense that within each universe particles interact with each other just as they do in the tangible universe, but each universe affects the others only weakly, through interference phenomena.
We began with strangely shaped shadows cast by a flashlight and ended with parallel universes. Each step takes the form of noting that the behavior of objects that we observe can be explained only if there are unobserved objects present. The heart of the argument is that single-particle interference phenomena unequivocally rule out the possibility that the tangible universe around us is all that exists. There is no disputing the fact that such interference phenomena occur. Now, to conclude this brief introduction, I could mourn all of the dead shadow cats for each of Erwin Schrödinger’s living tangible cats, but I am going to instead return to my sunroom, where tangible Linda is knitting a tangible reality as trillions of her shadow Lindas each knit trillions of shadow variations on that reality. I observe Linda's tangibility according to the same observability definition that I previously gave, because tangibility is relative to a given observer. So objectively there are not two kinds of photon, tangible and shadow, nor two kinds of frog, nor two kinds of Linda, nor two kinds of universe, one tangible and the rest shadow. There is nothing in the description I have given of the formation of shadows, or any of the related phenomena, that distinguishes between the 'tangible' and the 'shadow' objects apart from the mere assertion that one of the 'copies' is 'tangible'. When I introduced the idea of tangible and shadow photons I distinguished them by saying that we can see the former, but not the latter. While I was writing that, hosts of shadow Bills were writing it too. They too drew a distinction between tangible and shadow photons; but the photons they called 'tangible' are among those that I called 'shadow'.
It seems that the synapses in our brains are so small that quantum effects are significant. This means that there is quantum uncertainty about whether a neuron will fire or not, and this degree of freedom that nature allows makes room for the interaction of mind and matter. This splitting of histories, which physicists describe through decoherence, means that billions of versions of me are branching off every fraction of a second into discrete universes, with the implication that everything possible exists in one universe or another. Not only do none of the copies of an object have any privileged position in the explanation of shadows that I have just outlined, neither do they have a privileged position in the full mathematical explanation provided by quantum theory. I may feel subjectively that I am distinguished among the copies as the 'tangible' one, but I must come to terms with the fact that the others feel the same about themselves. Many of those Bills are at this moment writing these very words. Some have explained it better. Others have gone to take a nap.
Epistemology is the study of knowledge and justified belief. I will play with the idea that its first form came in Adam, who had to have been a solipsist whose view of the ‘fabric of reality’ must have been confined to his own mental fluctuations. He might have thought that existence was a multi-dimensional movie being constructed in his head, so that everything he experienced — physical objects, events and processes — anything that would commonly be regarded as a constituent of his space and time, was his alone. For the solipsist, it is not merely the case that he believes his thoughts, experiences, and emotions are, as a matter of contingent fact, the only thoughts, experiences, and emotions there are. Rather, the solipsist can attach no meaning to the supposition that there could be thoughts, experiences, and emotions other than his own. In short, the true solipsist understands the word “pain,” for example, to mean “my pain.” He accordingly cannot conceive how this word is to be applied in any sense other than this exclusively egocentric one. No wonder Adam was confused when God’s stentorian voice said, “Be fruitful and multiply, and fill the earth and subdue it; and have dominion over the fish of the sea and over the birds of the air and over every living thing that moves upon the earth." If it had been me, I would have stayed exactly where I was, contemplating the strangeness of having a navel and trying to parse the message while hoping to figure out some satisfying plan of action.
While I am on the topic of Adam's navel, one must note that if Adam was indeed created in the image of God in Heaven, then our Lord must also be blessed with a navel, and its origin is a point of attachment located between two star clusters, the Quinn Cluster and Gamma Quadrant Four. This location is hundreds of light years from our planet and was chosen because its coordinates lay within the area scientists have theorized to be the center of the Big Bang, the point of the universe's creation. There is also some evidence that God may have been the first and only solipsist, and that our stage and all of its players exist only in His consciousness, so that He quickly became bored with the limited action of His play and deduced that "It is not good for man to be alone; I will make a fitting helper for him." But all of His efforts to come up with an adequate helper fail. He parades all of the wild beasts and birds of the sky before the man and allows him the privilege of naming them, but "no fitting helper is found." By clear implication, the man rejects the whole of God's labors in creating other living creatures. They may be "good," but they are not good enough for man. God, now clearly distressed into genuine labor, is driven to the extreme expedient of crafting a woman from one of the man's ribs. The man, to his credit, speaking the first words uttered by a human being in the Bible, acclaims her with joy but without expressing any gratitude or otherwise acknowledging God. Unfortunately, Eve has arrived bearing the Cartesian flaw of thinking that she also exists. The next day they begin to compare notes about existence, and God-centered realism is born.
The world-view of the God-centered realists was false, but it was not illogical. They believed in revelation and traditional authority as sources of reliable knowledge. They could simply point out that no amount of observation or argument can ever prove that one explanation of a physical phenomenon is true and another false. As they would put it, God could produce the same observed effects in an infinity of different ways, so that it is pure vanity and arrogance to claim to possess a way of knowing, merely through one’s fallible observation and reason, which way He chose. These first theories of knowledge stressed its absolute, permanent character, whereas the later theories put the emphasis on its relativity or situation-dependence, its continuous development or evolution, and its active interference with the world and its subjects and objects. The whole trend moves from a static, passive view of knowledge towards a more and more adaptive and active one.
Galileo Galilei revived the ancient idea of expressing general theories about nature in mathematical form, and improved upon it by developing the method of systematic experimental testing which characterizes science as we know it. The reliability of scientific reasoning is not just an attribute of us, of our knowledge and our relationship with reality. It is a new fact about physical reality itself, a fact which Galileo expressed in the phrase “the book of nature is written in mathematical symbols”. It is impossible literally to ‘read’ any shred of a theory in nature. But what is genuinely out there is evidence, or, more precisely, a reality that will respond with evidence if we interact appropriately with it. Given a shred of a theory, or rather, shreds of several rival theories, the evidence is available out there to enable us to distinguish between them. Anyone can search for it, find it and improve upon it if they take the trouble. They do not need authorization, or initiation, or holy texts. They need only be looking in the right way – with fertile problems and promising theories in mind. After he collected and analyzed evidence for the heliocentric theory, in 1633 the Inquisition tried Galileo for heresy and forced him, under the threat of torture, to kneel and read aloud a long, abject recantation saying that he “abjured, cursed, and detested” the heliocentric theory.
The next stage of development of epistemology may be called pragmatic. Parts of it can be found in early twentieth century approaches, such as logical positivism, conventionalism, and the "Copenhagen interpretation" of quantum mechanics. This philosophy still dominates most present work in cognitive science and artificial intelligence. According to pragmatic epistemology, knowledge consists of models that attempt to represent the environment in such a way as to maximally simplify problem-solving. It is assumed that no model can ever hope to capture all relevant information, and even if such a complete model would exist, it would be too complicated to use in any practical way. Therefore we must accept the parallel existence of different models, even though they may seem contradictory. The model which is to be chosen depends on the problems that are to be solved. The basic criterion is that the model should produce correct (or approximate) predictions (which may be tested) or problem-solutions, and be as simple as possible.
The pragmatic epistemology does not give a clear answer to the question of where knowledge or models come from. There is an implicit assumption that models are built from parts of other models and from empirical data, on the basis of trial and error complemented with some heuristics or intuition. A more radical point of departure is offered by constructivism. It assumes that all knowledge is built up from scratch by the subject of knowledge. There are no 'givens': neither objective empirical data or facts, nor inborn categories or cognitive structures. The idea of a correspondence or reflection of external reality is rejected. Because of this missing connection between models and the things they represent, the danger with constructivism is that it may lead to relativism, to the idea that any model constructed by a subject is as good as any other, and that there is no way to distinguish adequate or 'true' knowledge from inadequate or 'false' knowledge.
The observational evidence being considered at this moment by physicists and astronomers would also have been available a billion years ago, and will still be available a billion years hence. The very existence of general explanatory theories implies that disparate objects and events are physically alike in some ways. The light reaching us from distant galaxies is, after all, only light, but it looks to us like other galaxies. Thus reality contains not only evidence, but also the means (such as our minds, and our artifacts) of understanding it. There are mathematical symbols in physical reality. The fact that it is we who put them there does not make them any less physical. In these symbols – in our planetariums, books, films, and computer memories, and in our brains – there are images of physical reality at large, images not just of the appearance of objects, but of the structure of reality. There are laws and explanations, reductive and emergent. There are descriptions and explanations of the Big Bang and of subnuclear particles and processes; there are mathematical abstractions; fiction; art; morality; shadow photons; parallel universes. To the extent that these symbols, images and theories are true – that is, they resemble in appropriate respects the concrete or abstract things they refer to – their existence gives reality a new sort of self-similarity, the self-similarity we call knowledge.
In Quantum Theory I tried to make the argument that a small, fundamental phenomenon exhibiting only the tiny effect of quantum interference in the double-slit experiment deserves to be considered one of the four main strands from which the fabric of reality is knitted. Then I struggled to find a few words to establish that Epistemology over time has taken our view of the fabric from an absolute, permanent character to later theories which put the emphasis on its relativity or situation-dependence, its continuous development or evolution, and its active interference with the world and its subjects and objects. Now I have arrived at the point where I need to make the case that life is a fundamental phenomenon of nature, and therefore that Evolution deserves to be considered the third main strand in the fabric of reality.
When I studied biology in high school, life was not considered to be fundamental at all. The very term ‘nature study’ – meaning biology – had become an anachronism. Fundamentally, nature was physics, which had an offshoot, organic chemistry, which studied the properties of compounds of the element carbon. Organic chemistry, in turn, had an offshoot, biology, which studied the chemical processes we call life. Only because we happen to be in the middle of such a process was this remote offshoot of a fundamental subject interesting to us. Physics, in contrast, was regarded as self-evidently important in its own right because the entire universe, life included, conforms to its principles. The Copernican revolution made the earth subsidiary to the central, inanimate sun. Subsequent discoveries in physics and astronomy showed not only that the universe is vast in comparison to the earth, but that it is described by all-encompassing laws that make no mention of life at all.
I believe that a phenomenon is ‘fundamental’ if a sufficiently deep understanding of the world depends on understanding that phenomenon. Opinions differ widely, of course, about what aspects of the world are worth understanding and consequently about what is deep or fundamental. Some would say that love is the most fundamental phenomenon in the world and correspondingly “All We Need”. Others believe that when one has learned certain sacred texts by heart, one understands everything that is worth understanding. The understanding that I am trying to write about is expressed in laws of physics, and in principles of logic and philosophy. A ‘deeper’ understanding is one that has more generality, incorporates more connections between superficially diverse truths, and explains more with fewer unexplained assumptions. The most fundamental phenomena are implicated in the explanation of many other phenomena, but are themselves explained only by basic laws and principles.
Returning to that biology-class offshoot of organic chemistry in which we studied the chemical processes we call life, I would now have a better understanding of why we spent so much time talking about a characteristic of living things called ‘replication’. From my imperfect understanding of DNA, I would now understand that organisms are not copied during reproduction, much less do they do their own copying. Instead they are constructed afresh according to blueprints embodied in the parent organism’s DNA. For example, my broken nose has no chance of being copied to my son David. But if a change had been made to the corresponding gene immediately after David was conceived, only one molecule would have needed changing, and David would have not only the new shape of nose but copies of the new gene as well. That shows us that the shape of the nose is caused by that gene, and not by the shape of any previous nose. So the shape of my nose makes no causal contribution to the shape of David’s nose. But the shape of my genes contributes both to their own copying and to the shape of David’s nose.
So an organism is the immediate environment, or the ‘virtual machine’, which copies the real replicators: the organism’s genes. These living molecules, or genes, are merely molecules, obeying the same laws of physics and chemistry as non-living ones. They contain no special substance, nor do they have any special physical attributes. They just happen, in certain environments (niches), to be replicators. The property of being a replicator is highly contextual – it depends on intricate details of the replicator’s environment, so that an entity is a replicator in one environment and not in another. Also, the property of being adapted to a niche does not depend on any simple, intrinsic physical attribute that the replicator has at the time, but on effects that it may cause in the future – and under hypothetical circumstances at that. Contextual and hypothetical properties are essentially derivative, so it is hard to see how a phenomenon characterized by such properties could possibly be a fundamental phenomenon of nature.
The only way that we can elevate these meaty virtual-reality generators into something fundamental is to add some information accrual through adaptation. Adaptation can be defined directly in terms of knowledge: an entity is adapted to its niche if it embodies knowledge that causes the niche to keep that knowledge in existence. Now we are getting closer to the reason why life is fundamental. Life is about the physical embodiment of knowledge, just as the Turing principle is about the physical embodiment of knowledge. It says that it is possible to embody the laws of physics, as they apply to every possible environment, in programs for a virtual-reality generator. Genes are such programs. Not only that, but all other virtual-reality programs that physically exist, or will ever exist, are direct or indirect effects of life. For example, the virtual-reality programs that run on our computers and in our brains are indirect effects of human life. So life is the means – presumably a necessary means – by which the effects referred to in the Turing principle have been implemented in nature.
It was not until my working years took me into the realm of object-oriented technology that I acquired a little understanding of how Information Theory allows us to understand the beautiful abstract mathematical structures inside the development of symbolic logic. These strange mathematical patterns reflect truths, falsities, hypotheticals, possibilities, and counterfactuals, and offer profound glimpses into the hidden wellsprings of human thought. Do these abstract states, evolving into dreads and dreams, hopes and griefs, ideas and beliefs, interests and doubts, infatuations and envies, memories and ambitions, bouts of nostalgia and floods of empathy, flashes of guilt and sparks of genius, play any role in the fashioning of the fabric of reality? Do such pure abstractions have causal powers? Can they shove massive things around, or are they just impotent fictions? Can a blurry, intangible "I" dictate to concrete objects such as electrons or muscles what to do? Have religious beliefs caused war and genocide, or have all these sad things been caused by the interactions of infinitesimal particles according to the laws of physics? Do drones cause boredom? Do jokes cause laughter? Do smiles cause swoons? Does love cause marriage? Or, in the end, are there just myriads of particles pushing each other around according to the laws of physics – leaving, in the end, no room for selves and souls, dreads or dreams, love or marriage, smiles or swoons, jokes or laughter, drones or boredom?
You are already seeing that I don't understand Information Theory very well, because I am starting out as a child does, by asking a lot of questions. Consider even the fact that we always capitalize the first-person pronoun "I". The convention is striking and strange, hinting that the word must designate something very important. Indeed, to some people the ineffable sense of being an "I" or a "first person", the intuitive sense of "being there" or simply "existing", the powerful sense of "having experience" and of having "raw sensations", seem to be the realest things in their lives, and an inner voice bridles furiously at any proposal that this might all be an illusion, or merely the outcome of some kind of physical processes taking place among "third-person" objects.
Living beings, having been shaped by evolution, have survival as their most fundamental, automatic, and built-in goal. To enhance the chances of survival, any living thing must be able to react flexibly to events that take place in its environment. This means it must develop the ability to sense and categorize, however rudimentarily, the goings-on in its immediate environment. Once the ability to sense external goings-on has developed, however, there ensues a curious side effect that will have vital and radical consequences. This is the fact that the living being's ability to sense certain aspects of its environment flips around and endows the being with the ability to sense certain aspects of itself. That this flipping-around takes place is not in the least amazing or miraculous; rather, it is a quite unremarkable, indeed trivial, consequence of the being's ability to perceive. Some people may find the notion of such self-perception peculiar, pointless, or even perverse, but such a prejudice does not make self-perception a complex or subtle idea, let alone a paradoxical one. After all, in the case of a being struggling to survive, the one thing that is always in its environment is itself. So why, of all things, should the being be perceptually immune to the most salient item in its world? For a living creature to have evolved rich capabilities of perception and categorization but to be constitutionally incapable of focusing any of that apparatus onto itself would be highly anomalous. Such selective neglect would be pathological, and would threaten its survival.
I am always trying not to get bogged down in definitions, but some of my readers will now contend that I have only made an argument for my being's reception, or image-receiving capability, and have not yet made the case for its perception. Perception takes as its starting point some kind of input (possibly but not necessarily a two-dimensional image) composed of a vast number of tiny signals, but then it goes much further, eventually winding up in the selective triggering of a small subset of a large repertoire of dormant symbols -- discrete structures that have representational quality. That is to say, a symbol inside a cranium can be thought of as a triggerable physical structure that constitutes the brain's way of implementing a particular category or concept. There are symbols in our brains for many nouns, for action concepts like "kick", "kiss", and "kill", for relational concepts like "before", "behind", and "between", and so on. Symbols in the brain are the neurological entities that correspond to concepts, just as genes are the chemical entities that correspond to hereditary traits. Each symbol is dormant most of the time, but on the other hand, every symbol in our brain's repertoire is potentially triggerable at any time.
The passage leading from vast numbers of received signals to a handful of triggered symbols is a kind of funneling process in which initial input signals are manipulated or "massaged", the results of which selectively trigger further (i.e., more "internal") signals, and so forth. This baton-passing by squads of signals traces out an ever-narrowing pathway in the brain, which winds up triggering a small set of symbols whose identities are of course a subtle function of the original input signals. For the dog-loving "determinist" who was the original target for this musing, I would next try to describe "dogthink". As we move upward on the purely biological ladder of perceptual sophistication, rising from viruses to bacteria to mosquitoes to frogs to dogs to people, the repertoire of triggerable symbols becomes richer and richer. Simply judging from their behavior, no one could doubt that pet dogs develop a respectable repertoire of categories, including such examples as "my paw", "my tail", "my food", "my water", "my dish", "indoors", "outdoors", "dog door", "human door", "open", "closed", "hot", "cold", "nighttime", "daytime", "sidewalk", "road", "bush", "grass", "leash", "take a walk", "the park", "car", "car door", "my big owner", "my little owner", "the cat", "the friendly neighbor dog", "the mean neighbor dog", "UPS truck", "the vet", "ball", "eat", "lick", "drink", "play", "sit", "sofa", "climb onto", "bad behavior", "punishment", and on and on. Service dogs often learn hundreds more words and respond to highly variegated instances of these concepts in many different contexts, thus demonstrating something of the richness of their internal category systems (i.e., their repertoires of triggerable symbols).
I am trying to communicate with other English-speaking humans, so I used a set of English words and phrases in order to suggest the nature of a canine repertoire of categories, but of course I am not claiming that human words are involved when a dog reacts to a neighbor dog or to the UPS truck. But one word bears special mention, and that is the word "my", as in "my tail" or "my dish". I suspect that my friend Elaine would agree that a pet dog realizes that a particular paw belongs to itself, as opposed to being merely a random physical object in the environment or a part of some other animal. Likewise, when a dog chases its tail, even though it is surely unaware of the irony of the act, it must know that "that" tail is part of its "own" body. I am thus suggesting that a dog has some kind of rudimentary self-model, some kind of sense of itself. In addition to its symbols for "car", "ball", and "leash", and its symbols for other animals and human beings, it has some kind of internal cerebral structure that represents itself (i.e., the dog itself, not the symbol itself).
Creatures of the sophistication level of dogs, thanks to the inevitable flipping-around of their perceptual apparatus and their modest but nontrivial repertoire of categories, cannot help developing an approximate sense of themselves as physical entities in a larger world. Although a dog will never know a thing about its kidneys or its cerebral cortex, it will develop some notion of its paws, mouth, and tail, and perhaps of its tongue or its teeth. It may have seen itself in a mirror and perhaps realized that "that dog over there by my master" is in fact itself. Or it may have seen itself in a home video with its master, recognized the recording of its master's voice, and realized that the barking on the video was its own.
For those who are extremely tolerant of my wordiness and who tend toward a religious perspective on things, I would add that this is similar to Spinoza’s idea of God as nature. The divine is alleged to communicate with us through information. This is a persistent theme of Information Theory, which sees the universe as information and even Christ as information. Such information has a kind of electrostatic life connected to the theory of what is called orthogonal time. The latter is a rich and strange idea of time that is completely at odds with the standard, linear conception, going back to Aristotle, of a sequence of now-points extending from the future through the present and into the past. Orthogonal time is a circle that contains everything, rather than a line both of whose ends disappear in infinity. Just as the grooves on an LP contain that part of the music which has already been played, past moments don’t disappear after the stylus tracks them. It is like that seemingly endless final chord in the Beatles’ “A Day in the Life” that gathers more and more momentum and musical complexity as it decays. In other words, orthogonal time permits total recall.
And yet all of this, though in many ways impressive, is still extremely limited in comparison to the sense of self and "I"-ness that continually grows over the course of a normal human being's lifetime. A spectacular evolutionary gulf opened up at some point as human beings were gradually separating from other primates: their category systems became arbitrarily extendible. Into our mental lives there entered a dramatic quality of open-endedness, an essentially unlimited extensibility, as compared with the very palpable limitedness in other species. Concepts in the brains of humans acquired the property that they could get rolled together with other concepts into larger packets, and any such larger packet could then become a new concept in its own right. In other words, concepts could nest inside each other hierarchically, and such nesting could go to arbitrary degrees.
For instance, the phenomenon of having offspring gave rise to concepts such as "mother", "father", and "child". These concepts gave rise to the nested concept of "parent" -- nested because forming it depends upon having three prior concepts: "mother", "father", and the abstract idea of "either/or". Once the concept of "parent" existed, that opened the door to the concepts of "grandmother" ("mother of a parent") and "grandchild" ("child of a child"), and then of "great-grandmother" and "great-grandchild". All of these concepts came to us courtesy of nesting. With the addition of "sister" and "brother", further notions having greater levels of nesting, such as "uncle", "aunt", and "cousin", could come into being. And then a yet more-nested notion such as "family" could arise. ("Family" is more nested because it takes for granted and builds on these prior concepts.)
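This nesting of concepts, each one built out of previously formed ones, can be sketched as a tiny data structure. The sketch is purely illustrative, of course -- the class name `Concept` and the recursive "depth" measure are my own inventions, not anything from cognitive science:

```python
# A toy model of hierarchically nested concepts: each new concept is
# built from previously existing ones, so its degree of nesting can
# be computed recursively.

class Concept:
    def __init__(self, name, parts=()):
        self.name = name
        self.parts = tuple(parts)  # the prior concepts this one builds on

    def depth(self):
        # Primitive concepts have depth 0; a composite sits one level
        # above its most deeply nested constituent.
        if not self.parts:
            return 0
        return 1 + max(p.depth() for p in self.parts)

mother = Concept("mother")
father = Concept("father")
child  = Concept("child")
either = Concept("either/or")

parent      = Concept("parent", [mother, father, either])
grandmother = Concept("grandmother", [mother, parent])
family      = Concept("family", [parent, child, grandmother])

print(parent.depth())       # 1
print(grandmother.depth())  # 2
print(family.depth())       # 3
```

The point of the sketch is only that the construction never bottoms out: any composite, such as `family`, can itself become a part of some still larger concept, to arbitrary degrees of nesting.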
In the collective human idiosphere, the buildup of concepts through such acts of composition started to snowball, and it turns out to know no limits. Our species would soon find itself leapfrogging upwards to concepts called "love affair", "love triangle", "fidelity", "temptation", "revenge", "despair", "insanity", "nervous breakdown", "hallucination", "illusion", "reality", "fantasy", "abstraction", "dream", "multi-verse", and at the grand pinnacle of it all, "soap opera" (in which are also nested the concepts of "commercial break", "ring around the collar", and "Brand-X"). In another section of this blog, called Cosmology, I used the language of Information Theory to call a human being an Instantiated Unit of Consciousness (IUOC) residing within the complex interconnected networks formed by relationships between objects in a system -- including social networks, the interactions of particles, and the "symbols" that stand for ideas in a brain or intelligent computer. This consciousness system has free will because free will is a component of consciousness. As an IUOC, I have the potential to be independent, to have an independent freewill, within a reality within the Larger Consciousness System (LCS). My independent free will requires a reality. Otherwise I am simply accumulated data, a potentiality within the LCS.
The LCS selects a historical individual, like me, or a sequence of historical characters that have progressed through multiple lifetimes. To this information, the LCS adds a freewill -- which means it inserts me into a virtual reality where I can make free choices within an evolving decision space appropriate to my ability/quality/awareness. Recall that consciousness itself is the only thing that is fundamental and that everything else is virtual. This “everything else” includes all structured (within the bounds of some sort of rule-set) realities where experiential interaction takes place. All experiential realities are virtual. Consciousness creates the structure. The structure defines the reality, and the reality creates the possibility for an interactive experience between subsets of consciousness. The quality of the subset of consciousness (as specified by its history) and the structural bounds defining the reality together determine the available decision space and the nature of possible interactions.
The historical record of these subsets or entities grows or evolves as choices are made and their intent is expressed. What is gained by a subset of consciousness participating in a virtual reality is a new historical record that (thinking positively) accumulates quality (reduces entropy) as it engages in exercising its freewill intent. I can be considered an individual subset of consciousness with a history, and could be “bubbled up”, or chosen by the LCS, to engage in a virtual reality appropriate to its evolutionary needs. I am an IUOC that is restricted to abide by the current rule-set. As I experience and collapse probability waves in this virtual reality, my IUOC collects the data and integrates it in real time. The depth and complexity of this human memory of mine is staggeringly rich. Little wonder, then, that when a human being, possessed of such a rich armory of concepts and memories with which to work, turns its attention to itself, as it inevitably must, it produces a self-model that is extraordinarily deep and tangled. That deep and tangled self-model is what "I"-ness is all about.
I want to race to a conclusion by saying that Stephen Hawking snapped a picture of the human race as just an astrophysically insignificant 'chemical scum', but he took that picture before Information Theory presented us with the idea that the gross behavior of our planet, star, and galaxy depends upon an emergent but fundamental physical quantity: the knowledge in that scum. The creation of useful knowledge by science, and of adaptations by evolution, must be understood as the emergence of the self-similarity that is mandated by a principle of physics, the Turing principle. Thus the problem with taking any of these fundamental theories individually as the basis of a world-view is that each of them is, in an extended sense, reductionist. That is, they have a monolithic explanatory structure in which everything follows from a few extremely deep ideas. But that leaves aspects of the subject entirely unexplained. In contrast, the explanatory structure that they jointly provide for the fabric of reality is not hierarchical: each of the four strands contains principles which are 'emergent' from the perspective of the other three, but which nevertheless help to explain them. Three of the four strands seem to rule out human beings and human values from a fundamental level of explanation. The fourth, epistemology, makes knowledge primary but gives no reason to regard epistemology itself as having relevance beyond the psychology of our own species. But if knowledge is of fundamental significance, we might ask what sort of role now seems natural for knowledge-creating beings such as ourselves in the fabric of reality.
I first learned about 'nonlocality' in the 1990s, and it startled me with its implications; I felt that no discovery since quantum mechanics itself had posed more challenges to our sense of everyday reality. In everyday speech 'locality' is a slightly pretentious word for a neighborhood, town, or other place. But its original meaning, dating to the seventeenth century, is about the very concept of 'place.' It means that everything has a place. You can always point to an object and say, 'Here it is.' If you can't, that thing must not really exist. If your teacher asks where your homework is and you say it isn't anywhere, you have some explaining to do.
The world we experience possesses all the qualities of locality. We have a strong sense of place and of relations among places. We feel the pain of separation from those we love and the impotence of being too far away from something we want to affect. And yet quantum mechanics and other branches of physics now suggest that, at a deeper level, there may be no such thing as distance. Physics experiments can bind the fate of two particles together, so that they behave like a pair of magic coins: if you flip them, each will land on heads or tails--but always on the same side as its partner. They act in a coordinated way even though no force passes through the space between them. Those particles might zip off to opposite sides of the universe, and still they act in unison. These particles violate locality and transcend space.
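The "magic coins" behavior can be mimicked, though not explained, in a few lines of simulation. Here a shared hidden value plays the role that nature somehow plays without any signal crossing the space between the particles; the function name is my own, purely illustrative:

```python
import random

def entangled_pair(rng):
    # Mimic a maximally correlated pair: each flip is individually
    # random, yet the two outcomes always agree. Note that this
    # classical trick (a shared hidden value fixed at the source)
    # only reproduces the correlation for one fixed measurement;
    # real entangled particles defeat every such local scheme once
    # the measurement settings can vary (Bell's theorem).
    shared = rng.choice(["heads", "tails"])
    return shared, shared

rng = random.Random(0)
flips = [entangled_pair(rng) for _ in range(10_000)]

assert all(a == b for a, b in flips)  # always the same side as its partner
heads = sum(a == "heads" for a, _ in flips)
print(f"heads fraction: {heads / len(flips):.2f}")  # roughly 0.50
```

The caveat in the comment is the whole mystery: a hidden shared value is exactly the kind of local explanation that experiments have ruled out, which is why the coordination of real particles is so startling.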
Evidently nature has struck a peculiar and delicate balance: under most circumstances it obeys locality, and must obey locality if we are to exist, yet it drops hints of being nonlocal at its foundations. For those who study it, nonlocality is the mother of all physics riddles, implicated not only in a broad cross section of the mysteries of quantum particles, but also in the fate of black holes, the origin of the universe, and the essential unity of nature.
For Albert Einstein, locality was one aspect of a broader philosophical puzzle: Why are we humans able to do science at all? Why is the world such that we can make sense of it? In a famous essay in 1936, Einstein wrote that the most incomprehensible thing about the universe is that it is comprehensible. At first glance, this statement itself seems incomprehensible. The universe is not a conspicuously rational place. It is wild and capricious, full of misdirection and arbitrariness, injustice and misfortune. Much of what happens defies reason. Yet against this backdrop of inexplicable happenings, the world's rules glow with reassuring regularity. The sun rises in the east. Things fall when you drop them. After the rain comes a rainbow. People go into physics out of a conviction that these are not just gratifying exceptions to the anarchy of life, but glimpses of an underlying order.
Einstein's point was that physicists really had no right to expect that. The world needn't have been orderly at all. It didn't have to abide by laws; under other circumstances, it might have been anarchic all the way down. When a friend wrote to ask Einstein what he had meant by the incomprehensibility remark, he wrote back, "A priori one should expect a chaotic world which cannot be grasped by the mind in any way."
Although Einstein said comprehensibility was a 'miracle' we shall never understand, that didn't stop him from trying. He spent his entire professional life articulating exactly what it is about the universe that makes it make sense, and his thinking set the course of modern physics. He recognized, for example, that the inner workings of nature are highly symmetrical, looking the same if you view the world from a different angle. Symmetry brings order to the bewildering zoo of particles that physicists have found; entire species of particles are, in a sense, mirror images of one another. But among all the properties of the world that give us hope for understanding it, Einstein kept coming back to locality as the most important.
Locality is a subtle concept that can mean different things to different people. For Einstein, it had two aspects. The first he called 'separability,' which says that you can separate any two objects or parts of an object and consider each on its own, at least in principle. You can take your dining room chairs and put each one in a different corner of the room. They will not cease to exist or lose any of their features--size, style, cushiness. The entire dining room set derives its properties from the chairs that make it up: if each chair can seat one person, a set of four chairs can seat four people. The whole is the sum of its parts. The second aspect that Einstein identified is known as 'local action,' which says that objects interact only by banging into one another or recruiting some middleman to bridge the gap between them. Whenever a distance separates us from someone, we know we cannot have any effect on that person unless we cross the distance and touch, talk to, punch--somehow make direct contact with--that person, or send someone or something to do it for us. Modern technology does not evade this principle; it merely recruits new intermediaries. A phone translates sound waves into electrical signals or radio waves that travel through wires or open space and then get translated back into sound waves on the other end. At every step of the way something has to make direct contact with something else. If there is even a hairline crack in the wire, the message gets no farther than a scream on the airless moon. Simply put, separability defines what objects are, and local action dictates what they do.
Einstein captured these principles in the theory of relativity. Specifically, relativity theory says that no material thing can move faster than light. Without such an ultimate speed limit, objects might move infinitely fast and distance would lose its meaning. All the forces of nature must wend their way laboriously through space, rather than leap across it in a single bound, as physicists used to suppose. Relativity theory thereby provides a measure of isolation among separated objects and ensures their mutual distinctness.
Depending on your frame of mind, relativity theory and the other laws of physics are either a satisfyingly deep order to the universe or a series of killjoy rules, like an authoritarian parent trying to take all of the fun out of life. How great it would be to flap our arms and fly--but sorry, no can do. We could solve the world's problems by creating energy--oh, physics won't allow that either; we can only convert one form of energy into another. And now comes locality, yet another draconian diktat, to spoil our dreams of faster-than-light starships and psychic powers. Locality dashes sports fans' eternal hope that, by crossing their fingers or bellowing some insightful comment from their armchairs, they might give their team an edge on the playing field. Unfortunately, if your team is losing and you're serious about wanting to help, you'll have to get up and go to the stadium.
Yet locality is for our own good. It grounds our sense of self, our confidence that our thoughts and feelings are our own. With all due respect to John Donne, every man is an island, entire of himself. We are insulated from one another by seas of space, and we should be grateful for it. Were it not for locality, the world would be magical--and not in a happy, Disneyesque way. As much as sports fans may wish they could sway the game from their living rooms, they should be careful what they wish for, because supporters of the opposing team would presumably have this power too. Millions of couch potatoes across the land would strain to give their side some advantage, making the game itself meaningless--a contest of fans' wills rather than of talent on the field. Not just sports games but the entire world would become hostile to us. In a world without locality, objects outside of your body could reach inside without having to pass through your skin, and your body would lose its ability to control its internal condition. You would blend into your environment. And that is the very definition of death.
By focusing on locality as a crucial prerequisite to comprehending nature, Einstein crystallized two thousand years of philosophical and scientific thought. For ancient Greek thinkers such as Aristotle and Democritus, locality made rational explanation possible. When objects can affect one another only by making direct contact, you can describe any event by giving a blow-by-blow account of 'this hit that, which in turn knocked into that, which in turn bounced off some other thing.' Every effect has a cause linked to it by a chain of events unbroken in space and time. There's no point at which you have to wave your hands and mumble, 'Then a miracle occurs.' It wasn't the miracle the Greek philosophers objected to--they weren't atheists--so much as the mumbling. Even gods, they felt, should exert their power by clear and explicable rules. Locality is essential not just to the types of explanations that philosophers and scientists seek, but to the methods they use. They can isolate objects from one another, grasp them one at a time, and build up a picture of the world step by step. They are not faced with the impossible task of taking it in all at once.
In 1948, toward the end of his life, Einstein summarized the importance of locality in a short essay: "The concepts of physics refer to a real external world . . . things that claim a 'real existence' independent of one another, insofar as these things 'lie in different parts of space.' Without such an assumption of the mutually independent existence . . . of spatially distant things, an assumption that originates in everyday thought, physical thought in the sense familiar to us would not be possible. Nor does one see how physical laws could be formulated and tested without such a clean separation."
Locality has such a pervasive importance because it is the essence of what space is. By 'space' I don't just mean 'outer space,' the realm of astronauts and asteroids, but the space between us and all around us, the space that our bodies and everything else occupy, the space through which we swing a baseball bat or stretch a measuring tape. Whether you point your telescope at the planets or at the next-door neighbors, you are peering across space. For me, the beauty of the landscape comes from the giddy sense of spanning space, a sort of horizontal vertigo when you realize the little dots on the other side of a valley really are there and that you could touch them if only your arms were long enough.
As painters have long realized, space is not mere absence, but a thing in its own right. What comes between objects on a canvas is as important to the composition as the objects themselves. For a physicist, space is the canvas of physical reality. Almost every attribute of our physical selves is spatial. We occupy a place. We have a shape. We move. Our bodies are intricate choreographies of cells and fluids dancing in space. Every interaction we have with the rest of the world passes through space. Living things are things, and what is a thing but a part of the universe that acquires an individual identity by virtue of occupying a certain volume of space?
Physics is rooted in the study of how things move through space, and space defines practically every quantity that physics deals in: distance, size, shape, position, speed, direction. Other quantities of the world may not appear spatial, but are: color, for example, corresponds to the size of a light wave. Only a very few properties of matter have no known spatial explanation, such as electric charge, and even these betray themselves by deflecting motion through space. When we look at an object, everything about it is ultimately spatial, arising from how its particles are arranged; the particles themselves are the barest flecks. Function follows form. Even nonspatial concepts become spatial in physicists' minds: time becomes an axis on a graph, and the laws of nature operate within abstract spaces of possibility. No less an authority than Immanuel Kant, whose ideas were a major influence on Einstein, thought it impossible to conceive of a world without space.
What a twist of fate that the greatest champion of locality was also its undoer. Though best known to the wider world for relativity theory, Einstein actually won his Nobel for co-founding quantum mechanics, the theory that describes how atoms and subatomic particles behave. Actually, physicists think quantum mechanics describes how everything behaves, although its distinctive effects are strongest on tiny scales. The theory grew out of Einstein's and his contemporaries' epiphany that atoms and particles can't just be little versions of the things we see around us. If they were--if they acted according to the classical laws of physics developed by Isaac Newton and others--the world would self-destruct. Atoms would implode; particles would explode; light bulbs would fry you with deadly radiation. The fact that we're still alive means that matter must be governed by some new set of laws. Einstein welcomed the strangeness; in fact, despite the reputation that he later acquired as a rearguard defender of classical physics, he was consistently ahead of everyone else in appreciating the alien features of the quantum world.
Among those features was nonlocality. Quantum mechanics predicts that two particles can become blood brothers. For want of a mechanism to couple them, the particles should be completely autonomous, yet to touch one was to touch the other, as if the distance between them meant nothing. The scientific method of divide and conquer fails for them. The particles have joint properties that escape you if you view them one at a time; you must measure the particles together. Our world is crisscrossed by a web of these seemingly mystical relationships. Atoms in your body retain a bond with everyone you have loved--which sounds romantic until you realize that you're also linked to every weirdo who brushed against you while walking down the street.
Particles on opposite sides of the universe can't really be connected, can they? The idea struck Einstein as silly, a regression to pre-scientific notions of sorcery. Any theory that implied such "spooky actions at a distance," he reasoned, had to be missing something. He figured that the world was in fact local and merely gave the impression of being nonlocal, and he sought a deeper theory that would lay bare the hidden mechanism whereby two particles can act in unison. Try as he might, though, Einstein could never find such a theory, and he recognized that he might be the one who was missing something. There might be no concealed clockwork. The principle of locality--and with it, our conception of space--might not hold. A few months before he died, Einstein reflected on what the dissolution of space might mean to our understanding of the world: "Then nothing will remain of my whole castle in the air, including the theory of gravitation, but also nothing of contemporary physics."
What was really spooky was how sanguine most of his contemporaries were. To them, nonlocality was a non-issue. The reasons for their dismissive attitude were complicated and are still debated by historians, but perhaps the most charitable explanation is pragmatism. The questions that vexed Einstein just didn't seem relevant to the practical applications of quantum theory. Only in the 1960s did a new generation of physicists and philosophers give Einstein's worries a real hearing. The experiments they did suggested that nonlocality was not a theoretical curiosity, but a fact of life. And even then, most of their colleagues gave it little thought--which is why I stumbled upon it only in my dotage.
In the past thirty years, though, we have witnessed a remarkable evolution in attitudes. Nonlocality has surged into the currents of mainstream physics and swept far past the phenomenon that Einstein discovered. The Columbia University string theorist Brian Greene, in his 2004 book The Fabric of the Cosmos, wrote that nonlocal connections "show us, fundamentally, that space is not what we once thought it to be." Well, what is it, then? Investigating nonlocality may clue us in. Many physicists now think that space and time are doomed--not fundamental elements of nature, but products of some primeval condition of spacelessness. Space is like a rug with ragged edges and worn spots. Just as we can look at those frayed areas to see how the rug is woven, we can study nonlocal phenomena to glimpse how space is assembled from spaceless components.