Tag Archives: spacetime

Time is an emergent phenomenon that is a side effect of quantum entanglement

Emergence is a popular idea in science. In particular, physicists have recently become excited about the idea that gravity is an emergent phenomenon. So it’s a relatively small step to think that time may emerge in a similar way.

This is an elegant and powerful idea. It suggests that time is an emergent phenomenon that comes about because of the nature of entanglement. And it exists only for observers inside the universe. Any god-like observer outside sees a static, unchanging universe, just as the Wheeler-DeWitt equations predict.

Source: Quantum Experiment Shows How Time ‘Emerges’ from Entanglement

The End of Time

What is Time?
Time is the space of probability that gives certain events the chance to occur.

In probability theory, a probability space or a probability triple is a mathematical construct that models a real-world process (or “experiment”) consisting of states that occur randomly. A probability space is constructed with a specific kind of situation or experiment in mind. One proposes that each time a situation of that kind arises, the set of possible outcomes is the same and the probability levels are also the same.
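
For readers who want the formal object behind this paragraph, here is the standard Kolmogorov notation for a probability triple (textbook material, not specific to this post):

```latex
% A probability space (probability triple) in standard Kolmogorov notation.
% \Omega : sample space (the set of possible outcomes)
% \mathcal{F} : a sigma-algebra of events (subsets of \Omega)
% P : a probability measure assigning each event a number in [0, 1]
(\Omega, \mathcal{F}, P), \qquad
P(\Omega) = 1, \qquad
P\Big(\bigcup_{i=1}^{\infty} A_i\Big) = \sum_{i=1}^{\infty} P(A_i)
\quad \text{for pairwise disjoint } A_i \in \mathcal{F}.
```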

Time and the Probability of Events
Quantum probability was developed in the 1980s as a noncommutative analog of the Kolmogorovian theory of stochastic processes. One of its aims is to clarify the mathematical foundations of quantum theory and its statistical interpretation. A significant recent application to physics is the dynamical solution of the quantum measurement problem, by giving constructive models of quantum observation processes which resolve many famous paradoxes of quantum mechanics.

Quantum states can freeze, like water turning to ice, if they are observed continuously.

For example the quantum Zeno effect is a situation in which an unstable particle, if observed continuously, will never decay. One can “freeze” the evolution of the system by measuring it frequently enough in its (known) initial state. The meaning of the term has since expanded, leading to a more technical definition in which time evolution can be suppressed not only by measurement: the quantum Zeno effect is the suppression of unitary time evolution caused by quantum decoherence in quantum systems provided by a variety of sources: measurement, interactions with the environment, stochastic fields, and so on. As an outgrowth of study of the quantum Zeno effect, it has become clear that applying a series of sufficiently strong and fast pulses with appropriate symmetry can also decouple a system from its decohering environment.
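
Here is a minimal numerical sketch of that freezing effect, assuming a generic two-level toy model with an arbitrary decay time constant tau (illustrative only, not tied to any particular experiment): splitting a fixed evolution time into ever more frequent measurements pushes the survival probability toward 1.

```python
# Toy illustration of the quantum Zeno effect (assumed two-level model).
# Short-time survival probability: p(t) ~ 1 - (t / tau)**2
# Measuring n times within a total time T gives p_total = p(T/n)**n.

def zeno_survival(total_time, n_measurements, tau=1.0):
    """Survival probability after n projective measurements (toy model)."""
    dt = total_time / n_measurements
    p_single = max(0.0, 1.0 - (dt / tau) ** 2)   # short-time expansion
    return p_single ** n_measurements

if __name__ == "__main__":
    T = 0.5  # total evolution time in units of tau (arbitrary choice)
    for n in (1, 10, 100, 1000):
        print(f"n = {n:4d}  ->  survival probability = {zeno_survival(T, n):.4f}")
    # As n grows the survival probability approaches 1:
    # frequent observation "freezes" the decay.
```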

Observation can freeze Time
Consciousness is order and the absence of order in the physical realm is radioactivity. In cryptography, the one-time pad (OTP) is a type of encryption which has been proven to be impossible to crack if used correctly. Each bit or character from the plaintext is encrypted by a modular addition with a bit or character from a secret random key (or pad) of the same length as the plaintext, resulting in a ciphertext. If the key is truly random (like the randomness deriving from radioactive decay), as large as or greater than the plaintext, never reused in whole or part, and kept secret, the ciphertext will be impossible to decrypt or break without knowing the key.
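
A minimal sketch of the one-time-pad construction described above, using XOR as the modular addition and Python's os.urandom as a stand-in for a truly random source such as radioactive decay (an illustrative assumption, not a recommendation for real cryptography):

```python
import os

def otp_encrypt(plaintext: bytes, key: bytes) -> bytes:
    """XOR each plaintext byte with the corresponding key byte (addition mod 2 per bit)."""
    if len(key) < len(plaintext):
        raise ValueError("one-time pad key must be at least as long as the plaintext")
    return bytes(p ^ k for p, k in zip(plaintext, key))

def otp_decrypt(ciphertext: bytes, key: bytes) -> bytes:
    """Decryption is the same XOR operation applied again with the same key."""
    return otp_encrypt(ciphertext, key)

if __name__ == "__main__":
    message = b"time is the space of probability"
    key = os.urandom(len(message))        # stand-in for a truly random key source
    ciphertext = otp_encrypt(message, key)
    assert otp_decrypt(ciphertext, key) == message
    print(ciphertext.hex())
```

Without the key, every plaintext of the same length is equally consistent with the ciphertext, which is why the scheme is unbreakable when the key is truly random and never reused.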

Radioactive decay produces the best natural random numbers, which are used for encryption. The absence of consciousness is the reason for radioactivity in our physical realm. But if consciousness is too high, the system freezes and no event can occur.

For this kind of randomness, time is a critical factor. Without the necessary time, certain events cannot occur, and under constant observation, as in the quantum Zeno effect, the flow of time for that system freezes and no such event will ever occur. The flow of time depends strictly on consciousness. Consciousness is what produces spacetime. To a science rookie, our universe may look a little bit random at first superficial glance, but it is an astonishingly improbable coincidence that the cosmos could have had any properties at all and yet happens to have exactly the right ones for life. We live in a biocentric world, based on consciousness, and mind rules the physical realm.

If it is a proven fact that consciousness freezes the evolution of time in a microscopic system, why do we continue to experience the flow of time?

Life force, according to traditional Chinese medicine, flows in specific meridians or paths during the day. Our life force, or consciousness, is not enough to be able to observe our body as a whole. We are not living in the state of an Einstein–Podolsky–Rosen paradox, in which a pair of quantum systems may be described by a single wave function that encodes the probabilities of the outcomes of experiments performed on the two systems, whether jointly or individually. No, we experience decay and the flow of time. So our ability to observe is limited, just as our life force is limited. Time is therefore our friend, because without time no event could occur, and we are not frozen in a state of continuous observation. Thanks to these paths of life force, our consciousness flows through our body over the course of 24 hours, manifesting order in our organism without freezing us by concentrating all awareness on one point. If such a concentrated awareness on one point does occur, we experience a so-called “blockage”, which can be painful and hinder our organism from working properly.

Our awareness is distributed across our organism along the meridian paths. This kind of distribution of life force enables life.

The End of Time
Can time finally come to an end? Well, it is possible in the laboratory, under certain conditions, at the atomic level. But the end of time doesn’t mean only that a physical state freezes. It means simply that certain events cannot occur anymore. Imagine that every thought has a specific frequency, like sounds in a musical piece. Some thoughts seem harmonious with other thoughts because of their compatible frequencies. If they can be combined in a harmonious way, forming a piece of art, then these thoughts will be promoted by the mind. Every thought can be compared to an event which occurs in the mind. Thoughts have consequences in the physical realm too. A thought will influence society and life. No kingdom can persist without thoughts, and this is what will certainly happen: the overall frequency of all minds, of the collective consciousness, will be raised to such a high level that all thoughts of a lower level will no longer be possible. What we perceive as reality is a process that involves consciousness, and consciousness has a frequency. Raising the frequency of the collective consciousness will force a separation within the whole of humankind.

Every thought has a frequency. Bad thoughts have low frequencies, but thoughts of truth, love and kindness have very high frequencies.

For example, thoughts which derive from pure love and kindness have a high frequency. Hate or immorality has a very low frequency. What kind of thoughts will finally be possible when the carrier frequency of the collective mind is raised? All the people who hate the truth and are addicted to bad behavior will walk around like zombies, losing their minds. In the absence of a clear mind they will kill each other. This is a purification process. Only the mild-tempered ones will survive. And time will continue to flow eternally for whoever loves the truth.

See also:
- Consciousness is order and the source of energy
- Radioactivity is the Absence of Order and Consciousness
- Biocentrism and Cosmology

Spacetime

In physics, spacetime (or space-time, space time, space-time continuum) is any mathematical model that combines space and time into a single continuum. Spacetime is usually interpreted with space as being three-dimensional and time playing the role of a fourth dimension that is of a different sort from the spatial dimensions. From a Euclidean space perspective, the universe has three dimensions of space and one dimension of time. By combining space and time into a single manifold, physicists have significantly simplified a large number of physical theories, as well as described in a more uniform way the workings of the universe at both the supergalactic and subatomic levels.

Two-dimensional analogy of spacetime distortion. Matter changes the geometry of spacetime, this (curved) geometry being interpreted as gravity. White lines do not represent the curvature of space but instead represent the coordinate system imposed on the curved spacetime, which would be rectilinear in a flat spacetime.

In non-relativistic classical mechanics, the use of Euclidean space instead of spacetime is appropriate, as time is treated as universal and constant, being independent of the state of motion of an observer. In relativistic contexts, time cannot be separated from the three dimensions of space, because the observed rate at which time passes for an object depends on the object’s velocity relative to the observer and also on the strength of gravitational fields, which can slow the passage of time.

Cosmology
In cosmology, the concept of spacetime combines space and time to a single abstract universe. Mathematically it is a manifold consisting of “events” which are described by some type of coordinate system. Typically three spatial dimensions (length, width, height), and one temporal dimension (time) are required. Dimensions are independent components of a coordinate grid needed to locate a point in a certain defined “space”. For example, on the globe the latitude and longitude are two independent coordinates which together uniquely determine a location. In spacetime, a coordinate grid that spans the 3+1 dimensions locates events (rather than just points in space), i.e. time is added as another dimension to the coordinate grid. This way the coordinates specify where and when events occur. However, the unified nature of spacetime and the freedom of coordinate choice it allows imply that to express the temporal coordinate in one coordinate system requires both temporal and spatial coordinates in another coordinate system. Unlike in normal spatial coordinates, there are still restrictions for how measurements can be made spatially and temporally (see Spacetime intervals). These restrictions correspond roughly to a particular mathematical model which differs from Euclidean space in its manifest symmetry.
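
As a hedged illustration of those restrictions (the standard Minkowski-interval convention with signature -+++, not something stated in the text above), the invariant interval between two events can be computed and classified like this:

```python
# Minkowski spacetime interval between two events (signature -+++ assumed).
C = 299_792_458.0  # speed of light in m/s

def interval_squared(dt, dx, dy, dz):
    """s^2 = -(c*dt)^2 + dx^2 + dy^2 + dz^2  (negative: timelike, positive: spacelike)."""
    return -(C * dt) ** 2 + dx ** 2 + dy ** 2 + dz ** 2

def classify(dt, dx, dy, dz):
    s2 = interval_squared(dt, dx, dy, dz)
    if s2 < 0:
        return "timelike (the events can be causally connected)"
    if s2 > 0:
        return "spacelike (no signal can connect the events)"
    return "lightlike (connected exactly by a light ray)"

if __name__ == "__main__":
    # One second apart in time, one metre apart in space: clearly timelike.
    print(classify(1.0, 1.0, 0.0, 0.0))
    # Simultaneous in this frame but one kilometre apart: spacelike.
    print(classify(0.0, 1000.0, 0.0, 0.0))
```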

Time Dilation
Until the beginning of the 20th century, time was believed to be independent of motion, progressing at a fixed rate in all reference frames; however, later experiments revealed that time slowed down at higher speeds of the reference frame relative to another reference frame (with such slowing called “time dilation” explained in the theory of “special relativity“). Many experiments have confirmed time dilation, such as atomic clocks onboard a Space Shuttle running slower than synchronized Earth-bound inertial clocks and the relativistic decay of muons from cosmic ray showers. The duration of time can therefore vary for various events and various reference frames.
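
A small worked example of the dilation factor, using the standard special-relativistic formula; the muon lifetime and speed below are rough textbook values chosen purely for illustration:

```python
import math

C = 299_792_458.0  # speed of light in m/s

def gamma(v):
    """Lorentz factor: gamma = 1 / sqrt(1 - v^2 / c^2)."""
    return 1.0 / math.sqrt(1.0 - (v / C) ** 2)

def dilated_time(proper_time, v):
    """Time measured in the observer's frame for a clock moving at speed v."""
    return gamma(v) * proper_time

if __name__ == "__main__":
    # Rough illustrative values for a cosmic-ray muon:
    muon_lifetime = 2.2e-6        # proper lifetime in seconds (approximate)
    v = 0.998 * C                 # assumed speed
    print(f"gamma = {gamma(v):.1f}")
    print(f"lifetime seen from the ground = {dilated_time(muon_lifetime, v):.2e} s")
    # The dilated lifetime (~3.5e-5 s) is why muons created high in the
    # atmosphere survive long enough to reach detectors at ground level.
```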

When dimensions are understood as mere components of the grid system, rather than physical attributes of space, it is easier to understand the alternate dimensional views as being simply the result of coordinate transformations.

The term spacetime has taken on a generalized meaning beyond treating spacetime events with the normal 3+1 dimensions. It is really the combination of space and time. Other proposed spacetime theories include additional dimensions—normally spatial but there exist some speculative theories that include additional temporal dimensions and even some that include dimensions that are neither temporal nor spatial. How many dimensions are needed to describe the universe is still an open question. Speculative theories such as string theory predict 10 or 26 dimensions (with M-theory predicting 11 dimensions: 10 spatial and 1 temporal), but the existence of more than four dimensions would only appear to make a difference at the subatomic level.

See also:
- Biocentrism and Cosmology
- Pregeometry
- The Electric Universe

Fractal Cosmology

In physical cosmology, fractal cosmology is a set of minority cosmological theories which state that the distribution of matter in the Universe, or the structure of the universe itself, is a fractal. More generally, it relates to the usage or appearance of fractals in the study of the universe and matter. A central issue in this field is the fractal dimension of the Universe or of matter distribution within it, when measured at very large or very small scales.

A 'galaxy of galaxies' from the Mandelbrot Set

The use of fractals to answer questions in cosmology has been employed by a growing number of serious scholars close to the mainstream, but the metaphor has also been adopted by others outside the mainstream of science, so some varieties of fractal cosmology are solidly in the realm of scientific theories and observations, and others are considered fringe science, or perhaps metaphysical cosmology. Thus, these various formulations enjoy a range of acceptance and/or perceived legitimacy.

Fractals in observational cosmology
The first attempt to model the distribution of galaxies with a fractal pattern was made by Luciano Pietronero and his team in 1987, and a more detailed view of the universe’s large-scale structure emerged over the following decade as the number of cataloged galaxies grew larger. Pietronero argues that the universe shows a definite fractal aspect over a fairly wide range of scales, with a fractal dimension of about 2. The ultimate significance of this result is not immediately apparent, but it seems to indicate that both randomness and hierarchical structuring are at work on the scale of galaxy clusters and larger.

A debate still ensues over whether the universe will become homogeneous and isotropic (or is smoothly distributed) at a large enough scale, as would be expected in a standard Big Bang or FLRW cosmology and in most interpretations of the Lambda-CDM (expanding Cold Dark Matter) model. The scientific consensus interpretation is that the Sloan Digital Sky Survey suggests that things do indeed seem to smooth out above 100 Megaparsecs. However, recent analysis of WMAP, SDSS, and NVSS data by a team from the University of Minnesota shows evidence of a void around 140 Megaparsecs across coinciding with the CMB cold spot, which, if confirmed, calls the assumption of a smooth universe into question. There are, on the other hand, serious hints that the apparent cold spot is a statistical artifact.

In May 2008, another paper was published by a team including Pietronero, that concludes the large scale structure in the universe is fractal out to at least 100 Mpc/h. The paper asserts that the team has demonstrated that the most recent SDSS data shows “large amplitude density fluctuations at all scales” within that range, and that the data is consistent with fractality beyond this point, but inconsistent with a lower scale of homogeneity, or with predictions of large scale structure based solely on gravity. Their analysis shows the fractal dimension of the arrangement of galaxies in the universe (up to the range of 30 Mpc/h) to be about 2.1 (plus or minus 0.1).
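
As a hedged sketch of how a fractal dimension like the quoted D ≈ 2.1 is estimated in practice, here is a generic pair-counting (correlation-dimension) estimate run on synthetic points; this is not the team's actual pipeline, and the data are a deliberately simple stand-in chosen so the answer should come out near 2:

```python
import numpy as np

def correlation_dimension(points, radii):
    """Estimate a fractal dimension D from the pair-count scaling N(<r) ~ r^D."""
    n = len(points)
    # Pairwise distances between all points.
    diffs = points[:, None, :] - points[None, :, :]
    dists = np.sqrt((diffs ** 2).sum(axis=-1))
    counts = [(dists < r).sum() - n for r in radii]      # exclude self-pairs
    slope, _ = np.polyfit(np.log(radii), np.log(counts), 1)
    return slope

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    # Synthetic stand-in data: points scattered on a 2-D plane embedded in 3-D,
    # so the estimated dimension should come out close to 2.
    xy = rng.random((1500, 2))
    pts = np.column_stack([xy, np.zeros(len(xy))])
    radii = np.logspace(-1.5, -0.5, 10)
    print(f"estimated correlation dimension ~ {correlation_dimension(pts, radii):.2f}")
```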

However, an analysis of luminous red galaxies in the Sloan survey calculated the fractal dimension of the galaxy distribution (on scales from 70 to 100 Mpc/h) at 3, consistent with homogeneity; the authors also confirm that the fractal dimension is 2 “out to roughly 20 Mpc/h”.

Fractals in theoretical cosmology
In the realm of theory, the first appearance of fractals in cosmology was likely with Andrei Linde’s “Eternally Existing Self-Reproducing Chaotic Inflationary Universe” theory, in 1986. In this theory, the evolution of a scalar field creates peaks that become nucleation points which cause inflating patches of space to develop into “bubble universes,” making the universe fractal on the very largest scales. Alan Guth’s 2007 paper on “Eternal Inflation and its implications” shows that this variety of Inflationary universe theory is still being seriously considered today. And inflation, in some form or other, is widely considered to be our best available cosmological model.

Since 1986, however, quite a large number of different cosmological theories exhibiting fractal properties have been proposed. And while Linde’s theory shows fractality at scales likely larger than the observable universe, theories like Causal dynamical triangulation and Quantum Einstein gravity are fractal at the opposite extreme, in the realm of the ultra-small near the Planck scale. These recent theories of quantum gravity describe a fractal structure for spacetime itself, and suggest that the dimensionality of space evolves with time. Specifically, they suggest that reality is 2-dimensional at the Planck scale, and that spacetime gradually becomes 4-dimensional at larger scales. French astronomer Laurent Nottale first suggested the fractal nature of spacetime in a paper on Scale Relativity published in 1992, and published a book on the subject of Fractal Space-Time in 1993.

French mathematician Alain Connes has been working for a number of years to reconcile Relativity with Quantum Mechanics, and thereby to unify the laws of Physics, using Noncommutative geometry. Fractality also arises in this approach to Quantum Gravity. An article by Alexander Hellemans in the August 2006 issue of Scientific American quotes Connes as saying that the next important step toward this goal is to “try to understand how space with fractional dimensions couples with gravitation.” The work of Connes with physicist Carlo Rovelli suggests that, in this formulation, time is an emergent property that arises naturally, whereas in Causal dynamical triangulation, choosing those configurations where adjacent building blocks share the same direction in time is an essential part of the ‘recipe.’ Both approaches suggest that the fabric of space itself is fractal, however.

Publications
The book Discovery of Cosmic Fractals by Yurij Baryshev and Pekka Teerikorpi gives an overview of fractal cosmology, and recounts other milestones in the development of this subject. It recapitulates the history of cosmology, reviewing the core concepts of ancient, historical, and modern astrophysical cosmology. The book also documents the appearance of fractal-like and hierarchical views of the universe from ancient times to the present. The authors make it apparent that some of the pertinent ideas of these two streams of thought developed together. They show that the view of the universe as a fractal has a long and varied history, though people haven’t always had the vocabulary necessary to express things in precisely that way.

Beginning with the Sumerian and Babylonian mythologies, they trace the evolution of Cosmology through the ideas of Ancient Greeks like Aristotle, Anaximander, and Anaxagoras, and forward through the Scientific Revolution and beyond. They acknowledge the contributions of people like Emanuel Swedenborg, Edmund Fournier D’Albe, Carl Charlier, and Knut Lundmark to the subject of cosmology and a fractal-like interpretation, or explanation thereof. In addition, they document the work of Gérard de Vaucouleurs, Mandelbrot, Pietronero, Nottale and others in modern times, who have theorized, discovered, or demonstrated that the universe has an observable fractal aspect.

On the 10th of March, 2007, the weekly science magazine New Scientist featured an article entitled “Is the Universe a Fractal?” on its cover. The article by Amanda Gefter contrasted the views of Pietronero and his colleagues, who think that the universe appears to be fractal (rough and lumpy), with those of David Hogg of NYU and others, who think that the universe will prove to be relatively homogeneous and isotropic (smooth) at a still larger scale, or once we have a large and inclusive enough sample (as is predicted by Lambda-CDM). Gefter gave experts in both camps an opportunity to explain their work and their views on the subject for her readers.

This was a follow-up to an earlier article in that same publication on August 21, 1999, by Marcus Chown, entitled “Fractal Universe.” Back in November 1994, Scientific American featured a cover article written by physicist Andrei Linde, entitled “The Self-Reproducing Inflationary Universe,” whose heading stated that “Recent versions of the inflationary scenario describe the universe as a self-generating fractal that sprouts other inflationary universes,” and which described Linde’s theory of chaotic eternal inflation in some detail.

In July 2008, Scientific American featured an article on Causal dynamical triangulation, written by the three scientists who propounded the theory, which again suggests that the universe may have the characteristics of a fractal.

Avoiding the Infinite

Tonight is the thriller night, a night full of horror. Materialistic scientists from all over the world are going to watch a movie at the local cinema outside the town, near the dark wood. The title of the movie is as mysterious as the invitation which was teleported right on top of their desks: “The Infinite”.

You should know something about materialistic scientists: they fear the infinite like a little girl fears the darkness in the attic. They avoid this topic, and when they are confronted with it they immediately start to whistle, change the subject, cry out loud for mommy, or just run away. Cowards! Indeed they fear having reached the edge of their worldview and finally recognizing that they have lived in an illusion. Materialists are losing territory with every single scientific discovery. Indeed, the best they can do is hide in theoretical classical physics and play with their Big Bang simulation until mommy comes to take them home.

People do think that if they avoid the truth, it might change to something better before they have to hear it.
(Marsha Norman)

The Big Bang and the Imaginary Time
Imaginary time was introduced by Stephen Hawking (materialist scientist) to avoid singularities, or points at which the spacetime curvature becomes infinite, that occur in ordinary time. Imaginary time too would be curved by matter in the universe and therefore would meet the three spatial dimensions to form a closed surface like that of Earth. This curved surface would not have a beginning or end, or indeed any boundaries or edges. This idea helps to avoid the fundamental question of what happened before the Big Bang.
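
The substitution behind “imaginary time” is the standard Wick rotation, sketched here in textbook form rather than quoted from Hawking: replacing t by an imaginary parameter turns the Lorentzian line element into a purely Euclidean one, which removes the distinguished time direction and, with it, the need for a boundary at the Big Bang.

```latex
% Wick rotation: substituting t -> -i*tau (tau real) in the Lorentzian line element
ds^2 = -c^2\,dt^2 + dx^2 + dy^2 + dz^2
\quad\longrightarrow\quad
ds^2 = c^2\,d\tau^2 + dx^2 + dy^2 + dz^2 .
% The metric becomes positive definite (Euclidean): "time" behaves like a fourth
% spatial direction and can close up into a surface with no boundary or edge.
```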

Action at a distance

In physics, action at a distance is the interaction of two objects which are separated in space with no known mediator of the interaction.

This term was used most often in the context of early theories of gravity and electromagnetism to describe how an object responds to the influence of distant massive or charged bodies. More generally “Action at a distance” describes the break between human intuition, where objects have to touch to interact, and physical theory. The exploration and resolution of this problematic phenomenon led to significant developments in physics, from the concept of a field, to descriptions of quantum entanglement and the mediator particles of the standard model.

Electricity

Efforts to account for action at a distance in the theory of electromagnetism led to the development of the concept of a field which mediated interactions between currents and charges across empty space. According to field theory we account for the Coulomb (electrostatic) interaction between charged particles through the fact that charges produce around themselves an electric field, which can be felt by other charges as a force. The concept of the field was elevated to fundamental importance in Maxwell’s equations, which used the field to elegantly account for all electromagnetic interactions, as well as light (which, until then, had been a completely unrelated phenomenon). In Maxwell’s theory, the field is its own physical entity, carrying momenta and energy across space, and action at a distance is only the apparent effect of local interactions of charges with their surrounding field.
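
A minimal sketch of that field picture, using the standard Coulomb law in SI units with arbitrary illustrative charges and positions: the source charge produces a field everywhere, and the second charge responds only to the field at its own location.

```python
import numpy as np

K = 8.9875517923e9  # Coulomb constant in N*m^2/C^2

def electric_field(q_source, r_source, r_point):
    """Field of a point charge q_source (at r_source) evaluated at r_point."""
    r = np.asarray(r_point, dtype=float) - np.asarray(r_source, dtype=float)
    dist = np.linalg.norm(r)
    return K * q_source * r / dist ** 3      # E = k q r_hat / r^2

def force_on_charge(q_test, field):
    """The test charge feels only the local field: F = q E."""
    return q_test * field

if __name__ == "__main__":
    # Illustrative values: two 1 microcoulomb charges one metre apart.
    E = electric_field(1e-6, r_source=[0, 0, 0], r_point=[1, 0, 0])
    F = force_on_charge(1e-6, E)
    print(f"|F| = {np.linalg.norm(F):.4f} N")   # ~0.009 N, matching k*q1*q2/r^2
```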

Electrodynamics can be described without fields (in Minkowski space) as the direct interaction of particles with light-like separation vectors. This results in the Fokker–Tetrode–Schwarzschild action integral. This kind of electrodynamic theory is often called “direct interaction” to distinguish it from field theories where action at a distance is mediated by a localized field (localized in the sense that its dynamics are determined by the nearby field parameters). This description of electrodynamics, in contrast with Maxwell’s theory, explains apparent action at a distance not by postulating a mediating entity (the field) but by appealing to the natural geometry of special relativity, in which two events in spacetime can be physically distinct and still have “zero” separation. Perceived action at a distance is a result of the human bias for spatial separation: charged particles can be separated in space and yet be geometrically connected.

Various proofs, beginning with that of Dirac, have shown that direct interaction theories (under reasonable assumptions) do not admit Lagrangian or Hamiltonian formulations (these are the so-called No Interaction Theorems). Consequently, the Fokker–Tetrode action is mostly a historic novelty. Still, attempts to recapture action at a distance without a field, which is often difficult to quantize, led directly to the development of the quantum electrodynamics of Feynman and Schwinger.

Gravity

Newton
Newton’s theory of gravity offered no prospect of identifying any mediator of gravitational interaction. His theory assumed that gravitation acts instantaneously, regardless of distance. Kepler’s observations gave strong evidence that in planetary motion angular momentum is conserved. (The mathematical proof is only valid in the case of a Euclidean geometry.) In this picture, gravity is simply a force of attraction between two objects arising from their mass.

A related question, raised by Ernst Mach, was how rotating bodies know how much to bulge at the equator. This, it seems, requires an action-at-a-distance from distant matter, informing the rotating object about the state of the universe. Einstein coined the term Mach’s principle for this question.

Einstein
According to Albert Einstein’s theory of special relativity, instantaneous action-at-a-distance was seen to violate the relativistic upper limit on speed of propagation of information. If one of the interacting objects were to suddenly be displaced from its position, the other object would feel its influence instantaneously, meaning information had been transmitted faster than the speed of light.

One of the conditions that a relativistic theory of gravitation must meet is to be mediated with a speed that does not exceed c, the speed of light in a vacuum. It could be seen from the previous success of electrodynamics that the relativistic theory of gravitation would have to use the concept of a field or something similar.

This problem has been resolved by Einstein’s theory of general relativity in which gravitational interaction is mediated by deformation of space-time geometry. Matter warps the geometry of space-time and these effects are, as with electric and magnetic fields, propagated at the speed of light. Thus, in the presence of matter, space-time becomes non-Euclidean, resolving the apparent conflict between Newton’s proof of the conservation of angular momentum and Einstein’s theory of special relativity. Mach’s question regarding the bulging of rotating bodies is resolved because local space-time geometry is informing a rotating body about the rest of the universe. In Newton’s theory of motion, space acts on objects, but is not acted upon. In Einstein’s theory of motion, matter acts upon space-time geometry, deforming it, and space-time geometry acts upon matter.

Quantum mechanics

Since the early 20th century, quantum mechanics has posed new challenges for the view that physical processes should obey locality. The collapse of the wave function of an electron being measured, for instance, is presumed to be instantaneous. Whether this counts as action-at-a-distance hinges on the nature of the wave function and its collapse, issues over which there is still considerable debate amongst scientists and philosophers. One important line of debate originated with Einstein, who challenged the idea that the wave function offers a complete description of the physical reality of a particle by showing that such a view leads to a paradox. Einstein, along with Boris Podolsky and Nathan Rosen, proposed a thought experiment to demonstrate how two physical quantities with non-commuting operators (e.g. position and momentum) can have simultaneous reality. Since the wave function does not ascribe simultaneous reality to both quantities and yet they can be shown to exist simultaneously, Einstein, Podolsky and Rosen (EPR) argued that the quantum mechanical description of reality must not be complete.

This thought experiment, which came to be known as the EPR paradox, hinges on the principle of locality. A common presentation of the paradox is as such: two particles interact briefly and then are sent off in opposite directions. One could imagine an atomic transition that releases two photons A and B (spin-1 particles) with no overall change in momentum. The photons end up so far away from each other that one can no longer influence the other (this is the principle of locality). As long as the photons act only locally, the perfect anticorrelation of their momenta will hold. That is, if photon A has a momentum of 1 (in appropriate units) then by the conservation of momentum photon B must have a momentum of -1. Therefore, EPR’s argument goes, we could measure the position of photon A, and also simultaneously know photon A’s momentum by measuring photon B (since A’s momentum must be the opposite of B’s).

Because EPR’s proposal involved properties that were not captured in the wave equation and which were local and real, it became known as a local ‘hidden variables’ theory. After the EPR paper, several scientists such as de Broglie took up interest in local hidden variables theories. In the 1960s John Bell derived an inequality that showed a testable difference between the predictions of quantum mechanics and local hidden variables theories. Experiments testing Bell-type inequalities in situations analogous to EPR’s thought experiments have been consistent with the predictions of quantum mechanics, suggesting that local hidden variables theories can be ruled out. Whether or not this is interpreted as evidence for nonlocality depends on one’s interpretation of quantum mechanics. In the standard interpretation the wave function is still considered a complete description so the nonlocality is generally accepted, but there is still debate over what this means physically.
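
As a hedged numerical sketch of the kind of “testable difference” Bell found, here is the standard CHSH combination evaluated with the textbook singlet-state correlation E(a, b) = -cos(a - b); the angles are the usual optimal choices and are not taken from the experiments cited above:

```python
import math

def quantum_correlation(a, b):
    """Singlet-state correlation between spin measurements along angles a and b."""
    return -math.cos(a - b)

def chsh(a, a_prime, b, b_prime, E=quantum_correlation):
    """CHSH combination S = E(a,b) - E(a,b') + E(a',b) + E(a',b')."""
    return E(a, b) - E(a, b_prime) + E(a_prime, b) + E(a_prime, b_prime)

if __name__ == "__main__":
    # Standard angle choices that maximise the quantum value.
    a, a_prime = 0.0, math.pi / 2
    b, b_prime = math.pi / 4, 3 * math.pi / 4
    S = chsh(a, a_prime, b, b_prime)
    print(f"quantum prediction |S| = {abs(S):.3f}")   # 2*sqrt(2) ~ 2.828
    print("local hidden variable bound: |S| <= 2")
```

Any local hidden variables theory keeps |S| at or below 2, whereas the quantum prediction of about 2.83 is what the Bell-test experiments have repeatedly observed.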

One important question raised by this ambiguity is whether Einstein’s theory of relativity is compatible with the experimental results demonstrating nonlocality. Relativistic quantum field theory requires interactions to propagate at speeds less than or equal to the speed of light, so “quantum entanglement” cannot be used for faster-than-light-speed propagation of matter, energy, or information. Measurements of one particle will be correlated with measurements on the other particle, but this is only known after the experiment is performed and notes are compared, therefore there is no way to actually send information faster than the speed of light. On the other hand, relativity predicts causal ambiguities will result from the nonlocal interaction. In terms of the EPR experiment, in some reference frames measurement of photon A will cause the wave function to collapse, but in other reference frames the measurement of photon B will cause the collapse.

Non-standard interpretations of quantum mechanics also vary in their response to the EPR-type experiments. The Bohm interpretation gives an explanation based on nonlocal hidden variables for the correlations seen in entanglement. Many advocates of the many-worlds interpretation argue that it can explain these correlations in a way that does not require a violation of locality, by allowing measurements to have non-unique outcomes.

Retrocausality

Retrocausality (also called retro-causation, retro-chronal causation, backward causation, and similar terms) is any of several hypothetical phenomena or processes that reverse causality, allowing an effect to occur before its cause.

Retrocausality is primarily a thought experiment in philosophy of science based on elements of physics, addressing the question: Can the future affect the present, and can the present affect the past? Philosophical considerations of time travel often address the same issues as retrocausality, as do treatments of the subject in fiction, although the two terms are not universally synonymous.

A few legitimate physical theories have sometimes been interpreted as leading to retrocausality. This is not considered part of science, since the distinction between cause and effect in physics is not made at the most fundamental level.

Retrocausality and Antimatter
As the modern understanding of particle physics began to develop, retrocausality was at times employed as a tool to model then-unfamiliar or unusual conditions, including electromagnetism and antimatter.

Time runs left to right in this Feynman diagram of electron-positron annihilation. When interpreted to include retrocausality, the electron (marked e-) was not destroyed, instead becoming the positron (e+) and moving backward in time.

The Wheeler–Feynman absorber theory, proposed by John Archibald Wheeler and Richard Feynman, uses retrocausality and a temporal form of destructive interference to explain the absence of a type of converging concentric wave suggested by certain solutions to Maxwell’s equations. These advanced waves don’t have anything to do with cause and effect; they are just a different mathematical way to describe normal waves. The reason they were proposed is so that a charged particle would not have to act on itself, which, in normal classical electromagnetism, leads to an infinite self-force.

Feynman, and earlier Stueckelberg, proposed an interpretation of the positron as an electron moving backward in time, reinterpreting the negative-energy solutions of the Dirac equation. Electrons moving backward in time would have a positive electric charge. Wheeler invoked this concept to explain the identical properties shared by all electrons, suggesting that “they are all the same electron” with a complex, self-intersecting worldline. Yoichiro Nambu later applied it to all production and annihilation of particle-antiparticle pairs, stating that “the eventual creation and annihilation of pairs that may occur now and then is no creation or annihilation, but only a change of direction of moving particles, from past to future, or from future to past.” The backwards in time point of view is nowadays accepted as completely equivalent to other pictures, but it doesn’t have anything to do with the macroscopic terms “cause” and “effect“, which do not appear in a microscopic physical description.

Current topics
Open topics in physics, especially involving the reconciliation of gravity with quantum physics, suggest that retrocausality may be possible under certain circumstances.

Closed timelike curves, in which the world line of an object returns to its origin, arise from some exact solutions to the Einstein field equation. Although closed timelike curves do not appear to exist under normal conditions, extreme environments of spacetime, such as a traversable wormhole or the region near certain cosmic strings, may allow their formation, implying a theoretical possibility of retrocausality. The exotic matter or topological defects required for the creation of those environments have not been observed. Furthermore, Stephen Hawking has suggested a mechanism he describes as the chronology protection conjecture, which would destroy any such closed timelike curve before it could be used. These objections to the existence of closed timelike curves are not universally accepted, however.

Retrocausality has also been proposed as a mechanism to explain what Albert Einstein called “spooky action at a distance” occurring as a result of quantum entanglement. Although the prevailing scientific viewpoint is that the effects generated by quantum entanglement do not require any direct communication between the involved particles, Costa de Beauregard proposed an alternative theory.

At an American Association for the Advancement of Science symposium, University of Washington physicist John Cramer presented the design for an experiment to test for backward causation in quantum entanglement, subsequently receiving some attention from the popular media. Work on Cramer’s non-local communication test started in January 2007. Cramer included a status report on the “UW Test of Nonlocal Quantum Communications with Momentum-Entangled Photon Pairs” in his “Five Decades of Physics” talk at a symposium in his honor at the University of Washington, Seattle, Washington, September 11, 2009. Work on the experiment will continue during 2010.

Retrocausality has also been proposed as an explanation for the delayed choice quantum eraser.

The hypothetical superluminal particle called the tachyon, proposed in the context of bosonic string theory and certain other fields of high-energy physics, moves backward in time. Despite frequent depiction in science fiction as a method to send messages back in time, theories predicting tachyons do not permit them to interact with normal “time-like” matter in a manner that would violate standard causality. Specifically, the Feinberg reinterpretation principle renders impossible the construction of a tachyon detector capable of receiving information.

Outside the Mainstream
Outside the mainstream scientific community, retrocausality has also been proposed as a mechanism to explain purported anomalies, paranormal events or personal events, but mainstream scientists have generally regarded these explanations as pseudoscientific. Most notably, parapsychologist Helmut Schmidt presented quantum mechanical justifications for retrocausality, eventually claiming that experiments had demonstrated the ability to manipulate radioactive decay through retrocausal psychokinesis. These results and their underlying theory have been rejected by the mainstream scientific community, although they continue to have some support from fringe science sources.

Efforts to associate retrocausality with prayer healing have been similarly discounted by legitimate scientific method.