Saturday, August 18, 2012

Einstein's Equations, Cosmology and Astrophysics

I give a compact, pedagogical review of our present understanding of the universe as based on general relativity. This includes the uniform models, with special reference to the cosmological 'constant'; and the equations for spherically-symmetric systems, in a particularly convenient form that aids their application to astrophysics. New ideas in research are also outlined, notably involving extra dimensions.
Subjects: General Relativity and Quantum Cosmology (gr-qc)
Cite as: arXiv:1208.3433v1 [gr-qc]

Submission history

From: Paul Wesson [view email]
[v1] Wed, 15 Aug 2012 01:11:28 GMT (200kb)
 
Source: arXiv

Friday, August 17, 2012

Massive Galaxy Cluster Finally Acts as Predicted

Astronomers have discovered a supermassive galaxy cluster that both meets and challenges expectations for how clusters ought to behave.

The Milky Way measures almost 100,000 light-years across, a distance that defies imagination. But clusters of galaxies, the largest gravitationally bound structures in the universe, span tens of millions of light-years. Viewed in visible light, galaxy clusters look like their name — dozens, sometimes thousands of galaxies swarming amid the voids of space.

But view the same cluster with X-ray vision, and you’ll mostly see a huge, glowing blob. You’re seeing extremely thin, superheated plasma filling the space between galaxies. As you might expect, the multi-million-degree plasma is densest near the cluster’s middle, where it’s compressed under its own weight. This superhot gas radiates heat away with intense X-rays, so it ought to cool quickly. The densest gas in the core should cool the quickest, followed by thinner gas from the edges of the cluster.

All this newly cooled intergalactic gas should sink to the core of the cluster and condense into stars, almost like cooling water vapor condensing into raindrops. Yet astronomers have found precious little star formation at the center of most galaxy clusters, even those with cooling gas. Until now.

A paper published in Nature today describes observations of one of the most massive galaxy clusters discovered to date. The cluster SPT-CLJ2344-4243, nicknamed Phoenix for its constellation (and pronounceability), weighs in at roughly 2 million billion times the mass of our Sun, or a couple thousand Milky Ways, putting it in competition with heavyweight champion El Gordo. The cluster is almost as young as El Gordo too — its light took 5.7 billion years to arrive at Earth, meaning we see it as it was when the universe was only 8 billion years old. But the Phoenix’s crowning jewel is the bright UV light radiating from its core, emitted from newborn stars forming at the prodigious rate of 740 solar masses a year.

The new stars are likely forming out of the intergalactic gas, which is cooling and flowing into the cluster center at a rate of 4,000 solar masses a year, probably along radial filaments like spokes on a wheel.

With such a high star formation rate, “this is clearly an extreme object,” says Chris O’Dea (RIT), who was not involved in the study.

There’s only one problem: expectations of cluster behavior have changed. To explain the rarity of cooling flows and central star formation in other galaxy clusters, astronomers had to come up with a theory — something must be reheating the gas. And a likely culprit is the supergiant galaxy, often more than 1 million light-years across, that reigns over the center of most large clusters. In the extra-big galaxy’s heart lurks an extra-big supermassive black hole. If jets stream out from the black hole and heat the gas, blowing gigantic bubbles in the process, that would make the gas too hot to form stars effectively.

So if the Phoenix cluster’s core is churning out stars, then it must be going through a brief period where jets from the supermassive black hole haven’t been “on” long enough to reheat the cooling gas.

“We suspect that this strong cooling/star formation episode will only last for around 100 million years, or roughly 1% of the cluster’s lifetime,” says Michael McDonald (MIT), the study’s lead author. “So, even if this short-lived phase is ubiquitous, we still shouldn’t see it in many clusters.”

“Further observations of the cold gas reservoir from which stars are formed will be very important,” suggests Andrew Fabian (University of Cambridge), who was not involved in the study. For example, telescopes such as ALMA could determine how much fuel exists for star formation.

“It will be exciting if similar objects at even higher redshifts are found,” Fabian adds.
Posted by Monica Young, August 15, 2012
Source: Sky and Telescope - Virtual Blog 

Study finds link between climate and conflict

By Laura Shin | August 24, 2011, 10:00 AM PDT
Source: SMART PLANET


People have long speculated that changes in climate can affect human society. For instance, many ancient civilizations, including the Mayan, are thought to have collapsed due to drought.
Now, a study shows that shifts in global climate can also destabilize modern-day societies.
Columbia University researchers have found that an El Niño, which brings hot, dry weather, doubles the likelihood of a civil war in more than 90 tropical countries affected by this climate cycle.
The authors, who are publishing their study in Nature on Thursday, say that El Niño may have played a role in one in five civil conflicts since 1950.
“We’re extending the general hypothesis that throughout history human societies used to be influenced by the climate to say that, now, modern society continues to be influenced by the modern climate …. It wasn’t so difficult to convince people that maybe Angkor Wat or Mayan civilizations collapsed due to climatic changes, but I think it’s harder for people to accept that we still depend on the climate to a very large extent,” author Solomon M. Hsiang, who is now at Princeton, said. (Disclosure: I know Hsiang from a class I took at Columbia University for my master’s four years ago; he was the teaching assistant.)
While Hsiang and his co-authors emphasize their study focuses on the El Niño climate pattern, not climate change, their work could shed light on the potential consequences of global warming, which will make the world more El Niño-like: hotter and drier around the middle.

The connection between El Niño and warfare

El Niño and La Niña are two sides of a weather pattern that oscillates every three to seven years. During El Niño years, tropical parts of the globe become hotter and thirstier than they are during comparatively cool, rainy La Niña years.
The researchers found that the likelihood that a country in the tropics erupts into a civil conflict is 3% during a La Niña year, but 6% during an El Niño year. Countries whose climates are largely unaffected by El Niño only had a 2% chance of experiencing a civil conflict no matter the year.
Hsiang and his co-authors, Kyle C. Meng of Columbia and Mark Cane of Columbia’s Lamont-Doherty Earth Observatory, did not analyze international conflicts because civil wars constitute the vast majority of all conflicts since 1950.
The map below shows, in red, the 93 countries that become warmer in El Niño years, and the 82 unaffected countries in blue.

While Hsiang does not say that El Niño causes these conflicts, he asserts it creates the conditions that make warfare more likely. He compared the presence of El Niño during a conflict to the presence of ice on a road during a car accident:
When there is more ice on the road, there are clearly more car accidents …. Does the ice cause car accidents? The truth is that drivers and their mistakes create car accidents, and the ice is not necessarily at fault, but the ice increases the likelihood with which drivers will make errors.
He, Meng and Cane speculate that the hotter, drier conditions of an El Niño indirectly lead to conflict in several ways: drought could cause large crop losses, which would in turn increase food prices and spark unemployment — and the unemployed have more time to contemplate injustices and act on them. The researchers also cite evidence that hotter temperatures have physiological effects on people, making them more likely to instigate violence.

Import for climate change

Although this study did not look directly at the consequences of global warming, it could portend its impacts and be used to improve preparedness for humanitarian crises, because strong El Niños, which have the greatest connection to warfare, can be predicted up to two years in advance.
Hsiang, Meng and Cane learned from and built upon a widely criticized 2009 study by Marshall B. Burke and others that examined the effect of local climate cycles on conflict in Africa. In anticipation of similar attacks, Hsiang’s team controlled for factors such as a country’s income, level of democracy and population. They ran all the checks that the Burke study ran, as well as all the checks that a subsequent opposing study ran. Even then, the El Niño influence persisted.
Hsiang says that Burke’s study tried to see what might happen in the future climate of individual countries by looking at local weather and rainfall — i.e., if a country became slightly hotter than its neighbors, would it be more likely to have a civil conflict?
“But local weather and local rainfall are not good approximations of global climate change,” he says, noting that global climate causes large patterns of environmental changes. For instance, a spate of recent related weather disasters has led to a spike in food prices. “When an El Niño event occurs, it leads to warming and drying throughout the entire tropics. It leads to a reduction in agricultural output throughout the entire tropics. And it is very likely that … each individual society is affected … when agricultural production is falling everywhere at the same time.”
Cane says:
We can’t say what will happen for sure with climate change, but it does show beyond a doubt that even in the modern world, climate variations have an impact on people’s propensity to fight. It is difficult to see why that wouldn’t carry over into a world disrupted by global warming.
photo: Nubian desert in Sudan (Bertramz/Wikimedia)
map: Hsiang et al. Nature

Thursday, August 16, 2012

Quantum Mechanics, Gravity, and the Multiverse

The discovery of accelerating expansion of the universe has led us to take the dramatic view that our universe may be one of the many universes in which low energy physical laws take different forms: the multiverse. I explain why/how this view is supported both observationally and theoretically, especially by string theory and eternal inflation. I then describe how quantum mechanics plays a crucial role in understanding the multiverse, even at the largest distance scales. The resulting picture leads to a revolutionary change of our view of spacetime and gravity, and completely unifies the paradigm of the eternally inflating multiverse with the many worlds interpretation of quantum mechanics. The picture also provides a solution to a long-standing problem in eternal inflation, called the measure problem, which I briefly describe.
Comments: 18 pages, 6 figures. An article published in the Astronomical Review, based on talks given by the author at various institutions. v2: a note added
Subjects: High Energy Physics - Theory (hep-th); Cosmology and Extragalactic Astrophysics (astro-ph.CO); General Relativity and Quantum Cosmology (gr-qc); High Energy Physics - Phenomenology (hep-ph); Quantum Physics (quant-ph)
Journal reference: AstRv. 7, 36 (2012)
Report number: UCB-PTH-12/04
Cite as: arXiv:1205.2675v2 [hep-th]

Submission history

From: Yasunori Nomura [view email]
[v1] Fri, 11 May 2012 19:40:07 GMT (686kb)
[v2] Mon, 30 Jul 2012 04:25:59 GMT (686kb)
 
Source: arXiv

A Phantom Menace? Cosmological consequences of a dark energy component with super-negative equation of state

R.R. Caldwell (Dartmouth)
It is extraordinary that a number of observations indicate that we live in a spatially flat, low matter density Universe, which is currently undergoing a period of accelerating expansion. The effort to explain this current state has focused attention on cosmological models in which the dominant component of the cosmic energy density has negative pressure, with an equation of state $w \ge -1$. Remarking that most observations are consistent with models right up to the $w=-1$ or cosmological constant ($\Lambda$) limit, it is natural to ask what lies on the other side, at $w<-1$. In this regard, we construct a toy model of a ``phantom'' energy component which possesses an equation of state $w<-1$. Such a component is found to be compatible with most classical tests of cosmology based on current data, including the recent Type Ia SNe data as well as the cosmic microwave background anisotropy and mass power spectrum. If future observations continue to allow $w<-1$, then barring unanticipated systematic effects, the dominant component of the cosmic energy density may be stranger than anything expected.
Comments: update of original version, includes new material, matches version appearing in Phys. Lett. B, (17 pages, 7 eps figures)
Subjects: Astrophysics (astro-ph); General Relativity and Quantum Cosmology (gr-qc); High Energy Physics - Phenomenology (hep-ph)
Journal reference: Phys.Lett.B545:23-29,2002
DOI: 10.1016/S0370-2693(02)02589-3
Cite as: arXiv:astro-ph/9908168v2

Submission history

From: Robert Caldwell [view email]
[v1] Mon, 16 Aug 1999 21:31:43 GMT (53kb)
[v2] Sun, 15 Sep 2002 22:50:49 GMT (230kb)
 
Source: arXiv

Cosmology when living near the Great Attractor

If we live in the vicinity of the hypothesized Great Attractor, the age of the universe as inferred from the local expansion rate can be off by three per cent. We study the effect that living inside or near a massive overdensity has on cosmological parameters induced from observations of supernovae, the Hubble parameter and the Cosmic Microwave Background. We compare the results to those for an observer in a perfectly homogeneous LCDM universe. We find that for instance the inferred value for the global Hubble parameter changes by around three per cent if we happen to live inside a massive overdensity such as the hypothesized Great Attractor. Taking into account the effect of such structures on our perception of the universe makes cosmology perhaps less precise, but more accurate.
Comments: 8 pages, 6 figures, Submitted to MNRAS
Subjects: Cosmology and Extragalactic Astrophysics (astro-ph.CO)
Journal reference: Mon. Not. R. Astron. Soc. 424, 495--501 (2012)
DOI: 10.1111/j.1365-2966.2012.21218.x
Cite as: arXiv:1203.4567v1 [astro-ph.CO]

Submission history

From: Ole Bjaelde [view email]
[v1] Tue, 20 Mar 2012 20:00:05 GMT (55kb)
 
SOURCE:  arXiv

The Current Status of Galaxy Formation

Joe Silk (1,2,3), Gary A. Mamon (1) ((1) IAP, (2) JHU, (3) BIPAC, Oxford)
Understanding galaxy formation is one of the most pressing issues in cosmology. We review the current status of galaxy formation from both an observational and a theoretical perspective, and summarise the prospects for future advances.
Comments: Minor modifications incorporating the numerous comments received
Subjects: Cosmology and Extragalactic Astrophysics (astro-ph.CO)
Cite as: arXiv:1207.3080v2 [astro-ph.CO]

Submission history

From: Gary Mamon [view email]
[v1] Thu, 12 Jul 2012 20:00:02 GMT (3146kb,D)
[v2] Thu, 2 Aug 2012 13:37:20 GMT (3143kb,D)

Spontaneous B-L Breaking as the Origin of the Hot Early Universe

The decay of a false vacuum of unbroken B-L symmetry is an intriguing and testable mechanism to generate the initial conditions of the hot early universe. If B-L is broken at the grand unification scale, the false vacuum phase yields hybrid inflation, ending in tachyonic preheating. The dynamics of the B-L breaking Higgs field and thermal processes produce an abundance of heavy neutrinos whose decays generate entropy, baryon asymmetry and gravitino dark matter. We study the phase transition for the full supersymmetric Abelian Higgs model. For the subsequent reheating process we give a detailed time-resolved description of all particle abundances. The competition of cosmic expansion and entropy production leads to an intermediate period of constant 'reheating' temperature, during which baryon asymmetry and dark matter are produced. Consistency of hybrid inflation, leptogenesis and gravitino dark matter implies relations between neutrino parameters and superparticle masses. In particular, for a gluino mass of 1 TeV, we find a lower bound on the gravitino mass of 10 GeV.
Comments: 64 pages, 8 figures, 1 table. v2: minor numerical corrections, slightly different parameter point chosen in section 4, final results unchanged
Subjects: High Energy Physics - Phenomenology (hep-ph); Cosmology and Extragalactic Astrophysics (astro-ph.CO); High Energy Physics - Experiment (hep-ex)
Journal reference: Nucl.Phys.B862 (2012) 587-632
DOI: 10.1016/j.nuclphysb.2012.05.001
Report number: DESY 11-174
Cite as: arXiv:1202.6679v2 [hep-ph]

Submission history

From: Valerie Fiona Domcke [view email]
[v1] Wed, 29 Feb 2012 20:47:24 GMT (1221kb,D)
[v2] Fri, 16 Mar 2012 14:56:05 GMT (1405kb,D)
 
Source: arXiv 





Kenyan Fossils Rekindle Debate over Early Human Diversity

Opinion, arguments & analyses from the editors of Scientific American




Koobi Fora fossils
The KNM-ER 1470 cranium, discovered in 1972, combined with the new lower jaw from Koobi Fora. The specimens are thought to belong to the same species. The lower jaw is shown as a photographic reconstruction, and the cranium is based on a computed tomography scan. © Photo by Fred Spoor

If I had to pick the hottest topic in paleoanthropology right now, I’d say it’s the origin and early evolution of our genus, Homo. Researchers know quite a bit about our australopithecine predecessors (Lucy and her ilk) and about later phases of Homo’s evolution. But the dawn of our lineage is cloaked in mystery. One question experts have long puzzled over is whether Homo split into multiple lineages early on, or whether the known early Homo fossils all belong to a single lineage. To that end, new discoveries made at the site of Koobi Fora in northern Kenya—one of the Leakeys’ longtime fossil hunting grounds—are said to settle that matter in favor of multiple lineages. But some critics disagree.
The new finds—a partial face including almost all of the molars in the upper jaw, a nearly complete lower jaw and a partial lower jaw that date to between 1.78 million and 1.95 million years ago—bear on the identity of a famously enigmatic skull from Koobi Fora known as KNM-ER 1470. Ever since the discovery of the 1470 skull in 1972, researchers have struggled to place it in the human family tree. On one hand, at nearly two million years old it is the same age as H. habilis fossils from Koobi Fora and other locales in East Africa. The skull also shares some features in common with that species, which most researchers consider to be the founding member of Homo. On the other hand, 1470 is much larger than established H. habilis fossils, and differs from them in having a flat, long face, among other distinctive traits. Some experts thus assigned 1470 and some other fossils from Koobi Fora to a separate species, H. rudolfensis.
Meave Leakey and Fred Spoor
Paleontologists Meave Leakey and Fred Spoor collect fossils close to the site where the new face KNM-ER 62000 was found. © Photo by Mike Hettwer, www.hettwer.com, courtesy of National Geographic
But nailing down whether 1470 is a rogue H. habilis or a separate species has been tricky because no other skull shared that long, flat face and the specimen lacks teeth and a lower jaw to compare with other fossils. This is where the new fossils from Koobi Fora come in. In a paper published in the August 9 Nature, Meave Leakey of the Turkana Basin Institute in Nairobi, Fred Spoor of the Max Planck Institute for Evolutionary Anthropology in Leipzig and their colleagues report that the new face mirrors 1470’s face shape, although it is smaller overall. Inferring 1470’s upper jaw anatomy from the new face, the authors say the lower jaw fossils they found are good matches for the upper jaws of 1470 and the new face. (Scientific American is part of Nature Publishing Group.)
New mandible from Koobi Fora
The lower jaw KNM-ER 60000 after initial restoration but before the adhering matrix was carefully removed. © Photo by Mike Hettwer, www.hettwer.com, courtesy of National Geographic
“For the past 40 years we have looked long and hard in the vast expanse of sediments around Lake Turkana for fossils that confirm the unique features of 1470’s face and show us what its teeth and lower jaw would have looked like,” Leakey remarked in a prepared statement. “At last we have some answers.” The answers, in their view, indicate that 1470 and the new fossils represent a distinct human lineage from other early Homo fossils. This would mean that two Homo lineages lived alongside our ancestor H. erectus. H. erectus itself may have evolved from one of these two groups or another, as-yet-unknown group. The researchers did not formally name the new fossils from Koobi Fora, because of confusion surrounding the fossil that defines H. habilis, but they suggest that it may be appropriate to assign them to H. rudolfensis. Bottom line, they’re saying the fossils confirm that the non-erectus early Homo fossils in East Africa constitute two lineages, not one.
Although it may be hard to imagine sharing turf with another human species today, members of these ancient contemporaneous lineages need not have stepped on each other’s toes. In background materials distributed to the press the discovery team noted that chimpanzees and gorillas live in some of the same habitats. Both eat ripe fruit, but gorillas focus more heavily on tough vegetation than chimps do. “The early hominins [members of the human branch of the family tree] could have separated their neighborhoods in the same way,” the researchers explain. “They may simply have focused on different primary food items.” Exactly what these hominins were eating is uncertain, “but there are clues from the arrangement of the face and jaws that the newly described fossils, and the previously known [1470 skull], with their tall faces but shortened front tooth row, may have been focusing on foods that required chewing on the back teeth.” Analyses of the chemical composition of the teeth, as well as their wear marks, may yield further insights into what these hominins ate.
In an accompanying commentary Bernard Wood of George Washington University calls the new evidence for at least two parallel lineages in the early evolution of Homo “compelling.” Indeed he suggests that this chapter of our evolutionary history was even more complex than that. “My prediction is that by 2064, 100 years after [Louis] Leakey and colleagues’ description of H. habilis, researchers will view our current hypotheses about this phase of human evolution as remarkably simplistic,” he writes.
Other researchers are not convinced that the new Koobi Fora fossils show multiple lineages of early Homo co-existed. Adam Van Arsdale of Wellesley College, who has studied the 1.76 million-year-old H. erectus fossils from the site of Dmanisi in the Republic of Georgia, notes that in light of the considerable variation evident in the well-dated Dmanisi sample, the variation in the early Homo fossils from Africa can be accommodated by one species. In fact, the new Kenyan fossils show features in common with the Dmanisi ones, and thus help to link early Homo in Africa to H. erectus in Georgia, he says. In his view, all of these fossils—the habilis/rudolfensis ones and H. erectus—belong to one lineage.
“What the African assemblage lacks is a good sample from a single locality that shows variation. Instead you have lots of fragmentary, isolated specimens, all with temporal uncertainty, that show a huge amount of variation,” Van Arsdale explains. Whereas Leakey and Spoor see this variation as evidence for multiple concurrent lineages, “I tend to see this new evidence as making it harder to reject the idea of a single evolving lineage,” he says.
A more pointed criticism of the new study comes from Lee Berger of the University of the Witwatersrand in Johannesburg. Berger notes that in their paper Leakey, Spoor and their colleagues neglected to compare the new Koobi Fora fossils to Australopithecus africanus and A. sediba fossils from South Africa, which were contemporaries of early Homo from East Africa. (Berger led the team that discovered A. sediba, which was announced in 2010 and held up as a possible ancestor of Homo.) By ignoring those South African fossils, Berger contends, the team cannot rule out alternatives to their interpretation.
Berger also took issue with the team’s use of fragmentary material to argue its position. “All this paper does, unfortunately, is highlight the mess that the isolated and fragmentary East African record in this time period makes of the debate around the origins of the genus Homo, and it does little to illuminate the question,” he contends. Berger has previously argued that A. sediba, which is best known from two largely complete skeletons exhibiting a mosaic of australopithecine-like and Homo-like traits, demonstrates that evolution mixed and matched fossil human features in sometimes surprising ways, and that fragmentary remains therefore cannot be reliably assigned to species. “We and others have shown that you can’t take isolated bits and force them into anatomical association. The [Koobi Fora] mandible goes with the maxilla? Where is the evidence for that?” he demands. “While we need more fossils like this, it’s not helpful to shoehorn them into debates they are not complete enough to be of use as evidence in.”
Spoor counters that he and his colleagues did include the South African fossils in their analysis, but that they excluded those comparisons from their report because their Nature paper focuses on the question of what the new fossils reveal about taxa of early Homo in eastern Africa. “A. africanus and sediba have nothing to say about that,” he asserts, noting that africanus and sediba have primitive faces, with “nothing specifically Homo-like in the skull.” He adds, “the interesting parts of A. sediba are in the postcranial skeleton.”
Suffice it to say, I doubt very much that we have heard the last of this debate. Stay tuned.
Update (8/9/12 at 9:53 A.M.): Paleoanthropologist Philip G. Rightmire of Harvard University sent the following observations about the new fossils. Like Van Arsdale, Rightmire has long studied the Dmanisi fossils, but he arrives at a different conclusion about the Koobi Fora remains:
“It’s my impression that the authors are on the money in attributing their material to the hypodigm including KNM-ER 1470. For a long time, this group was quite poorly documented and therefore enigmatic. The new facial parts duplicate some of the key features of the original (very distinctive) face, but at a much smaller scale. Sex dimorphism and individual variation within a single lineage seems to be the best explanation. Also, it’s clear that this lineage differs from specimens such as OH 13, OH 24, and KNM-ER 1813 (attributed to Homo habilis). I’d say that there is a good case for the presence of two distinct Homo lineages alongside Homo erectus. For me, an important question is which (if any) of these hominins took the first steps out of Africa, to establish settlements at localities such as Dmanisi. Our material is best described as early Homo erectus, but I think that the direct ancestors to the Dmanisi population were a more archaic form of Homo. The first hominins out of Africa may have been Homo habilis (not the group documented in the new paper). Or we may have to keep looking for an appropriate ancestor to Homo erectus. In any case, Homo erectus evolving in Asia probably dispersed only later (back) to Africa, and of course toward the Far East.”

About the Author: Kate Wong is an editor and writer at Scientific American covering paleontology, archaeology and life sciences. Follow on Twitter @katewong.
The views expressed are those of the author and are not necessarily those of Scientific American.

SOURCE:  SCIENTIFIC AMERICAN 





Quantum Teleportation Achieved over Record Distances

Opinion, arguments & analyses from the editors of Scientific American

 




Telescope used in teleportation experiments
The European Space Agency's Optical Ground Station on Tenerife in the Canary Islands was used as a receiver in recent quantum teleportation experiments. Credit: ESA
Two teams of researchers have extended the reach of quantum teleportation to unprecedented lengths, roughly equivalent to the distance between New York City and Philadelphia. But don’t expect teleportation stations to replace airports or train terminals—the teleportation scheme shifts only the quantum state of a single photon. And although part of the transfer happens instantaneously, the steps required to read out the teleported quantum state ensure that no information can be communicated faster than the speed of light.
Quantum teleportation relies on the phenomenon of entanglement, through which quantum particles share a fragile, invisible link across space. Two entangled photons, for instance, can have correlated, opposite polarization states—if one photon is vertically polarized, for instance, the other must be horizontally polarized. But, thanks to the intricacies of quantum mechanics, each photon’s specific polarization remains undecided until one of them is measured. At that instant the other photon’s polarization snaps into its opposing orientation, even if many kilometers have come between the entangled pair.
An entangled photon pair serves as the intermediary in the standard teleportation scheme. Say Alice wants to teleport the quantum state of a photon to Bob. First she takes one member of a pair of entangled photons, and Bob takes the other. Then Alice lets her entangled photon interfere with the photon to be teleported and performs a polarization measurement whose outcome depends on the quantum state of both of her particles.
Because of the link between Alice and Bob forged by entanglement, Bob’s photon instantly feels the effect of the measurement made by Alice. Bob’s photon assumes the quantum state of Alice’s original photon, but in a sort of garbled form. Bob cannot recover the quantum state Alice wanted to teleport until he reverses that garbling by tweaking his photon in a way that depends on the outcome of Alice’s measurement. So he must await word from Alice about how to complete the teleportation—and that word cannot travel faster than the speed of light. That restriction ensures that teleported information obeys the cosmic speed limit.
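To make the protocol above concrete, here is a minimal numerical sketch in Python (using numpy). It simulates the textbook single-qubit scheme just described; the function names, qubit ordering and test state are illustrative assumptions, not details of either experiment reported below. Note how Bob's qubit becomes a faithful copy only after he applies the correction dictated by Alice's two classical bits.

import numpy as np

I2 = np.eye(2)
X = np.array([[0, 1], [1, 0]], dtype=complex)
Z = np.array([[1, 0], [0, -1]], dtype=complex)
H = np.array([[1, 1], [1, -1]], dtype=complex) / np.sqrt(2)

def kron(*ops):
    out = np.array([[1.0 + 0j]])
    for op in ops:
        out = np.kron(out, op)
    return out

def teleport(psi, rng):
    """Teleport the one-qubit state psi (qubit 0, Alice) onto qubit 2 (Bob)."""
    bell = np.array([1, 0, 0, 1], dtype=complex) / np.sqrt(2)  # shared entangled pair
    state = np.kron(psi, bell)                  # 3-qubit state, qubit order 0, 1, 2

    # Alice's Bell-basis measurement: CNOT (0 -> 1), then H on qubit 0,
    # then a computational-basis measurement of qubits 0 and 1.
    cnot = np.array([[1, 0, 0, 0],
                     [0, 1, 0, 0],
                     [0, 0, 0, 1],
                     [0, 0, 1, 0]], dtype=complex)
    state = kron(H, I2, I2) @ (np.kron(cnot, I2) @ state)

    amps = state.reshape(2, 2, 2)               # indices: qubit 0, 1, 2
    probs = np.sum(np.abs(amps) ** 2, axis=2)   # P(m0, m1)
    outcome = rng.choice(4, p=probs.ravel() / probs.sum())
    m0, m1 = divmod(outcome, 2)                 # Alice's two classical bits

    # Bob's qubit, conditioned on Alice's outcome, before any correction.
    bob = amps[m0, m1, :].copy()
    bob /= np.linalg.norm(bob)

    # The "ungarbling" step: Bob's correction depends on the two bits Alice
    # sends over an ordinary, light-speed-limited channel.
    if m1:
        bob = X @ bob
    if m0:
        bob = Z @ bob
    return bob

rng = np.random.default_rng(0)
psi = np.array([0.6, 0.8j])                     # arbitrary normalized test state
out = teleport(psi, rng)
print(f"overlap |<psi|out>| = {abs(np.vdot(psi, out)):.6f}")   # prints 1.000000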
Even though teleportation does not allow superluminal communication, it does provide a detour around another physics blockade known as the no-cloning theorem. That theorem states that one cannot perfectly copy a quantum object to, for instance, send a facsimile to another person. But teleportation does not create a copy per se—it simply shifts the quantum information from one place to another, destroying the original in the process.
Teleportation can also securely transmit quantum information even when Alice does not know where Bob is. Bob can take his entangled particle wherever he pleases, and Alice can broadcast her instructions for how to ungarble the teleported state over whatever conventional channels—radio waves, the Internet—she pleases. That information would be useless to an eavesdropper without an entangled link to Alice.
Physicists note that quantum entanglement and teleportation could one day form the backbone of quantum channels linking hypothetical quantum processors or enabling secure communications between distant parties. But for now the phenomenon of teleportation is in the gee-whiz exploratory phase, with various groups of physicists devising new tests to push the limits of what is experimentally possible.
In the August 9 issue of Nature, a Chinese group reports achieving quantum teleportation across Qinghai Lake in China, a distance of 97 kilometers. (Scientific American is part of Nature Publishing Group.) That distance surpasses the previous record, set by a group that included several of the same researchers, of 16 kilometers.
But a more recent study seems to have pushed the bar even higher. In a paper posted May 17 to the physics preprint Web site arXiv.org, just eight days after the Chinese group announced their achievement on the same Web site, a European and Canadian group claims to have teleported information from one of the Canary Islands to another, 143 kilometers away. That paper has not been peer-reviewed but comes from a very reputable research group.
Both teams of physicists faced serious experimental challenges—sending a single photon 100 kilometers and then plucking it out of the air is no easy task. In practical terms, both groups’ Alices and Bobs needed laser-locked telescopes for sending and receiving their photons, as well as complex optics for modifying and measuring the photons’ quantum states.
But that’s nothing compared to what the physicists have in mind for future experiments. Both research groups note that their work is a step toward future space-based teleportation, in which quantum information would be beamed from the ground to an orbiting satellite.
About the Author: John Matson is an associate editor at Scientific American focusing on space, physics and mathematics. Follow on Twitter @jmtsn.
The views expressed are those of the author and are not necessarily those of Scientific American.

SOURCE: Scientific American

Can our future choices affect our past?

Sunday, August 5, 2012

Reference: PhysicsWorld.com
Author: Philip Ball, August 3, 2012

What you do today could affect what happened yesterday. That is the astonishing conclusion of a quantum-physics thought experiment described in a paper published by Yakir Aharonov and his colleagues at Tel Aviv University in Israel.

It sounds impossible, and indeed it seems to violate one of science's most cherished principles, causality. But the researchers say the rules of the quantum world conspire to preserve causality by "hiding" the influence of future choices until those choices have actually been made.

At the heart of the idea is the quantum phenomenon of nonlocality, in which two or more particles exist in interrelated, or "entangled", states that remain undetermined until a measurement is made on one of them. Once the measurement is carried out, the state of the other particle is instantly fixed as well, no matter how far away it is. Albert Einstein called this "action at a distance" in 1935, when he argued that quantum theory must be incomplete. Modern experiments have confirmed that this instantaneous action is in fact real, and it is now key to practical quantum technologies such as quantum computing and cryptography.

Aharonov and his collaborators describe the experiment for a large group of entangled particles. They claim that, under certain conditions, the experimenter's choice of which particle states to measure can be shown to affect the states the particles were in at an earlier time, when a looser measurement was made. In effect, the earlier "weak" measurement anticipates the choice made in the later "strong" measurement.

4D instead of 3D

The work builds on a way of thinking about entanglement called the two-state vector formalism (TSVF), proposed by Aharonov three decades ago. The TSVF considers correlations between particles in 4D spacetime rather than in 3D space. "In three dimensions it looks as though there is some miraculous influence between the two distant particles," says Aharonov's colleague Avshalom Elitzur of the Weizmann Institute of Science in Rehovot, Israel. "Taking spacetime as a whole, it is a continuous interaction extending between past and future events."

Aharonov and his team have now found a remarkable implication of the TSVF that bears on the question of what state a particle is in between two measurements, a quantum version of Einstein's famous puzzle about how we can be sure the Moon is there without looking at it. How can we learn anything about particles without measuring them? The TSVF shows that it is possible to get at this intermediate information by making sufficiently "weak" measurements of a large number of entangled particles, all prepared in the same way, and computing a statistical average.

Weak measurements

The theory of weak measurement (first proposed and developed by Aharonov and his group in 1988) holds that it is possible to measure a quantum system "gently" or "weakly" and obtain some information about one property (say, position) without appreciably disturbing a complementary property (momentum), and therefore without disturbing the future evolution of the system. Although the amount of information obtained in each measurement is tiny, averaging over many measurements gives a precise estimate of the property without distorting its final value.

Each weak measurement can tell us something about the probabilities of the different states (spin up or spin down, for example), albeit with a great deal of error, without actually collapsing the particles into definite states the way a strong measurement would. "The weak measurement changes the measured state and reports on the resulting localized state," Elitzur explains. "But it does this job very loosely, and the change it causes in the system is weaker than the information it provides."

As a result, Elitzur explains, "each weak measurement on its own tells us almost nothing. The measurements give reliable results only after all of them are tallied up. Then the errors cancel out and some information about the ensemble as a whole can be extracted."
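To see the statistical point in Elitzur's remark, here is a toy Python sketch. It models only the statistics of weak readouts (a two-valued spin plus large Gaussian readout noise, with an arbitrary noise scale), not the underlying quantum dynamics, and all names and numbers are illustrative assumptions.

import numpy as np

rng = np.random.default_rng(1)

n_particles = 100_000
true_expectation = 0.6            # assumed <spin> of the identically prepared state

# Each particle's sharp spin value, +1 or -1, with the assumed expectation value.
spins = rng.choice([1, -1], size=n_particles,
                   p=[(1 + true_expectation) / 2, (1 - true_expectation) / 2])

# A weak readout adds noise much larger than the signal itself, so a single
# outcome says almost nothing about a single particle.
noise_sigma = 20.0
weak_readouts = spins + rng.normal(0.0, noise_sigma, size=n_particles)

single_trial = weak_readouts[0]
ensemble_mean = weak_readouts.mean()
standard_error = noise_sigma / np.sqrt(n_particles)   # approximate

print(f"one weak readout:     {single_trial:+.2f}  (useless on its own)")
print(f"ensemble average:     {ensemble_mean:+.3f} +/- {standard_error:.3f}")
print(f"prepared expectation: {true_expectation:+.3f}")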

In the researchers' thought experiment, the results of these weak measurements agree with those of the later, stronger measurements, in which the experimenter freely chooses which spin orientation to measure, even though the particles' states are still undetermined after the weak measurements. What this means, Elitzur explains, is that within the TSVF "a particle between two measurements possesses the two states indicated by both of them, the past and the future."

Nature is restless

The catch is that only by adding in the information that comes later from the strong measurements can one reveal what the weak measurements were "really" saying. The information was there all along, but encrypted, and it is exposed only in retrospect. So causality is preserved, although not quite in the way we normally understand it. Why this censorship exists is unclear, except from an almost metaphysical perspective. "Nature is known to be very strict about never appearing inconsistent," Elitzur comments. "So it does not much appreciate overt backward causality, people killing their grandfathers and so on."

Elitzur says that some quantum-optics specialists have expressed interest in carrying out the experiment in the laboratory, and he thinks it should be no harder than previous entanglement studies.

Charles Bennett of IBM's T. J. Watson Research Center in Yorktown Heights, New York, a specialist in quantum information theory, is not convinced. He sees the TSVF merely as one way of visualizing results, and believes the results can be interpreted without any apparent "retrocausality", so that the authors are attacking a straw man. "To make their straw man look stronger, they use language that obscures the crucial difference between communication and correlation," he says. He adds that it is like a quantum cryptography experiment in which the sender gives the receiver the decryption key before sending (or even deciding on) the message, and then claims the key is somehow an "anticipation" of the message.

Nevertheless, Aharonov and his colleagues suspect their findings could even have consequences for free will. "Our group remains somewhat divided on these philosophical questions," says Elitzur. In Aharonov's view, the discussion "is somewhat Talmudic: everything you are going to do is already known to God, but you still have the choice."


- Source: The study is available on the arXiv preprint server.
- Translation by Bitnavegantes, Pedro Donaire
- Image: Future choices and consequences in the past?, PhysicsWorld.com

Tuesday, August 14, 2012


Study fingers climate change as cause of recent heat waves

By | August 13, 2012, 3:03 AM PDT


It’s long been a tenet of climate science that specific weather events cannot be attributed to climate change.
But a prominent NASA climate scientist and advocate for policies to combat climate change, James E. Hansen, has now come out with a paper that upends that piece of conventional scientific wisdom.
Using a statistical analysis, Dr. Hansen, the head of NASA’s Goddard Institute for Space Studies in Manhattan, and his colleagues found that specific heat waves of recent years – the Texas heat wave in 2011, the Russian heat wave in 2010 and the European heat wave of 2003 – were so out of line with what had traditionally been considered natural variability that they must have been directly caused by climate change.
Additionally, he published an op-ed in the Washington Post asserting that once this summer is over, the U.S. heat wave and drought will also likely be found to be a direct result of climate change.
“I don’t want people to be confused by natural variability — the natural changes in weather from day to day and year to year,” Dr. Hansen said in a press release. “We now know that the chances these extreme weather events would have happened naturally — without climate change — is negligible.”
The paper has divided climate scientists for two reasons: One is simply Dr. Hansen’s visible role as an advocate for action to mitigate climate change. The other is the fact that the paper did not rely on climate science to pinpoint climate change as the cause of these heat waves but simply on math and statistics.
Some felt he had come up with a new way to understand climate extremes and others asserted that he had simply put a new spin on old data.

The study

The paper, published in the Proceedings of the National Academy of Sciences, looked at how temperature varies within a season, and how that variability is changing.
Dr. Hansen and his co-authors, who included Makiko Sato, also a NASA scientist at the Columbia University’s Earth Institute, and Reto Ruedy, of Trinnovim LLC, which provides scientific support for NASA, looked at the climate variability of the years from 1951 to 1980, and contrasted that against the variability of the years since.
They compared how much of the earth’s land surface was under what would have been considered extreme heat from June through August of each year.
  • From 1951 to 1980, only 0.2% of land surface experienced these heat waves.
  • But in the years 2006 to 2011, 4% to 13% of land surfaces experienced extreme heat.
They concluded that the heat waves during those years would not have occurred were it not for the greenhouse gases warming our planet.
Therefore, the paper did not show, using climate science, how climate change caused each of those events, but instead looked at the likelihood of these events happening without the presence of climate change and decided that climate change was the only plausible cause for them.
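The kind of bookkeeping behind those percentages can be sketched in a few lines of Python. The data here are synthetic and the grid, threshold and variable names are illustrative assumptions, not the paper's code: normalize each location's summer temperature against its 1951-1980 mean and variability, then measure the area-weighted fraction of land beyond a 3-sigma anomaly.

import numpy as np

rng = np.random.default_rng(2)

n_cells = 10_000                          # stand-in land grid cells
baseline_years = 30                       # 1951-1980
area = rng.uniform(0.5, 1.5, n_cells)     # stand-in for cell areas

# Baseline summers: each cell has its own climatological mean and variability.
clim_mean = rng.normal(22.0, 8.0, n_cells)
clim_std = rng.uniform(0.5, 2.0, n_cells)
baseline = clim_mean + clim_std * rng.normal(size=(baseline_years, n_cells))

mu = baseline.mean(axis=0)
sigma = baseline.std(axis=0, ddof=1)

def hot_area_fraction(summer_temps, threshold_sigma=3.0):
    """Area-weighted fraction of cells hotter than `threshold_sigma` anomalies."""
    z = (summer_temps - mu) / sigma
    hot = z > threshold_sigma
    return area[hot].sum() / area.sum()

# A baseline-like summer versus one shifted 1 sigma warmer everywhere.
summer_1970s = clim_mean + clim_std * rng.normal(size=n_cells)
summer_2010s = clim_mean + clim_std * (1.0 + rng.normal(size=n_cells))

print(f"baseline-like summer:  {hot_area_fraction(summer_1970s):.4%} of land above 3 sigma")
print(f"1-sigma-warmer summer: {hot_area_fraction(summer_2010s):.4%} of land above 3 sigma")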

Good science? Or spin?

Scientists were split on whether they agreed with the way Dr. Hansen and his co-authors reached his conclusions.
Andrew Weaver, a climate scientist at the University of Victoria in British Columbia, told the Associated Press that the study reframes the way we think about extreme weather in relation to climate change:
“Rather than say, ‘Is this because of climate change?’ That’s the wrong question. What you can say is, ‘How likely is this to have occurred with the absence of global warming?’ It’s so extraordinarily unlikely that it has to be due to global warming.”
But not all scientists were so convinced. Peter Stott of the U.K. Met Office, who co-authored a landmark study on the 2003 European heat wave, had previously found that global warming made it much more likely this type of heat wave would occur, but still attributed some of the cause to natural variability.
Other scientists not involved in the study pointed out that the increase in heat extremes demonstrates an overall shift toward warmer global average temperatures, not a change in climate variability.
“The one stretch in the paper is in the linking of the increase in areal extremes to an increase in climate variability,” Gavin Schmidt, a climate scientist who works with Dr. Hansen at GISS, told Climate Central.
What do you think? Did Dr. Hansen come up with a new way to understand climate variability or did he just tell a new tale using old data?

Related on SmartPlanet:

via: The Washington Post, The New York Times, Proceedings of the National Academy of Sciences, Climate Central
photo: DI91m/Wikimedia


Source: SMART PLANET

Saturday, August 11, 2012

Posted: 09 Aug 2012 09:07 AM PDT
Reference: ThunderBolts.info
Author: Stephen Smith, August 6, 2012

Astronomers continue to cling to outdated theories of star formation.

On May 19, 2009, the European Space Agency (ESA) launched the Planck telescope platform into an orbit around the L2 Lagrange point. Planck is designed to analyze the cosmic microwave background radiation (CMBR) more precisely than its predecessors. Foreground radiation, which is said to interfere with precise measurements, would be subtracted from the CMBR data, revealing "[...] information encoded about what our universe is made of and about the origin of its structure."

Image caption (Wikipedia): Potential contours in a two-body system (here the Sun and Earth), showing the five Lagrange points. The arrows indicate the slopes around the L points, toward or away from them. Counterintuitively, the L4 and L5 points are maxima.

One source of foreground radiation is cold clouds of molecular hydrogen, which are believed to be where stars are born. Because these hydrogen clouds in deep space are close to absolute zero (-273.15 °C), they are not easy to detect. What Planck does instead is target another molecule that radiates more strongly: carbon monoxide. Astronomers believe the carbon monoxide is mixed into the hydrogen clouds that accumulate through gravitational attraction.

The prevailing theories declare that the Universe began as a sea of undifferentiated energetic "particles" in a creation event known as the Big Bang. Once there was nothing, and then it exploded into everything. How and why this happened is neither explained nor known.

Despite the irony inherent in that statement, particles supposedly came into existence as the explosion expanded and cooled. Those particles are the protons and electrons that combined into what we know as matter, mainly hydrogen gas. Since hydrogen consists of one proton and one electron, it is thought to be one of the first atoms, along with helium. So where does the carbon monoxide come from?

According to the consensus theories, the primordial hydrogen and helium immediately began to coalesce in vast "stellar nurseries", where swirls of gas condensed into spherical shapes. These spheres grew and became hotter and hotter, making the atoms inside them move ever faster. Finally, the compression became so intense that their cores generated thermonuclear energy. This idea is known as the nebular hypothesis, originally proposed by Pierre Simon de Laplace in 1796.

Astrophysical theories hold that, as that first population of stars fused its hydrogen into heavier elements, their cores became unstable because of the growing concentrations of more massive atoms. Once a star accumulates enough thermonuclear material it suffers a catastrophic implosion, since the nuclear reactions can no longer hold back gravitational contraction. The star's outer surface collapses inward at tremendous speed, rebounding off the dense core material. The star then blasts outward in a supernova explosion, throwing its outer layers into space.

Carbon and oxygen are said to exist because they were forged in the very cores of those first stars. If that is so, the carbon monoxide in our galaxy exists because countless first-population stars have blasted it throughout the Universe over untold eons, scattering their remains across millions of light-years.

Astrophysicists continue to puzzle over the mystery of why some stars accrete more mass during their gestation than is theoretically possible. Since they are supposed to generate more radiation as they condense out of their parent gas cloud than their structure can withstand, the envelope of gas around them should be blown away before enough matter can condense.

The most likely reason those protostars do not obey conventional theory is that they are not what astronomers think they are.

The Electric Universe hypothesis, applied to these stars, proposes that plasma and electric fields in space play a far more important role than kinetic activity (hot gas). The radiant emissions that Planck observes are, in this view, the result of electric currents.

What Planck sees are filamentary dust clouds containing charged ions, called plasma. Plasma clouds moving within one another generate electric currents. Whenever those electric currents flow through the spiraling plasma streams in the filaments, they attract each other. However, because of the short-range repulsion of their electromagnetic fields, instead of merging they spiral around one another in pairs of "Birkeland currents".

As the energy density increases, the filaments enter "glow mode", while the magnetic flux density draws in matter from the surrounding space. Over time, they form a series of bright plasma globes, like beads on a necklace.

That, in this view, is how stars are born. Gravity is a weak force compared with an electric field acting on ionized particles. Although gravity plays its part in stellar evolution (drifting ions to one side of a plasma cloud, for example), electricity, not gravity, is without doubt the stellar progenitor in this picture.

- Original title: "Protostar Expostulation"
- Video: Planck launch.
- Site: SpaceSpin.org, "Coolest spacecraft ever in orbit around L2".
- Site: ESA - Planck.

SOURCE: Bitnavegantes, Pedro Donaire

Friday, August 10, 2012

How Do You Count Parallel Universes? You Can’t Just Go 1, 2, 3, …



Cosmologists have been thinking for years that our universe might be just one bubble amid countless bubbles floating in a formless void. And when they say “countless,” they really mean it. Those universes are damned hard to count. Angels on a pin are nothing to this. There’s no unambiguous way to count items in an infinite set, and that’s no good, because if you can’t count, you can’t calculate probabilities, and if you can’t calculate probabilities, you can’t make empirical predictions, and if you can’t make empirical predictions, you can’t look anyone in the eye at scientist wine-and-cheese parties. In a Sci Am article last year, cosmologist Paul Steinhardt argued that this counting crisis, or “measure problem,” is reason to doubt the theory that predicts bubble universes.
Other cosmologists think they just need to learn how to count better. In April I went to a talk by Leonard Susskind (silhouetted in the photo above), who has been arguing for a decade that you don’t need to count all the parallel universes, just those that are capable of affecting you. Forget the causally disconnected ones and you might have a shot at recovering your empiricist credentials. “Causal structure is, I think, all important,” Susskind said. He presented a study he did last year with three other Stanford physicists, Daniel Harlow, Steve Shenker, and Douglas Stanford. I didn’t follow everything he said, but I was enamored of a piece of mathematics he invoked, known as p-adic numbers. As I began to root around, I discovered that these numbers have inspired an entire subfield within fundamental physics, involving not just parallel universes but also the arrow of time, dark matter, and the possible atomic nature of space and time.
Lest you think that the whole notion of parallel universes was ill-starred to begin with, cosmologists have good cause to think our universe is just one member of a big dysfunctional family. The universe we see is smooth and uniform on its largest scales, yet it hasn’t been around long enough for any ordinary process to have homogenized it. It must have inherited its smoothness and uniformity from an even larger, older system, a system permeated with dark energy that drives space to expand rapidly and evens it out—the process known as cosmic inflation. Dark energy also destabilizes the system and causes universes to nucleate out like raindrops in a cloud. Voilà, our universe.
Other bubbles are nucleating all the time. Each gains its own endowment of dark energy and can give rise to new bubbles—bubbles within bubbles within bubbles, an endless cosmic effervescence. Even our universe has a dab of dark energy and can birth new bubbles. The space between the baby bubbles expands, keeping them isolated from one another. A bubble has contact only with its parent.
The process produces a family tree of universes. The tree is a fractal: no matter how closely you zoom in, it looks the same. In fact, the tree is a dead ringer for one of the most famous fractals of all, the Cantor set.
In a simplified case, if you start with a single universe, by the Nth generation, you have 2^N of them. You label each universe by a binary number giving its position in the structure. After the first bubble nucleation, you have two universes, the inside and outside of the bubble: 0 and 1. In the first generation, universe 0 spawns 00 and 10, and universe 1 spawns 01 and 11. Then, universe 00 gives birth to 000 and 100, and so it goes.
The process goes on forever, approaching a continuum of universes (the red line at the top of the diagram) indexed by numbers with an infinity of bits. The fun thing is that these numbers are not standard-issue infinite-digit numbers like 1.414… (√2) or 3.1415… (π), which mathematicians call “real” numbers—the ones you find on a grade-school number line. Instead they are so-called 2-adic numbers with very different mathematical properties. In a more general setup, each universe could fork into p universes rather than just two, hence the general term p-adic.
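A few lines of Python make the labeling scheme explicit (the function name and starting labels are illustrative): each universe spawns two children by prepending a bit on the left, so the coarse, ancestral bits stay on the right.

def next_generation(universes):
    """Each universe spawns two children by prepending a bit on the left."""
    return [bit + u for u in universes for bit in ("0", "1")]

generation = ["0", "1"]           # inside and outside of the first bubble
for n in range(1, 4):
    generation = next_generation(generation)
    print(f"generation {n}: {len(generation)} universes, e.g. {generation[:4]}")
# generation 1: 4 universes, e.g. ['00', '10', '01', '11']
# generation 2: 8 universes, e.g. ['000', '100', '010', '110']
# generation 3: 16 universes, e.g. ['0000', '1000', '0100', '1100']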
Mathematicians came up with p-adic numbers in the late 19th century as an alternative way, besides real numbers, to fill in the spaces between integers and integer fractions to make an uninterrupted block of numbers. In fact, Russian mathematician Alexander Ostrowski showed that p-adics are the only alternative to the reals.
Unfortunately, mathematicians have done a good job of smothering the beauty beneath formal definitions, theorems, lemmas, and corollaries that dot every ‘i’ but never tell you what they’re spelling out. (My mathematician friends, too, complain that math texts are as compelling to read as software license agreements.) It wasn’t until I heard Susskind’s description in terms of counting parallel universes that I had a clue what p-adics were or appreciated their sheer awesomeness.
What differentiates p-adics from reals is how distance is defined. For them, distance is the degree of consanguinity: two p-adics are close by virtue of having a recent common ancestor in their family tree. Numerically, if two points have a common ancestor in the Nth generation, those points are separated by a distance of 1/2^N. For instance, to find a common ancestor of the numbers 000 and 111, you have to go all the way back to the root of the tree (N=0). Thus these numbers are separated by a distance of 1—the full width of the multiverse. For the numbers 000 and 110, the most recent common ancestor is the first generation (N=1), so the distance is 1/2. For 000 and 100, the distance is 1/4.
To put it another way, if someone gives you two p-adic numbers, you determine the distance between them using the following procedure. Line them up, one on top of the other. Compare the rightmost bits. If they’re different, stop! You’re done. The distance is 1. If they’re the same, shift to the left and compare the next bits over. If they’re different, stop! The distance is 1/2. Keep going until you find the first bit that is different. This bit—and none other—determines the distance.
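As a minimal sketch of that procedure (my own illustration, not anything from Susskind's paper), here is the comparison rule in Python, with each universe addressed by a binary string whose rightmost bit is the one compared first:

def two_adic_distance(a: str, b: str) -> float:
    """Distance between two universes labeled by equal-length binary strings.
    Walk leftward from the rightmost bit; the first mismatch, at position k
    (counting from 0 on the right), gives a distance of 1 / 2**k."""
    assert len(a) == len(b), "compare labels from the same generation"
    for k in range(len(a)):
        if a[-1 - k] != b[-1 - k]:
            return 1 / 2**k          # stop at the first differing bit
    return 0.0                       # identical labels: the same universe

# The worked examples from the text:
print(two_adic_distance("000", "111"))   # 1.0  (rightmost bits already differ)
print(two_adic_distance("000", "110"))   # 0.5
print(two_adic_distance("000", "100"))   # 0.25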
This distance rule messes with your mind. Two parallel universes that look nearby can be far apart because they lie on different branches of the tree. Likewise, two points that look far apart might be nearby. In the figure at left, universe ‘B’ is closer to universe ‘C’ than to ‘A’. What is more, the number 100 is smaller than the number 10, since it is closer to the far left side of the multiverse. With p-adics, you gain precision by adding digits to the left side of the number rather than to the right. Accordingly, mathematician Andrew Rich and undergraduate Matthew Bauman have dubbed them “leftist numbers.”
p-adics can be added, subtracted, multiplied, and divided like any other self-respecting number, but their leftist proclivities change the rules and make arithmetic unexpectedly easier. To add two p-adics, you start with the most significant digit (on the right) and add them one by one toward the least significant digits (on the left). With reals, on the other hand, you start with the least significant digit, and you’re out of luck if you have a number such as π with an infinite number of digits.
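Here is a small illustration of that right-to-left arithmetic, under the same string convention as the sketch above (finite labels only; a true 2-adic number keeps growing to the left): carries move leftward, and anything carried past the available digits simply falls off the least significant end.

def two_adic_add(a: str, b: str) -> str:
    """Add two 2-adic integers given as equal-length bit strings whose
    rightmost digit is the most significant one, p-adically speaking."""
    assert len(a) == len(b)
    out, carry = [], 0
    for k in range(len(a)):              # start at the rightmost digit
        s = int(a[-1 - k]) + int(b[-1 - k]) + carry
        out.append(str(s % 2))
        carry = s // 2                   # the carry moves one place left
    return "".join(reversed(out))        # any leftover carry is discarded

print(two_adic_add("011", "001"))   # '100' -- 3 + 1 = 4, just as in ordinary binary
print(two_adic_add("111", "001"))   # '000' -- the final carry falls off the left edge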
The weirdness doesn’t stop there. Consider three p-adic numbers. You can think of them as the three corners of a triangle. Oddly, at least two sides of the triangle must have the same length; p-adics, unlike reals, don’t give you the liberty to make the sides all different. The reason is evident from the tree diagram: there is only one path from one number to the other two numbers, hence at most two common ancestors, hence at most two different lengths. In the jargon, p-adics are “ultrametric.” On top of that, distance is always finite. There are no p-adic infinitesimals, or infinitely small distances, such as the dx and dy you see in high-school calculus. In the argot, p-adics are “non-Archimedean.” Mathematicians had to cook up a whole new type of calculus for them.
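The isosceles-triangle rule is easy to check numerically. The little experiment below (again, just an illustration) draws random labels and confirms that the two largest of the three pairwise distances always coincide.

from itertools import combinations
from random import choice

def dist(a, b):
    """2-adic distance between equal-length labels, as defined above."""
    k = next((k for k in range(len(a)) if a[-1 - k] != b[-1 - k]), None)
    return 0.0 if k is None else 1 / 2**k

for _ in range(5):
    labels = ["".join(choice("01") for _ in range(6)) for _ in range(3)]
    sides = sorted(dist(x, y) for x, y in combinations(labels, 2))
    # Ultrametric check: the two longest "sides" of the triangle are equal.
    print(labels, sides, "->", "isosceles" if sides[1] == sides[2] else "?!")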
Prior to the multiverse study, non-Archimedeanness was the main reason physicists had taken the trouble to decipher those mathematics textbooks. Theorists think that the natural world, too, has no infinitely small distances; there is some minimal possible distance, the Planck scale, below which gravity is so intense that it renders the entire notion of space meaningless. Grappling with this granularity has always vexed theorists. Real numbers can be subdivided all the way down to geometric points of zero size, so they are ill-suited to describing a granular space; attempting to use them for this purpose tends to spoil the symmetries on which modern physics is based.
By rewriting their equations using p-adics instead, theorists think they can capture the granularity in a consistent way, as Igor Volovich of the Steklov Mathematical Institute in Moscow argued in 1987. The resulting dynamics might even explain dark matter and the mechanics of cosmic inflation.
Naturally, having found a new toy to play with, physicists immediately wonder how to break it. Susskind and his colleagues took the tree of parallel universes, lopped off some of its branches, and figured out how it would deform the p-adics. Those pruned branches represented infertile baby universes: those born with zero dark energy or a negative density of the stuff. Just as pruning a real tree might seem destructive but actually helps it to grow, pruning the tree of universes mucks up its symmetry but does so in a good cause: it explains, the team argued, why time is unidirectional—why the past is different from the future.
p-adics are a case study of how a concept mathematicians invented for its own beauty might turn out to have something to do with the real world. What a bonus that they may be more real than the reals.
Photograph courtesy of Gary Smaby. Bubble figure courtesy of George Musser. Tree figures courtesy of Daniel Harlow, Stanford University.
About the Author: George Musser is a contributing editor at Scientific American. He focuses on space science and fundamental physics, ranging from particles to planets to parallel universes. He is the author of The Complete Idiot's Guide to String Theory. Musser has won numerous awards in his career, including the American Institute of Physics' 2011 Science Writing Award. Follow on Twitter @gmusser.
The views expressed are those of the author and are not necessarily those of Scientific American.
SOURCE: Scientific American

lunes, 6 de agosto de 2012


Is Death An Illusion? Evidence Suggests Death Isn’t the End

Photo of Light

Robert Lanza, MD 


BIOCENTRISM

CHIEF SCIENTIFIC OFFICER OF ADVANCED CELL TECHNOLOGY


Author: Robert Lanza, M.D.
SOURCE: http://www.robertlanzabiocentrism.com/is-death-an-illusion-evidence-suggests-death-isnt-the-end/



After the death of his old friend, Albert Einstein said “Now Besso has departed from this strange world a little ahead of me. That means nothing. People like us … know that the distinction between past, present and future is only a stubbornly persistent illusion.”
New evidence continues to suggest that Einstein was right – death is an illusion.
Our classical way of thinking is based on the belief that the world has an objective observer-independent existence. But a long list of experiments shows just the opposite. We think life is just the activity of carbon and an admixture of molecules – we live awhile and then rot into the ground.
We believe in death because we’ve been taught we die. Also, of course, because we associate ourselves with our body and we know bodies die. End of story. But biocentrism – a new theory of everything – tells us death may not be the terminal event we think. Amazingly, if you add life and consciousness to the equation, you can explain some of the biggest puzzles of science. For instance, it becomes clear why space and time – and even the properties of matter itself – depend on the observer. It also becomes clear why the laws, forces, and constants of the universe appear to be exquisitely fine-tuned for the existence of life.
Until we recognize the universe in our heads, attempts to understand reality will remain a road to nowhere.
Consider the weather ‘outside’: You see a blue sky, but the cells in your brain could be changed so the sky looks green or red. In fact, with a little genetic engineering we could probably make everything that is red vibrate or make a noise, or even make you want to have sex, as red does with some birds. You think it’s bright out, but your brain circuits could be changed so it looks dark out. You think it feels hot and humid, but to a tropical frog it would feel cold and dry. This logic applies to virtually everything. Bottom line: What you see could not be present without your consciousness.
In truth, you can’t see anything through the bone that surrounds your brain. Your eyes are not portals to the world. Everything you see and experience right now – even your body – is a whirl of information occurring in your mind. According to biocentrism, space and time aren’t the hard, cold objects we think. Wave your hand through the air – if you take everything away, what’s left? Nothing. The same thing applies for time. Space and time are simply the tools for putting everything together.
Consider the famous two-slit experiment. When scientists watch a particle pass through two slits in a barrier, the particle behaves like a bullet and goes through one slit or the other. But if you don’t watch, it acts like a wave and can go through both slits at the same time. So how can a particle change its behavior depending on whether you watch it or not? The answer is simple – reality is a process that involves your consciousness.
Or consider Heisenberg’s famous uncertainty principle. If there is really a world out there with particles just bouncing around, then we should be able to measure all their properties. But you can’t. For instance, a particle’s exact location and momentum can’t be known at the same time. So why should it matter to a particle what you decide to measure? And how can pairs of entangled particles be instantaneously connected on opposite sides of the galaxy as if space and time don’t exist? Again, the answer is simple: because they’re not just ‘out there’ – space and time are simply tools of our mind.
Death doesn’t exist in a timeless, spaceless world. Immortality doesn’t mean a perpetual existence in time, but resides outside of time altogether.
Our linear way of thinking about time is also inconsistent with another series of recent experiments. In 2002, scientists showed that particles of light (“photons”) knew – in advance – what their distant twins would do in the future. They tested the communication between pairs of photons. They let one photon finish its journey – it had to decide whether to be either a wave or a particle. Researchers stretched the distance the other photon had to travel to reach its own detector, and along that path they could add a scrambler to prevent it from collapsing into a particle. Somehow, the first photon knew what the researcher was going to do before it happened, and across distances instantaneously, as if there were no space or time between them. The photons decide not to become particles before their twins even encounter the scrambler. It doesn’t matter how we set up the experiment: our mind and its knowledge is the only thing that determines how they behave. Experiments consistently confirm these observer-dependent effects.
Bizarre? Consider another experiment that was recently published in the prestigious scientific journal Science (Jacques et al, 315, 966, 2007). Scientists in France shot photons into an apparatus, and showed that what they did could retroactively change something that had already happened in the past. As the photons passed a fork in the apparatus, they had to decide whether to behave like particles or waves when they hit a beam splitter. Later on – well after the photons passed the fork – the experimenter could randomly switch a second beam splitter on and off. It turns out that what the observer decided at that point determined what the particle actually did at the fork in the past. At that moment, the experimenter chose his past.
Of course, we live in the same world. Critics claim this behavior is limited to the microscopic world. But this ‘two-world’ view (that is, one set of physical laws for small objects, and another for the rest of the universe including us) has no basis in reason and is being challenged in laboratories around the world. A couple of years ago, researchers published a paper in Nature (Jost et al, 459, 683, 2009) showing that quantum behavior extends into the everyday realm. Pairs of vibrating ions were coaxed to entangle so their physical properties remained bound together when separated by large distances (“spooky action at a distance,” as Einstein put it). Other experiments with huge molecules called ‘Buckyballs’ also show that quantum reality extends beyond the microscopic world. And in 2005, KHCO3 crystals exhibited entanglement ridges one-half inch high, quantum behavior nudging into the ordinary world of human-scale objects.
We generally reject the multiple universes of Star Trek as fiction, but it turns out there is more than a morsel of scientific truth to this popular genre. One well-known aspect of quantum physics is that observations can’t be predicted absolutely. Instead, there is a range of possible observations each with a different probability. One mainstream explanation, the “many-worlds” interpretation, states that each of these possible observations corresponds to a different universe (the ‘multiverse’). There are an infinite number of universes and everything that could possibly happen occurs in some universe. Death does not exist in any real sense in these scenarios. All possible universes exist simultaneously, regardless of what happens in any of them.
Life is an adventure that transcends our ordinary linear way of thinking. When we die, we do so not in the random billiard-ball-matrix but in the inescapable-life-matrix. Life has a non-linear dimensionality – it’s like a perennial flower that returns to bloom in the multiverse.
“The influence of the senses,” said Ralph Waldo Emerson, “has in most men overpowered the mind to the degree that the walls of space and time have come to look solid, real and insurmountable; and to speak with levity of these limits in the world is the sign of insanity.”
Robert Lanza has published extensively in leading scientific journals. His book “Biocentrism” lays out the scientific argument for his theory of everything.
The Most Amazing Experiment
From Biocentrism (Robert Lanza and Bob Berman)
Quantum theory has unfortunately become a catch-all phrase for trying to prove various kinds of New Age nonsense. It’s unlikely that the authors of the many books making wacky claims of time-travel or mind-control, and who use quantum theory as “proof,” have the slightest knowledge of physics or could explain even the rudiments of QT. The popular 2004 film What the Bleep Do We Know? is a good case in point. The movie starts out claiming quantum theory has revolutionized our thinking – which is true enough – but then, without explanation or elaboration, goes on to say that it proves people can travel into the past or “choose which reality you want.”
QT says no such thing. QT deals with probabilities, and the likely places particles may appear, and likely actions they will take. And while, as we shall see, bits of light and matter do indeed change behavior depending on whether they are being observed, and measured particles do indeed appear to amazingly influence the past behavior of other particles, this does not in any way mean that humans can travel into their past or influence their own history.
Given the widespread generic use of QT, plus the paradigm-changing tenets of biocentrism, using QT as evidence might raise eyebrows among the skeptical. For this reason, it’s important that readers have some genuine understanding of QT’s actual experiments — and can grasp the real results rather than the preposterous claims so often associated with it. For those with a little patience, this chapter can provide a life-altering understanding of the latest version of one of the most famous and amazing experiments in the history of physics.
The astonishing “double-slit” experiment, which has changed our view of the universe – and serves to support biocentrism — has been performed repeatedly for many decades. This specific version summarizes an experiment published in Physical Review A, (65, 033818) in 2002. But it’s really merely another variation, a tweak to a demonstration that has been performed again and again for three quarters of a century.
It all really started early in the 20th century when physicists were still struggling with a very old question – whether light is made of particles called photons, or whether it is instead made of waves of energy. Isaac Newton believed “particles.” But by the late 19th century, waves seemed more reasonable. In those early days, some physicists presciently and correctly thought that even solid objects might have a “wave nature” as well.
To find out, we use a source of either light or particles. In the classic double-slit experiment, the particles are usually electrons, since they are small, fundamental (they can’t be divided into anything else) and easy to beam at a distant target. A classic TV set, for example, directs electrons at the screen. We start by aiming light at a detector wall. First, however, the light must pass through an initial barrier with two holes. We can shoot a flood of light or just a single indivisible photon at a time – the results remain the same. Each bit of light has a 50-50 chance of going through the right or the left slit. After a while, all these photon-bullets will logically create a pattern – falling preferentially in the middle of the detector with fewer on the fringes, since most paths from the light source go more-or-less straight ahead. The laws of probability say that we should see a cluster of hits like this:
When plotted on a graph (in which number of hits is vertical, and position on the detector screen horizontal) the expected result for a barrage of particles is to indeed have more hits in the middle and fewer near the edges, which produces a curve like this:
But that’s not the result we actually get. When experiments like this are performed – and they have been done thousands of times during the past century – we find that the bits of light instead create a curious pattern:
Plotted on a graph, the pattern’s “hits” look like this:
In theory, those smaller side peaks around the main one should be symmetrical. In practice, we’re dealing with probabilities and individual bits of light, so the result usually deviates a bit from the ideal. Anyway, the big question here is: Why this pattern?
Turns out, it’s exactly what we’d expect if light is made of waves, not particles. Waves collide and interfere with each other, causing ripples. If you toss two pebbles into a pond at the same time, the waves from each meet and produce places where the water rises higher than normal and places where it dips lower than normal. Some waves reinforce each other, or, if one’s crest meets another’s trough, they cancel out at that spot.
So this early 20th-century result of an interference pattern, which can only be caused by waves, showed physicists that light is a wave, or at least acts that way when this experiment is performed. The fascinating thing is that when solid physical bodies like electrons were used, they got exactly the same result. Solid particles have a wave-nature too! So, right from the get-go, the double slit experiment yielded amazing information about the nature of reality. Solid objects have a wave nature!
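To make the contrast between the two outcomes concrete, here is a rough numerical sketch using the standard textbook two-slit intensity formula; the wavelength, slit separation, and slit width below are made-up values chosen only so the fringes show up clearly, not numbers from any particular experiment.

import numpy as np

lam = 500e-9        # wavelength of the light (m) -- illustrative value
d   = 20e-6         # separation between the two slits (m)
a   = 5e-6          # width of each slit (m)
L   = 1.0           # distance from the slits to the detector screen (m)

x = np.linspace(-0.09, 0.09, 2001)      # positions along the detector screen (m)
theta = x / L                            # small-angle approximation

envelope = np.sinc(a * theta / lam) ** 2                    # one-slit diffraction envelope
waves    = envelope * np.cos(np.pi * d * theta / lam) ** 2  # both slits open, interfering
bullets  = envelope                                         # classical expectation: a single
                                                            # smooth hump, no cross term

def count_peaks(y):
    """Count interior local maxima of a sampled curve."""
    return int(np.sum((y[1:-1] > y[:-2]) & (y[1:-1] > y[2:])))

print("bright fringes with both slits open :", count_peaks(waves))    # several
print("humps expected for classical bullets:", count_peaks(bullets))  # just one

Plotting waves against x reproduces the fringed curve described above, while bullets gives only the single central bump.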
Unfortunately, or fortunately, this was just the appetizer. Few realized that true strangeness was only beginning. The first oddity happens when just one photon or electron is allowed to fly through the apparatus at a time. After enough have gone through and been individually detected, this same interference pattern emerges. But how can this be? With what is each of those electrons or photons interfering? How can we get an interference pattern when there’s only one indivisible object in there at a time?
A single photon hits the detector.
A second photon hits the detector.
A third photon hits the detector.
Somehow, these individual photons add up to an interference pattern!
There has never been a truly satisfactory answer for this. Wild ideas keep emerging. Could there be other electrons or photons “next door” in a parallel universe, from another experimenter doing the same thing? Could their electrons be interfering with ours? That’s so far-fetched, few believe it.
The usual interpretation of why we see an interference pattern is that photons or electrons have two choices when they encounter the double slit. They do not actually exist as real entities in real places until they are observed, and they aren’t observed until they hit the final detection barrier. So when they reach the slits, they exercise their probabilistic freedom of taking both choices. Even though actual electrons or photons are indivisible, and never split themselves under any conditions whatsoever, their existence as “probability waves” is another story. Thus, what goes “through the slits” is not an actual entity but a set of probabilities. THE PROBABILITY WAVES OF THE INDIVIDUAL PHOTONS INTERFERE WITH THEMSELVES! When enough have gone through, we see the overall interference pattern as all probabilities congeal into actual entities making impacts and being observed – as waves.
Sure it’s weird, but this, apparently, is how reality works. And this is just the very beginning of Quantum Weirdness. QT, as we mentioned last chapter, has a principle called complementarity which says that we can observe objects to be one thing or another – or have one position or property or another, but never both. It depends on what one is looking for, and what measuring equipment is used.
Now, suppose we wish to know which slit a given electron or photon has gone through, on its way to the barrier. It’s a fair enough question, and it’s easy enough to find out. We can use polarized light (meaning light whose waves vibrate either horizontally or vertically or else slowly rotate their orientation) and when such a mixture is used, we get the same result as before. But now let’s determine which slit each photon is going through. Many different things have been used, but in this experiment we’ll use a “quarter wave plate” in front of each slit. Each quarter wave plate alters the polarity of the light in a specific way. The detector can let us know the polarity of the incoming photon. So by noting the polarity of the photon when it’s detected, we know which slit it went through.
Now we repeat the experiment, shooting photons through the slits one at a time, except this time we know which slit each photon goes through. Now the results dramatically change. Even though QWPs do not alter photons except for harmlessly shifting their polarities (later we prove that this change in results is not caused by the QWPs), we no longer get the interference pattern. The curve suddenly changes to what we’d expect if the photons were particles:
Something’s happened. Turns out, the mere act of measurement, of learning the path of each photon, destroyed the photon’s freedom to remain blurry and undefined and take both paths until it reached the barriers. Its “wave function” must have collapsed at our measuring device, the QWPs, as it instantly “chose” to become a particle and go through one slit or the other. Its wave nature was lost as soon as it lost its blurry probabilistic not-quite-real state. But why should the photon have chosen to collapse its wave-function? How did it know that we, the observer, could learn which slit it went through?
Countless attempts to get around this, by the greatest minds of the past century, have all failed. Our knowledge of the photon or electron path alone caused it to become a definite entity sooner than it otherwise would have. Of course physicists also wondered whether this bizarre behavior might be caused by some interaction between the “which-way” QWP detector or various other devices that have been tried, and the photon. But no. Totally different which-way detectors have been built, none of which in any way disturbs the photon. Yet we always lose the interference pattern. The bottom line conclusion, reached after many years, is that it’s simply not possible to gain which-way information and still see the interference pattern caused by energy waves.
We’re back to QT’s complementarity – that you can measure and learn just one of a pair of characteristics, but never both at the same time. If you fully learn about one, you will know nothing about the other. And just in case you’re suspicious of the quarter wave plates, let it be said that when used in all other contexts, including double-slit experiments without information-providing polarization-detecting barriers at the end, the mere act of changing a photon’s polarization never has the slightest effect on the creation of an interference pattern.
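For readers who like to see the bookkeeping, the standard way to express this is that the screen pattern contains a cross term multiplied by the overlap of the two path-marker states; orthogonal markers (full which-way information) make that overlap zero and the fringes vanish, with no "disturbance" required. The toy model below is my own simplified sketch of that statement, not the actual QWP optics of the experiment.

import numpy as np

lam, d, L = 500e-9, 20e-6, 1.0                 # illustrative numbers again
x = np.linspace(-0.1, 0.1, 1001)               # positions on the screen (m)
k = 2 * np.pi / lam

# Far-field amplitudes reaching the screen from slit 1 and slit 2 (phases only).
psi1 = np.exp(+1j * k * d * x / (2 * L))
psi2 = np.exp(-1j * k * d * x / (2 * L))

def pattern(marker_overlap):
    """Screen intensity when each path is tagged with a marker state;
    marker_overlap = <m1|m2>: 1 for identical tags, 0 for orthogonal tags."""
    cross = 2 * np.real(np.conj(psi1) * psi2) * marker_overlap
    return np.abs(psi1) ** 2 + np.abs(psi2) ** 2 + cross

def visibility(y):
    return (y.max() - y.min()) / (y.max() + y.min())

print("fringe visibility, identical tags :", visibility(pattern(1.0)))   # close to 1
print("fringe visibility, orthogonal tags:", visibility(pattern(0.0)))   # close to 0

The same bookkeeping is why the eraser described below works: gating on the far-away twin effectively projects the two markers back onto a common state, the overlap becomes nonzero again, and the fringes return.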
Okay, let’s try something else. In nature, as we saw in the last chapter, there are “entangled particles” or bits of light (or matter) that were born together and therefore “share a wave function” according to QT. They can fly apart – even across the width of the galaxy – and yet they still retain this connection, this knowledge of each other. If one is meddled with in any way so that it loses its “anything’s possible” nature and has to instantly decide to materialize with, say, a vertical polarization, its twin will instantaneously then materialize too, and with a horizontal polarity. If one becomes an electron with an up spin, the twin will too, but with a down spin. They’re eternally linked in a complementary way.
So now let’s use a device which shoots off entangled twins in different directions. Experimenters can create the entangled photons by using a special crystal called beta-barium borate (BBO). Inside the crystal, an energetic violet photon from a laser is converted to two red photons, each with half the energy (twice the wavelength) of the original, so there’s no net gain or loss of energy. The two outbound entangled photons are sent off in different directions. We’ll call their paths direction p and s.
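As a quick arithmetic check on “half the energy, twice the wavelength,” one can plug numbers into E = hc/λ. A 405 nm pump splitting into two 810 nm photons is a common choice for BBO; the book does not specify the wavelengths, so these figures are only illustrative.

h = 6.626e-34      # Planck's constant (J*s)
c = 3.0e8          # speed of light (m/s)

def photon_energy(wavelength_m):
    return h * c / wavelength_m      # E = h*c / lambda

pump      = photon_energy(405e-9)        # one violet pump photon
daughters = 2 * photon_energy(810e-9)    # two red photons, twice the wavelength each
print(pump, daughters)                   # the same energy: nothing gained or lost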
We’ll set up our original experiment with no which-way information measured. Except now, we add a “coincidence counter.” The role of the coincidence counter is to prevent us from learning the polarity of the photons at detector S unless a photon also hits detector P. One twin goes through the slits (call this photon s) while the other merely barrels ahead to a second detector. Only when both detectors register hits at about the same time do we know that both twins have completed their journeys. Only then does something register on our equipment. The resulting pattern at detector S is our familiar interference pattern:
This makes sense. We haven’t learned which slit any particular photon or electron has taken. So the objects have remained probability waves.
But let’s now get tricky. First we’ll restore those QWPs so we can get which-way information for photons traveling along path S.
As expected, the interference pattern now vanishes, replaced with the particle pattern, the single curve.
So far so good. But now let’s destroy our ability to measure the which-way paths of the s photons, but without interfering with them in any way. We can do this by placing a polarizing window in the path of the other photon P, far away. This plate will stop the second detector from registering coincidences. It’ll measure only some of the photons, and effectively scramble up the double-signals. Since a coincidence-counter is essential here in delivering information about the completion of the twins’ journeys, it has now been rendered thoroughly unreliable. The entire apparatus will now be uselessly unable to let us learn which slit individual photons take when they travel along path S because we won’t be able to compare them with their twins – since nothing registers unless the coincidence counter allows it to. And let’s be clear: We’ve left the QWPs in place for photon S. All we’ve done is to meddle with the p photon’s path in a way that removes our ability to use the coincidence counter to gain which-way knowledge. (The set-up, to review, delivers information to us, registers “hits,” only when polarity is measured at detector S AND the coincidence counter tells us that either a matching or non-matching polarity has been simultaneously registered by the twin photon at detector P). The result:
They’re waves again. The interference pattern is back. The physical places on the back screen where the photons or electrons taking path s hit have now changed. Yet we did nothing to these photons’ paths, from their creation at the crystal all the way to the final detector. We even left the QWPs in place. All we did was meddle with the twin photon far away so that it destroyed our ability to learn information. The only change was in our minds. How could photons taking path S possibly know that we put that other polarizer in place — somewhere else, far from their own paths? And QT tells us that we’d get this same result even if we placed the information-ruiner at the other end of the universe.
(Also, by the way, this proves that it wasn’t those QWP plates that were causing the photons to change from waves to particles, and to alter the impact points on the detector. We now get an interference pattern even with the QWPs in place. It’s our knowledge alone that the photons or electrons seem concerned about. This alone influences their actions.)
Okay, this is bizarre. Yet these results happen every time, without fail. They’re telling us that an observer determines the physical behavior of “external” objects. Could it get any weirder? Hold on: Now we’ll try something even more radical – an experiment first performed only in 2002. Thus far the experiment involved erasing the which-way information by meddling with the path of p and then measuring its twin s. Perhaps some sort of communication takes place between photon p and s, letting s know what we will learn, and therefore giving it the green light to be a particle or a wave and either create or not create an interference pattern. Maybe when photon p meets the polarizer it sends s an IM (instant message) at infinite speed, so that photon s knows it must materialize into a real entity instantly, which has to be a particle since only particles can go through one slit or the other and not both. Result: No interference pattern.
To check out whether this is so, we’ll do one more thing. First we’ll stretch out the distance the p photons have to travel to reach their detector, so it’ll take them more time to get there. This way, photons taking the S route will hit their own detectors first. But oddly enough, the results do not change! When we insert the QWPs into path S the fringes are gone; and when we insert the polarizing scrambler into path P and lose the coincidence-measuring ability that lets us determine which-way info for the S photons, the fringes return as before. But how can this be? Photons taking the S-path already finished their journeys. They either went through one or the other slit, or both. They either collapsed their “wave function” and became a particle or they didn’t. The game’s over, the action’s finished. They’ve each already hit the final barrier and were detected – before twin p encountered the polarizing scrambling device that would rob us of which-way information.
The photons somehow know whether or not we will gain the which-way information in the future. They decide not to collapse into particles before their distant twins even encounter our scrambler. (If we take away the P scrambler, the S photons suddenly revert to being particles, again before P’s photons reach their detector and activate the coincidence counter.) Somehow, photon s knows whether the “which-way” marker will be erased even though neither it, nor its twin, have yet encountered an erasing mechanism. It knows when its interference behavior can be present, when it can safely remain in its fuzzy both-slits ghost reality, because it apparently knows photon p — far off in the distance — is going to eventually hit the scrambler, and that this will ultimately prevent us from learning which way p went.
It doesn’t matter how we set up the experiment. Our mind and its knowledge or lack of it is the only thing that determines how these bits of light or matter behave. It forces us, too, to wonder about space and time. Can either be real if the twins act on information before it happens, and across distances instantaneously as if there is no separation between them?
Again and again, observations have consistently confirmed the observer-dependent effects of QT. In the past decade, physicists at the National Institute of Standards and Technology have carried out an experiment that, in the quantum world, is equivalent to demonstrating that a watched pot doesn’t boil. “It seems,” said Peter Coveney, a researcher there, “that the act of looking at an atom prevents it from changing.” (Theoretically, if a nuclear bomb were watched intently enough, it would not explode, that is, if you could keep checking its atoms every million trillionth of a second. This is yet another experiment that supports the theory that the structure of the physical world, and of small units of matter and energy in particular, are influenced by human observation.)
In the last couple of decades, quantum theorists have shown, in principle, that an atom cannot change its energy state as long as it is being continuously observed. So, now, to test this concept, the group of laser experimentalists at the NIST held a cluster of positively charged beryllium ions, the “water” so to speak, in a fixed position using a magnetic field, the “kettle”. They applied “heat” to the kettle in the form of a radio-frequency field that would boost the atoms from a lower to a higher energy state. This transition generally takes about a quarter of a second. However, when the researchers kept checking the atoms every four milliseconds with a brief pulse of light from a laser, the atoms never made it to the higher energy state, despite the force driving them toward it. It would seem that the process of measurement gives the atoms “a little nudge,” forcing them back down to the lower energy state–in effect, resetting the system to zero. This behavior has no analog in the classical world of everyday sense awareness and is apparently a function of observation.
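Here is a back-of-the-envelope sketch of that arithmetic, using only the two numbers quoted above (a quarter-second transition and a check every four milliseconds) and the textbook rule that the flip probability after an uninterrupted time t grows as sin²(πt/2T); each check that finds the ion still in the lower state restarts the clock. Everything beyond those two numbers is a simplifying assumption.

import math

T_transition = 0.25     # seconds for a full lower -> upper flip (quoted above)
tau = 0.004             # seconds between laser checks (quoted above)

def p_flip(t):
    """Probability that the ion has flipped after an uninterrupted time t."""
    return math.sin(math.pi * t / (2 * T_transition)) ** 2

n_checks = int(T_transition / tau)                 # checks within one transition time
p_still_lower = (1 - p_flip(tau)) ** n_checks      # survives every check unflipped

print("unwatched, after 0.25 s: flip probability =", p_flip(T_transition))      # 1.0
print("checked every 4 ms: still in the lower state with probability =",
      round(p_still_lower, 3))                     # roughly 0.96 -- the watched pot barely boils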
Arcane? Bizarre? It’s hard to believe such effects are real. It’s a fantastic result. When quantum physics was in its early days of discovery in the beginning of the last century, even some physicists dismissed the experimental findings as impossible or improbable. It is curious to recall Albert Einstein’s reaction to the experiments: “I know this business is free of contradictions, yet in my view it contains a certain unreasonableness.”
It was only with the advent of quantum physics and the fall of objectivity that scientists began to consider again the old question of the possibility of comprehending the world as a form of mind. Einstein, on a walk from the Institute for Advanced Study at Princeton to his home on Mercer Street, illustrated his continued fascination and skepticism about an objective external reality when he asked Abraham Pais if he really believed that the moon existed only if he looked at it. Since that time, physicists have analyzed and revised their equations in a vain attempt to arrive at a statement of natural laws that in no way depends on the circumstances of the observer. Indeed, Eugene Wigner, one of the 20th century’s greatest physicists, stated that it is “not possible to formulate the laws of [physics] in a fully consistent way without reference to the consciousness [of the observer].” So when quantum theory implies that consciousness must exist, it tacitly shows that the content of the mind is the ultimate reality, and that only an act of observation can confer shape and form to reality – from a dandelion in a meadow, to sun, wind and rain.