Friday, November 25, 2011

Quantum Theory's 'Wavefunction' Found to Be Real Physical Entity

The wavefunction is a real physical object after all, say researchers.

November 17, 2011 | From Nature, reproduced by Scientific American

--------------------------------------------------------------------

By Eugenie Samuel Reich of Nature magazine

At the heart of the weirdness for which the field of quantum mechanics is famous is the wavefunction, a powerful but mysterious entity that is used to determine the probabilities that quantum particles will have certain properties. Now, a preprint posted online on November 14 reopens the question of what the wavefunction represents--with an answer that could rock quantum theory to its core. Whereas many physicists have generally interpreted the wavefunction as a statistical tool that reflects our ignorance of the particles being measured, the authors of the latest paper argue that, instead, it is physically real.
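
Concretely (this is textbook quantum mechanics, not a new claim of the preprint), the wavefunction assigns those probabilities through the Born rule,

$$P(a) = \left| \langle a \mid \psi \rangle \right|^{2},$$

the probability of obtaining the measured value a is the squared magnitude of the overlap between the state ψ and the corresponding outcome state. The dispute is over whether ψ itself is a physical property of the system or merely a summary of what we know about it.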

"I don't like to sound hyperbolic, but I think the word 'seismic' is likely to apply to this paper," says Antony Valentini, a theoretical physicist specializing in quantum foundations at Clemson University in South Carolina.

Valentini believes that this result may be the most important general theorem relating to the foundations of quantum mechanics since Bell's theorem, the 1964 result in which Northern Irish physicist John Stewart Bell proved that if quantum mechanics describes real entities, it has to include mysterious "action at a distance".

Action at a distance occurs when pairs of quantum particles interact in such a way that they become entangled. But the new paper, by a trio of physicists led by Matthew Pusey at Imperial College London, presents a theorem showing that if a quantum wavefunction were purely a statistical tool, then even quantum states that are unconnected across space and time would be able to communicate with each other. As that seems very unlikely to be true, the researchers conclude that the wavefunction must be physically real after all.

David Wallace, a philosopher of physics at the University of Oxford, UK, says that the theorem is the most important result in the foundations of quantum mechanics that he has seen in his 15-year professional career. "This strips away obscurity and shows you can't have an interpretation of a quantum state as probabilistic," he says.

Historical debate

The debate over how to understand the wavefunction goes back to the 1920s. In the `Copenhagen interpretation' pioneered by Danish physicist Niels Bohr, the wavefunction was considered a computational tool: it gave correct results when used to calculate the probability of particles having various properties, but physicists were encouraged not to look for a deeper explanation of what the wavefunction is.

Albert Einstein also favoured a statistical interpretation of the wavefunction, although he thought that there had to be some other as-yet-unknown underlying reality. But others, such as Austrian physicist Erwin Schrödinger, considered the wavefunction, at least initially, to be a real physical object.

The Copenhagen interpretation later fell out of popularity, but the idea that the wavefunction reflects what we can know about the world, rather than physical reality, has come back into vogue in the past 15 years with the rise of quantum information theory, Valentini says.

Pusey and his colleagues, a trio that includes Terry Rudolph, also at Imperial College London, may put a stop to that trend. Their theorem effectively says that individual quantum systems must "know" exactly what state they have been prepared in, or the results of measurements on them would lead to results at odds with quantum mechanics. They declined to comment while their preprint is undergoing the journal-submission process, but say in their paper that their finding is similar to the notion that an individual coin being flipped in a biased way--for example, so that it comes up 'heads' six out of ten times--has the intrinsic, physical property of being biased, in contrast to the idea that the bias is simply a statistical property of many coin-flip outcomes.

Quantum information

Robert Spekkens, a physicist at the Perimeter Institute for Theoretical Physics in Waterloo, Canada, who has favoured a statistical interpretation of the wavefunction, says that Pusey's theorem is correct and a "fantastic" result, but that he disagrees about what conclusion should be drawn from it. He favours an interpretation in which all quantum states, including non-entangled ones, are related after all.

Spekkens adds that he does expect the theorem to have broader consequences for physics, as have Bell's and other fundamental theorems. No one foresaw in 1964 that Bell's theorem would sow the seeds for quantum information theory and quantum cryptography--both of which rely on phenomena that aren't possible in classical physics. Spekkens thinks this theorem may ultimately have a similar impact. "It's very important and beautiful in its simplicity," he says.

This article is reproduced with permission from the magazine Nature. The article was first published on November 17, 2011.

Universe Expands While Minds Contract

The proof is in the pudding only if you concede the fact of the pudding

By Steve Mirsky | Wednesday, November 23, 2011

The leaves are turning as I write in early October. Also turning is my stomach, from the accounts coming out of something called the Values Voter Summit in Washington, D.C. According to Sarah Posner writing online in Religion Dispatches, talk-radio host Bryan Fischer went out of his way to attack me. And probably you. Anybody, really, who accepts science as an arbiter of reality. Fischer told the assembled that America needs a president who will “reject the morally and scientifically bankrupt theory of evolution.”

Evolution is a strange process indeed, to cobble together organisms who so completely and emotionally reject it. Well, evolution concerns itself only with differential survival, and brainpower may not be a crucial factor. Fischer may as well have gotten out of a car at the convention center and proclaimed that the car had not brought him there and did not in fact exist. To thunderous applause. One’s only reasonable response to this whole scene is to bring forefinger to mouth and rapidly toggle the lips while humming, so as to produce a sound roughly in accord with a spelling of “Blblblblblblblblblb.”

A few days before the summit, over in the rational world, Saul Perlmutter won a share of the 2011 Nobel Prize in Physics. He and his fellow laureates, Adam Riess and Brian Schmidt, showed that the universe is not only expanding, the expansion is accelerating. (On hearing this news, my brother asked me if there was a limit. I told him yes, no more than three people can share any one Nobel Prize.)

Perlmutter’s Nobel led to an additional, highly coveted prize. His University of California, Berkeley—home to 22 Nobelists over the years—gives newly minted laureates a campus-wide parking permit. And, if asked, every time Perlmutter exits his car he will no doubt respond that he arrived in it and that it exists.

Perlmutter the driver also surely has the good sense to know that alcohol impairs judgment and neuromuscular skills. Contrast that mind-set with Miami Herald reporter Jose Cassola—well, former Miami Herald reporter now—who ran a stop sign shortly before Perlmutter was getting news of his Nobel and then told the cop who pulled him over, “You can’t get drunk off of vodka.”

As Cassola explained to the arresting officer: “I’m fat, I won’t be able to get drunk from only seven shots.” He later expounded on his unique theories about alcohol and its effects to media-watch reporter Gus Garcia-Roberts of the Miami New Times: “Dude, I go to Chili’s all the time and have two-for-one margaritas, and then I get in my car. Am I drunk? No!”

The disoriented mind pronouncing itself whole is always a wonder to behold. Which brings us back to the Values Voter Summit. Oddly, Fischer’s enraptured audience may have been morphologically identifiable. That notion appears in an article in the June 25, 1885, issue of the journal Nature by Charles Darwin’s half cousin Francis Galton. (It’s probably a good example of our information inundation that less than an hour after I discovered this 126-year-old article, I cannot re-create the steps by which I wound up reading it. E-mail? Twitter? Link within a link? It’s all part of the mystery.)

Galton found himself at a boring lecture and decided to study the sea of heads in front of him. He noted that “when the audience is intent each person ... holds himself rigidly in the best position for seeing and hearing.” In other words, they sit up straight. When the talk got tedious, “the intervals between their faces, which lie at the free end of the radius formed by their bodies, with their seat as the centre of rotation varies greatly.” In other words, they lean.

By all accounts, the audience at the Values Voter Summit was sitting ramrod straight, indicating great engagement with the material being presented. Although a scientific mind-set requires a consideration of another possibility: that x-rays would reveal in each attendee a stick responsible for the vertical attitude and in desperate need of removal. 

Source...Scientific American
Permanent Address: http://www.scientificamerican.com/article.cfm?id=respect-for-evidence

Hunt for Higgs Particle Enters Endgame

Large Hadron Collider could soon deliver a clear verdict on missing boson.

November 18, 2011 | From Nature, reproduced by Scientific American
----------------------------------------------------------------------------------

By Geoff Brumfiel of Nature magazine

Bill Murray is a man with secrets. Along with a handful of other scientists based at CERN, Europe's particle-physics facility near Geneva, Switzerland, Murray is one of the few researchers with access to the latest data on the Higgs boson -- the most sought-after particle in physics.

Looking at his laptop, he traces a thin black line that wiggles across a shaded area at the centre of a graph. This is the fruit of his summer's labours. "It's interesting, actually, looking at this again," he muses. A tantalizing pause. "But no, I can't say..."

Despite Murray's coyness, there are few places left for the Higgs to hide. Billed as the particle that helps to confer mass on other matter, and the final missing piece in the `standard model' of particle physics, the Higgs would be a huge prize for CERN's Large Hadron Collider (LHC), the world's most powerful particle accelerator. But so far, the two massive detectors there--ATLAS, where Murray works, and the Compact Muon Solenoid (CMS) -- have not seen any convincing signals of the elusive particle.

At a conference in Paris on November 18, teams from ATLAS and the CMS experiments presented a combined analysis that wipes out a wide swathe of potential masses for the Higgs particle. Gone is the entire mass range from 141 to 476 gigaelectronvolts (GeV; energy and mass are interchangeable in particle physics). Together with earlier results from the 1990s, the analysis leaves a relatively narrow window of just 114-141 GeV in which the Higgs could lurk (see `Cornering the Higgs').

Analysis of the very latest data from this autumn--which Murray isn't yet ready to share -- will scour the range that remains. If it turns out to be empty, physicists may have to accept that the particle simply isn't there. Working around the clock, the detector teams hope to have this larger data set analysed before the end of December. "We'll know the outcome within weeks," says Guido Tonelli, spokesman for the CMS detector.

Waiting for God

The quest for the Higgs boson, often called the `God particle' after the title of a 1993 book by Nobel prizewinner Leon Lederman, is the public face of science at the LHC. Most high-energy physicists wince at the deistic designation, but they hold a near-religious devotion to the boson. Contrary to the popular view, their belief has less to do with mass than with fundamental forces.

Four fundamental forces are at work in nature: gravity, the strong nuclear force, the weak nuclear force and electromagnetism. Since the mid-1960s, physicists have strongly suspected that the weak and electromagnetic forces are actually different aspects of a single `electroweak' force. This is partly because the photon, the force-carrying particle of electromagnetism, is highly similar to the force-carrying particles of the weak force -- the W and Z bosons. Moreover, a single electroweak theory successfully predicts the interactions of fundamental particles.

There is one problem, however: the W and Z bosons are extremely heavy, nearly 100 GeV, whereas the photon is massless. To explain the difference, a number of physicists (including Peter Higgs in 1964) proposed a new field and particle. The eponymous Higgs mechanism would interact with the W and Z bosons, giving them mass, but would ignore the photon, allowing it to remain massless. Relatively straightforward tweaks to the Higgs machinery allow it to endow other particles, such as quarks, with their observed masses as well.
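
A compact way to see the asymmetry (standard electroweak bookkeeping, not anything specific to this article) is that the symmetry-breaking field, with vacuum value v ≈ 246 GeV, gives the gauge bosons masses of roughly

$$m_W = \tfrac{1}{2} g v \approx 80\ \mathrm{GeV}, \qquad m_Z = \tfrac{1}{2}\sqrt{g^{2} + g'^{2}}\, v \approx 91\ \mathrm{GeV}, \qquad m_\gamma = 0,$$

while each matter particle picks up a mass from its own coupling to the same field, $m_f = y_f\, v/\sqrt{2}$.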

"The Higgs now sought at CERN is expected on the basis of the simplest picture" for the electroweak theory, says Steven Weinberg, a theorist who won a Nobel prize in 1979 for his work unifying electromagnetism and the weak force. "But there are other possibilities," he adds, reckoning the odds that the LHC's detectors will find the Higgs at 50/50 (see `Do you believe?').

If there is no Higgs, then what? Gian Giudice, a theorist at CERN, recently published work suggesting that giant clusters of W bosons might serve the same purpose, but even he admits that "it would be a great surprise if it were true". Other models without the Higgs boson invoke extra dimensions of space, but they are not yet sufficiently developed to guide experiments.

Perhaps the most likely alternative is that the Higgs is not a single particle, but rather a class of particles, which together do the job of unifying the two forces. Such a concept might appeal theoretically if a single Higgs is not found, but it would be a major headache for experimentalists to check. Theorists believe that the conventional Higgs boson would leave only a subtle mark on the detectors as it decays into W and Z bosons, high-energy photons and other particles. If there were two Higgs-like particles instead of one, the signal of each would be weaker still, says Murray. "It starts to get quite messy to do the analysis," he says.

The answer to the Higgs question lies in the data now being crunched at CERN and other academic-computing centres around the world. The first 70 trillion or so collisions turned up intriguing Higgs-like decays in the ATLAS and CMS experiments, hinting at a particle of around 140 GeV (see Nature 475, 434; 2011). But the second batch of collisions showed nothing. If the collisions now being analysed show further evidence of Higgs decays, then the teams on the two experiments are likely to announce that they have found a tentative signal, to be firmed up in 2012. If not, the search will probably continue until the LHC is shut down for an upgrade at the end of next year.

Even if that continued search shows no evidence for a Higgs or anything else, the LHC will push on. Without a unified electroweak force, the standard model is unable to predict how certain particles and forces interact inside the collider, says Matthew Strassler, a theorist at Rutgers University in Piscataway, New Jersey. The LHC will gather data on exactly those processes, and that information could potentially be used to find a way in which electromagnetism and the weak force fit together. That process, Strassler adds, is likely to take many years.

This article is reproduced with permission from the magazine Nature. The article was first published on November 18, 2011. Source: Scientific American

Another Origin for Cosmic Rays

Some superfast particles arriving at Earth may originate from shock waves in turbulent stellar clusters, a gamma-ray study published in the November 24th issue of Science suggests. The observations are the first firm direct evidence of a longstanding theory for the origin of these particles, called cosmic rays, but they do nothing to confirm another, even longer-standing theory that favors supernova remnants.

Cosmic rays were first discovered in 1912 by Victor Hess, who won a Nobel Prize for his detection of this strange source of radiation entering the atmosphere from space. Until the 1930s scientists thought cosmic rays were some sort of electromagnetic wave — hence their name. But the deceptively dubbed “rays” are actually speedy charged particles whizzing through the universe. They’re mostly protons from hydrogen atoms stripped of their electrons, but they can also be heavier atomic nuclei, electrons, and other subatomic particles.


Gamma rays detected by the Fermi LAT (top image) are emitted by freshly accelerated cosmic rays traveling through the stormy Cygnus X region (in infrared, bottom image). The cosmic ray "cocoon" fills the cavities carved out around and between two star clusters, Cyg OB2 and NGC 6910.
NASA / DOE / Fermi LAT / I. Grenier / L. Tibaldo

Yet even after 99 years, astronomers still don’t know for sure where cosmic rays receive their energy boost. The problem with figuring out where cosmic rays come from is that they appear to come from everywhere. Because they’re charged particles, cosmic rays react to whatever magnetic fields they encounter, and there are a lot of magnetic fields in galaxies, whether from stars or planets or even the galaxy itself. By the time the particles reach Earth, they’re hitting us from all sides.

Gamma rays don’t have this problem. The most energetic photons in the universe, gamma rays basically travel in straight lines from their sources to us. And because cosmic rays are stupendously energetic, they produce gamma rays when they run into stuff.
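
A rough number makes the contrast concrete (a standard back-of-the-envelope estimate, not a figure from the study). A proton's gyroradius in the galaxy's magnetic field is about

$$r_g \simeq \frac{E}{ZeBc} \approx 1.1\ \mathrm{pc} \times \frac{E/10^{15}\ \mathrm{eV}}{Z\,(B/\mu\mathrm{G})},$$

a few light-years at most for typical energies and fields, so after crossing thousands of light-years of tangled fields a charged particle's arrival direction is thoroughly scrambled, while a gamma ray of the same energy still points straight back at its source.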

Astronomers have used gamma rays to probe likely sites of cosmic ray acceleration. For several decades researchers have suspected that our galaxy’s rays come from supernova remnants, and X-ray and gamma-ray observations do indicate that electrons are being accelerated to high energies at remnants’ shock fronts as they slam into surrounding gas and dust, sending the electrons surfing in and out of the blast wave. But there’s no conclusive evidence of proton and nuclei acceleration, and these heavier particles make up 99% of cosmic rays, says Isabelle Grenier (Paris Diderot University and CEA Saclay), a coauthor on the new study. “We have no smoking gun,” she says. “We have very strong hints, but no proof.”

To hunt for cosmic rays’ origin, Grenier and her colleagues turned the Fermi Gamma-ray Space Telescope’s Large Area Telescope to point at the star-forming region Cygnus X, a tumultuous section of space about 4,500 light-years away filled with billows of thousand-mile-per-second stellar winds and strong ultraviolet radiation from young stars. The team detected a diffuse gamma-ray glow from inside a superbubble blown out by the young, massive members of two of the region’s star clusters, Cyg OB2 and NGC 6910. What’s more, the radiation looks like it’s coming from protons, not electrons.

The average energies Grenier’s team observed are much higher than the energies of cosmic rays near Earth. Add that higher energy to the emission’s confinement (meaning, the particles haven’t had a chance to move very far from their energizing source), and the fact that the gamma rays come from protons, and it looks like the team’s caught, as they put it, “freshly accelerated cosmic rays” that haven’t slowed down to near-Earth energy levels yet.

To find the source, the team focused at first on a strong gamma-ray-emitting supernova remnant called γ Cygni that appears in the same part of the sky. The remnant’s distance isn’t pinned down, so it’s not clear if it’s actually associated with Cygnus X. But that it might be there, in the same place as cosmic rays, sparked the researchers’ interest. “We were so excited,” says Grenier. “And I must say that, several months after, I’m not convinced that it’s the best scenario anymore.” The diffuse gamma-ray emission showed no sign of any connection with the remnant.

But the astronomers discovered something else intriguing: the diffuse gamma-rays are completely confined to the superbubble created by the stars’ strong winds, even edged by an infrared-emitting shell of dust grains heated by the intense starlight.

That made the researchers turn to a second theory for cosmic ray production, one involving exactly this kind of environment. Astronomers have suspected since the 1980s or so that cosmic rays may also come from clusters of massive, young stars called OB associations, where the O and B stand for the two hottest, most massive types of the family of stars that fuse hydrogen in their cores. The suspicion stems from the cosmic rays’ composition. Many of the common heavier elements, such as carbon and silicon, are about as abundant among the particles as they are in the solar system, but there are some elements that are overrepresented. Particularly, a heavy isotope of neon, neon-22, is about five times as abundant in cosmic rays as it is in the solar system. But Ne-22 is seen in the outer layers thrown off by really massive, young, windy stars called Wolf-Rayet stars. Overall, the cosmic rays’ chemical makeup suggests that about 20% are created by WR stars, while the rest are other particles found in the interstellar medium, the stuff between the stars.

A sizable fraction of cosmic rays may be born in WR stars’ massive outflows, but that’s not necessarily where they gain their energy. In 1999 Richard Mewaldt (Caltech) and his colleagues reported the presence in cosmic rays of the cobalt isotope cobalt-59. Co-59 is a daughter isotope, an atom formed by the radioactive decay of nickel-59 when that atom captures an electron and shoves it together with one of its protons to make a neutron. Such a snatch can’t happen when the nickel atom’s nucleus is accelerated to high energies and stripped of its electrons, as cosmic ray particles are. That means that the nuclei that make up cosmic rays aren’t born with their high energies: they hang around a while — about 100,000 years, the team concluded — before being sped up and out into interstellar space.
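
The decay in question (standard nuclear physics, not a measurement from the new study) is electron capture, which needs a bound electron to proceed:

$$^{59}\mathrm{Ni} + e^{-} \longrightarrow {}^{59}\mathrm{Co} + \nu_{e}, \qquad t_{1/2} \sim 10^{5}\ \mathrm{yr}.$$

Finding the daughter cobalt-59, rather than surviving nickel-59, among cosmic rays therefore implies the nuclei sat around with their electrons intact for at least that long before being accelerated and stripped.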

“This rules out a supernova accelerating its own ejecta,” Mewaldt says, although some of the heavier cosmic ray nuclei probably first formed in supernova explosions. “But [it] is consistent with accelerating cosmic rays from a region where massive stars are born, a region that will be enriched in WR material because of the high-velocity winds of these stars.”

Grenier’s team didn’t measure specific chemical composition, so they don’t know what the cosmic rays are made of. Whatever the ingredients — and they’re probably a combination of interstellar medium, old supernova ejecta, and outflows from an earlier batch of Wolf-Rayet stars — it looks like they’re now being accelerated by the current stellar clusters’ winds.

“This is a very important paper,” says Mewaldt of Grenier’s study, “because it provides the first direct evidence for the distributed acceleration of cosmic rays in OB associations.”

The cosmic rays are still confined in a “cocoon” because they can’t spread out fast in the torrid environment inside the superbubble, Grenier says. The massive stars are only a few million years old, and their powerful winds and ultraviolet radiation create a maelstrom inside the cavity, twisting magnetic fields into tangles that trap the cosmic rays. Over time the particles will escape into quieter regions, but what happens to their energies while inside the cocoon remains a mystery.

It’s a mystery that’s particularly intriguing to Grenier. Low-energy cosmic rays (at least, lower energy than the ones the team observed) “are very, very important for the structure of the clouds of the gas from which we form stars,” she explains. Dense clumps within the clouds eventually collapse under their own gravity to make stars. While the clouds are pretty opaque to light, cosmic rays can sneak inside, bringing with them heat and catalyzing the formation of molecules. How that heat and chemistry influence star formation isn’t known, and Grenier is pursuing the question with her colleagues. What is clear is that “if you radiate those clouds with more cosmic rays or [fewer] cosmic rays, you change the game.”

Source... SKY AND TELESCOPE
Posted by Camille Carlisle, November 23, 2011

Tuesday, November 22, 2011

Astronomers reconstruct the history of a black hole

Three teams of astronomers have managed to determine the mass, the spin and the distance from Earth of an especially famous black hole, Cygnus X-1, and with those parameters they have reconstructed its history. The object has almost 14.8 times the mass of the Sun, spins 800 times per second and lies 6,070 light-years away. It was identified as a black hole candidate almost four decades ago, but at the time the great specialist Stephen Hawking was not convinced and, in 1974, he bet a colleague and friend, the American theoretical physicist Kip Thorne, that it was no such object. He lost. In 1990, when more observations of Cygnus X-1 had been made, the British physicist conceded defeat. It was one of several bets that Hawking and Thorne have made on scientific questions.

Once accepted as such, the object did not lose interest; quite the opposite. Cygnus X-1 is a stellar black hole, that is, one formed by the collapse of a massive star, and it forms a binary system with another star. Now the three groups of astronomers, who worked with telescopes on the ground and in space, present their complementary conclusions in three papers published in The Astrophysical Journal. "The new information gives us strong clues about how the black hole formed, its mass and its spin rate, and that is exciting, because not much is known about the birth of a black hole," says Mark Reid, leader of one of the teams, in a statement from the Harvard-Smithsonian Center for Astrophysics (USA). The event horizon (the point of no return for matter falling into a black hole) spins in this object more than 800 times per second, very close to the calculated maximum.
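
For a sense of scale (an illustrative calculation, not a number from the papers), the Schwarzschild radius for the quoted mass is

$$r_s = \frac{2GM}{c^{2}} \approx 2.95\ \mathrm{km} \times \frac{M}{M_\odot} \approx 44\ \mathrm{km} \quad (M = 14.8\, M_\odot),$$

and for a hole spinning near the maximum allowed rate the horizon shrinks toward $GM/c^{2}$, roughly half that, which is what makes several hundred rotations per second geometrically possible.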

Another important piece of data is the age: it is about six million years old, according to studies of the companion star and theoretical models. It is therefore relatively young in astronomical terms and has not had much time to swallow enough matter from its surroundings to speed up its rotation, so Cygnus X-1 must have been born already spinning very fast. It must also have formed with practically the same mass it has now, 14.8 times that of the Sun. "We now know that it is one of the most massive stellar black holes in the galaxy, and it spins faster than any other we know of," says Jerome Orosz (San Diego State University). NASA's Chandra X-ray space telescope has been key to this research.

"Como no puede escapar de un agujero negro más información, su masa, rotación y su carga eléctrica supone la descripción completa", dice Reid. "Y la carga de este agujero negro es casi cero".

A third team, using the synchronized radio telescopes of the VLBA system, managed to pin down the distance to Cygnus X-1 (a figure essential for determining the mass and the spin), as well as the object's motion through space. It turns out that the black hole moves very slowly with respect to the Milky Way, which means it received no kick when it formed. This supports the hypothesis that the object did not form in a supernova explosion (when a supermassive star has burned all of its fuel), which would have delivered such a kick and left it moving much faster. It must indeed have been a stellar collapse, but one without an explosion, that gave rise to this black hole. As for the distance, before these new measurements fixed it at 6,070 light-years, it was estimated at between 5,800 and 7,800 light-years, note the experts at the National Radio Astronomy Observatory (which operates the VLBA).
Source: EL PAÍS - Madrid - 21/11/2011

Through data, evaluating the potential for life on other worlds

In many fields of science, the imagination is only limited by the language that can explain it.

As we discovered nearly a year ago, forms of life could exist that play by rules beyond our base of knowledge.

Scientists expect to discover many more planets orbiting distant stars. They also know that researchers are most likely to focus on those that exhibit Earth-like conditions, in an attempt to find life in another part of the universe.

But what if alien life can exist in conditions drastically unlike those of Earth? Will scientists mistakenly overlook them?

Driven by this fear — and the admission that searching for Earth-like conditions as a precondition for life is a basic but incomplete strategy for finding it — an international team of researchers from NASA, SETI and several universities is working to develop a classification system that includes chemical and physical parameters that are theoretically conducive to life, even if they result in decidedly un-Earth-like conditions.

Washington State University astrobiologist Dirk Schulze-Makuch, University of Puerto Rico modeling expert Abel Mendez and seven more colleagues have developed two different indices — an Earth Similarity Index that categorizes a planet’s more Earth-like features, and a Planetary Habitability Index that includes theoretical parameters — that they say can help researchers more easily find patterns in large and complex datasets.
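
As a rough illustration of how such an index can work (a minimal sketch: the formula shape follows the usual similarity-index construction, but the parameters, reference values and weights below are made up for demonstration and are not the authors' published ones), a similarity score can be computed as a weighted geometric mean of per-parameter terms:

```python
from math import prod

def similarity_index(values, reference, weights):
    """Weighted geometric mean of per-parameter similarities:
    each term is (1 - |x - x0| / (x + x0)) raised to w/n."""
    n = len(values)
    terms = []
    for key, x in values.items():
        x0 = reference[key]
        w = weights[key]
        terms.append((1.0 - abs(x - x0) / (x + x0)) ** (w / n))
    return prod(terms)

# Illustrative (made-up) parameters: radius relative to Earth and
# mean surface temperature in kelvin; the published indices use
# their own parameter sets and calibrated weights.
earth = {"radius": 1.0, "temperature": 288.0}
weights = {"radius": 0.6, "temperature": 5.6}

mars_like = {"radius": 0.53, "temperature": 210.0}
print(round(similarity_index(mars_like, earth, weights), 2))
```

A score near 1 means "very Earth-like" for the chosen parameters; a Planetary Habitability-style index would instead score properties thought to matter for life in general (a stable substrate, available energy, suitable chemistry, a liquid solvent) rather than resemblance to Earth.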

It’s the first attempt by scientists to categorize the potential of exoplanets and exomoons to harbor life, and should prevent Earth-bound researchers from overlooking conditions that are, ahem, alien to them in their search for life.

Their work will be published in the December issue of the journal Astrobiology.

Author... Andrew Nusca | November 21, 2011, 7:09 AM PST
Published in Smart Planet Daily.

Thursday, November 17, 2011

Neuroscience Challenges Old Ideas about Free Will

Celebrated neuroscientist Michael S. Gazzaniga explains the new science behind an ancient philosophical question

By Gareth Cook | Tuesday, November 15, 2011

'''''''''''''''''''''''''''''''''''''''''''''''''''

Do we have free will? It is an age-old question which has attracted the attention of philosophers, theologians, lawyers and political theorists. Now it is attracting the attention of neuroscience, explains Michael S. Gazzaniga, director of the SAGE Center for the Study of the Mind at the University of California, Santa Barbara, and author of the new book, “Who’s In Charge: Free Will and the Science of the Brain.” He spoke with Mind Matters editor Gareth Cook.

Cook: Why did you decide to tackle the question of free will?

Gazzaniga: I think the issue is on every thinking person’s mind. I can remember wondering about it 50 years ago when I was a student at Dartmouth. At that time, the issue was raw and simply stated. Physics and chemistry were king and while all of us were too young to shave, we saw the implications. For me, those were back in the days when I went to Church every Sunday, and sometimes on Monday if I had an exam coming up!

Now, after 50 years of studying the brain, listening to philosophers, and most recently being slowly educated about the law, the issue is back on my front burner. The question of whether we are responsible for our actions -- or robots that respond automatically -- has been around a long time but until recently the great scholars who spoke out on the issue didn’t know modern science with its deep knowledge and implications.

Cook: What makes you think that neuroscience can shed any light on what has long been a philosophical question?

Gazzaniga: Philosophers are the best at articulating the nature of a problem before anybody knows anything empirical. The modern philosophers of mind now seize on neuroscience and cognitive science to help illuminate age old questions and to this day are frequently ahead of the pack. Among other skills, they have time to think! The laboratory scientist is consumed with experimental details, analyzing data, and frequently does not have the time to place a scientific finding into a larger landscape. It is a constant tension.

Having said that, philosophers can’t have all the fun. Faced with the nature of biologic mechanisms morning, noon, and night, neuroscientists can’t help but think about such questions as the nature of “freedom of action in a mechanistic universe,” as one great neuroscientist put it years ago. At a minimum, neuroscience directs one’s attention to the question of how action comes about.

Cook: Do you think that neuroscience, as a field, needs to tackle these questions? That is, do you consider free will an important scientific question?

Gazzaniga: We all need to understand more about free will, or more wisely put, the nature of action. Neuroscience is one highly relevant discipline to this issue. Whatever your beliefs about free will, everyone feels like they have it, even those who dispute that it exists. What neuroscience has been showing us, however, is that it all works differently than how we feel it must work. For instance, neuroscientific experiments indicate that human decisions for action are made before the individual is consciously aware of them. Instead of this finding answering the age-old question of whether the brain decides before the mind decides, it makes us wonder if that is even the way to think about how the brain works. Research is focused on many aspects of decision making and actions, such as where in the brain decisions to act are formed and executed, how a bunch of interacting neurons becomes a moral agent, and even how one’s beliefs about whether they have free will affect their actions. The list of issues where neuroscience will weigh in is endless.

Cook: Please explain what you mean by the idea of an "emergent mind," and the distinction you draw between this and the brain?

Gazzaniga: Leibniz raised the question almost 300 years ago with his analogy of the mill. Imagine that you can blow the mill up in size such that all components are magnified and you can walk among them. All you find are individual mechanical components, a wheel here, a spindle there. By looking at the parts of the mill you cannot deduce its function. The physical brain can also be broken into parts and their interactions examined. We now understand neurons and how they fire and a bit about neurotransmitters and so forth. But somehow the mental properties are indivisible and can’t be described in terms of neuronal firings. They need to be understood in another vocabulary.

This is sometimes called the emergent mind. Emergence as a concept in general is widely accepted in physics, chemistry, biology, sociology, you name it. Neuroscientists, however, have a hard time with it because they are suspicious that this concept is sneaking a ghost into the machine. That is not it at all. The motivation for this suggestion is to conceptualize the actual architecture of the layered brain/mind interaction so it can be properly studied. It is lazy to stay locked into one layer of analysis and to dismiss the other.

Cook: How does the mind constrain the brain?

Gazzaniga: No one said this is going to be easy, and here is where the going gets tough. Picking up on the last thought, the idea is this: we are dealing with a layered system, and each layer has its own laws and protocols, just as in physics Newton’s laws apply to one layer and quantum mechanics to another. Think of hardware-software layers. Hardware is useless without software and software is useless without hardware.

How are we to capture an understanding of how the two layers interact? For now, no one really captures that reality, and certainly no one has yet captured how mental states interact with the neurons that produce them. Yet we know the top mental layer and the layers beneath it, which produce it, interact. Patients suffering from depression can be aided by talk therapy (top-down). They can also be aided by pharmacological drugs (bottom-up). When the two therapies are combined, the outcome is even better. That is an example of the mind constraining the brain.

Cook: And how does this idea of the mind and brain interacting bring you to your position on free will?

Gazzaniga: For me, it captures the fact that we are trying to understand a layered system. One becomes cognizant there is a system on top of the personal mind/brain layers which is yet another layer--the social world. It interacts massively with our mental processes and vice versa. In many ways we humans, in achieving our robustness, have uploaded many of our critical needs to the social system around us so that the stuff we invent can survive our own fragile and vulnerable lives.

Cook: You talk about “abandoning” the idea of free will. Can you explain what you mean by this, and how you came to this conclusion?

Gazzaniga: As I see it, this is the way to think about it: If you were a Martian landing on Earth today and gathering information about how humans work, the idea of free will as commonly understood in folk psychology would not come up. The Martian would learn that humans had worked out physics and chemistry and causation in the standard sense. They would be astonished to see the amount of information that has accumulated about how cells work and how brains work, and would conclude, “OK, they are getting it. Just like cells are complex, wonderful machines, so are brains. They work in cool ways even though there is this strong tug on them to think there is some little guy in their head calling the shots. There is not.”

The world is not flat. Before this truth was realized, people used to wonder what happened when you got to the end of the Earth: did you fall off? Once we knew the Earth was round, the new perspective made us see how the old questions were silly. New questions also often seem silly until a new perspective is accepted. I think we will get over the idea of free will and accept that we are a special kind of machine, one with a moral agency that comes from living in social groups. This perspective will make us ask new kinds of questions.

Cook: Are there particular experiments which you think have shed important light on the question of free will?

Gazzaniga: All of neuroscience, in one way or another, is shining light on how the brain works. That is the reality of it, and it is that knowledge, slowly accumulating, that will drive us to think more deeply. One way to get going on this is to try to answer a simple question: Free from what? What does anybody want to be free from? I surely do not want to be free from the laws of nature.

Cook: Do you think this science is going to force philosophers to change how they think about free will? And how about the rest of us?

Gazzaniga: Human knowledge can’t help itself in the long run. Things slowly, gradually become more clear. As humans continue on their journey they will come to believe certain things about the nature of things and those abstractions will then be reflected in the rules that are set up to allow people to live together. Beliefs have consequences and we will see them reflected in all kinds of ways. Certainly how we come to think and understand human responsibility in the context of modern knowledge of biologic mechanisms will dictate how we choose our laws and our punishments. What could be more important?

''''''''''''''''''''''''''''''''''''''''''''''''''''''''

Source... http://www.scientificamerican.com/article.cfm?id=free-will-and-the-brain-michael-gazzaniga-interview

Stellar Extremophiles

November 14, 2011: In the 1970s, biologists were astonished to discover a form of life they never expected to exist. Tiny microorganisms with ancient DNA were living in the boiling springs of Yellowstone National Park. Instead of dissolving in those boiling waters, the microbes thrived, tinting the springs with brilliant color.

Scientists then coined the term "extremophile", meaning "lover of extreme conditions", to describe these creatures, and the hunt for more of them began. Soon, more extremophile organisms were found living deep inside the Antarctic ice, in the cores of nuclear reactors and in other unexpected places. Biology has not been the same since.

Could astronomy be on the verge of a similar transformation?

Using a NASA telescope called GALEX, researchers have discovered a new kind of extremophile: stars that love extreme conditions.

"Hemos estado encontrando estrellas que viven en ambientes galácticos extremos, donde la formación estelar no se supone que suceda", explica Susan Neff, quien es científica del proyecto GALEX en el Centro Goddard para Vuelos Espaciales (Goddard Space Flight Center, en idioma inglés). "Esta es una situación absolutamente sorprendente".
Stellar Extremophiles (splash, 558px)
Esta imagen compuesta (radio + UV) muestra largos brazos, como los de un pulpo, donde se produce la formación de estrellas a gran distancia del disco principal de la galaxia espiral M83. [Más información] [Video].

GALEX, short for Galaxy Evolution Explorer, is a space telescope designed to observe in the ultraviolet part of the spectrum, and it has a special ability: it is extremely sensitive to the kind of UV light emitted by the youngest stars. This means the observatory can detect stars being born at very great distances from Earth, more than halfway to the edge of the observable universe. It was launched in 2003 on a mission to study how galaxies change and evolve as new stars form inside them.

GALEX has accomplished that mission, and more besides.

"En algunas imágenes proporcionadas por el telescopio GALEX, vemos estrellas que están formándose afuera de las galaxias, en lugares donde pensábamos que la densidad del gas sería demasiado baja como para permitir que se produzca el nacimiento de estrellas", dice Don Neil, de Caltech, quien es miembro del equipo GALEX.

Stars are born when clouds of interstellar gas collapse and contract under the pull of their own gravity. If a cloud manages to become dense and hot enough as it collapses, nuclear fusion can ignite and, voilà, a star is born.
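
As a rough guide (a textbook criterion, not something from the GALEX study itself), a cloud of temperature T and density ρ collapses when its mass exceeds the Jeans mass,

$$M_J \simeq \left( \frac{5 k_B T}{G \mu m_{\mathrm{H}}} \right)^{3/2} \left( \frac{3}{4 \pi \rho} \right)^{1/2},$$

so colder, denser gas collapses more readily. That is exactly why star birth in the hot, tenuous gas far outside a galaxy's disk comes as a surprise.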

The spiral arms of the Milky Way are the "Goldilocks zone" for this process. "Here in the Milky Way we have plenty of gas. It is a comfortable place for stars to form," says Neil.

But when GALEX looks at other, more distant spiral galaxies, it sees stars forming far outside the gaseous spiral disk.

"Quedé anonadado", dijo. "Estas estrellas de verdad están 'viviendo al extremo'".

Spiral galaxies are not the only places with stellar extremophiles. The observatory has also found stars being born in:

—elliptical and irregular galaxies, which were thought to be gas-poor,

—the gaseous debris of colliding galaxies,

—vast comet-like tails left behind by some galaxies as they move at great speeds,

—clouds of cold primordial gas, small and barely massive enough to hold themselves together.

So much for the so-called "Goldilocks zone". According to the GALEX observations, stellar extremophiles populate just about every nook and cranny of the cosmos where a wisp of gas can gather itself into a new sun.

"Esto podría estar diciéndonos que hay algo profundamente importante en el proceso de formación de las estrellas", relata Neff. "Podría haber maneras de que se formen estrellas en ambientes extremos que ni siquiera hemos imaginado todavía".

Will extremophiles transform astronomy the way they transformed biology? It is too soon to tell, the researchers insist. But GALEX has definitely given them something to think about.

Taken from NASA

Credits and Contacts
Author: Dr. Tony Phillips
NASA Official: Ruth Netting
Production Editor: Dr. Tony Phillips
Spanish Translation: Carlos Román Zúñiga
Spanish Editor: Angela Atadía de Borghetti
Formatting: Carlos Román Zúñiga

More information (in English)

GALEX — Portal

Tuesday, November 15, 2011

Giant planet ejected from the Solar System...

ASTRONOMY | Planetary formation

A giant planet was ejected from the primitive Solar System. Its ejection is estimated to have spared the Earth from destruction when the Solar System was about 600 million years old. It formed at the origins of the Solar System, which today has four giant planets.

A team of astronomers has just published a study that adds a fifth giant planet to the primitive Solar System. This body would explain one of the mysteries of our system, concerning how the planets' orbits took shape.

From what is known, when the Solar System formed some 4.5 billion years ago, there was great instability in the orbits of the large planets, to the point that they should have ended up colliding with the primordial Earth. The conclusion is that if this did not happen, it is because this mysterious celestial body existed.

The research, published in the journal 'Astrophysical Journal', is based on computer simulations. According to David Nesvorny of the Southwest Research Institute, his data come from the study of the many small objects beyond Neptune, in the so-called 'Kuiper Belt', and also from the cratering record on the Moon.

From that analysis it had already been concluded that when the Solar System was only about 600 million years old, there was great instability in the orbits of the giant planets, of which there are now four: Jupiter, Saturn, Neptune and Uranus. Because of this, countless small bodies were scattered (some of them make up the Kuiper Belt), while others moved in toward the Sun, affecting the Earth and the Moon.

And the same happened with the large ones. Jupiter, for example, would have slowly migrated inward through the system. The problem is that this movement would have disturbed the orbits of rocky planets such as the Earth, which would have collided with its neighbours, Mars or Venus.

In previous work, astronomers had put forward an alternative that avoided this outcome: they proposed that Jupiter's orbit changed quickly when it scattered off Uranus or Neptune during that period of instability. This 'jump' by Jupiter would have been less damaging to the rest of the planets, but what caused it?

Nesvorny ran millions of computer simulations in search of the answer. If Jupiter had indeed jumped by scattering its two giant neighbours, one of the two should have been ejected from the Solar System, something that did not happen either. "There was clearly something wrong," the researcher says.
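
To give a flavour of what such simulations involve (a minimal toy sketch in Python, not Nesvorny's actual code; the masses, orbits and time step below are illustrative), a planetary system can be integrated with a simple leapfrog scheme and then checked for ejections:

```python
import numpy as np

G = 4 * np.pi**2  # gravitational constant in AU^3 / (Msun * yr^2)

def accelerations(pos, masses):
    """Pairwise Newtonian accelerations for all bodies."""
    acc = np.zeros_like(pos)
    for i in range(len(masses)):
        for j in range(len(masses)):
            if i != j:
                r = pos[j] - pos[i]
                acc[i] += G * masses[j] * r / np.linalg.norm(r)**3
    return acc

def leapfrog(pos, vel, masses, dt, steps):
    """Kick-drift-kick leapfrog integration."""
    acc = accelerations(pos, masses)
    for _ in range(steps):
        vel += 0.5 * dt * acc
        pos += dt * vel
        acc = accelerations(pos, masses)
        vel += 0.5 * dt * acc
    return pos, vel

# Illustrative 2D system: the Sun plus two giant planets on circular orbits.
masses = np.array([1.0, 1e-3, 3e-4])                  # solar masses
pos = np.array([[0.0, 0.0], [5.2, 0.0], [9.5, 0.0]])  # AU
vel = np.array([[0.0, 0.0],
                [0.0, np.sqrt(G * 1.0 / 5.2)],
                [0.0, np.sqrt(G * 1.0 / 9.5)]])        # AU / yr

pos, vel = leapfrog(pos, vel, masses, dt=0.01, steps=100_000)  # ~1,000 yr

# A planet counts as "ejected" if it ends up far away and unbound
# (positive specific orbital energy with respect to the Sun).
for k in (1, 2):
    r = np.linalg.norm(pos[k])
    energy = 0.5 * np.dot(vel[k], vel[k]) - G * masses[0] / r
    print(f"planet {k}: r = {r:.1f} AU, specific energy = {energy:+.3f}")
```

Real studies of this kind integrate the full set of giant planets plus a disk of thousands of small bodies over hundreds of millions of model years, and rerun the experiment many times with different starting conditions; the toy above only shows the basic machinery.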

The only alternative he could think of was that there had been a fifth giant planet in our cosmic neighbourhood. And Nesvorny was right: he found that, with that simulation, everything fell back into place. That body must have been ejected from the Solar System early in its history. "It is an explanation that seems quite conceivable given the recent discovery of a large number of planets floating freely in interstellar space, orbiting no star, which suggests that such planet ejections may be common," says Nesvorny.

Author: Rosa M. Tristán | Madrid, updated Monday 14/11/2011, 16:46
Taken from: El País, Spain.

Friday, November 11, 2011

The Cosmologies of Penrose and Hawking

Author: Rafael Alemañ, Agrupación Astronómica de Alicante
Topics: Cosmology, Science

Near the end of 2010, two fascinating books devoted to the science of the universe reached the Spanish publishing market, with interesting consequences for our understanding of the cosmos and even of the role human consciousness plays in it. Both texts were written by two of the most celebrated British specialists on the subject: Stephen Hawking, in collaboration with Leonard Mlodinow, signed The Grand Design, while his compatriot Roger Penrose presented his latest reflections on cosmology in Cycles of Time. They are two works approached from very different perspectives, but it is worth considering them together, since the differences can be as illuminating as the similarities between them.

Hawking's “design”
The evolution of his views on the search for a final theory unifying the fundamental forces seems to have led Hawking (to the disappointment of his followers) to a view frankly opposed to the one that has inspired his entire scientific career, calling into question the scientific methodology he himself always defended. That much emerges from the book co-written with Mlodinow, in which he comes out in favour of the most recent extended version of superstrings and of all the philosophical implications that can be drawn from it. At the end of the first chapter one can read an extremely interesting passage (Hawking and Mlodinow, 2010a):

We will describe how M-theory may offer answers to the question of creation. According to M-theory, ours is not the only universe. Instead, M-theory predicts that a great many universes were created out of nothing. Their creation did not require the intervention of some supernatural being or god. Rather, those multiple universes arose naturally from physical law. They are a prediction of science. Each universe has many possible histories and many possible states at later times, that is, at times like the present, long after their creation. Most of these states will be very different from the universe we observe and unsuitable for the existence of any form of life. Only a few would allow creatures like us to exist. Thus, our presence selects from this vast repertoire only those universes that are compatible with our existence. Although we are puny and insignificant on the scale of the cosmos, this makes us, in a certain sense, the lords of creation.


Beyond the confused mixture of physical hypotheses and metaphysical premises hidden in these lines, the passage above reveals two key points that have raised misgivings among a considerable share of Hawking's colleagues. First, it is obvious that Hawking's hopes for a possible unification of the fundamental forces of nature, an enterprise to which he devoted himself with optimism for many years, have been placed in M-theory. This theory is in reality a family of models containing an overwhelming number (between 10^100 and 10^1000) of distinct versions. Even if we had the technical means to test them all, and we do not, because of the exorbitant energies required, it would be practically impossible to decide whether any of them, or none, corresponds to the real cosmos. For these reasons, the defenders of M-theory argue that science should abandon its method, based on the experimental corroboration of theoretical speculations, and simply accept what they (that is, M-theory) say, for reasons as vague and debatable as formal aesthetics, mathematical beauty or explanatory versatility. Fortunately, most of the scientific community does not accept, for the moment, the wholesale demolition of scientific rationality in exchange for propping up a theory that increasingly seems to rest solely on the professional aspirations of those who work on it.


(Continues in the attached PDF.) Taken from Red Científica, Spain.

Thursday, November 10, 2011

A Brief Guide to Embodied Cognition: Why You Are Not Your Brain

By Samuel McNerney | November 4, 2011 | Scientific American


////////////////////////////////////////////////////////////////////////////////

Embodied cognition, the idea that the mind is not only connected to the body but that the body influences the mind, is one of the more counter-intuitive ideas in cognitive science. In sharp contrast is dualism, a theory of mind famously put forth by Rene Descartes in the 17th century when he claimed that “there is a great difference between mind and body, inasmuch as body is by nature always divisible, and the mind is entirely indivisible… the mind or soul of man is entirely different from the body.” In the centuries that followed, the notion of the disembodied mind flourished. From it, western thought developed two basic ideas: reason is disembodied because the mind is disembodied, and reason is transcendent and universal. However, as George Lakoff and Rafael Núñez explain:

Cognitive science calls this entire philosophical worldview into serious question on empirical grounds… [the mind] arises from the nature of our brains, bodies, and bodily experiences. This is not just the innocuous and obvious claim that we need a body to reason; rather, it is the striking claim that the very structure of reason itself comes from the details of our embodiment… Thus, to understand reason we must understand the details of our visual system, our motor system, and the general mechanism of neural binding.

What exactly does this mean? It means that our cognition isn’t confined to our cortices. That is, our cognition is influenced, perhaps determined by, our experiences in the physical world. This is why we say that something is “over our heads” to express the idea that we do not understand; we are drawing upon the physical inability to see something that is over our heads and the mental feeling of uncertainty. Or why we associate warmth with affection; as infants and children the subjective judgment of affection almost always corresponded with the sensation of warmth, thus giving way to metaphors such as “I’m warming up to her.”

Embodied cognition has a relatively short history. Its intellectual roots date back to early 20th century philosophers Martin Heidegger, Maurice Merleau-Ponty and John Dewey and it has only been studied empirically in the last few decades. One of the key figures to empirically study embodiment is University of California at Berkeley professor George Lakoff.

Lakoff was kind enough to field some questions over a recent phone conversation, where I learned about his interesting history first hand. After taking linguistics courses in the 1960s under Chomsky at MIT, where he eventually majored in English and Mathematics, he studied linguistics in grad school at Indiana University. It was a different world back then, he explained: “it was the beginning of computer science and A.I., and the idea that thought could be described with formal logic dominated much of philosophical thinking. Turing machines were popular discussion topics, and the brain was widely understood as a digital computational device.” Essentially, the mind was thought of as a computer program separate from the body, with the brain as general-purpose hardware.

Chomsky’s theory of language as a series of meaningless symbols fit this paradigm. It was a view of language in which grammar was independent of meaning or communication. In contrast, in 1963 Lakoff found examples showing that grammar was dependent on meaning. From this observation he constructed a theory called Generative Semantics, which was also disembodied: logical structures were built into grammar itself.

To be sure, cognitive scientists weren’t dualists like Descartes – they didn’t actually believe that the mind was physically separate from the body – but they didn’t think that the body influenced cognition. And it was during this time – throughout the 60s and 70s – that Lakoff realized the flaws of thinking about the mind as a computer and began studying embodiment.

The tipping point came after attending four talks that hinted at embodied language at Berkeley in the summer of 1975. In his words, they forced him to “give up and rethink linguistics and the brain.” This prompted him and a group of colleagues to start cognitive linguistics, which, contrary to Chomskyan theory and the entire mind-as-a-computer paradigm, held that “semantics arose from the nature of the body.” Then, in 1978, he “discovered that we think metaphorically,” and spent the next year gathering as many metaphors as he could find.

Many cognitive scientists accepted his work on metaphors, though it opposed much of mainstream thought in philosophy and linguistics. He caught a break on January 2, 1979, when he got a call from Mark Johnson, who informed him that he was coming to Berkeley to replace someone in the philosophy department for six months. Johnson had just received his PhD from Chicago, where he studied continental philosophy, and he called Lakoff to see if he was interested in studying metaphors. What came next was one of the more groundbreaking books in cognitive science. After co-writing a paper for The Journal of Philosophy in the spring of 1979, Lakoff and Johnson began working on Metaphors We Live By and finished it three months later.

Their book extensively examined how, when and why we use metaphors. Here are a few examples. We understand control as being UP and being subject to control as being DOWN: we say, “I have control over him,” “I am on top of the situation,” “He’s at the height of his power,” “He ranks above me in strength,” “He is under my control,” and “His power is on the decline.” Similarly, we describe love as a physical force: “I could feel the electricity between us,” “There were sparks,” and “They gravitated to each other immediately.” Some of their examples reflect embodied experience. For example, Happy is Up and Sad is Down, as in “I’m feeling up today” and “I’m feeling down in the dumps.” These metaphors are based on the physiology of emotions, which researchers such as Paul Ekman have documented. It’s no surprise, then, that around the world people who are happy tend to smile and perk up while people who are sad tend to droop.

Metaphors We Live By was a game changer. Not only did it illustrate how prevalent metaphors are in everyday language, it also suggested that a lot of the major tenets of western thought, including the idea that reason is conscious and passionless and that language is separate from the body aside from the organs of speech and hearing, were incorrect. In brief, it demonstrated that “our ordinary conceptual system, in terms of which we both think and act, is fundamentally metaphorical in nature.”


After Metaphors We Live By was published, embodiment slowly gained momentum in academia. In the 1990s, dissertations by Christopher Johnson, Joseph Grady and Srini Narayanan led to a neural theory of primary metaphors. They argued that much of our language comes from physical interactions during the first several years of life, as the Affection is Warmth metaphor illustrates. There are many other examples: we equate up with control and down with being controlled because stronger people and objects tend to control us, and we understand anger metaphorically in terms of heat, pressure and loss of physical control because our physiology changes when we are angry (e.g., skin temperature increases, heart rate rises and physical control becomes more difficult).

This and other work prompted Lakoff and Johnson to publish Philosophy in the Flesh, a six-hundred-page giant that challenges the foundations of western philosophy by discussing whole systems of embodied metaphors in great detail and arguing that philosophical theories themselves are constructed metaphorically. Specifically, they argued that the mind is inherently embodied, thought is mostly unconscious and abstract concepts are largely metaphorical. What’s left is the idea that reason is not based on abstract laws, because cognition is grounded in bodily experience. (A few years later Lakoff teamed up with Rafael Núñez to publish Where Mathematics Comes From, which argues at great length that higher mathematics is also grounded in the body and in embodied metaphorical thought.)

As Lakoff points out, metaphors are more than mere language and literary devices; they are conceptual in nature and represented physically in the brain. As a result, such metaphorical brain circuitry can affect behavior. For example, in a study by Yale psychologist John Bargh, participants holding warm as opposed to cold cups of coffee were more likely to judge a confederate as trustworthy after only a brief interaction. Similarly, at the University of Toronto, “subjects were asked to remember a time when they were either socially accepted or socially snubbed. Those with warm memories of acceptance judged the room to be 5 degrees warmer on the average than those who remembered being coldly snubbed. Another effect of Affection Is Warmth.” This means that we “warm up” to people both physically and figuratively.

The last few years have seen many complementary studies, all of which are grounded in primary experiences:

• Thinking about the future caused participants to lean slightly forward, while thinking about the past caused them to lean slightly backward. (Future is Ahead)

• Squeezing a soft ball influenced subjects to perceive gender-neutral faces as female, while squeezing a hard ball influenced subjects to perceive them as male. (Female is Soft)

• Those who held heavier clipboards judged currencies to be more valuable and their opinions and leaders to be more important. (Important is Heavy)

• Subjects asked to think about a moral transgression like adultery or cheating on a test were more likely to request an antiseptic cloth after the experiment than those who had thought about good deeds. (Morality is Purity)

Studies like these confirm Lakoff’s initial hunch – that our rationality is greatly influenced by our bodies in large part via an extensive system of metaphorical thought. How will the observation that ideas are shaped by the body help us to better understand the brain in the future?

I also spoke with Term Assistant Professor of Psychology Joshua Davis, who teaches at Barnard College and focuses on embodiment. I asked Davis what the future of embodiment studies looks like (he is relatively new to the game, having received his PhD in 2008). He explained to me that although “a lot of the ideas of embodiment have been around for a few decades, they’ve hit a critical mass… whereas sensory inputs and motor outputs were secondary, we now see them as integral to cognitive processes.” This is not to deny computational theories or even behaviorism. As Davis said, “behaviorism and computational theories will still be valuable,” but “I see embodiment as a new paradigm that we are shifting towards.”

What exactly will this paradigm look like? It’s unclear. But I was excited to hear from Lakoff that he is trying to “bring together neuroscience with the neural theory of language and thought” through a new brain, language and thought center at Berkeley. Hopefully his work there, along with the work of young professors like Davis, will allow us to understand the brain as part of a much greater dynamic system that isn’t confined to our cortices.

The author would like to personally thank Professors Lakoff and Davis for their time, thoughts, and insights. It was a real pleasure.
About the Author: Sam McNerney recently graduated from the greatest school on Earth, Hamilton College, where he earned a bachelor’s in philosophy. However, after reading too much Descartes and Nietzsche, he realized that his true passion is reading and writing about the psychology of decision making and the neuroscience of language. Now he is trying to find a career as a science journalist who writes about philosophy, psychology, and neuroscience. His blog, whywereason.com, tries to figure out how humans understand the world. He spends his free time listening to Lady Gaga, dreaming about writing bestsellers, and tweeting @whywereason.


The views expressed are those of the author and are not necessarily those of Scientific American.