November 14, 2011: In the 1970s, biologists were stunned to discover a form of life they never expected. Tiny microorganisms with ancient DNA were living in the boiling springs of Yellowstone National Park. Instead of dissolving in the scalding waters, the microbes thrived, painting the springs with vivid color.
Scientists coined the term "extremophile," meaning "lover of extreme conditions," to describe these creatures, and the hunt for more was on. Before long, extremophiles were found living deep inside Antarctic ice, in the cores of nuclear reactors, and in other unexpected places. Biology hasn't been the same since.
Could astronomy be on the verge of a similar transformation?
Using a NASA telescope called GALEX, researchers have discovered a new kind of extremophile: stars that love extreme conditions.
"We have been finding stars that live in extreme galactic environments, where star formation isn't supposed to happen," explains Susan Neff, GALEX project scientist at the Goddard Space Flight Center. "This is a totally surprising situation."
[Figure] This composite (radio + UV) image shows long, octopus-like arms of star formation reaching far beyond the main disk of the spiral galaxy M83.
GALEX, short for Galaxy Evolution Explorer, is a space telescope that observes in the ultraviolet part of the spectrum, and it has a special talent: it is super-sensitive to the kind of UV rays emitted by the youngest stars. This means the observatory can detect stars being born at very great distances from Earth, more than halfway across the universe. GALEX was launched in 2003 on a mission to study how galaxies change and evolve as new stars coalesce inside them.
GALEX has accomplished that mission, and more.
"In some GALEX images, we see stars forming outside of galaxies, in places where we thought the gas density would be too low for star birth to occur," says Don Neil of Caltech, a member of the GALEX team.
Stars are born when clouds of interstellar gas collapse and contract under the pull of their own gravity. If a cloud gets dense and hot enough as it collapses, nuclear fusion can ignite and, voilà, a star is born.
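The density threshold the team refers to can be made concrete with the classical Jeans criterion: a cloud collapses only if its mass exceeds the Jeans mass, which grows as density falls. The sketch below is purely illustrative (it is not part of the original article, and the temperatures and densities are rough textbook values), but it shows why astronomers expected thin gas to be sterile:

```python
import math

# Physical constants (SI units)
k_B = 1.380649e-23   # Boltzmann constant, J/K
G = 6.674e-11        # gravitational constant, m^3 kg^-1 s^-2
m_H = 1.6735e-27     # mass of a hydrogen atom, kg
M_sun = 1.989e30     # solar mass, kg

def jeans_mass(T, n, mu=2.33):
    """Jeans mass (kg) of a cloud at temperature T (K) with particle
    number density n (m^-3); mu ~ 2.33 for molecular gas."""
    rho = mu * m_H * n  # mass density, kg/m^3
    return ((5 * k_B * T) / (G * mu * m_H)) ** 1.5 * (3 / (4 * math.pi * rho)) ** 0.5

# A typical cold molecular-cloud core in a spiral arm: T ~ 10 K, n ~ 1e10 m^-3
dense = jeans_mass(10, 1e10) / M_sun
# Tenuous gas far outside a galactic disk: same temperature, 1000x lower density
diffuse = jeans_mass(10, 1e7) / M_sun
print(f"dense core:  collapse needs ~{dense:.1f} solar masses")
print(f"diffuse gas: collapse needs ~{diffuse:.1f} solar masses")
```

A thousand-fold drop in density raises the collapse threshold by a factor of sqrt(1000), roughly 32, so a diffuse cloud must gather far more material before gravity can win. That is the intuition behind the surprise in the GALEX images.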
The spiral arms of the Milky Way are the "Goldilocks zone" for this process. "Here in the Milky Way we have plenty of gas. It's a comfortable place for stars to form," says Neil.
But when GALEX looks out at more distant spiral galaxies, it sees stars forming far outside the gassy spiral disk.
"I was dumbfounded," he says. "These stars are truly living on the edge."
Spiral galaxies are not the only places with stellar extremophiles. The observatory has also found stars being born in:
— elliptical and irregular galaxies, long thought to be gas-poor (examples 1 and 2),
— the gaseous debris of colliding galaxies (1 and 2),
— vast comet-like tails that some fast-moving galaxies leave behind (1, 2),
— cold clouds of primordial gas, small and barely massive enough to hold themselves together.
So much for the "Goldilocks zone." According to the GALEX observations, stellar extremophiles populate just about every nook and cranny of the cosmos where a wisp of gas can gather to make a new sun.
"This could be telling us something profound about the star-forming process," says Neff. "There could be ways to make stars in extreme environments that we haven't even thought of yet."
Will extremophiles transform astronomy the way they transformed biology? It's too early to say, the researchers insist. But GALEX has certainly given them something to think about.
Source: NASA
Credits and Contacts
Author: Dr. Tony Phillips
Responsible NASA Official: Ruth Netting
Production Editor: Dr. Tony Phillips
Spanish Translation: Carlos Román Zúñiga
Spanish Editor: Angela Atadía de Borghetti
Formatting: Carlos Román Zúñiga
More information (in English)
GALEX Portal
Thursday, November 17, 2011
Tuesday, November 15, 2011
Giant planet ejected from the Solar System
ASTRONOMY | Planet formation
A giant planet ejected from the early Solar System. Its ejection is thought to have spared Earth from destruction when the system was about 600 million years old. It formed at the origin of the Solar System, which today retains four giant planets.
A team of astronomers has just published a study that adds a fifth giant planet to the early Solar System. This body would explain one of the mysteries of our system: how the planets' orbits came to be as they are.
As far as is known, when the Solar System formed some 4.5 billion years ago, the orbits of the giant planets went through a period of great instability, to the point that they should have ended up colliding with the young Earth. The authors conclude that if this did not happen, it is because this mysterious body existed.
The research, published in The Astrophysical Journal, is based on computer simulations. According to David Nesvorny of the Southwest Research Institute, the data come from the study of the many small objects beyond Neptune, in the so-called Kuiper Belt, and from the cratering record of the Moon.
From that analysis it had already been concluded that when the Solar System was only about 600 million years old, the orbits of the giant planets, of which there are now four (Jupiter, Saturn, Neptune and Uranus), were highly unstable. As a result, countless small bodies were scattered (some of them now make up the Kuiper Belt), while others moved in toward the Sun, striking the Earth and the Moon.
The same happened to the giants. Jupiter, for example, would have drifted slowly inward. The problem is that this migration would have disturbed the orbits of the rocky planets, and Earth would have collided with its neighbors, Mars or Venus.
In earlier work, astronomers proposed an alternative that avoids this outcome: Jupiter's orbit changed quickly during the period of instability, as it scattered off Uranus or Neptune. This 'jump' by Jupiter would have been far less damaging to the rest of the planets, but what caused it?
Nesvorny ran millions of computer simulations in search of the answer. If Jupiter had indeed jumped by scattering its two giant neighbors, one of the two should have been ejected from the Solar System, which did not happen either. "Something was clearly wrong," the researcher says.
The only alternative he could think of was that there had been a fifth giant planet in our cosmic neighborhood. And Nesvorny was right: with that simulation, everything fell into place. That body must have been ejected from the Solar System early in its history. "It is an explanation that seems quite plausible given the recent discovery of a large number of planets floating freely in interstellar space, orbiting no star at all, which indicates that such planet ejections could be common," says Nesvorny.
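The simulations behind studies like this are N-body integrations of the giant planets' mutual gravity. The toy sketch below is not Nesvorny's code (his runs tracked five or more giant planets plus a planetesimal disk over hundreds of millions of years); it only illustrates the basic ingredient, a symplectic (leapfrog) integrator, by evolving the Sun and a Jupiter-mass planet through one orbit:

```python
import math

G = 6.674e-11      # gravitational constant, m^3 kg^-1 s^-2
M_SUN = 1.989e30   # kg
AU = 1.496e11      # m

def accelerations(positions, masses):
    """Pairwise Newtonian gravitational accelerations in 2D."""
    n = len(positions)
    acc = [[0.0, 0.0] for _ in range(n)]
    for i in range(n):
        for j in range(n):
            if i == j:
                continue
            dx = positions[j][0] - positions[i][0]
            dy = positions[j][1] - positions[i][1]
            r3 = (dx * dx + dy * dy) ** 1.5
            acc[i][0] += G * masses[j] * dx / r3
            acc[i][1] += G * masses[j] * dy / r3
    return acc

def leapfrog(positions, velocities, masses, dt, steps):
    """Kick-drift-kick leapfrog: symplectic, so orbital energy stays
    bounded over very long integrations instead of drifting."""
    acc = accelerations(positions, masses)
    for _ in range(steps):
        for i in range(len(positions)):
            velocities[i][0] += 0.5 * dt * acc[i][0]   # half kick
            velocities[i][1] += 0.5 * dt * acc[i][1]
            positions[i][0] += dt * velocities[i][0]   # full drift
            positions[i][1] += dt * velocities[i][1]
        acc = accelerations(positions, masses)
        for i in range(len(positions)):
            velocities[i][0] += 0.5 * dt * acc[i][0]   # half kick
            velocities[i][1] += 0.5 * dt * acc[i][1]

# Sun plus a Jupiter-mass planet on a near-circular orbit at 5.2 AU
m_jup = 1.898e27
r0 = 5.2 * AU
v0 = math.sqrt(G * M_SUN / r0)          # circular orbital speed, ~13 km/s
pos = [[0.0, 0.0], [r0, 0.0]]
vel = [[0.0, -m_jup * v0 / M_SUN], [0.0, v0]]   # zero total momentum

period = 2 * math.pi * r0 / v0          # Kepler period, ~11.9 years
leapfrog(pos, vel, [M_SUN, m_jup], period / 2000, 2000)

# After one full period the planet should be back near its starting point
err = math.hypot(pos[1][0] - r0, pos[1][1]) / r0
print(f"relative position error after one orbit: {err:.2e}")
```

Real dynamical studies add more bodies, close-encounter handling and far longer timescales, but this kick-drift-kick structure is the standard core: it is chosen because naive Euler integration would make planets spiral inward or outward spuriously over millions of orbits.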
Author: Rosa M. Tristán | Madrid. Updated Monday, 11/14/2011, 16:46
Source: El País, Spain.
Friday, November 11, 2011
The Cosmologies of Penrose and Hawking
Author: Rafael Alemañ, Agrupación Astronómica de Alicante
Topics: Cosmology, Science
Near the end of 2010, two fascinating books on the science of the universe reached the Spanish publishing market, with interesting consequences for our understanding of the cosmos and even of the role human consciousness plays in it. Both were written by two of the most celebrated British specialists in the field: Stephen Hawking, in collaboration with Leonard Mlodinow, signed El Gran Diseño (The Grand Design), while his compatriot Roger Penrose presented his latest reflections on cosmology in Los Ciclos del Tiempo (Cycles of Time). The two works are framed from very different perspectives, yet it is worth considering them together, since the differences between them can be as illuminating as the similarities.
Hawking's "design"
The evolution of his views on the search for a final theory unifying the fundamental forces seems to have led Hawking, to the disappointment of his followers, to a position frankly opposed to the one that inspired his entire scientific career, questioning the scientific methodology he himself always defended. This emerges from the book co-written with Mlodinow, in which he declares himself in favor of the most recent extended version of superstring theory and of all the philosophical repercussions that can be drawn from it. At the end of the first chapter one finds a highly interesting paragraph (Hawking and Mlodinow, 2010a):
We will describe how M-theory may offer answers to the question of creation. According to M-theory, ours is not the only universe. Instead, M-theory predicts that a great many universes were created out of nothing. Their creation did not require the intervention of some supernatural being or god. Rather, these multiple universes arose naturally from physical law. They are a prediction of science. Each universe has many possible histories and many possible states at later times, that is, at times like the present, long after their creation. Most of these states will be quite unlike the universe we observe and quite unsuitable for the existence of any form of life. Only a very few would allow creatures like us to exist. Thus our presence selects out from this vast array only those universes compatible with our existence. Although we are puny and insignificant on the scale of the cosmos, this makes us in a sense the lords of creation.
Beyond the confusing blend of physical hypotheses and metaphysical premises hidden in these lines, the passage reveals two key points that have raised misgivings among a considerable share of Hawking's colleagues. First, it is clear that Hawking's hopes for a possible unification of the fundamental forces of nature, an enterprise to which he devoted himself optimistically for many years, now rest on M-theory. This theory is really a family of models containing an overwhelming number (between 10^100 and 10^1000) of distinct versions. Even if we had the technical means to test them all, and we do not, given the exorbitant energies required, it would be practically impossible to decide whether any of them, or none, corresponds to the real cosmos. For these reasons, the advocates of M-theory argue that science should abandon its method, based on the experimental corroboration of theoretical speculation, and simply accept what they say on behalf of M-theory, for reasons as diffuse and debatable as formal elegance, mathematical beauty or explanatory versatility. Fortunately, most of the scientific community does not accept, for now, the wholesale demolition of scientific rationality in exchange for propping up a theory that increasingly seems to rest solely on the professional ambitions of those who work on it.
(Continues in the attached PDF.) Source: Red Científica, Spain.
Tags:
Astronomy,
Cosmology and Cosmogony,
Physics and Ignorance
Thursday, November 10, 2011
A Brief Guide to Embodied Cognition: Why You Are Not Your Brain
By Samuel McNerney | November 4, 2011 | Scientific American
Embodied cognition, the idea that the mind is not only connected to the body but that the body influences the mind, is one of the more counter-intuitive ideas in cognitive science. In sharp contrast is dualism, a theory of mind famously put forth by René Descartes in the 17th century when he claimed that “there is a great difference between mind and body, inasmuch as body is by nature always divisible, and the mind is entirely indivisible… the mind or soul of man is entirely different from the body.” In the following centuries, the notion of the disembodied mind flourished. From it, western thought developed two basic ideas: reason is disembodied because the mind is disembodied, and reason is transcendent and universal. However, as George Lakoff and Rafael Núñez explain:
Cognitive science calls this entire philosophical worldview into serious question on empirical grounds… [the mind] arises from the nature of our brains, bodies, and bodily experiences. This is not just the innocuous and obvious claim that we need a body to reason; rather, it is the striking claim that the very structure of reason itself comes from the details of our embodiment… Thus, to understand reason we must understand the details of our visual system, our motor system, and the general mechanism of neural binding.
What exactly does this mean? It means that our cognition isn’t confined to our cortices. That is, our cognition is influenced, perhaps determined by, our experiences in the physical world. This is why we say that something is “over our heads” to express the idea that we do not understand; we are drawing on the physical inability to see something that is over our heads and linking it to the mental feeling of uncertainty. Or why we associate warmth with affection: as infants and children, the subjective judgment of affection almost always corresponded with the sensation of warmth, giving rise to metaphors such as “I’m warming up to her.”
Embodied cognition has a relatively short history. Its intellectual roots date back to early 20th century philosophers Martin Heidegger, Maurice Merleau-Ponty and John Dewey and it has only been studied empirically in the last few decades. One of the key figures to empirically study embodiment is University of California at Berkeley professor George Lakoff.
Lakoff was kind enough to field some questions in a recent phone conversation, where I learned about his interesting history first hand. After taking linguistics courses under Chomsky at MIT in the 1960s, where he eventually majored in English and Mathematics, he studied linguistics in grad school at Indiana University. It was a different world back then, he explained: “It was the beginning of computer science and A.I., and the idea that thought could be described with formal logic dominated much of philosophical thinking. Turing machines were popular discussion topics, and the brain was widely understood as a digital computational device.” Essentially, the mind was thought of as a computer program separate from the body, with the brain as general-purpose hardware.
Chomsky’s theory of language as a series of meaningless symbols fit this paradigm. It was a view of language in which grammar was independent of meaning or communication. In contrast, Lakoff found examples in 1963 showing that grammar was dependent on meaning. From this observation he constructed a theory called Generative Semantics, which was also disembodied, in which logical structures were built into grammar itself.
To be sure, cognitive scientists weren’t dualists like Descartes: they didn’t actually believe that the mind was physically separate from the body, but they didn’t think that the body influenced cognition either. And it was during this time, throughout the 60s and 70s, that Lakoff realized the flaws of thinking about the mind as a computer and began studying embodiment.
The tipping point came after attending four talks that hinted at embodied language at Berkeley in the summer of 1975. In his words, they forced him to “give up and rethink linguistics and the brain.” This prompted him and a group of colleagues to start cognitive linguistics, which contrary to Chomskyan theory and the entire mind as a computer paradigm, held that “semantics arose from the nature of the body.” Then, in 1978, he “discovered that we think metaphorically,” and spent the next year gathering as many metaphors as he could find.
Many cognitive scientists accepted his work on metaphors, though it opposed much of mainstream thought in philosophy and linguistics. He caught a break on January 2nd, 1979, when he got a call from Mark Johnson, who informed him that he was coming to Berkeley to replace someone in the philosophy department for six months. Johnson had just gotten his PhD from Chicago, where he studied continental philosophy, and called Lakoff to see if he was interested in studying metaphors. What came next was one of the more groundbreaking books in cognitive science. After co-writing a paper for The Journal of Philosophy in the spring of 1979, Lakoff and Johnson began working on Metaphors We Live By, and managed to finish it three months later.
Their book extensively examined how, when and why we use metaphors. Here are a few examples. We understand control as being UP and being subject to control as being DOWN: we say, “I have control over him,” “I am on top of the situation,” “He’s at the height of his power,” “He ranks above me in strength,” “He is under my control,” and “His power is on the decline.” Similarly, we describe love as a physical force: “I could feel the electricity between us,” “There were sparks,” and “They gravitated to each other immediately.” Some of their examples reflected embodied experience. For example, Happy is Up and Sad is Down, as in “I’m feeling up today,” and “I’m feeling down in the dumps.” These metaphors are based on the physiology of emotions, which researchers such as Paul Ekman have documented. It’s no surprise, then, that around the world, people who are happy tend to smile and perk up while people who are sad tend to droop.
Metaphors We Live By was a game changer. Not only did it illustrate how prevalent metaphors are in everyday language, it also suggested that a lot of the major tenets of western thought, including the idea that reason is conscious and passionless and that language is separate from the body aside from the organs of speech and hearing, were incorrect. In brief, it demonstrated that “our ordinary conceptual system, in terms of which we both think and act, is fundamentally metaphorical in nature.”
After Metaphors We Live By was published, embodiment slowly gained momentum in academia. In the 1990s, dissertations by Christopher Johnson, Joseph Grady and Srini Narayanan led to a neural theory of primary metaphors. They argued that much of our language comes from physical interactions during the first several years of life, as the Affection is Warmth metaphor illustrates. There are many other examples: we equate up with control and down with being controlled because stronger people and objects tend to control us, and we understand anger metaphorically in terms of heat, pressure and loss of physical control, because when we are angry our physiology changes: skin temperature increases, heart rate rises and physical control becomes more difficult.
This and other work prompted Lakoff and Johnson to publish Philosophy in the Flesh, a six hundred-page giant that challenges the foundations of western philosophy by discussing whole systems of embodied metaphors in great detail and furthermore arguing that philosophical theories themselves are constructed metaphorically. Specifically, they argued that the mind is inherently embodied, thought is mostly unconscious and abstract concepts are largely metaphorical. What’s left is the idea that reason is not based on abstract laws because cognition is grounded in bodily experience (A few years later Lakoff teamed with Rafael Núñez to publish Where Mathematics Comes From to argue at great length that higher mathematics is also grounded in the body and embodied metaphorical thought).
As Lakoff points out, metaphors are more than mere language and literary devices; they are conceptual in nature and represented physically in the brain. As a result, such metaphorical brain circuitry can affect behavior. For example, in a study by Yale psychologist John Bargh, participants holding warm as opposed to cold cups of coffee were more likely to judge a confederate as trustworthy after only a brief interaction. Similarly, at the University of Toronto, “subjects were asked to remember a time when they were either socially accepted or socially snubbed. Those with warm memories of acceptance judged the room to be 5 degrees warmer on the average than those who remembered being coldly snubbed. Another effect of Affection Is Warmth.” This means that we both physically and literally “warm up” to people.
The last few years have seen many complementary studies, all of which are grounded in primary experiences:
• Thinking about the future caused participants to lean slightly forward while thinking about the past caused participants to lean slightly backwards. Future is Ahead
• Squeezing a soft ball influenced subjects to perceive gender neutral faces as female while squeezing a hard ball influenced subjects to perceive gender neutral faces as male. Female is Soft
• Those who held heavier clipboards judged currencies to be more valuable and their opinions and leaders to be more important. Important is Heavy.
• Subjects asked to think about a moral transgression like adultery or cheating on a test were more likely to request an antiseptic cloth after the experiment than those who had thought about good deeds. Morality is Purity
Studies like these confirm Lakoff’s initial hunch – that our rationality is greatly influenced by our bodies in large part via an extensive system of metaphorical thought. How will the observation that ideas are shaped by the body help us to better understand the brain in the future?
I also spoke with Term Assistant Professor of Psychology Joshua Davis, who teaches at Barnard College and focuses on embodiment. I asked Davis what the future of embodiment studies looks like (he is relatively new to the game, having received his PhD in 2008). He explained to me that although “a lot of the ideas of embodiment have been around for a few decades, they’ve hit a critical mass… whereas sensory inputs and motor outputs were secondary, we now see them as integral to cognitive processes.” This is not to deny computational theories, or even behaviorism, as Davis said, “behaviorism and computational theories will still be valuable,” but, “I see embodiment as a new paradigm that we are shifting towards.”
What exactly will this paradigm look like? It’s unclear. But I was excited to hear from Lakoff that he is trying to “bring together neuroscience with the neural theory of language and thought,” through a new brain language and thought center at Berkeley. Hopefully his work there, along with the work of young professors like Davis, will allow us to understand the brain as part of a much greater dynamic system that isn’t confined to our cortices.
The author would like to personally thank Professors Lakoff and Davis for their time, thoughts, and insights. It was a real pleasure.
About the Author: Sam McNerney recently graduated from the greatest school on Earth, Hamilton College, where he earned a bachelor’s degree in Philosophy. However, after reading too much Descartes and Nietzsche, he realized that his true passion is reading and writing about the psychology of decision making and the neuroscience of language. Now he is trying to find a career as a science journalist who writes about philosophy, psychology, and neuroscience. His blog, whywereason.com, tries to figure out how humans understand the world. He spends his free time listening to Lady Gaga, dreaming about writing bestsellers, and tweeting @whywereason. Follow on Twitter @whywereason.
More »
The views expressed are those of the author and are not necessarily those of Scientific American.
By Samuel McNerney | November 4, 2011 | Scientific American
Embodied cognition, the idea that the mind is not only connected to the body but that the body influences the mind, is one of the more counter-intuitive ideas in cognitive science. In sharp contrast is dualism, a theory of mind famously put forth by Rene Descartes in the 17th century when he claimed that “there is a great difference between mind and body, inasmuch as body is by nature always divisible, and the mind is entirely indivisible… the mind or soul of man is entirely different from the body.” In the succeeding centuries, the notion of the disembodied mind flourished. From it, western thought developed two basic ideas: reason is disembodied because the mind is disembodied, and reason is transcendent and universal. However, as George Lakoff and Rafael Núñez explain:
Cognitive science calls this entire philosophical worldview into serious question on empirical grounds… [the mind] arises from the nature of our brains, bodies, and bodily experiences. This is not just the innocuous and obvious claim that we need a body to reason; rather, it is the striking claim that the very structure of reason itself comes from the details of our embodiment… Thus, to understand reason we must understand the details of our visual system, our motor system, and the general mechanism of neural binding.
What exactly does this mean? It means that our cognition isn’t confined to our cortices. That is, our cognition is influenced, perhaps even determined, by our experiences in the physical world. This is why we say that something is “over our heads” to express the idea that we do not understand: we are drawing upon the physical inability to see something that is over our heads and the mental feeling of uncertainty. It is also why we associate warmth with affection: as infants and children, the subjective judgment of affection almost always corresponded with the sensation of warmth, giving way to metaphors such as “I’m warming up to her.”
Embodied cognition has a relatively short history. Its intellectual roots date back to early 20th century philosophers Martin Heidegger, Maurice Merleau-Ponty and John Dewey and it has only been studied empirically in the last few decades. One of the key figures to empirically study embodiment is University of California at Berkeley professor George Lakoff.
Lakoff was kind enough to field some questions over a recent phone conversation, where I learned about his interesting history first hand. After taking linguistics courses in the 1960s under Chomsky at MIT, where he eventually majored in English and Mathematics, he studied linguistics in grad school at Indiana University. It was a different world back then, he explained: “it was the beginning of computer science and A.I., and the idea that thought could be described with formal logic dominated much of philosophical thinking. Turing machines were popular discussion topics, and the brain was widely understood as a digital computational device.” Essentially, the mind was thought of as a computer program separate from the body, with the brain as general-purpose hardware.
Chomsky’s theory of language as a series of meaningless symbols fit this paradigm. It was a view of language in which grammar was independent of meaning or communication. In contrast, Lakoff found examples in 1963 showing that grammar depended on meaning. From this observation he constructed a theory called Generative Semantics, which was also disembodied, in which logical structures were built into grammar itself.
To be sure, cognitive scientists weren’t dualists like Descartes – they didn’t actually believe that the mind was physically separate from the body – but they didn’t think that the body influenced cognition. It was during this time – throughout the ’60s and ’70s – that Lakoff realized the flaws of thinking about the mind as a computer and began studying embodiment.
The tipping point came after attending four talks that hinted at embodied language at Berkeley in the summer of 1975. In his words, they forced him to “give up and rethink linguistics and the brain.” This prompted him and a group of colleagues to start cognitive linguistics, which contrary to Chomskyan theory and the entire mind as a computer paradigm, held that “semantics arose from the nature of the body.” Then, in 1978, he “discovered that we think metaphorically,” and spent the next year gathering as many metaphors as he could find.
Many cognitive scientists accepted his work on metaphors, though it opposed much of mainstream thought in philosophy and linguistics. He caught a break on January 2nd, 1979, when he got a call from Mark Johnson, who informed him that he was coming to Berkeley to replace someone in the philosophy department for six months. Johnson had just gotten his PhD from Chicago, where he studied continental philosophy, and called Lakoff to see if he was interested in studying metaphors. What came next was one of the more groundbreaking books in cognitive science. After co-writing a paper for The Journal of Philosophy in the spring of 1979, Lakoff and Johnson began working on Metaphors We Live By, and managed to finish it three months later.
Their book extensively examined how, when, and why we use metaphors. Here are a few examples. We understand control as being UP and being subject to control as being DOWN: we say, “I have control over him,” “I am on top of the situation,” “He’s at the height of his power,” “He ranks above me in strength,” “He is under my control,” and “His power is on the decline.” Similarly, we describe love as being a physical force: “I could feel the electricity between us,” “There were sparks,” and “They gravitated to each other immediately.” Some of their examples reflected embodied experience. For example, Happy Is Up and Sad Is Down, as in “I’m feeling up today,” and “I’m feeling down in the dumps.” These metaphors are based on the physiology of emotions, which researchers such as Paul Ekman have documented. It’s no surprise, then, that around the world, people who are happy tend to smile and perk up while people who are sad tend to droop.
Metaphors We Live By was a game changer. Not only did it illustrate how prevalent metaphors are in everyday language, it also suggested that a lot of the major tenets of western thought, including the idea that reason is conscious and passionless and that language is separate from the body aside from the organs of speech and hearing, were incorrect. In brief, it demonstrated that “our ordinary conceptual system, in terms of which we both think and act, is fundamentally metaphorical in nature.”
After Metaphors We Live By was published, embodiment slowly gained momentum in academia. In the 1990s, dissertations by Christopher Johnson, Joseph Grady and Srini Narayanan led to a neural theory of primary metaphors. They argued that much of our language comes from physical interactions during the first several years of life, as the Affection Is Warmth metaphor illustrated. There are many other examples: we equate up with control and down with being controlled because stronger people and objects tend to control us, and we understand anger metaphorically in terms of heat, pressure, and loss of physical control because when we are angry our physiology changes – skin temperature increases, heart rate rises, and physical control becomes more difficult.
This and other work prompted Lakoff and Johnson to publish Philosophy in the Flesh, a six-hundred-page giant that challenges the foundations of western philosophy by discussing whole systems of embodied metaphors in great detail and, furthermore, arguing that philosophical theories themselves are constructed metaphorically. Specifically, they argued that the mind is inherently embodied, thought is mostly unconscious, and abstract concepts are largely metaphorical. What’s left is the idea that reason is not based on abstract laws, because cognition is grounded in bodily experience. (A few years later Lakoff teamed with Rafael Núñez to publish Where Mathematics Comes From, arguing at great length that higher mathematics is also grounded in the body and embodied metaphorical thought.)
As Lakoff points out, metaphors are more than mere language and literary devices; they are conceptual in nature and represented physically in the brain. As a result, such metaphorical brain circuitry can affect behavior. For example, in a study done by Yale psychologist John Bargh, participants holding warm as opposed to cold cups of coffee were more likely to judge a confederate as trustworthy after only a brief interaction. Similarly, at the University of Toronto, “subjects were asked to remember a time when they were either socially accepted or socially snubbed. Those with warm memories of acceptance judged the room to be 5 degrees warmer on the average than those who remembered being coldly snubbed. Another effect of Affection Is Warmth.” This means that we both literally and figuratively “warm up” to people.
The last few years have seen many complementary studies, all of which are grounded in primary experiences:
• Thinking about the future caused participants to lean slightly forward, while thinking about the past caused participants to lean slightly backward. Future Is Ahead.
• Squeezing a soft ball influenced subjects to perceive gender-neutral faces as female, while squeezing a hard ball influenced subjects to perceive gender-neutral faces as male. Female Is Soft.
• Those who held heavier clipboards judged currencies to be more valuable and their opinions and leaders to be more important. Important Is Heavy.
• Subjects asked to think about a moral transgression like adultery or cheating on a test were more likely to request an antiseptic cloth after the experiment than those who had thought about good deeds. Morality Is Purity.
Studies like these confirm Lakoff’s initial hunch – that our rationality is greatly influenced by our bodies in large part via an extensive system of metaphorical thought. How will the observation that ideas are shaped by the body help us to better understand the brain in the future?
I also spoke with Term Assistant Professor of Psychology Joshua Davis, who teaches at Barnard College and focuses on embodiment. I asked Davis what the future of embodiment studies looks like (he is relatively new to the game, having received his PhD in 2008). He explained to me that although “a lot of the ideas of embodiment have been around for a few decades, they’ve hit a critical mass… whereas sensory inputs and motor outputs were secondary, we now see them as integral to cognitive processes.” This is not to deny computational theories, or even behaviorism, as Davis said, “behaviorism and computational theories will still be valuable,” but, “I see embodiment as a new paradigm that we are shifting towards.”
What exactly will this paradigm look like? It’s unclear. But I was excited to hear from Lakoff that he is trying to “bring together neuroscience with the neural theory of language and thought,” through a new brain language and thought center at Berkeley. Hopefully his work there, along with the work of young professors like Davis, will allow us to understand the brain as part of a much greater dynamic system that isn’t confined to our cortices.
The author would like to personally thank Professors Lakoff and Davis for their time, thoughts, and insights. It was a real pleasure.
About the Author: Sam McNerney recently graduated from the greatest school on Earth, Hamilton College, where he earned a bachelor’s in Philosophy. However, after reading too much Descartes and Nietzsche, he realized that his true passion is reading and writing about the psychology of decision making and the neuroscience of language. Now he is trying to find a career as a science journalist who writes about philosophy, psychology, and neuroscience. His blog, whywereason.com, tries to figure out how humans understand the world. He spends his free time listening to Lady Gaga, dreaming about writing bestsellers, and tweeting @whywereason.
The views expressed are those of the author and are not necessarily those of Scientific American.