Saturday, 6 August 2011

Origin of the chemical elements

Basic Space
Space and astrophysics research made simple

On the origin of chemical elements



We take it for granted that there exists a periodic table with numerous elements (at last count, 118) from which we can construct the world around us. But when the universe began with a big bang, it started out with no elements at all. Many of the elements that make up Earth and the people on it had to be created in the nuclear furnaces inside stars and were only released once the star reached the end of its life. In fact, only light elements, like hydrogen and helium, were created at the start of the universe. We can use our knowledge of how particles react to work out how these elements formed just a few minutes after the big bang.
Alpher, Bethe, Gamow…
“It seemed unfair to the Greek alphabet to have the article signed by Alpher and Gamow only, and so the name of Dr. Hans A. Bethe (in absentia) was inserted in preparing the manuscript for print”
George Gamow, The Creation of the Universe (1952)
When Ralph Alpher defended his PhD thesis in 1948, over 300 people came to watch. Thesis defences are not usually a source of so much excitement, at least not beyond the defender’s immediate family, but this one was different.
Before finishing his PhD, Alpher, along with his supervisor George Gamow, had written and published a paper arguing that the Big Bang would have created hydrogen, helium and other elements in certain abundances. Gamow, ever the humorist, felt it was inappropriate to publish a paper with author names so similar to “alpha” and “gamma” without including a “beta” — luckily, Gamow’s friend Hans Bethe was happy to oblige, and had his name added to the paper. Bethe did look over the manuscript and later worked on theories that made up for the shortcomings of the initial paper.
The paper was published in Physical Review on April 1st 1948. Titled “The Origin of Chemical Elements”, it described a process by which all of the known elements in the universe could have come into existence shortly after the big bang. It built on previous work by Gamow that suggested the elements originated “as a consequence of a continuous building-up process arrested by a rapid expansion and cooling of the primordial matter” — in other words, different atoms were made by adding one nucleon at a time to the nucleus, before the process was stopped when the universe became too cool.
Alpher and Gamow (with a little help from Bethe) set out a vision of the early universe in which all matter was a highly compressed “soup” of neutrons, some of which were able to escape and decay into protons and electrons as the universe expanded and became less dense. They believed that these new protons could then capture neutrons, together making deuterium nuclei — an isotope of hydrogen that has one proton and one neutron. They then extrapolated this idea and said that all that had to be done to create heavier nuclei was the capture of another nucleon.
But it’s a little more complicated than that. Their idea works for elements up to helium — and does produce hydrogen and helium, which together make up 99% of the matter in the universe, in the correct proportions to explain their abundances — but it fails when you try to put five nucleons together. There is no stable isotope of any element that has five nucleons. Alpher’s and Gamow’s theory relied on using each element as a stepping stone to the next, so it was stopped in its tracks by this piece of information.
Nevertheless, it was an important step in the right direction, and did describe most of the universe by virtue of the fact that hydrogen and helium make up such a large portion of it. The theory was recognised as significant at the time, too. Among the 300 people in the room at Alpher’s thesis defence, it seems, were reporters from the Washington Post. After his presentation, they ran an article with the headline “World Began in 5 Minutes, New Theory”.
Big Bang Nucleosynthesis
Since Alpher, Bethe and Gamow published their paper, cosmologists have done a lot more work on the formation of the light elements in the early universe. The process now has a name: big bang nucleosynthesis.
Timeline of the expansion of the universe. The light elements were created on the far left of this diagram at the beginning of the universe, and became neutral atoms at around 380,000 years after the big bang. Credit: NASA/WMAP Science Team
In the first few seconds after the big bang, the universe was very hot and dense, making it fully ionised — all of the protons, neutrons and electrons moved about freely and did not come together to make atoms. Only three minutes later, when the universe had cooled from 10^32 to 10^9 °C, could light element formation begin.
At this point, electrons were still roaming free and only atomic nuclei could form. Protons were technically the first nuclei (when combined with an electron they make a hydrogen atom) and deuterons were the second. Deuterons are the nuclei of deuterium and are made when protons and neutrons fuse and emit photons.
A deuteron and a neutron can fuse to create a tritium nucleus, with one proton and two neutrons. When a tritium nucleus comes across a proton, the two can combine into a helium nucleus with two protons and two neutrons, known as He-4. Another path that leads to helium is the combination of a deuteron and a proton into a helium nucleus with two protons but only one neutron, He-3. When He-3 comes across a neutron, they can fuse to form a full helium nucleus, He-4. Each step in these reactions also emits a photon.
Photon emission can be a slow process, and there is a set of reactions that takes deuterons and creates helium nuclei faster because it bypasses the emission of photons. It starts by fusing two deuterons, and the end result is a He-4 nucleus and either a proton or a neutron, depending on the specific path.
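For reference, the reactions just described can be written out explicitly. This is a standard textbook summary of the big bang nucleosynthesis network; the article itself does not list the equations:

\[
\begin{aligned}
p + n &\rightarrow d + \gamma, \\
d + n &\rightarrow t + \gamma, &\quad t + p &\rightarrow {}^{4}\mathrm{He} + \gamma, \\
d + p &\rightarrow {}^{3}\mathrm{He} + \gamma, &\quad {}^{3}\mathrm{He} + n &\rightarrow {}^{4}\mathrm{He} + \gamma, \\
d + d &\rightarrow {}^{3}\mathrm{He} + n, &\quad d + d &\rightarrow t + p,
\end{aligned}
\]

with the faster deuteron-burning channels in the last line completed by ³He + d → ⁴He + p and t + d → ⁴He + n, none of which needs to wait for a photon to be emitted.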
Lithium and beryllium were also made in very small amounts. This whole process was over by about 20 minutes after the big bang, when the universe became too cool and sparse for nuclei to form.
The abundance of the light elements can be predicted using just one quantity — the density of baryons at the time of nucleosynthesis. Baryons are particles made with three quarks, such as protons and neutrons. Using the baryon density predicted by big bang nucleosynthesis, the total mass of the universe would have been 25% helium, 0.01% deuterium and even less than that would have been lithium. These primordial abundances can be tested, and, of course, have been. Nowhere in the universe is helium seen with an abundance less than 23%. This is a major piece of evidence for the big bang.
The CMB and its fluctuations as seen by the WMAP mission in 2010. Credit: NASA / WMAP Science Team
The nuclei formed in big bang nucleosynthesis had to wait a long time before they could team up with electrons to make neutral atoms. When neutral hydrogen was finally made 380,000 years after the big bang, the cosmic microwave background (CMB) radiation was emitted.
Alpher and his colleague Robert Herman predicted the existence of the CMB in the late 1940s, when they realised that the relic radiation would be a side effect of the recombination of electrons with atomic nuclei. The CMB now provides us with a way to double check our working with an independent measurement on the baryon density. By looking at fluctuations in the CMB, we find a baryon density that would give the correct light element abundances — it seems we really do understand what went on only a few minutes after the universe began.
Reference
Alpher, R., Bethe, H., & Gamow, G. (1948). The Origin of Chemical Elements Physical Review, 73 (7), 803-804 DOI: 10.1103/PhysRev.73.803
About the Author: Kelly Oakes has just finished a physics degree at Imperial College London, and is taking the summer off to recover before going back to start a masters in science communication. In her spare time she writes about science and drinks cocktails. Follow her on Twitter @kahoakes.
The views expressed are those of the author and are not necessarily those of Scientific American.

Tuesday, 2 August 2011

The search for intelligent life: standard vs. Bayesian statistics...

1.- The conventional search for intelligent life rests on Drake's calculations, but astronomers are growing more and more disillusioned with a search guided by probabilities computed with ordinary (standard) statistics:

A scientific project devoted to searching for intelligent life in the universe, such as SETI, is necessarily complex and costly, because it requires very advanced instruments, such as modern radio telescopes, to probe the depths of the cosmos.

Beyond the powerful motivation that drives us to carry out this kind of research, the project must above all rest on solid scientific foundations and offer a reasonable chance of success in order to justify its funding.

These requirements were certainly very clear to the most famous pioneer in this field, the American radio astronomer Frank Drake, who in 1960 launched the first SETI project by pointing his instrument's antenna at two Sun-like stars.
Drake devised a formula for estimating the number of technological civilizations that may currently exist in our galaxy, and the resulting number is extraordinarily high, so high that the investment in a project like SETI is fully justified. Even so, the formula rests on factors that cannot be determined scientifically in any unique way but are the product of speculation, and that is precisely its limitation.

The Drake formula thus estimates the number of contemporary technological civilizations (N) present in the galaxy as the product of several factors, each expressing the probability that certain conditions considered essential for the development of such cultures are met. The formula is:

N = Ns × Fs × Fp × Nt × Fv × Fvl × Fct × VMct

The first term, Ns, is the number of stars in our galaxy, and it is probably the one that can be pinned down most accurately: it lies between 100 and 300 billion, depending on the estimate.

Fs is the fraction of single, Sun-like stars, and Fp the fraction of those stars that may have a planetary system. The characteristics considered essential for a star to possess a planetary system with planets at a suitable distance (so that an environment fit for life exists, that is, neither too cold nor too hot by what we know of biology) are those of our Sun: a single star, a yellow dwarf with a low surface temperature, rotating slowly and rich in heavy elements.

The term Nt represents the fraction of stars with a planet in the right position, that is, at a distance that guarantees small temperature variations, and with physical and chemical conditions similar to Earth's (such as an atmosphere of comparable composition and the presence of water): in other words, "habitable" planets.

Fv is the fraction of stars with a "habitable" planet on which life has developed; but only on a fraction of those planets (Fvl) can there be intelligent life; and finally, evolution towards a technological civilization can only have occurred on a fraction Fct of the latter.

In reality we know very little about the probability that life develops in environments other than our own, and even less about whether it would stay at the bacterial stage or, on the contrary, evolve towards intelligent life capable of exploiting the resources of its environment.

The last factor (VMct) refers to the mean lifetime of a technological civilization, expressed as a fraction of the age of the galaxy. Obviously, other cultures must be contemporary with ours for any chance of contact to exist. This factor, too, is an extrapolation based on our own history.

Assigning values considered realistic to the various factors in the Drake formula yields an enormous number of technological civilizations contemporary with ours: perhaps tens of millions. However, if we take into account that the volume they occupy is confined to the galactic plane, we can estimate that the average distance between one civilization and the next must be of the order of a hundred light years, an unbridgeable distance with present technology, even for a simple exchange of messages.

The distance factor, which does not appear in the Drake formula, is very important: even if the number of civilizations we might contact is enormous, the distances are so staggering that dialogue becomes impossible.

This difficulty adds to the uncertainty surrounding many of the formula's parameters, which can be chosen rather arbitrarily and so lead to very different results.

Even so, if we consider that the universe contains at least 100 billion galaxies, each made up of some 100 billion stars, it is impossible not to believe in the possible existence of other civilizations. An optimistic calculation points to several trillion planets with intelligent life. As for the chances of contact... that is another story, and one we will probably never know.

How many technological civilizations might exist? (according to the Drake formula)

Factor                                                    Pessimistic estimate    Optimistic estimate
Number of stars in the galaxy                             100 billion             300 billion
Slowly rotating stars                                     93 billion              279 billion
Sun-like stars                                            23.2 billion            69.7 billion
Single stars                                              9.3 billion             27.9 billion
Population I stars                                        930 million             2.79 billion
Stars with a planet in a suitable position                465 million             1.39 billion
Stars with an Earth-like planet                           46.5 million            698 million
Stars with a "habitable" planet                           23.2 million            349 million
Stars with a planet bearing bacterial life                697,500                 320.8 million
Stars with a planet bearing evolved life                  13,950                  193 million
Average distance between technological
civilizations in the galaxy                               1,790 light years       75 light years

This table shows the number of technological civilizations contemporary with ours present in the galaxy, calculated with the Drake formula. Some parameters of the classical formula have since been broken down further to show the enormous influence that the various assumptions adopted have on the result obtained.

Naturally, for the time being we have no way of checking whether these calculations and assumptions are correct.
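To make the bookkeeping behind the table concrete, here is a minimal Python sketch of the same successive-fraction calculation. The fraction values are back-solved from the table's two columns and, like the function and variable names, are purely illustrative assumptions rather than anything given in the original text.

# Successive-fraction estimate in the spirit of the Drake formula.
# All fractions are illustrative assumptions, back-solved from the table above.
steps = [
    # (description,                    pessimistic, optimistic)
    ("slowly rotating",                0.93, 0.93),
    ("Sun-like",                       0.25, 0.25),
    ("single",                         0.40, 0.40),
    ("Population I",                   0.10, 0.10),
    ("planet in a suitable position",  0.50, 0.50),
    ("Earth-like planet",              0.10, 0.50),
    ("'habitable' planet",             0.50, 0.50),
    ("bacterial life",                 0.03, 0.92),
    ("evolved (intelligent) life",     0.02, 0.60),
]

def civilisations(n_stars, column):
    """Multiply the starting star count by each survival fraction in turn."""
    n = n_stars
    for _, pessimistic, optimistic in steps:
        n *= pessimistic if column == "pessimistic" else optimistic
    return n

print(f"Pessimistic: {civilisations(100e9, 'pessimistic'):,.0f}")  # 13,950, as in the table
print(f"Optimistic:  {civilisations(300e9, 'optimistic'):,.0f}")   # about 193 million

The same chain of multiplications, fed with different assumed fractions, is why the result swings from a few thousand civilizations to hundreds of millions.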

2.- This being the case, a proposal has emerged: to change how the Drake formula is used, replacing the standard statistics it is based on with Bayesian statistics.

 
Traditional statistics admits only probabilities based on repeatable, empirically confirmed experiments, whereas Bayesian statistics allows subjective probabilities. Astronomers searching for life elsewhere in the universe are therefore tempted to abandon standard statistical probability in favour of Bayesian probability. Why? Since the 1970s every search for life in the galaxy has come up empty-handed; a change of method might bring more luck, which also means changing the search parameters, as the article below explains:


The search for extraterrestrial intelligence could be a waste of time according to a recent statistical analysis of the likelihood of life arising spontaneously on habitable-zone exoplanets out there in the wider universe (and when have predictive statistics ever got it wrong?). Credit: SETI Institute.
History has proved time and again that mathematical modelling is no substitute for a telescope (or other data collection device). Nonetheless, some theoreticians have recently put forward a statistical analysis which suggests that life is probably very rare in the universe – despite the apparent prevalence of habitable-zone exoplanets being found by the Kepler mission and other exoplanet search techniques.

You would be right to be skeptical, given the Bayesian analysis undertaken is based on our singular experience of abiogenesis – being the origin of life from non-life, here on Earth. Indeed, the seemingly rapid abiogenesis that occurred on Earth soon after its formation is suggested to be the clinching proof that abiogenesis on habitable-zone exoplanets must be rare. Hmm…
Bayes' theorem provides a basis for estimating the likelihood that a prior assumption or hypothesis (e.g. that abiogenesis is common on habitable-zone exoplanets) is correct, using whatever evidence is available. Its use is nicely demonstrated in solving the Monty Hall problem.
Go here for the detail, but in a nutshell:
There are three doors: one has a car behind it and the other two have goats. You announce which door you will pick, knowing that it carries a 1/3 probability of hiding the car. Then Monty Hall, who knows where the car is, opens another door to reveal a goat. You now know that the opened door does not hide the car, so the remaining unopened door carries the remaining 2/3 of the probability, since the probability that the car is behind one of the three doors was always exactly 1. It therefore makes more sense to switch to that remaining door rather than stick with the one you first picked.
In this story, Monty Hall opening the door with a goat represents new data. It doesn’t allow you to definitively determine where the car is, but it does allow you to recalculate the likelihood of your prior hypothesis (that the car is behind the first door you picked) being correct.
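Written out with Bayes' theorem, the recalculation looks like this (a standard worked example; the door labels A, B and C are ours, not the article's). Suppose you pick door A and Monty then opens door B to reveal a goat:

\[
P(\text{car at }A \mid B\ \text{opened}) = \frac{P(B\ \text{opened} \mid \text{car at }A)\,P(\text{car at }A)}{P(B\ \text{opened})} = \frac{\tfrac{1}{2}\cdot\tfrac{1}{3}}{\tfrac{1}{2}} = \frac{1}{3},
\qquad
P(\text{car at }C \mid B\ \text{opened}) = \frac{1\cdot\tfrac{1}{3}}{\tfrac{1}{2}} = \frac{2}{3}.
\]

Here P(B opened) = (1/2)(1/3) + 0(1/3) + 1(1/3) = 1/2, because Monty never opens your door or the door hiding the car. Switching therefore doubles your chance of winning, which is exactly the updating of a prior hypothesis in the light of new data that the article describes.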
Applying Bayesian analysis to the problem of abiogenesis on habitable-zone exoplanets is a bit of a stretch. Spiegel and Turner argue that the evidence we have available to us – that life began quite soon after the Earth became habitable – contributes nothing to estimating the likelihood that life arises routinely on habitable-zone exoplanets.
We need to acknowledge the anthropic nature of the observation we are making. We are here after 3.5 billion years of evolution – which has given us the capacity to gather together the evidence that life began here 3.5 billion years ago, shortly after the Earth became habitable. But that is only because this is how things unfolded here on Earth. In the absence of more data, the apparent rapidity of abiogenesis here on Earth could just be a fluke.
This is a fair point, but a largely philosophical one. It informs the subsequent six pages of Spiegel and Turner’s Bayesian analysis, but it is not a conclusion of that analysis.
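Schematically, and purely as a sketch of the kind of update involved (the symbol \(\lambda\) and this one-line form are our own shorthand, not Spiegel and Turner's actual model), if \(\lambda\) is the probability per unit time that life arises on a habitable planet, then

\[
P(\lambda \mid \text{early life on Earth},\ \text{observers exist}) \;\propto\; P(\text{early life on Earth} \mid \lambda,\ \text{observers exist})\; P(\lambda).
\]

Because the likelihood is conditioned on the existence of observers, and observers could only have evolved on a planet where life did start early, it is only weakly informative about \(\lambda\), so the posterior largely reflects whatever prior \(P(\lambda)\) was chosen. That is the formal version of the anthropic point above.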
The authors seek to remind us that interviewing one person and finding that she or he likes baked beans does not allow us to conclude that most people like baked beans. Yes, agreed, but that's just statistics; it's not really Bayesian statistics.
If we are ever able to closely study an exoplanet that has been in a habitable state for 3.5 billion years and discover that either it has life, or that it does not – that will be equivalent to Monty Hall opening another door.
But for now, we might just be a fluke… or we might not be. We need more data.


Source for 1: http://www.cielodeguadaira.org/index.php?option=com_content&task=view&id=186&Itemid=26
Source for 2: Universe Today

Ultimate logic: To infinity and beyond




Many levels of infinity (Image: Emm.A/Marie Emmermann)
The mysteries of infinity could lead us to a fantastic structure above and beyond mathematics as we know it
WHEN David Hilbert left the podium at the Sorbonne in Paris, France, on 8 August 1900, few of the assembled delegates seemed overly impressed. According to one contemporary report, the discussion following his address to the second International Congress of Mathematicians was "rather desultory". Passions seem to have been more inflamed by a subsequent debate on whether Esperanto should be adopted as mathematics' working language.
Yet Hilbert's address set the mathematical agenda for the 20th century. It crystallised into a list of 23 crucial unanswered questions, including how to pack spheres to make best use of the available space, and whether the Riemann hypothesis, which concerns how the prime numbers are distributed, is true.
Today many of these problems have been resolved, sphere-packing among them. Others, such as the Riemann hypothesis, have seen little or no progress. But the first item on Hilbert's list stands out for the sheer oddness of the answer supplied by generations of mathematicians since: that mathematics is simply not equipped to provide an answer.
This curiously intractable riddle is known as the continuum hypothesis, and it concerns that most enigmatic quantity, infinity. Now, 140 years after the problem was formulated, a respected US mathematician believes he has cracked it. What's more, he claims to have arrived at the solution not by using mathematics as we know it, but by building a new, radically stronger logical structure: a structure he dubs "ultimate L".
The journey to this point began in the early 1870s, when the German Georg Cantor was laying the foundations of set theory. Set theory deals with the counting and manipulation of collections of objects, and provides the crucial logical underpinnings of mathematics: because numbers can be associated with the size of sets, the rules for manipulating sets also determine the logic of arithmetic and everything that builds on it.
These dry, slightly insipid logical considerations gained a new tang when Cantor asked a critical question: how big can sets get? The obvious answer - infinitely big - turned out to have a shocking twist: infinity is not one entity, but comes in many levels.
How so? You can get a flavour of why by counting up the set of whole numbers: 1, 2, 3, 4, 5... How far can you go? Why, infinitely far, of course - there is no biggest whole number. This is one sort of infinity, the smallest, "countable" level, where the action of arithmetic takes place.
Now consider the question "how many points are there on a line?" A line is perfectly straight and smooth, with no holes or gaps; it contains infinitely many points. But this is not the countable infinity of the whole numbers, where you bound upwards in a series of defined, well-separated steps. This is a smooth, continuous infinity that describes geometrical objects. It is characterised not by the whole numbers, but by the real numbers: the whole numbers plus all the numbers in between that have as many decimal places as you please - 0.1, 0.01, √2, π and so on.
Cantor showed that this "continuum" infinity is in fact infinitely bigger than the countable, whole-number variety. What's more, it is merely a step in a staircase leading to ever-higher levels of infinities stretching up as far as, well, infinity.
While the precise structure of these higher infinities remained nebulous, a more immediate question frustrated Cantor. Was there an intermediate level between the countable infinity and the continuum? He suspected not, but was unable to prove it. His hunch about the non-existence of this mathematical mezzanine became known as the continuum hypothesis.
Attempts to prove or disprove the continuum hypothesis depend on analysing all possible infinite subsets of the real numbers. If every one is either countable or has the same size as the full continuum, then it is correct. Conversely, even one subset of intermediate size would render it false.
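In the standard notation of set theory (which the article avoids, but which makes the statement precise), writing \(\aleph_0\) for the countable level of infinity and \(2^{\aleph_0}\) for the size of the continuum, the continuum hypothesis says that nothing sits strictly in between:

\[
\text{CH}:\qquad \text{there is no } S \subseteq \mathbb{R} \text{ with } \aleph_0 < |S| < 2^{\aleph_0}.
\]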
A similar technique using subsets of the whole numbers shows that there is no level of infinity below the countable. Tempting as it might be to think that there are half as many even numbers as there are whole numbers in total, the two collections can in fact be paired off exactly. Indeed, every set of whole numbers is either finite or countably infinite.
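The pairing mentioned above can be written down explicitly: match each whole number n with the even number 2n,

\[
n \;\longleftrightarrow\; 2n \qquad (1 \leftrightarrow 2,\; 2 \leftrightarrow 4,\; 3 \leftrightarrow 6,\ \dots),
\]

so every whole number gets exactly one even partner and nothing on either side is left out; the two collections are the same size.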
Applied to the real numbers, though, this approach bore little fruit, for reasons that soon became clear. In 1885, the Swedish mathematician Gösta Mittag-Leffler had blocked publication of one of Cantor's papers on the basis that it was "about 100 years too soon". And as the British mathematician and philosopher Bertrand Russell showed in 1901, Cantor had indeed jumped the gun. Although his conclusions about infinity were sound, the logical basis of his set theory was flawed, resting on an informal and ultimately paradoxical conception of what sets are.
It was not until 1922 that two German mathematicians, Ernst Zermelo and Abraham Fraenkel, devised a series of rules for manipulating sets that was seemingly robust enough to support Cantor's tower of infinities and stabilise the foundations of mathematics. Unfortunately, though, these rules delivered no clear answer to the continuum hypothesis. In fact, they seemed strongly to suggest there might not even be an answer.

Agony of choice

The immediate stumbling block was a rule known as the "axiom of choice". It was not part of Zermelo and Fraenkel's original rules, but was soon bolted on when it became clear that some essential mathematics, such as the ability to compare different sizes of infinity, would be impossible without it.
The axiom of choice states that if you have a collection of sets, you can always form a new set by choosing one object from each of them. That sounds anodyne, but it comes with a sting: you can dream up some twisted initial sets that produce even stranger sets when you choose one element from each. The Polish mathematicians Stefan Banach and Alfred Tarski soon showed how the axiom could be used to divide the set of points defining a spherical ball into six subsets which could then be slid around to produce two balls of the same size as the original. That was a symptom of a fundamental problem: the axiom allowed peculiarly perverse sets of real numbers to exist whose properties could never be determined. If so, this was a grim portent for ever proving the continuum hypothesis.
This news came at a time when the concept of "unprovability" was just coming into vogue. In 1931, the Austrian logician Kurt Gödel proved his notorious "incompleteness theorem". It shows that even with the most tightly knit basic rules, there will always be statements about sets or numbers that mathematics can neither verify nor disprove.
At the same time, though, Gödel had a crazy-sounding hunch about how you might fill in most of these cracks in mathematics' underlying logical structure: you simply build more levels of infinity on top of it. That goes against anything we might think of as a sound building code, yet Gödel's guess turned out to be inspired. He proved his point in 1938. By starting from a simple conception of sets compatible with Zermelo and Fraenkel's rules and then carefully tailoring its infinite superstructure, he created a mathematical environment in which both the axiom of choice and the continuum hypothesis are simultaneously true. He dubbed his new world the "constructible universe" - or simply "L".
L was an attractive environment in which to do mathematics, but there were soon reasons to doubt it was the "right" one. For a start, its infinite staircase did not extend high enough to fill in all the gaps known to exist in the underlying structure. In 1963 Paul Cohen of Stanford University in California put things into context when he developed a method for producing a multitude of mathematical universes to order, all of them compatible with Zermelo and Fraenkel's rules.
This was the beginning of a construction boom. "Over the past half-century, set theorists have discovered a vast diversity of models of set theory, a chaotic jumble of set-theoretic possibilities," says Joel Hamkins at the City University of New York. Some are "L-type worlds" with superstructures like Gödel's L, differing only in the range of extra levels of infinity they contain; others have wildly varying architectural styles with completely different levels and infinite staircases leading in all sorts of directions.
For most purposes, life within these structures is the same: most everyday mathematics does not differ between them, and nor do the laws of physics. But the existence of this mathematical "multiverse" also seemed to dash any notion of ever getting to grips with the continuum hypothesis. As Cohen was able to show, in some logically possible worlds the hypothesis is true and there is no intermediate level of infinity between the countable and the continuum; in others, there is one; in still others, there are infinitely many. With mathematical logic as we know it, there is simply no way of finding out which sort of world we occupy.
That's where Hugh Woodin of the University of California, Berkeley, has a suggestion. The answer, he says, can be found by stepping outside our conventional mathematical world and moving on to a higher plane.
Woodin is no "turn on, tune in" guru. A highly respected set theorist, he has already achieved his subject's ultimate accolade: a level on the infinite staircase named after him. This level, which lies far higher than anything envisaged in Gödel's L, is inhabited by gigantic entities known as Woodin cardinals.
Woodin cardinals illustrate how adding penthouse suites to the structure of mathematics can solve problems on less rarefied levels below. In 1988 the American mathematicians Donald Martin and John Steel showed that if Woodin cardinals exist, then all "projective" subsets of the real numbers have a measurable size. Almost all ordinary geometrical objects can be described in terms of this particular type of set, so this was just the buttress needed to keep uncomfortable apparitions such as Banach and Tarski's ball out of mainstream mathematics.
Such successes left Woodin unsatisfied, however. "What sense is there in a conception of the universe of sets in which very large sets exist, if you can't even figure out basic properties of small sets?" he asks. Even 90 years after Zermelo and Fraenkel had supposedly fixed the foundations of mathematics, cracks were rife. "Set theory is riddled with unsolvability. Almost any question you want to ask is unsolvable," says Woodin. And right at the heart of that lay the continuum hypothesis.

Ultimate L

Woodin and others spotted the germ of a new, more radical approach while investigating particular patterns of real numbers that pop up in various L-type worlds. The patterns, known as universally Baire sets, subtly changed the geometry possible in each of the worlds and seemed to act as a kind of identifying code for it. And the more Woodin looked, the more it became clear that relationships existed between the patterns in seemingly disparate worlds. By patching the patterns together, the boundaries that had seemed to exist between the worlds began to dissolve, and a map of a single mathematical superuniverse was slowly revealed. In tribute to Gödel's original invention, Woodin dubbed this gigantic logical structure "ultimate L".
Among other things, ultimate L provides for the first time a definitive account of the spectrum of subsets of the real numbers: for every forking point between worlds that Cohen's methods open up, only one possible route is compatible with Woodin's map. In particular it implies Cantor's hypothesis to be true, ruling out anything between countable infinity and the continuum. That would mark not only the end of a 140-year-old conundrum, but a personal turnaround for Woodin: 10 years ago, he was arguing that the continuum hypothesis should be considered false.
Ultimate L does not rest there. Its wide, airy space allows extra steps to be bolted to the top of the infinite staircase as necessary to fill in gaps below, making good on Gödel's hunch about rooting out the unsolvability that riddles mathematics. Gödel's incompleteness theorem would not be dead, but you could chase it as far as you pleased up the staircase into the infinite attic of mathematics.
The prospect of finally removing the logical incompleteness that has bedevilled even basic areas such as number theory is enough to get many mathematicians salivating. There is just one question. Is ultimate L ultimately true?
Andrés Caicedo, a logician at Boise State University in Idaho, is cautiously optimistic. "It would be reasonable to say that this is the 'correct' way of going about completing the rules of set theory," he says. "But there are still several technical issues to be clarified before saying confidently that it will succeed."
Others are less convinced. Hamkins, who is a former student of Woodin's, holds to the idea that there simply are as many legitimate logical constructions for mathematics as we have found so far. He thinks mathematicians should learn to embrace the diversity of the mathematical multiverse, with spaces where the continuum hypothesis is true and others where it is false. The choice of which space to work in would then be a matter of personal taste and convenience. "The answer consists of our detailed understanding of how the continuum hypothesis both holds and fails throughout the multiverse," he says.
Woodin's ideas need not put paid to this choice entirely, though: aspects of many of these diverse universes will survive inside ultimate L. "One goal is to show that any universe attainable by means we can currently foresee can be obtained from the theory," says Caicedo. "If so, then ultimate L is all we need."
In 2010, Woodin presented his ideas to the same forum that Hilbert had addressed over a century earlier, the International Congress of Mathematicians, this time in Hyderabad, India. Hilbert famously once defended set theory by proclaiming that "no one shall expel us from the paradise that Cantor has created". But we have been stumbling around that paradise with no clear idea of where we are. Perhaps now a guide is within our grasp - one that will take us through this century and beyond.
Richard Elwes is a teaching fellow at the University of Leeds in the UK and the author of Maths 1001: Absolutely Everything That Matters in Mathematics (Quercus, 2010) and How to Build a Brain (Quercus, 2011)
Issue 2823 of New Scientist magazine