Thursday, August 12, 2010

WHAT IS INFINITY...?

A simple explanation of the concept of "infinity" (something that cannot be measured) and a chronology of the most representative thinkers who tried to define it, among them:
a) The ancients: Euclid (c. 300 BC) and Diophantus of Alexandria, who is believed to be the inventor of algebra and whose exact dates of birth and death are only guessed at, from an epitaph on his tomb that reads, enigmatically: "Passer-by, this is the tomb of Diophantus: it is he who, through this surprising arrangement, tells you the number of years he lived. His boyhood occupied a sixth of his life; then, for a twelfth part, his cheek was covered with the first down. A further seventh of his life passed before he took a wife and, five years later, he had a fine child who, once he had reached half his father's age, perished by an unhappy death. His father had to survive him, mourning him, for four years." From all this his age (84 years) can be deduced (a quick check of the arithmetic appears right after this list), and he is presumed to have lived in the 3rd century AD (Wikipedia).
b) The modern mathematicians who have contributed their ideas about the term, among whom Peano, Cantor, Kronecker, Hilbert, Wiles, Church, Gödel, Paris, Harrington, Friedman, Zeilberger and Hausdorff stand out. Their ideas, sometimes distinct and sometimes complementary, are often not easy to accept, given that infinity, while not a countable or computable quantity, is at the same time made up of several different sets of infinities...
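A quick Python check of the epitaph's arithmetic (a minimal sketch of my own; the variable names are not from the article):

```python
from fractions import Fraction

# The riddle: 1/6 of his life as a boy, 1/12 more until his beard grew,
# 1/7 more before he married, 5 years until his son was born, the son
# lived half the father's age, and the father lived 4 further years.
# If x is the age at death:  x/6 + x/12 + x/7 + 5 + x/2 + 4 = x
coeff = 1 - (Fraction(1, 6) + Fraction(1, 12) + Fraction(1, 7) + Fraction(1, 2))
age = Fraction(5 + 4) / coeff
print(age)  # -> 84
```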

Richard Elwes writes:
“If you were forced to learn long division at school, you might have had cause to curse whoever invented arithmetic. A wearisome whirl of divisors and dividends, of bringing the next digit down and multiplying by the number you first thought of, it almost always went wrong somewhere. And all the while you were plagued by that subversive thought - provided you were at school when such things existed - that any sensible person would just use a calculator.
Well, here's an even more subversive thought: are the rules of arithmetic, the basic logical premises underlying things like long division, unsound? Implausible, you might think.
After all, human error aside, our number system delivers pretty reliable results. Yet the closer mathematicians peer beneath the hood of arithmetic, the more they are becoming convinced that something about numbers doesn't quite add up. The motor might still be running, but some essential parts seem to be missing - and we're not sure where to find the spares.
From the 11-dimensional geometry of superstrings to the subtleties of game theory, mathematicians investigate many strange and exotic things. But the system of natural numbers - 0, 1, 2, 3, 4 and so on ad infinitum - and the arithmetical rules used to manipulate them retain an exalted status as mathematics' oldest and most fundamental tool.

Thinkers such as Euclid around 300 BC and Diophantus of Alexandria in the 3rd century AD were already probing the deeper reaches of number theory. It was not until the late 19th century, though, that the Italian Giuseppe Peano produced something like a complete set of rules for arithmetic: precise logical axioms from which the more complex behaviour of numbers can be derived. For the most part, Peano's rules seem self-evident, consisting of assertions such as if x = y, then y = x and x + 1 = y + 1. It was nevertheless a historic achievement, and it unleashed a wave of interest in the logical foundations of number theory that persists to this day.
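To give a feel for how complex behaviour can be derived from such simple-looking rules, here is a toy Python sketch of my own (not anything from the article): the naturals are built from zero and a successor operation, and addition and multiplication are defined only by recursion on the successor, in the spirit of Peano's axioms.

```python
# A toy Peano-style model of the naturals: a number is either ZERO or
# S(n), the successor of some smaller number.
ZERO = ()

def S(n):                        # successor, i.e. n + 1
    return (n,)

def add(m, n):
    if n == ZERO:                # m + 0 = m
        return m
    return S(add(m, n[0]))       # m + S(k) = S(m + k)

def mul(m, n):
    if n == ZERO:                # m * 0 = 0
        return ZERO
    return add(mul(m, n[0]), m)  # m * S(k) = m * k + m

def to_int(n):                   # translate back to ordinary notation
    return 0 if n == ZERO else 1 + to_int(n[0])

two, three = S(S(ZERO)), S(S(S(ZERO)))
print(to_int(add(two, three)))   # -> 5
print(to_int(mul(two, three)))   # -> 6
```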

It was 1931 when a young Austrian mathematician called Kurt Gödel threw an almighty spanner in the works. He proved the existence of "undecidable" statements about numbers that could neither be proved nor disproved starting from Peano's rules. What was worse, no conceivable extension of the rules would be able to deal with all of these statements. No matter how many carefully drafted clauses you added to the rule book, undecidable statements would always be there (see "Bound not to work").

Gödel's now-notorious incompleteness theorems were a disconcerting blow. Mathematics prides itself on being the purest route to knowledge of the world around us. It formulates basic axioms and, applying the tools of uncompromising logic, uses them to deduce a succession of ever grander theorems. Yet this approach was doomed to failure when applied to the basic system of natural numbers, Gödel showed. There could be no assumption that a "true" or "false" answer exists. Instead, there was always the awkward possibility that the laws of arithmetic might not supply a definitive answer at all.
Gödel revealed the awkward possibility that arithmetic sometimes could not supply any answers at all

A blow though it was, at first it seemed it was not a mortal one. Although several examples of undecidable statements were unearthed in the years that followed, they were all rather technical and abstruse: fascinating to logicians, to be sure, but of seemingly little relevance to everyday arithmetic. One plus one was still equal to two; Peano's rules, though technically incomplete, were adequate for all practical purposes.

In 1977, though, Jeff Paris of the University of Manchester, UK, and Leo Harrington of the University of California, Berkeley, unearthed a statement concerning the different ways collections of numbers could be assigned a colour. It could be simply expressed in the language of arithmetic, but proving it to be true for all the infinitely many possible collections of numbers and colourings turned out to be impossible starting from Peano's axioms (see "The colour of numbers").
The immediate question was how far beyond Peano's rules the statement lay. The answer seemed reassuring: only a slight extension of the rule book was needed to encompass it. It was a close thing, but Gödel's chickens had once again missed the roost.

Now, though, they seem finally to have found their way home. In a forthcoming book, Boolean Relation Theory and Incompleteness, the distinguished logician Harvey Friedman of Ohio State University in Columbus identifies an entirely new form of arithmetical incompleteness. Like Paris and Harrington's theorem, these new instances, the culmination of more than ten years' work, involve simple statements about familiar items from number theory. Unlike Paris and Harrington's theorem, they lie completely out of sight of Peano's rule book.

To begin to understand what this new incompleteness is about, we must delve into the world of functions. In this context, a function is any rule that takes one or a string of natural numbers as an input and gives another number as an output. If we have the numbers x = 14, y = 201 and z = 876 as the input, for example, the function x + y + z + 1 will produce the output 1092, and the function xyz + 1 will give 2,465,065.
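As a concrete check of those two evaluations, a two-function Python snippet (the names f and g are mine, used only for illustration):

```python
def f(x, y, z):
    return x + y + z + 1   # the function x + y + z + 1

def g(x, y, z):
    return x * y * z + 1   # the function xyz + 1

print(f(14, 201, 876))  # -> 1092
print(g(14, 201, 876))  # -> 2465065
```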

These simple functions belong to a sub-class known as strictly dominating functions, meaning that their output is always bigger than their inputs. A striking fact, known as the complementation theorem, holds for all such functions. It says there is always an infinite collection of inputs that when fed into the function will produce a collection of outputs that is precisely the non-inputs. That is to say, the inputs and outputs do not overlap - they are "disjoint sets" - and can be combined to form the entire collection of natural numbers.
Delayed triumph

As an example, consider the basic strictly dominating function that takes a single number as its input and adds 1 to it. Here, if you take the infinite set of even numbers 0, 2, 4, 6, 8, 10... as the inputs, the corresponding outputs are the odd numbers 1, 3, 5, 7, 9, 11... Between them, these inputs and outputs cover every natural number with no overlap. The complementation theorem assures us that a configuration like this always exists for any strictly dominating function, a fact that can be deduced from Peano's rules.
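A minimal Python sketch of why this is plausible (my own construction, for single-input functions only, checked up to a finite bound): walk through 0, 1, 2, ... and put each number into the input set unless it has already appeared as an output. Because the function is strictly dominating, every output it produces is larger than the number currently being considered, so an earlier choice never has to be revisited.

```python
def complementation(f, limit):
    """Greedily split 0..limit-1 into inputs and outputs of f."""
    inputs, outputs = [], set()
    for n in range(limit):
        if n not in outputs:      # n is not an output of any chosen input
            inputs.append(n)
            outputs.add(f(n))     # f(n) > n, so this never affects the past
    return inputs, sorted(o for o in outputs if o < limit)

ins, outs = complementation(lambda n: n + 1, 20)
print(ins)   # [0, 2, 4, ...]  the even numbers
print(outs)  # [1, 3, 5, ...]  the odd numbers

ins, outs = complementation(lambda n: 2 * n + 1, 20)
print(ins, outs)  # disjoint sets that between them cover 0..19
```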

Friedman's work entails adjusting the complementation theorem to pairs of a specific class of strictly dominating function known as expansive linear growth (ELG) functions. Friedman identified 6561 relationships between inputs and outputs that a pair of ELG functions could exhibit in principle. For every one of these relationships, he tested the hypothesis that it would be shown by every possible pair of ELG functions.

Friedman found that Peano's rules gave a definitive yes or no answer in almost all cases. The relationship either popped up with every pair of ELG functions, or he found a specific pair whose inputs and outputs could not be linked in that way. In 12 cases, however, he drew a blank: the hypothesis could neither be proved nor disproved using Peano's axioms. What's more, it could not be proved using any reasonable extension of conventional arithmetic. With Friedman's work, it seems Gödel's delayed triumph has arrived: the final proof that if there is a universal grammar of numbers in which all facets of their behaviour can be expressed, it lies beyond our ken.

What does this mean for mathematics, and for fields such as physics that rely on the exactitude of mathematics? In the case of physics, probably not much. "Friedman's work is beautiful stuff, and it is obviously an important step to find unprovable statements that refer to concrete structures rather than to logical abstractions," says theoretical physicist Freeman Dyson of the Institute for Advanced Study in Princeton, New Jersey. "But mathematics and physics are both open systems with many uncertainties, and I do not see the uncertainties as being the same for both." The clocks won't stop or apples cease to fall just because there are questions we cannot answer about numbers.

The most severe implications are philosophical. The result means that the rules we use to manipulate numbers cannot be assumed to represent the pure and perfect truth. Rather, they are something more akin to a scientific theory such as the "standard model" that particle physicists use to predict the workings of particles and forces: our best approximation to reality, well supported by experimental data, but at the same time manifestly incomplete and subject to continuous and possibly radical reappraisal as fresh information comes in.
The rules we use to manipulate numbers might not be universal truths, but just our best approximation of reality

That is an undoubted strike at mathematicians' self-image. Friedman's work does offer a face-saving measure, but it too is something that many mathematicians are reluctant to countenance. The only way that Friedman's undecidable statements can be tamed, and the integrity of arithmetic restored, is to expand Peano's rule book to include "large cardinals" - monstrous infinite quantities whose existence can only ever be assumed rather than logically deduced (see "A ladder of infinities").

Large cardinals have been studied by logicians for a century, but their intangibility means they have seldom featured in mainstream mathematics. A notable exception is the most celebrated result of recent years, the proof of Fermat's last theorem by the British mathematician Andrew Wiles in 1994. This theorem states that Pythagoras's formula for determining the hypotenuse of a right-angled triangle, a² + b² = c², does not work for any set of whole numbers a, b and c when the power is increased to 3 or any larger number.
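Purely as an illustration (a finite search proves nothing about all whole numbers), a small Python sketch of the statement: with the exponent 2 the familiar Pythagorean triples turn up, with the exponent 3 nothing does.

```python
def solutions(n, bound):
    """All triples a <= b <= c below the bound with a**n + b**n == c**n."""
    return [(a, b, c)
            for a in range(1, bound)
            for b in range(a, bound)
            for c in range(b, bound)
            if a**n + b**n == c**n]

print(solutions(2, 20))  # [(3, 4, 5), (5, 12, 13), (6, 8, 10), (8, 15, 17), (9, 12, 15)]
print(solutions(3, 20))  # [] - in line with Fermat's last theorem
```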

To complete his proof, Wiles assumed the existence of a type of large cardinal known as an inaccessible cardinal, technically overstepping the bounds of conventional arithmetic. But there is a general consensus among mathematicians that this was just a convenient short cut rather than a logical necessity. With a little work, Wiles's proof should be translatable into Peano arithmetic or some slight extension of it.

Friedman's configurations, on the other hand, lay down an ultimatum: either admit large cardinals into the axioms of arithmetic, or accept that those axioms will always contain glaring holes. Friedman's own answer is unequivocal. "In the future, large cardinals will be systematically used for a wide variety of concrete mathematics in an essential, unremovable way," he says.

Not everyone is happy to take that lying down. "Friedman's work is beautiful mathematics, but pure fiction," says Doron Zeilberger of Rutgers University in Piscataway, New Jersey. He has a radically different take. The problems highlighted by Friedman and others, he says, start when they consider infinite collections of objects and realise they need ever more grotesque infinite quantities to patch the resulting logical holes. The answer, he says, is that the concept of infinity itself is wrong. "Infinite sets are a paradise of fools," he says. "Infinite mathematics is meaningless because it is abstract nonsense."

Rather than patching each hole with ever more dubious infinities, Zeilberger says we should focus our efforts on the only place where we can really be sure of our foothold - strictly finite mathematics. When we do that, the incompleteness that creeps in at the infinite level will dissolve, and we can hope for a complete and consistent, albeit truncated, theory of arithmetic. "We have to kick the misleading word 'undecidable' from the mathematical lingo, since it tacitly assumes that infinity is real," he says. "We should rather replace it by the phrase 'not even wrong'. In other words, 'utter nonsense'."

Such "finitist" views are nothing new. They appeared as soon as Georg Cantor started to investigate the nature of infinity back in the late 19th century. It was a contemporary of Cantor's, Leopold Kronecker, who coined the finitist motto: "God created the integers; all else is the work of man." But can we dismiss infinity that easily? Many mathematicians believe not, but we now know that even by accepting even the lowliest, most manageable form of infinity- that embodied by the "countable" set of natural numbers- we usher in a legion of undecidable statements, which in turn can only be tamed by introducing the true giants of the infinite world, the large cardinals.

The debate will rage on. The two possible conclusions are equally unpalatable: either we deny the existence of infinity, a quantity that pervades modern mathematics, or we resign ourselves to the idea that there are certain things about numbers we are destined never to know.

In the 1920s, David Hilbert laid down a grand challenge to his fellow mathematicians: to produce a framework for studying arithmetic, meaning the natural numbers together with addition, subtraction, multiplication and division, with Giuseppe Peano's axioms as its backbone. Such a framework, Hilbert said, should be consistent, so it should never produce a contradiction such as 2 + 2 = 3. And it should be complete, meaning that every true statement about numbers should be provable within the framework.

Kurt Gödel's first incompleteness theorem, published in 1931, killed that aspiration dead by encoding in arithmetical terms the statement "this statement is unprovable". If the statement could be proved using arithmetical rules, then the statement itself is untrue, so the underlying framework is inconsistent. If it could not be proved, the statement is undeniably true, but that means the framework is incomplete.

In a further blow, Gödel showed that even mere consistency is too much to ask for. His second incompleteness theorem says that no consistent framework for arithmetic can ever be proved consistent under its own rules. The coup de grâce was delivered a few years later, when Briton Alan Turing and American Alonzo Church independently proved that another of Hilbert's demands, that of "computability", could not be fulfilled: it turns out to be impossible to devise a general computational procedure that can determine whether any statement in number theory is true or false.

When Jeff Paris and Leo Harrington got their glimpse of arithmetical incompleteness in 1977, they were considering a variant on a classic mathematical result called Ramsey's theorem. Suppose we have some scheme for assigning one of two colours, either red or blue, to every possible set of four natural numbers. So {1, 5, 8, 101} might be red for example, and {101, 187, 188, 189} might be blue. It is quite possible, then, that any given number will occur in some red sets and some blue sets. What Ramsey's theorem says is that, despite this, we can always find an infinite collection of numbers that is monochromatic - coloured entirely red or blue. There's nothing magic about sets of four numbers or two colours: change those to any figures you like, and the same thing works.
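The infinite theorem cannot be checked by machine, but its simplest finite cousin can, and it shows the same flavour. Here is a small Python sketch of my own: colour every pair from a set of six numbers red or blue, and a monochromatic three-element set (all of whose pairs share one colour) always appears. This is the classical fact that the Ramsey number R(3,3) equals 6.

```python
from itertools import combinations, product

points = range(6)
pairs = list(combinations(points, 2))      # the 15 two-element subsets

def has_mono_triple(colouring):
    colour = dict(zip(pairs, colouring))
    # a triple is monochromatic if its three pairs all get the same colour
    return any(len({colour[p] for p in combinations(triple, 2)}) == 1
               for triple in combinations(points, 3))

# Exhaustively try every one of the 2**15 red/blue colourings.
print(all(has_mono_triple(c) for c in product("RB", repeat=len(pairs))))  # -> True
```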

The theorem means order can be recovered even from highly disordered situations: even if you invent some horribly complex rule to colour your sets of numbers, you will always be able to extract an infinite monochromatic set. In theoretical computer science, for example, that permits algorithms to be constructed that allow the transfer of information through noisy channels where errors can creep in.

The variant of Ramsey's theorem considered by Paris and Harrington deals with sets of numbers that are "big", meaning that their smallest entry is less than the number of members in the set. So the set of four numbers {5, 7, 8, 100} is not deemed big as its smallest entry is 5, while the set {3, 8, 12, 100} is. If we start with a very big (but not infinite) set of natural numbers A, and again assign every set of four numbers within A either the colour red or blue, the modified version of Ramsey's theorem says we can find a monochromatic subset of A that is big. Again, the same result should hold with the numbers four and two replaced with any other numbers.
Therein lies the problem. Paris and Harrington showed that for the theorem to hold, the set A must be mind-bogglingly large - too huge, in fact, to be described by arithmetical procedures stemming only from Peano's rules.
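To pin down the definition, a two-line Python check of "bigness" exactly as the article phrases it (the helper name is my own):

```python
def is_big(s):
    # "big": the smallest entry is less than the number of members
    return min(s) < len(s)

print(is_big({5, 7, 8, 100}))   # False: smallest entry 5, only 4 members
print(is_big({3, 8, 12, 100}))  # True: smallest entry 3, 4 members
```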

How big is infinity? A silly question, you might say, as infinity is infinitely big. Perhaps, but as the 19th-century German mathematician Georg Cantor proved to his contemporaries' dismay, the infinite comes in different sizes.
Take the natural numbers: 0, 1, 2, 3, 4, 5... You can go on counting these till kingdom come, so there's no doubting that the set of natural numbers is infinite. But this "countable" infinity occupies only the lowest rung of an infinite ladder. Ironically, larger infinities arise when you break down the natural numbers into subsets: the numbers 1 to 1,000,000, for example, or the odd numbers, the prime numbers, or pairs of numbers such as four and 1296.

How many such subsets are there altogether? An infinite number, of course. Cantor was able to prove that this infinity is bigger than the original countable set. This second level of infinity is the "continuum", and it is where many important mathematical objects live: the set of real numbers (the integers and all the fractional and irrational numbers that lie between them) and the complex numbers too.
And so it goes on. By looking at the collection of all possible subsets of real numbers, you find a still higher level of infinity, and so on ad infinitum. Infinity is not a single entity, but an infinite ladder of infinities, with each rung infinitely higher than the one below. Mathematicians call these different levels the "infinite cardinals".
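The engine behind that claim is Cantor's diagonal argument, which can at least be run in a finite setting. A minimal Python sketch of my own: given any attempted listing of subsets, one subset per index, build a set that the listing misses by putting k in exactly when k is not in the k-th listed subset. The same construction, applied to an infinite list, is Cantor's proof that the subsets of the naturals cannot be counted.

```python
def diagonal(listed_subsets):
    """Return a subset that differs from every entry in the listing."""
    return {k for k, s in enumerate(listed_subsets) if k not in s}

attempt = [{0, 1}, {2}, set(), {0, 3}, {1, 2, 3, 4}]  # one subset per index 0..4
missing = diagonal(attempt)
print(missing)             # {1, 2}
print(missing in attempt)  # False: the listing was bound to miss something
```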

In 1908, another German mathematician, Felix Hausdorff, conceived the idea of "large cardinals". These dwarf even the hugest of Cantor's original cardinals and are blessed with a hierarchy all their own. They are too far up even to be seen from below, and whether or not they exist is a question utterly beyond the range of all the ordinary rules of mathematics. Small wonder, then, that many mathematicians baulk at the claim that large cardinals could rescue the logical foundations of arithmetic.”

Author: Richard Elwes is a visiting fellow at the University of Leeds in the UK and the author of Maths 1001: Absolutely Everything That Matters in Mathematics.

Source: New Scientist: http://www.newscientist.com/article/mg20727731.300-to-infinity-and-beyond-the-struggle-to-save-arithmetic.html
