Mathematicians Measure Infinities and Find They’re Equal
Two mathematicians have proved that two different infinities are equal in size, settling a long-standing question. Their proof rests on a surprising link between the sizes of infinities and the complexity of mathematical theories.
In a breakthrough that disproves decades of conventional wisdom, two mathematicians have shown that two different variants of infinity are actually the same size. The advance touches on one of the most famous and intractable problems in mathematics: whether there exist infinities between the infinite size of the natural numbers and the larger infinite size of the real numbers.
The problem was first identified over a century ago. At the time, mathematicians knew that “the real numbers are bigger than the natural numbers, but not how much bigger. Is it the next biggest size, or is there a size in between?” said Maryanthe Malliaris of the University of Chicago, co-author of the new work along with Saharon Shelah of the Hebrew University of Jerusalem and Rutgers University.
In their new work, Malliaris and Shelah resolve a related 70-year-old question about whether one infinity (call it p) is smaller than another infinity (call it t). They proved the two are in fact equal, much to the surprise of mathematicians.
“It was certainly my opinion, and the general opinion, that p should be less than t,” Shelah said.
Malliaris and Shelah published their proof last year in the Journal of the American Mathematical Society and were honored this past July with one of the top prizes in the field of set theory. But their work has ramifications far beyond the specific question of how those two infinities are related. It opens an unexpected link between the sizes of infinite sets and a parallel effort to map the complexity of mathematical theories.
Many Infinities
The notion of infinity is mind-bending. But the idea that there can be different sizes of infinity? That’s perhaps the most counterintuitive mathematical discovery ever made. It emerges, however, from a matching game even kids could understand.
Suppose you have two groups of objects, or two “sets,” as mathematicians would call them: a set of cars and a set of drivers. If there is exactly one driver for each car, with no empty cars and no drivers left behind, then you know that the number of cars equals the number of drivers (even if you don’t know what that number is).
In the late 19th century, the German mathematician Georg Cantor captured the spirit of this matching strategy in the formal language of mathematics. He proved that two sets have the same size, or “cardinality,” when they can be put into one-to-one correspondence with each other — when there is exactly one driver for every car. Perhaps more surprisingly, he showed that this approach works for infinitely large sets as well.
Consider the natural numbers: 1, 2, 3 and so on. The set of natural numbers is infinite. But what about the set of just the even numbers, or just the prime numbers? Each of these sets would at first seem to be a smaller subset of the natural numbers. And indeed, over any finite stretch of the number line, there are about half as many even numbers as natural numbers, and still fewer primes.
Yet infinite sets behave differently. Cantor showed that there’s a one-to-one correspondence between the elements of each of these infinite sets.
Natural numbers:  1,  2,  3,  4,   5, …
Evens:            2,  4,  6,  8,  10, …
Primes:           2,  3,  5,  7,  11, …
Because of this, Cantor concluded that all three sets are the same size. Mathematicians call sets of this size “countable,” because you can assign one counting number to each element in each set.
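To make the pairing concrete, here is a small illustrative sketch in Python (an addition, not from the original article). It matches each natural number n with the even number 2n and, by counting primes in order, with the nth prime; the helper `nth_prime` is introduced only for this illustration.

```python
# Illustration of Cantor's pairing idea: each natural number n gets exactly
# one even partner (2n) and exactly one prime partner (the nth prime), so the
# three sets are the same size even though evens and primes "look" sparser.

def nth_prime(n):
    """Return the nth prime (1-indexed) by simple trial division."""
    count, candidate = 0, 1
    while count < n:
        candidate += 1
        if all(candidate % d for d in range(2, int(candidate ** 0.5) + 1)):
            count += 1
    return candidate

for n in range(1, 6):
    print(n, "<->", 2 * n, "<->", nth_prime(n))
# 1 <-> 2 <-> 2
# 2 <-> 4 <-> 3
# 3 <-> 6 <-> 5
# 4 <-> 8 <-> 7
# 5 <-> 10 <-> 11
```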
After he established that the sizes of infinite sets can be compared by putting them into one-to-one correspondence with each other, Cantor made an even bigger leap: He proved that some infinite sets are even larger than the set of natural numbers.
Consider the real numbers, which are all the points on the number line. The real numbers are sometimes referred to as the “continuum,” reflecting their continuous nature: There’s no space between one real number and the next. Cantor was able to show that the real numbers can’t be put into a one-to-one correspondence with the natural numbers: Even after you create an infinite list pairing natural numbers with real numbers, it’s always possible to come up with another real number that’s not on your list. Because of this, he concluded that the set of real numbers is larger than the set of natural numbers. Thus, a second kind of infinity was born: the uncountably infinite.
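Cantor’s construction is usually called the diagonal argument. The toy sketch below (an illustration in Python, not Cantor’s own notation; the function name and the sample digits are invented for the example) shows the idea: given any list of reals written out digit by digit, change the nth digit of the nth number, and the result differs from every entry on the list.

```python
# Minimal sketch of the diagonal argument on a finite toy list.
# listed_digits[n][k] is the kth decimal digit of the nth listed real in (0, 1).

def diagonal_number(listed_digits):
    new_digits = []
    for n, digits in enumerate(listed_digits):
        # Pick a digit different from digits[n]; using only 4 and 5 avoids the
        # ambiguity of expansions like 0.0999... = 0.1000...
        new_digits.append(4 if digits[n] != 4 else 5)
    return "0." + "".join(str(d) for d in new_digits)

attempt = [
    [1, 4, 1],   # 0.141...
    [7, 1, 8],   # 0.718...
    [3, 3, 3],   # 0.333...
]
print(diagonal_number(attempt))  # 0.444 — differs from the nth entry in its nth digit
```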
What Cantor couldn’t figure out was whether there exists an intermediate size of infinity — something between the size of the countable natural numbers and the uncountable real numbers. He guessed not, a conjecture now known as the continuum hypothesis.
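In the standard symbols of set theory (notation the article itself does not use), the continuum hypothesis can be stated as follows:

```latex
% Continuum hypothesis (CH): no set is strictly bigger than the natural
% numbers and strictly smaller than the real numbers.
\neg\,\exists S \;\; \aleph_0 < |S| < 2^{\aleph_0}
\quad\Longleftrightarrow\quad 2^{\aleph_0} = \aleph_1
```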
In 1900, the German mathematician David Hilbert made a list of 23 of the most important problems in mathematics. He put the continuum hypothesis at the top. “It seemed like an obviously urgent question to answer,” Malliaris said.
In the century since, the question has proved itself to be almost uniquely resistant to mathematicians’ best efforts. Do in-between infinities exist? We may never know.
Forced Out
Throughout the first half of the 20th century, mathematicians tried to resolve the continuum hypothesis by studying various infinite sets that appeared in many areas of mathematics. They hoped that by comparing these infinities, they might start to understand the possibly non-empty space between the size of the natural numbers and the size of the real numbers.
Many of the comparisons proved to be hard to draw. In the 1960s, the mathematician Paul Cohen explained why. Cohen developed a method called “forcing” that demonstrated that the continuum hypothesis is independent of the axioms of mathematics — that is, it couldn’t be proved within the framework of set theory. (Cohen’s work complemented work by Kurt Gödel in 1940 that showed that the continuum hypothesis couldn’t be disproved within the usual axioms of mathematics.)
Cohen’s work won him the Fields Medal (one of math’s highest honors) in 1966. Mathematicians subsequently used forcing to resolve many of the comparisons between infinities that had been posed over the previous half-century, showing that these too could not be answered within the framework of set theory. (Specifically, Zermelo-Fraenkel set theory plus the axiom of choice.)
Some problems remained, though, including a question from the 1940s about whether p is equal to t. Both p and t are orders of infinity that quantify the minimum size of collections of subsets of the natural numbers in precise (and seemingly unique) ways.
Briefly, p is the minimum size of a collection of infinite sets of the natural numbers that have a “strong finite intersection property” and no “pseudointersection,” which means the subsets overlap each other in a particular way; t is called the “tower number” and is the minimum size of a collection of subsets of the natural numbers that is ordered in a way called “reverse almost inclusion” and has no pseudointersection.
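For readers who want the precise statements, here are the standard definitions from the set-theory literature; the symbols below are an addition to the article, not the authors’ own presentation.

```latex
% For infinite sets A, B of natural numbers, A \subseteq^* B ("almost
% inclusion") means that A \setminus B is finite. A family F of infinite
% subsets of N has the strong finite intersection property (SFIP) if every
% finite subfamily has infinite intersection; an infinite set A is a
% pseudointersection of F if A \subseteq^* B for every B in F.
\mathfrak{p} = \min\{\, |F| : F \text{ has the SFIP and no pseudointersection} \,\}
% A tower is a family of infinite subsets of N well-ordered by reverse
% almost inclusion \supseteq^*.
\mathfrak{t} = \min\{\, |T| : T \text{ is a tower with no pseudointersection} \,\}
```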
The details of the two sizes don’t much matter. What’s more important is that mathematicians quickly figured out two things about the sizes of p and t. First, both are larger than the size of the natural numbers. Second, p is always less than or equal to t. Therefore, if p is less than t, then p would be an intermediate infinity — something between the size of the natural numbers and the size of the real numbers. The continuum hypothesis would be false.
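Summarizing that chain of reasoning in standard notation (a restatement added here, not a quotation from the paper):

```latex
% Known facts: both cardinals are uncountable, p is at most t, and t is at
% most the size of the continuum.
\aleph_0 < \mathfrak{p} \le \mathfrak{t} \le 2^{\aleph_0}
% If p < t were provable, then \aleph_0 < \mathfrak{p} < 2^{\aleph_0}, which
% would exhibit an infinity strictly between the naturals and the reals, so
% the continuum hypothesis would be false.
```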
Mathematicians tended to assume that the relationship between p and t couldn’t be proved within the framework of set theory, but they couldn’t establish the independence of the problem either. The relationship between p and t remained in this undetermined state for decades. When Malliaris and Shelah found a way to solve it, it was only because they were looking for something else.
An Order of Complexity
Around the same time that Paul Cohen was forcing the continuum hypothesis beyond the reach of mathematics, a very different line of work was getting under way in the field of model theory.
H. Jerome Keisler invented “Keisler’s order.” (Photo courtesy of H. Jerome Keisler)
For a model theorist, a “theory” is the set of axioms, or rules, that define an area of mathematics. You can think of model theory as a way to classify mathematical theories — an exploration of the source code of mathematics. “I think the reason people are interested in classifying theories is they want to understand what is really causing certain things to happen in very different areas of mathematics,” said H. Jerome Keisler, emeritus professor of mathematics at the University of Wisconsin, Madison.
In 1967, Keisler introduced what’s now called Keisler’s order, which seeks to classify mathematical theories on the basis of their complexity. He proposed a technique for measuring complexity and managed to prove that mathematical theories can be sorted into at least two classes: those that are minimally complex and those that are maximally complex. “It was a small starting point, but my feeling at that point was there would be infinitely many classes,” Keisler said.
It isn’t always obvious what it means for a theory to be complex. Much work in the field is motivated in part by a desire to understand that question. Keisler describes complexity as the range of things that can happen in a theory — and theories where more things can happen are more complex than theories where fewer things can happen.
A little more than a decade after Keisler introduced his order, Shelah published an influential book, which included an important chapter showing that there are naturally occurring jumps in complexity — dividing lines that distinguish more complex theories from less complex ones. After that, little progress was made on Keisler’s order for 30 years.
Saharon Shelah is a co-author of the new proof. (Photo: Yael Shelah)
Then, in her 2009 doctoral thesis and other early papers, Malliaris reopened the work on Keisler’s order and provided new evidence for its power as a classification program. In 2011, she and Shelah started working together to better understand the structure of the order. One of their goals was to identify more of the properties that make a theory maximally complex according to Keisler’s criterion.
Malliaris and Shelah eyed two properties in particular. They already knew that the first one causes maximal complexity. They wanted to know whether the second one did as well. As their work progressed, they realized that this question was parallel to the question of whether p and t are equal.
In 2016, Malliaris and Shelah published a 60-page paper that solved both problems: They proved that the two properties are equally complex (they both cause maximal complexity), and they proved that p equals t.
“Somehow everything lined up,” Malliaris said. “It’s a constellation of things that got solved.”
This past July, Malliaris and Shelah were awarded the Hausdorff medal, one of the top prizes in set theory. The honor reflects the surprising, and surprisingly powerful, nature of their proof. Most mathematicians had expected that p was less than t, and that a proof of that inequality would be impossible within the framework of set theory. Malliaris and Shelah proved that the two infinities are equal. Their work also revealed that the relationship between p and t has much more depth to it than mathematicians had realized.
“I think people thought that if by chance the two cardinals were provably equal, the proof would maybe be surprising, but it would be some short, clever argument that doesn’t involve building any real machinery,” said Justin Moore, a mathematician at Cornell University who has published a brief overview of Malliaris and Shelah’s proof.
Instead, Malliaris and Shelah proved that p and t are equal by cutting a path between model theory and set theory that is already opening new frontiers of research in both fields. Their work also finally puts to rest a problem that mathematicians had hoped would help settle the continuum hypothesis. Still, the overwhelming feeling among experts is that this apparently unresolvable proposition is false: While infinity is strange in many ways, it would be almost too strange if there weren’t many more sizes of it than the ones we’ve already found.
Clarification: On September 12, this article was revised to clarify that mathematicians in the first half of the 20th century wondered if the continuum hypothesis was true. As the article states, the question was largely put to rest with the work of Paul Cohen.
SOURCE: https://www.quantamagazine.org/mathematicians-measure-infinities-find-theyre-equal-20170912/