[FOM] Atlanta Meeting
Harvey Friedman
friedman at math.ohio-state.edu
Sun Jan 16 21:09:35 EST 2005
I was at the Atlanta AMS Meeting for 3 days during January 4-6, 2005, and
gave a talk at the Special Session on Reverse Mathematics on January 6,
2005, 1:00PM - 1:45PM.
I enjoyed this meeting very much. I attended the following:
1. The RM sessions. Jeff Hirst and Reed Solomon were organizers and did a
splendid job.
2. Panel on the Continuum Hypothesis, Keith Devlin moderator. Panelists:
Paul J. Cohen, Tony Martin, and Hugh Woodin.
3. Panel on "is it a proof yet?", Keith Devlin moderator. Panelists:
Richard DeMillo, Georgia Institute of Technology
Robin Wilson, The Open University
Avi Wigderson, Institute for Advanced Study
Bruce A. Kleiner, University of Michigan, Ann Arbor
Thomas C. Hales, University of Pittsburgh
**************************
1. RM SESSIONS.
I changed the title of my talk to "Strict Reverse Mathematics". This talk
had evolved and become more ambitious during the month preceding the
meeting.
I will place an expanded version of my talk with the title "Strict Reverse
Mathematics" on my website in the next few days. I will announce its
presence on the FOM.
The idea of SRM = Strict Reverse Mathematics goes back to some unpublished
papers of mine from the 1970's, which will be scanned in and put on my
website for historical reasons.
The idea of SRM is to do Reverse Mathematics over no base theory whatsoever,
without relying on any coding mechanisms, and where every statement
considered is to be strictly mathematical.
This leads to quite a number of more focused and manageable subprojects,
some of which use RCA0 (and ACA0) as a base theory. I had to go to some
serious effort after the meeting in order to get clear about these more
focused and manageable subprojects. They are laid out clearly in "Strict
Reverse Mathematics", to appear on my website.
From listening to the RM sessions, it was obvious that RM has never been in
better shape, and that its future is bright - even without expanding its
horizon with SRM. It is clear that it is now accepted as one of the major
thrusts of recursion theory. Deep recursion theoretic methods are being
applied to RM at an ever increasing rate. RM has proved to be a tremendous
problem generator, with no signs of weakening. All of this is as I expected
when I set it up in the 70's - only I did expect this to be crystal clear
before, say, 1980. On this, I was wrong.
I won't go into the uniformly excellent talks, but will just list them here:
Ksenija Simic, Defining products of Lp functions in subsystems of second
order arithmetic.
Peter Cholak, Alberto Marcone, Reed Solomon, Reverse mathematics and the
equivalence of definitions for well and better quasi orderings.
Douglas Cenzer, Jeff Remmel, Proof-Theoretic Strength of the Stable Marriage
Theorem.
Alberto Marcone, Finite better quasi orders.
Denis Hirschfeldt, Degrees of Infinite Homogeneous Sets for Computable Stable
Colorings of Pairs.
Antonio Montalban, On the proof theoretic strength of Jullien's results.
Steve Simpson, Reverse Topology.
Carl Mummert, Representing Second Countable Topological Spaces in
Second-Order Arithmetic.
Natasha Dobrinen, Steve Simpson, Almost everywhere domination and the
reverse mathematics of measure theory.
Barbara Csima, Denis Hirschfeldt, Richard Shore, The atomic model theorem.
Douglas Brown, An Introduction to Reverse Mathematics.
Noam Greenberg, Antonio Montalban, Superatomic Boolean algebras and ATR0.
Timothy McNicholl, Jeff Hirst, Ramsey Theory on Trees.
There was considerable informal discussion at the meeting about the
unavailability of a text on RM. Simpson's 1999 SOSOA book (Subsystems of
Second Order Arithmetic) had served as such a text until it went out of
print shortly after it appeared. However, there is no definite timetable for
its republication.
The upshot is that there appears to be an urgent need for a textbook
specifically on Reverse Mathematics, for students and scholars interested
in, or thinking of working in, Reverse Mathematics.
2. CONTINUUM HYPOTHESIS PANEL.
Paul J. Cohen retired from Stanford in June(?), 2004, at around 70. He had a
serious fall in his house in the Fall of 2004. He was hospitalized in
critical condition, and has since had four surgeries. The smart money was
that he would not show up for the panel discussion.
Paul showed up in totally coherent and very good form with a cap on his
head, accompanied by his two sons, who are making a video documentary of
their father. They videotaped the proceedings.
The panelists, Cohen, Tony Martin, and Hugh Woodin, opened the session with
15 minute statements outlining their positions.
Cohen adopted the position that there is something very special about the
usual axioms of set theory, ZFC - that this is all that is inherent in the
notion of set. In this vein, Cohen emphasized that the accepted axioms for
set theory are clearly a kind of extrapolation from the finite to the
infinite, and that there is no hint of any missing axiom, in the sense of a
statement that could have the special status of an axiom for set theory.
Therefore, the continuum hypothesis has been solved - according to any
appropriate notion of "solved" - by Godel's work (consistency) and his own
work (unprovability). [My work on transfer principles, on my website, is an
attempt to formalize the transfer from finite set theory to transfinite set
theory. This was not mentioned at the session.]
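[For reference - stated precisely, the two halves of this solution are the
relative consistency results

    Con(ZFC) implies Con(ZFC + CH)        (Godel, 1938, via the constructible sets)
    Con(ZFC) implies Con(ZFC + not CH)    (Cohen, 1963, via forcing)

both of which are provable by finitary means.]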
Martin adopted an intermediate position between Woodin and Cohen. Martin
said that the question of whether the continuum hypothesis has a definite
truth value is, to him, entirely open. Martin said that he would not be
surprised either way, and that we do not have enough evidence yet to make a
decision regarding this matter. Martin asserted that every first order
statement about the ring of integers has a definite truth value, thereby
drawing a major distinction between the situation with the ring of integers
and the situation with the cumulative hierarchy of sets. Martin also offered
that he is not a Platonist, in the sense of believing in an objective
reality of real objects called sets - although he could
conceivably be persuaded about this. Martin said that Woodin has succeeded
in showing that one can do interesting research relevant to determining the
truth value of the continuum hypothesis. Martin said that Woodin has not, in
his opinion, succeeded in determining the truth value of the continuum
hypothesis, and does not know if Woodin's research will lead to such a
determination.
Woodin opened by declaring himself a fanatic. Woodin believes the cumulative
hierarchy of sets is just as real as the ring of integers. Woodin stressed
that, to him, there is no difference in kind between the two structures.
Woodin said that when one contemplates the ring of integers, one transcends
physical reality - rhetorically, Woodin asked: if the physicists succeed
in establishing that the physical universe is entirely finite, would we
then reject the ring of integers? Woodin said that once we are no longer
tied to any kind of physical reality, then he sees no defensible middle
ground short of the entire cumulative hierarchy of sets. Rhetorically, where
do you stop? Woodin then argued against a common argument that the continuum
hypothesis has no definite truth value - that every model of set theory has
a forcing extension in which the continuum hypothesis is true, and a forcing
extension in which the continuum hypothesis is false. Woodin then outlined
his approach to showing that the continuum hypothesis is false, stating at
the outset that one has to start with a healthy supply of large cardinals.
Woodin did not try to justify the necessary large cardinals in his
discussion.
The three then answered some questions of the moderator, Keith Devlin, for
about 15 minutes, before turning matters over to the audience for questions.
During this discussion period, Martin first made the point that set
theorists have successfully determined the truth value of questions in
classical descriptive set theory regarding projective sets of real numbers,
even though these questions cannot be answered within the usual ZFC axioms.
This position was enthusiastically endorsed by Woodin. Cohen did not comment
on this claim. Furthermore, both Martin and Woodin emphatically stated that
we know the correct theory of second order arithmetic. Neither Martin nor
Woodin explained what "correct theory" means here, or why these truth value
determinations are so compelling. [They are referring to "all projective
sets of real numbers are determined in the sense of infinite game theory".]
Martin added that not all set theorists accept these truth value
determinations, but that they are nevertheless obviously correct. Cohen said that
he was surprised to see that generic extensions, or forcing extensions, were
being used as fundamental notions in their own right, rather than just
technical artifacts of his (Cohen's) method of proof. He stated that he
generally regarded forcing and generic sets as mere technical devices,
although he did say that he had vague ideas concerning the prior existence
of generic sets as he developed and worked with the notion. Woodin stated
that treating forcing and generic sets and extensions as fundamental has
become commonplace. Cohen asked specifically whether people had
an alternative proof, without forcing, of the fundamental independence
results. Woodin equivocated on this somewhat, but did say that there was a
theorem to the effect that the only way to construct a model of ZF in which
the reals cannot be well ordered was to use forcing. Woodin did not
reference or state a precise result to this effect. Martin said explicitly
that no alternative proof of the independence results, avoiding forcing, is
known.
Devlin then opened up the discussion to audience questions.
I reminded the panel about a result that goes in the direction of forcing
being the only method - one that almost says that every model of ZFC is a
generic extension of its hereditarily definable part. In particular, if V =
L[V(kappa)], then V is a generic extension of its hereditarily definable
part. Woodin gave the equivalent formulation "every set is generic over the
hereditarily definable part", and properly attributed these results to
Vopenka.
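[The result is usually stated as follows - this formulation is my
paraphrase, not the panel's:

    (Vopenka) Every set of ordinals is set generic over HOD, the class of
    hereditarily ordinal definable sets.]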
I read the following closing of a paper by Martin concerning the continuum
hypothesis:
"THROUGHOUT THE LATTER PART OF MY DISCUSSION, I HAVE BEEN ASSUMING A NAÏVE
AND UNCRITICAL ATTITUDE TOWARD CH. WHILE THIS IS IN FACT MY ATTITUDE, I BY
NO MEANS WISH TO DISMISS THE OPPOSITE VIEWPOINT. THOSE WHO ARGUE THAT THE
CONCEPT OF SET IS NOT SUFFICIENTLY CLEAR TO FIX THE TRUTH VALUE OF CH HAVE A
POSITION WHICH IS AT PRESENT DIFFICULT TO ASSAIL. AS LONG AS NO NEW AXIOM IS
FOUND WHICH DECIDES CH, THEIR CASE WILL CONTINUE TO GROW STRONGER, AND OUR
ASSERTION THAT THE MEANING OF CH IS CLEAR WILL SOUND MORE AND MORE EMPTY."
DONALD A. MARTIN, "HILBERT'S FIRST PROBLEM: THE CONTINUUM HYPOTHESIS", IN:
MATHEMATICAL DEVELOPMENTS ARISING FROM HILBERT PROBLEMS, AMS, 1976.
Martin responded that when he wrote this, there was no reasonable line of
investigation aiming towards determining the truth value of CH, whereas now
there is something reasonable - the Woodin line - even though the truth
value of CH has not yet been determined. Martin said that he would not write
this statement today.
I said that the Woodin enterprise is not any kind of attempt to come up with
simple a priori axioms in the traditional sense, but rather involved
investigations, looking for evidence, in the style of what goes on in
physical science - such as cosmology. I said that the insistence that axioms
of a simple a priori nature are required for determining the truth value of
set theoretic statements not provable in ZFC, versus the physical science
investigation approach, is the major bifurcation that determines how
people will relate to efforts such as Woodin's. If one takes the simple a
priori approach, then Martin's 1976 pessimistic statement is fully
applicable today.
I made it clear that I, personally, do not relate to the physical science
approach, and regard efforts to determine the truth value of the continuum
hypothesis as unpromising. [A major problem with the physical science
approach is the apparent inability to refute statements: in physical
science, experiments routinely refute statements].
I said that I would much rather work on proving that there is no simple
axiom which settles the continuum hypothesis. I didn't have time to
elaborate on just what I had in mind, or to offer up some precise
conjectures.
I asked the panel members whether they were interested in this line of
investigation: no simple axiom settling the continuum hypothesis.
Woodin responded by saying that, overwhelmingly, he really wanted to know
whether the continuum hypothesis is true or false. He is far more interested
in pursuing that, as he is now, than in any considerations of simplicity,
which, for him, are a side issue.
Martin responded by saying that the projective determinacy experience showed
that one could have axioms with simple statements, but with very complicated
explanations as to why they are correct.
I did not have an opportunity to respond to Martin's statement - I would
have said that by the standards of axioms for set theory, projective
determinacy is NOT simple. It is far more complicated than any accepted
axiom for set theory.
I did ask Cohen specifically to comment on whether simplicity (of a new
axiom to settle the continuum hypothesis) was important for him. Cohen
responded by saying that such an axiom, for him, must be simple.
Let me end here with something concrete. In my papers in Fund. Math. and in
J. Math. Logic, it is proved that all 3 quantifier sentences in set theory
(epsilon,=) are decided in a weak fragment of ZF, and that there is a 5
quantifier sentence that is not decided in ZFC (it is equivalent to a large
cardinal axiom over ZFC). The axioms of ZF consist of sentences with at most
four quantifiers, together with an axiom scheme with at most five
quantifiers. It has been shown that AxC over ZF is equivalent to a five
quantifier sentence (see the Notre Dame Journal; this is not my result).
Show that over ZFC, any sentence equivalent to the continuum hypothesis
requires a lot more quantifiers. Show that over ZFC, any statement
consistent with ZFC that settles the continuum hypothesis requires a lot
more quantifiers.
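To fix the counting involved (the example is mine): in the language
(epsilon,=), the axiom of extensionality is a three quantifier sentence:

    (forall x)(forall y)((forall z)(z epsilon x <-> z epsilon y) -> x = y).

The challenge is to show that no sentence of comparably low quantifier
complexity is equivalent to, or settles, the continuum hypothesis over ZFC.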
3. IS IT A PROOF YET? PANEL.
The discussion opened with roughly 10 minute presentations by the five
panelists.
Kleiner is a differential geometer who is deeply involved, with others, in
verifying the recent Perelman manuscripts purporting to prove the Poincare
Conjecture in dimension 3 and the Geometrization Conjecture. Of course, his
methodology for doing this is the standard modus operandi for long deep
works in pure mathematics. That is, several big experts commit themselves to
the project, and read line by line, and run seminars in different parts of
the world, getting graduate students involved by presenting talks,
re-expositing portions, communicating with the author, etcetera. At a
certain point in the process, perhaps with new manuscripts from the author
and/or others, clarifying and/or correcting things, enough key experts have
given enough talks, written enough expository material, and said enough
times that they are totally comfortable with the proof. At that point, the
mathematics community accepts the proof. A particularly potent element of
this is if papers are written by experts which extend or sharpen the
original work. Kleiner indicated, to my surprise, that acceptance by the
mathematics community in this case is many years into the future (I don't
remember the figure but I thought it might have been at least ten years!).
He mentioned one reason: Perelman leaves many claims as exercises for the
reader. These exercises are nontrivial.
Robin Wilson is a British graph theorist who has written extensively about
the proof of the four color theorem. He talked about how people have come to
accept computer assisted proofs like the four color theorem and Kepler's
Conjecture (on sphere packing) by Thomas Hales. Wilson, and others on the
panel, said that there is still a distinction between an explanatory proof,
which tells us *why* something is true, and a proof as such, which may only
tell us *that* something is true. [To put this point in a practical way,
mathematicians will still get lots of credit for finding an explanatory
proof in contexts where only a proof has been found.]
Thomas Hales is best known for his computer assisted proof of Kepler's
Conjecture on optimal sphere packing. He described how his experience with
publishing his proof in the Annals of Mathematics led him to his current
main project of "verifying" Kepler's Conjecture, in the sense of giving a
proof that is absolutely rigorous. The issue that the Annals raised was that
the Annals could not vouch for the correctness of the computer interaction
component. Prima facie, this would require the Annals to referee computer
code, and possibly software/hardware environments. The Annals specifically
decided that it would not get into the business of verifying software or
hardware. It seemed that Hales's interest in verification extends to the
entire proof, not just the computerized component. He estimates that it will
take 20 man-years to formally verify Kepler's Conjecture. He calls this
verification project FLYSPECK. See:
http://www.math.pitt.edu/~thales/flyspeck/index.html
It is based on a formal system called HOL Light, due to John Harrison. See
http://www.cl.cam.ac.uk/users/jrh/hol-light/index.html
Avi Wigderson is a very well known theoretical computer scientist at
Princeton IAS. He said that he is not an expert in proof verification
issues, but was invited to talk about PCP = probabilistically checkable
proofs. This is a topic in theoretical computer science concerning
certificates for NP problems. The key result in PCP is that there is a
general method for checking a certificate of reasonable size that looks at
only a few of its lines, say 30. There is also a closely related
topic called zero knowledge proofs, whereby one can convince the world to a
high probability that one has a certificate without revealing any
information about the contents of the certificate. Wigderson stressed that
these are results in theoretical computer science that, as yet, have
no practical significance for verification of mathematical proofs. Wigderson
asserted forcefully that the world has learned to live with uncertainty,
that computer scientists have learned to live with uncertainty, and that
mathematicians, too, can live with uncertainty without damaging their
field.
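To convey the flavor of randomized checking (my illustration, far weaker
than the PCP theorem): Freivalds' classical algorithm checks a claimed
matrix identity A*B = C by testing it against random vectors, rather than
recomputing the product. Each round is cheap, and a false claim survives a
round with probability at most 1/2. A sketch in Python:

    import random

    def freivalds_check(A, B, C, rounds=20):
        # Probabilistically test the claim A*B == C for n x n integer
        # matrices without computing the O(n^3) product. Each round is
        # three O(n^2) matrix-vector multiplications.
        n = len(A)
        for _ in range(rounds):
            x = [random.randint(0, 1) for _ in range(n)]
            Bx = [sum(B[i][j] * x[j] for j in range(n)) for i in range(n)]
            ABx = [sum(A[i][j] * Bx[j] for j in range(n)) for i in range(n)]
            Cx = [sum(C[i][j] * x[j] for j in range(n)) for i in range(n)]
            if ABx != Cx:
                return False  # the claim is definitely wrong
        return True  # accepted; wrong with probability <= 1/2**rounds

A PCP verifier is vastly more sophisticated, but the trade is the same:
certainty is exchanged for a quantifiable error probability and far less
checking work.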
Richard DeMillo is a computer scientist who is best known for his influence
in the development of the Internet. DeMillo expressed skepticism that
verification of mathematical proofs will ever become feasible for large
proofs. He cited difficulties in the verification of the computer code used
for the verification systems, the relevant operating systems, the hardware
operation, etcetera. He also expressed confidence that really long
mathematical proofs, over time, will get substantially simplified to a point
where interest in verification will wane.
Keith Devlin, moderator, then asked the panelists to comment on each other's
presentation. It was clear that this panel was a very diverse set of
experts who, to some extent, did not easily communicate with one another
about this rather subtle and open-ended topic.
Devlin then opened the proceedings to the audience for questions.
A member of the audience who is an expert on finite group theory said that
the classification of finite simple groups presently is about 10,000 pages
of difficult mathematics scattered around, with plenty of known errors. The
Lyons/Solomon project hopes to straighten this out, but will at best result
in about 5000 pages of difficult mathematics, which will still be impossible
for the mathematics community to have an appropriate level of confidence in.
He said that the usual way difficult mathematics gets verified, through
seminars, re-expositions, etcetera, does not scale upward. It may work for a
one or two hundred page proof, but not for a 10,000 page proof. He said that
he has been advocating the use of formal methods for a long time. He called
the classification of finite simple groups the "grand challenge" to formal
verification methods.
I introduced myself as a collaborator of Avigad (whom Hales mentioned as the
person who verified the prime number theorem using Isabelle), working to
come up with tools that would make, in retrospect, Avigad's verification
of the prime number theorem far less painful than it was. I said that to
make verification sufficiently friendly to the working mathematician
requires major advances in language development, user interface, and
algorithms, involving insights from working mathematicians, logicians, and
computer scientists. I conjectured that the tools would be in place within
20 years for formal verification methods to become readily usable and
attractive for professional mathematicians, including finite group
theorists. At that point, the finite group theory community could at least
begin formal verification in earnest.
I addressed the skepticism concerning the achievement of certainty among
some in the panel (especially DeMillo) and some in the audience. I said that
there will be a certification system at the bit level that avoids any need
for verifying the correctness of any large software/hardware system. The
tools that the user interacts with only need to eventually produce a
(totally unreadable) bit file that forms a low level certificate. If this
low level certificate passes certification, then even if there are bugs in
the interactive system, one can be "certain" of the correctness of the
theorem being proved. One still has to do something highly nontrivial. One
has to set this bit level certification system up, with a single low level
program, and a single hardware implementation. One has to discuss just how
one goes about "verifying" this set up. This may include analysis of the
physics of the ultimate hardware in probabilistic terms. However, I said
that this was a nontrivial but highly manageable project that would create
an entirely new level of certainty for mathematics. It was incomparably
easier than verifying some large interactive computer system.
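To make the shape of such a system concrete, here is a toy sketch of the
low level checker (entirely my own caricature - a real system would work at
the bit level with a far richer logic): the untrusted interactive tools
emit a flat certificate in which every line is an axiom or follows from
earlier lines by modus ponens, and a single short program checks it. The
theorem's correctness then rests only on this checker, however buggy the
tools that produced the certificate may be.

    def check_certificate(axioms, cert):
        # Check a toy certificate: each step is ('axiom', formula) or
        # ('mp', i, j), citing earlier line numbers i and j, where line
        # i is some P and line j is the implication ('->', P, Q); the
        # step then adds Q. Raises on any invalid step.
        lines = []
        for step in cert:
            if step[0] == 'axiom':
                if step[1] not in axioms:
                    raise ValueError("not an axiom: %r" % (step[1],))
                lines.append(step[1])
            elif step[0] == 'mp':
                p, imp = lines[step[1]], lines[step[2]]
                if not (isinstance(imp, tuple) and imp[0] == '->'
                        and imp[1] == p):
                    raise ValueError("modus ponens does not apply")
                lines.append(imp[2])
            else:
                raise ValueError("unknown step kind")
        return lines[-1]  # the formula certified by the whole file

    # From axioms P and P -> Q, certify Q:
    assert check_certificate({'P', ('->', 'P', 'Q')},
                             [('axiom', 'P'),
                              ('axiom', ('->', 'P', 'Q')),
                              ('mp', 0, 1)]) == 'Q'

The point of the design is that the checker stays small and fixed while the
certificate-producing tools can be arbitrarily large and untrusted.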
Harvey Friedman