[FOM] Re: pom/fom directions
David Corfield
david.corfield at philosophy.oxford.ac.uk
Wed Oct 29 07:40:40 EST 2003
Forced to earn some of my keep outside the ivory tower, I don't have
the time to keep up with the pace of the FOM list, but I would like to
reply to portions of a message Harvey Friedman wrote a while ago
<BBBA3D53.6D69%friedman at math.ohio-state.edu>
>E.g., in the vast preponderance of real world computer programming, any
>proof one has in mind that the program is correct is generally
>mathematically uninspiring, made of obvious steps, one after another.
>> I found the case illuminating in that even in the unpromising, albeit
>> very short, syntactical trace of the EQP prover, a real mathematician
>> such as Louis Kauffman desperately searches for meaning. It throws into
>> stark contrast the drive for certainty and the drive for explanation.
>Please elaborate on your point, even if it has been spelled out in your
>book.
>What you call the "drive for certainty" is an extremely rich, deep, and
>clear motivation of great practical import which is not only a motivating
>theme for work on proof assistants, but also in verification generally, and
>also computer security.
List members interested in this topic may enjoy reading Donald
MacKenzie's 'Mechanizing Proof' (MIT Press). Although MacKenzie's
associations with Bloor and the Strong Programme may put some people
off, there is plenty of interesting history here about computer security
and chip verification.
A dispute as to whether a chip had been rigorously proved to meet its
specification was nearly taken into the courtroom. We came close to
seeing lawyers arguing about the nature of proof!
[snip]
>In the cases you wish to cite, what does the "drive for meaning" mean and
>what has it led to, and what will it lead to?
Later posts have touched on this distinction. In the case of
probabilistic proofs, for example, even if reverse mathematics
established that no greater logical strength is required, practitioners
have a sense, even in the finite combinatorial case, that the
probabilistic method has enormous power. Now, you can take this to be
*merely* a matter of psychology. Such a view has been taken by one side
of a somewhat inconclusive discussion in philosophy of science over, for
instance, Hamiltonian versus Lagrangian formulations of mechanics.
Feynman points out how they lead physicists to think in very different
ways when, say, extending classical mechanics to the quantum case, but
what are we to make of this?
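To fix ideas with the canonical example of that power (Erdos's 1947
lower bound for Ramsey numbers, restated from memory): colour the edges
of the complete graph K_n red or blue independently at random. The
expected number of monochromatic k-cliques is

    \binom{n}{k} \, 2^{1 - \binom{k}{2}},

and whenever this quantity is less than 1 some colouring must contain no
monochromatic k-clique at all, so R(k,k) > n. A one-line averaging
argument delivers exponential lower bounds that no explicit construction
has yet matched.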
I think we can contribute to this discussion by examining the situation
in mathematics. This is part of a notion I share with Polya: though it
may seem absurd to people who think of mathematics as a very difficult
subject, many issues in epistemology can be profitably broached there
because the relevant features are crisper.
A case of the search for meaning which crops up through the chapters of
my book might begin with us looking over Euler's shoulder as he ponders
the resemblance between sin x/x and a complex polynomial. He notes that
each has the right number of zeros for its Taylor series, that the zeros
don't accumulate, that there are no poles, and that there's symmetry at
+/- infinity. On the other hand, sin x/x tends to 0 at +/- infinity,
where a polynomial blows up. Is there enough of a resemblance to expect
sin x/x to split into linear factors like a complex polynomial? Well, it
works, yielding the sum of the reciprocals of the squares. But why?
Further work in the 19th century leads to the notion of an entire
function and the Weierstrass Product Theorem, so the *hope for a common
ground* Polya mentions has been satisfied.
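For those who would like the step spelled out, here is the standard
reconstruction (a textbook paraphrase, not Euler's own notation). The
zeros of sin x/x are exactly x = +/- n pi, so splitting it as if it were
a polynomial gives

    \frac{\sin x}{x} = \prod_{n=1}^{\infty} \left( 1 - \frac{x^2}{n^2 \pi^2} \right).

Comparing the coefficient of x^2 here with the Taylor series
sin x/x = 1 - x^2/6 + ... yields

    \sum_{n=1}^{\infty} \frac{1}{n^2} = \frac{\pi^2}{6},

Euler's solution of the Basel problem. The Weierstrass Product Theorem
later showed that every entire function factors over its zeros in this
way, once suitable convergence factors are inserted.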
But the drive for meaning doesn't end there. Do we understand when
things split in other situations? Yes, for the integers. How about other
collections of number-like things, such as Z[i], the Gaussian integers?
Yes, and plenty of number-theoretic results follow. Then we find that in
other such *rings* the splitting need not be unique, so we try to
restore uniqueness by introducing prime ideals.
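The simplest textbook witness to the failure, for those who haven't met
it: in Z[sqrt(-5)],

    6 = 2 \cdot 3 = (1 + \sqrt{-5})(1 - \sqrt{-5}),

and all four factors are irreducible, so unique factorisation into
elements is lost. Pass to ideals, however, and uniqueness returns:

    (6) = (2,\, 1+\sqrt{-5})^2 \, (3,\, 1+\sqrt{-5}) \, (3,\, 1-\sqrt{-5}),

a product of prime ideals refining both of the rival factorisations.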
What do such splittings have in common? Dedekind and Weber tell us the
situation in algebraic number fields and complex function fields is very
similar, equating prime ideals with points of a Riemann surface. Entire
functions are then associated with algebraic integers. The analogy is
pushed harder, producing delights such as the p-adic numbers.
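A few entries from the dictionary, as it is usually set out:

    point a of the Riemann surface         <->  prime p
    order of vanishing of f at a           <->  p-adic valuation v_p(n)
    Laurent expansion in powers of (z - a) <->  expansion in powers of p
    completion C((z - a))                  <->  Hensel's p-adic field Q_p

The product formula, tying together the absolute values of a rational
number at all primes and at the archimedean place, is a first hint of
the adeles mentioned below.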
Tracking the story on into the 20th century we'd hear about adeles and
ideles, Arakelov geometry, the Weil conjectures, Grothendieck's toposes,
motives, and so on.
Are we dealing here with a never-ending quest? Do we ever hit conceptual
rock bottom? Might we have ended up with a very different web of
concepts? If we take it that there's a best way of organising concepts,
would it pay to revisit earlier decisions? E.g., Baez and his group are
suggesting that the connection between entire functions and algebraic
integers wasn't quite made *correctly*. Revising the connection using
groupoids (for these you might look at chap. 9 of my book) apparently
explains some otherwise weird features of "split" versus "inert" primes.
What do we make of the distinction Weil draws between
1) formulating the axioms for uniform spaces, which he likens to a
sculptor moulding snow,
and
2) working on the function field/number field analogy, which he likens
to a sculptor working on a hard piece of rock, realising the form
within?
If we make sufficiently precise sense of these questions, do we bring in
f.o.m. methods? How likely are they to involve category-theoretic tools?
So many questions, so little time.
David Corfield
http://users.ox.ac.uk/~sfop0076/