FOM: Re: Godel, f.o.m.
Steve Stevenson
steve at cs.clemson.edu
Wed Jan 26 08:32:58 EST 2000
Charles Silver writes:
>
> Steve Stevenson wrote:
> > James Fetzer has a really great article on this particular view of
> > computing.
>
> Geez, I remember thinking, about 8 years ago when I read the article, that
> it had little content. Wasn't the paper an extended argument against
> program verification, and wasn't his main point just the standard Cartesian
> criticism that we can always make an error, no matter how much we try to
> verify a program with a formal proof? The proof that the program is
> correct may itself be wrong, and other things can go wrong as well.
I believe that his point is something different, especially given the
"formal systems" tenor of Vladimir's comment. The machine is inherently
different, period. Your criticisms were already used by Hoare and the rest
of the formalist school of computer science.
It would seem to me that 21st Century mathematics must include the
computer as part of the tool kit. If this is not the case, then the
principles of computing will lie elsewhere. If it is the case, then we
must recognize the limits to what it does and does not do. For
example, is it possible to put enough constraints on a computation to
correct machine errors when they occur?
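For instance (this is only my own toy sketch, not anything from Fetzer):
one simple constraint is an independent check the answer must satisfy
before it is accepted, retried a few times so that a transient machine
error gets caught rather than believed:

import numpy as np

def checked_solve(A, b, tol=1e-8, retries=3):
    """Solve Ax = b, but only accept x if the residual ||Ax - b|| is small."""
    for _ in range(retries):
        x = np.linalg.solve(A, b)
        # the constraint: the claimed solution must actually satisfy the system
        if np.linalg.norm(A @ x - b) <= tol * np.linalg.norm(b):
            return x
    raise RuntimeError("residual check failed; result not trusted")

A = np.array([[4.0, 1.0], [1.0, 3.0]])
b = np.array([1.0, 2.0])
print(checked_solve(A, b))

Retrying gives only a crude correction for transient faults, of course;
correcting systematic errors takes a great deal more than this.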
Turing machines are really boring. And there was a time when people
actually used mathematics for things ....
> Please correct me if I am wrong, but as I recall, his arguments apply
> equally well to mathematics (except for his specific comments about
> compilers, of course). That is, consider a formal proof checker that's been
> created to check the accuracy of math proofs. It is possible for a proof to
> be wrong and for the proof checker to err (for any number of reasons) and
> incorrectly label it as a good proof. (And, if we thought we could shore
> this up with a proof checker to check the proof checker, then *this* very
> proof checker would need a proof checker, etc. etc.) Again, this overly
> general fact about fallibility doesn't seem to cut any ice about computer
> science specifically. Here, the Fetzer conclusion would be that math itself
> is not formal, since one can't (in Descartes's sense) be *absolutely
> certain* that a mistake in a math proof hasn't been made.
I think you're dodging here. For platonic logicians, I'm sure these are
arguments that are easy to accept ... for those of us who use this
stuff in the field, there is a real question about all this.
Let me bore you with a reality. Nuclear weapons policy is being made
by treating the output of simulations as 100% correct and by assuming
that the physics is 100% correct on the basis of old test data. That's
really bad science, let alone policy. Reasoning about whether such
simulations can be trusted is guided by the mathematics, numerical and
otherwise; it is called validation. Now, it makes a lot of difference
whether those simulations are *right* or *wrong*.
The mathematical point is not that "Well, it could be right or it
could be wrong and that's a philosophical point not worth caring
about". The mathematical point is to ask "How reliable is your
conclusion?" Historically, we let a countable number of grad students
pore over proofs looking for a dissertation. Not very practical any
more. Abel and Cauchy got into the foundations game because they saw
unreliable computations (divergent series) being used without reliable
ways of knowing when these computations were right or wrong. [Of
course, we now understand asymptotic series well enough that physics
uses them reliably quite often.] Time to revisit that reliability
point, IMHO.
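To make the asymptotic-series remark concrete, here is a toy sketch of
my own (assuming scipy's exp1 as the reference value): the standard
expansion of the exponential integral E1(x) diverges for every x, yet
truncated near its smallest term it is perfectly reliable, and the
reliability question is exactly "how many terms, and how good is the
answer then?":

import math
from scipy.special import exp1

def e1_asymptotic(x, terms):
    """Partial sum of the divergent asymptotic series for E1(x)."""
    total, term = 0.0, 1.0
    for n in range(terms):
        total += term
        term *= -(n + 1) / x   # next term is (-1)^(n+1) (n+1)! / x^(n+1)
    return math.exp(-x) / x * total

x = 5.0
exact = exp1(x)
for terms in (3, 5, 8, 15, 30):
    print(terms, abs(e1_asymptotic(x, terms) - exact))
# The error shrinks until roughly x terms have been taken, then grows without
# bound: useless as a convergent series, reliable as a truncated approximation
# once you know where to stop.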
steve