[FOM] New formal models of computation.
Alex Berka
alex.berka at a-r-a-m.org
Mon May 31 02:46:42 EDT 2010
May I bring to fomers' attention some new work in foundational
informatics.
The set-theoretical and logical definitions of procedures for
assembling algebraic constructions in mathematics, and the
constructions themselves, are normally considered to reside in a
universe of discourse which is neutral and abstracted from any
computational implementation. Recent work suggests that such discourse
exhibits implicit notions of computation, rooted in tree-based
formalisms divorced from abstract denotational environments. In
particular, it is argued that such discourse has a sequential nature,
and is asynchronous and recursion oriented.
Consider the hypothesis that natural language's structural defects
(subexpression repetition, and the lack of information about which
subexpressions may be semantically processed simultaneously, and
where) render it inappropriate as a template for formal and
programming languages, and that the consequences include difficulties
in introducing an explicit notion of computation into mathematical
discourse, and in devising viable parallel models of computation.
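To illustrate the first defect in the most elementary way: in
conventional notation a shared subterm such as a+b in (a+b)*(a+b) must
be written out twice, and nothing in the text records that the two
occurrences denote one value, or that independent subterms could be
processed simultaneously. The Python fragment below is only my
informal illustration of the point, and is not taken from the report.

    # Conventional expression syntax repeats the shared subexpression
    # (a + b), and gives no indication that independent subterms could
    # be evaluated at the same time.
    def repeated(a, b):
        return (a + b) * (a + b)

    # Naming the shared subterm once makes the sharing explicit in the text.
    def shared(a, b):
        s = a + b          # written once, reused by every consumer
        return s * s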
Harvey Friedman, in a thread from 2003 ([FOM] 187:Grand Unification 2 -
saving human lives), wrote:
> What is needed is an appropriate mathematically friendly functional
> programming language, with clear mathematically friendly semantics,
> which does not sacrifice much in the way of efficiency. Complete
> modularity, no side effects, where semantics is inherited from the
> standard semantics of mathematics, etc.
A new family of computation models has been developed which does not
have the space/time complexity overheads associated with the Turing
Machine and the lambda calculus in supporting a high level of
abstraction in programming. I would like to propose the Synchronic
B-Ram and the Space language as possible candidates for deriving a
mathematically friendly applicative language. You are invited to view
a 4-page introduction and a longer preprint report on the homepage of
www.isynchronise.com. May I suggest fomers look at chapters 2, 3 and 9
in the first instance. A summary appears below.
Interlanguages and Synchronic Models of Computation.
A novel language system has been developed, which has given rise to
promising alternatives to standard formal and von Neumann network
models of computation. A textual structure called the interstring is
proposed, which is a string of strings of simple expressions of fixed
length. Unlike a conventional tree language expression, an interstring
linked with an abstract machine environment can represent sharing of
sub-expressions in a dataflow, and a program incorporating data
transfers and spatial allocation of resources for the parallel
evaluation of dataflow output. Formal computation models called the
alpha-Ram family are introduced, comprising synchronous,
reconfigurable machines with finite and infinite memories, designed to
support interstring-based programming languages (interlanguages).
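To give a rough sense of the flavour (the precise definitions are in
chapters 2 and 3 of the report; the Python fragment below is only my
informal gloss, with invented names, and is not the formalism itself),
an interstring may be pictured as a sequence of groups of fixed-length
simple expressions, where the entries of a group are independent and
may be evaluated simultaneously, and where a name introduced by an
earlier group may be shared by later ones.

    # Informal gloss only: an interstring-like object as a string of
    # strings of simple, fixed-length expressions.  Entries within a
    # group are independent, so they could be evaluated in parallel;
    # names written by earlier groups are shared by later ones.
    # Dataflow for  (a + b) * (a + b) + (a + b):
    interstring = [
        [("s", "add", "a", "b")],                         # s := a + b (shared)
        [("t", "mul", "s", "s"), ("u", "id", "s", "_")],  # a parallel group
        [("r", "add", "t", "u")],                         # r := t + u
    ]

    OPS = {"add": lambda x, y: x + y,
           "mul": lambda x, y: x * y,
           "id":  lambda x, _y: x}

    def evaluate(groups, env):
        # Sequential reference evaluation; each group's entries read the
        # environment left by earlier groups and are mutually independent.
        for group in groups:
            env.update({dst: OPS[op](env[a], env.get(b))
                        for dst, op, a, b in group})
        return env

    print(evaluate(interstring, {"a": 2, "b": 3, "_": None})["r"])   # 30

The point is only that sharing and simultaneity are visible in the
flat text itself, without drawing a graph.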
Distinct from dataflow, visual programming styles, and graph
rewriting, alpha-Ram machines' instructions are bit-level and execute
in situ without fetch. They support high-level sequential and parallel
languages without the space/time overheads associated with the Turing
Machine and lambda calculus, enabling massive programs to be
simulated. The elemental devices of one alpha-Ram machine, called the
Synchronic A-Ram, are fully connected and simpler than multi-input
boolean operations. With the addition of a mechanism for expressing
propagation delay, the machine may be seen as a formal model for
boolean circuits and for sequential digital circuits incorporating states.
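As a crude indication of what "bit-level instructions executing in
situ without fetch" amounts to, the toy Python model below has every
cell update its own bit from a snapshot of the previous global state
on each synchronous cycle, with no central processor and no
instruction fetch. The cell repertoire and addressing here are
invented for the illustration and are not the Synchronic A-Ram's;
chapter 3 of the report gives the actual instruction set.

    # Toy model only: a synchronous machine of cells, each holding one
    # bit together with a fixed, local, bit-level rule.  On every cycle
    # each cell reads the previous global state and rewrites its own
    # bit in situ.
    def step(cells):
        old = [bit for bit, _ in cells]          # snapshot: synchronous update
        return [(rule(old), rule) for _, rule in cells]

    cells = [
        (1, lambda s: s[0]),                     # cell 0 holds its value
        (0, lambda s: s[1]),                     # cell 1 holds its value
        (0, lambda s: 1 - (s[0] & s[1])),        # cell 2: in-situ NAND of 0 and 1
    ]

    for _ in range(2):
        cells = step(cells)
    print([bit for bit, _ in cells])             # [1, 0, 1]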
A compiler for an applicative-style interlanguage called Space has
been developed for the Synchronic A-Ram. Space can express coarse- to
very fine-grained MIMD parallelism, and is modular, strictly typed, and
deterministic. Barring operations associated with memory allocation
and compilation, Space modules are referentially transparent. A range
of massively parallel modules have been simulated on the Synchronic
A-Ram, with outputs as expected. Space is more flexible than, and has
advantages over, existing graph, dataflow, functional, and multi-
threaded programming paradigms. At a high level of abstraction,
modules exhibit a small, sequential state transition system, aiding
verification. Composable data structures and parallel iteration are
straightforward to implement, and allocations of parallel sub-
processes and communications to machine resources are implicit.
Alex Berka
Principal Researcher
Isynchronise Ltd