In this note we review what we have shown we can do, namely parsing and interpretation, and show how this leads to results about what is impossible.
The languages we use are functional languages, i.e., languages where functions are first-class objects, so we can have functions of functions and so on, and where definitions can be recursive. Indeed, recursion is the only form of looping we have. Idris is certainly a functional language, but so is the HackyLang that we have created.
Note: We have extended HackyLang, adding expressions for pairs: `Cons` to form pairs, with `Left: Exp -> Exp` and `Right: Exp -> Exp` for projections, and introducing `Null` to deal with invalid cases. We have also added a data type `Program`, which is a context with a main expression (to be thought of as a function), and a `run` function that takes a program and an input expression, applies the main expression to the input, and interprets the result in the context.
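As a sketch of the pair fragment of this extension (in Python with hypothetical names; the course code is in Idris), the new expression constructors and a projection-reducing simplification might look like:

```python
from dataclasses import dataclass

class Exp:
    pass

@dataclass
class Null(Exp):          # marks invalid cases
    pass

@dataclass
class Num(Exp):           # a stand-in atomic expression for illustration
    value: int

@dataclass
class Cons(Exp):          # forms pairs
    first: Exp
    second: Exp

@dataclass
class Left(Exp):          # first projection
    arg: Exp

@dataclass
class Right(Exp):         # second projection
    arg: Exp

def simplify(e: Exp) -> Exp:
    """Reduce projections: Left/Right of a Cons project out a component;
    a projection of anything that is not a pair simplifies to Null."""
    if isinstance(e, Left):
        inner = simplify(e.arg)
        return simplify(inner.first) if isinstance(inner, Cons) else Null()
    if isinstance(e, Right):
        inner = simplify(e.arg)
        return simplify(inner.second) if isinstance(inner, Cons) else Null()
    return e
```

This mirrors the shape of the HackyLang interpreter: interpretation is repeated simplification of expression trees.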
A rich type system makes programming much easier and more pleasant, but a weakly/dynamically typed functional language is no less powerful. So while we have illustrated various things with Idris code, we can presumably write the same code in HackyLang. In practice it would be best to use, say, Scheme (or a subset of it) to replace both Idris and HackyLang, and it is a nice project to port this part of the course to Scheme.
In functional languages parsing can be very pleasant. We view parsers as functions taking an input and giving an output, or rather a `ParseResult`. The key point is that we can define parsers as combinations of other parsers: sequencing (i.e. `++`), alternatives (i.e. `||`), mapping, repetition and flat-mapping. Starting with basic parsers that just match a single character based on a predicate, we can combine these, mutually recursively, to parse a language.

We can presumably do all this in HackyLang, with `ParseResult` replaced by a pair, with `Null` marking failure (the paired value can be anything). A string can be represented by an iterated pair of characters (with characters represented by natural numbers). One can readily implement basic parsers as well as their combinations. Doing this in Scheme is an enjoyable exercise.
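The combinators just described can be sketched in Python (all names hypothetical; the course code is in Idris). Here a parser is a function from a string to either `None` (failure) or a pair of parsed value and remaining input:

```python
def char(pred):
    """Basic parser: match a single character satisfying a predicate."""
    def p(s):
        if s and pred(s[0]):
            return (s[0], s[1:])
        return None
    return p

def seq(p1, p2):
    """Sequencing (the ++ of the notes): run p1, then p2 on the rest."""
    def p(s):
        r1 = p1(s)
        if r1 is None:
            return None
        v1, rest = r1
        r2 = p2(rest)
        if r2 is None:
            return None
        v2, rest2 = r2
        return ((v1, v2), rest2)
    return p

def alt(p1, p2):
    """Alternatives (the || of the notes): try p1, fall back to p2."""
    def p(s):
        return p1(s) or p2(s)
    return p

def pmap(f, p1):
    """Mapping: transform the parsed value."""
    def p(s):
        r = p1(s)
        if r is None:
            return None
        v, rest = r
        return (f(v), rest)
    return p

def rep(p1):
    """Repetition: apply p1 zero or more times, collecting the values."""
    def p(s):
        vals = []
        while True:
            r = p1(s)
            if r is None:
                return (vals, s)
            v, s = r
            vals.append(v)
    return p

def flatmap(p1, f):
    """Flat-mapping: the next parser may depend on the value parsed so far."""
    def p(s):
        r = p1(s)
        if r is None:
            return None
        v, rest = r
        return f(v)(rest)
    return p

# Example: a natural-number parser built from the combinators (one digit
# followed by any number of further digits).
digit = char(str.isdigit)
number = pmap(lambda pair: int(pair[0] + "".join(pair[1])), seq(digit, rep(digit)))
```

For instance, `number` applied to `"42abc"` succeeds, returning the value `42` together with the unconsumed remainder `"abc"`, while on an input that does not start with a digit it fails.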
HackyLang

We have written in code and illustrated in lectures:

- an `ExpressionParser`, for a language smaller but more complex than HackyLang (since we needed to flat-map to deal with context dependence of parsing);
- an interpreter for HackyLang built by recursive application of simplification (warning: simplification may actually complicate);
- a program in HackyLang that can compute the factorial function;
- the extension of HackyLang to include pairs and a notion of running programs;
- a parser for HackyLang.

We hope this is enough to make all the claimed constructions below plausible.
We sketch how diagonal arguments let us prove some fundamental results showing the limits of computation.
We postulate that we have a language and associated objects that satisfy the following:

- a `run` function `run: Program -> String -> String`;
- a way to `quote` programs, i.e. represent them by strings;
- a (possibly partial) function `eval : String -> String -> String` so that if the program `P` when run with input `s` terminates with output `t`, then `eval (quote P) s = t` (by this we mean the lhs is defined and equals the rhs).

We say `eval` is a total interpreter if it is defined in all cases, though there is no restriction on what it outputs except in the case of a quoted program and an input on which that program terminates.

Theorem 1: A language as postulated above cannot have a total interpreter defined in itself.
Proof: Suppose `eval` is such a total interpreter. Define `evil: String -> String` by

`evil s = (eval s s) + " contrariwise"`

and let `evilProg` be the program giving this definition. Since `eval` is total, `evilProg` also terminates with all inputs, i.e. `run evilProg s` is always defined. Hence for every string `s` we have `eval (quote evilProg) s = evil s`. In particular,

`evil (quote evilProg) = (eval (quote evilProg) (quote evilProg)) + " contrariwise" = (evil (quote evilProg)) + " contrariwise"`

which is impossible, since no string equals itself with a nonempty string appended.
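The contradiction at the end of the proof is elementary string arithmetic, which we can sanity-check directly (Python; `eval_` and `evil` are hypothetical and appear only in the comments):

```python
# Hypothetical: if a total interpreter eval_ existed, we could define
#     def evil(s): return eval_(s, s) + " contrariwise"
# and the diagonal argument forces the string equation
#     t = t + " contrariwise"
# for t = evil applied to its own quotation. No string satisfies it,
# because the right-hand side is strictly longer than the left.
def satisfiable(t: str) -> bool:
    return t == t + " contrariwise"
```

Since appending always strictly increases length, `satisfiable` is false on every input.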
We can bootstrap this result to prove some fundamental results. We say that a function `f` is computable if there is a program in our language that, when run with the arguments of `f` as input, outputs the result. This depends on our language, but if we can interpret another language in ours, then computability in the other language implies computability in ours.
Let `halt: String -> String -> Bool` be the function such that `halt(q)(s)` is true if and only if `q = quote P` for some program `P` that halts when run with input `s`.
Theorem 2: The function halt
is not computable.
Proof: Assume that `halt` is computable. Then we can modify the (partially defined) `eval` function to be a total interpreter, namely define

`safeEval : String -> String -> String`
`safeEval q s = if (halt q s) then eval q s else "farewell and thanks for all the fish"`

Since `eval q s` is defined whenever `halt q s` is true, `safeEval` is total, and it agrees with `eval` on quoted programs and terminating inputs. This contradicts Theorem 1.
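A toy illustration of how `safeEval` is assembled (Python, all names hypothetical): here `halt` is *mocked* by a lookup table purely for demonstration; the whole point of the theorem is that no computable `halt` exists in general.

```python
# Toy table of 'programs', keyed by their quoted form. "loop" stands for a
# program that never halts; we never actually call it.
PROGRAMS = {
    "reverse": lambda s: s[::-1],  # halts on every input
    "loop": None,                  # never halts
}

def halt(q: str, s: str) -> bool:
    # Mocked by table lookup; NOT computable in reality (Theorem 2).
    return PROGRAMS.get(q) is not None

def eval_(q: str, s: str) -> str:
    # Partial: would diverge on "loop", so we must guard calls with halt.
    return PROGRAMS[q](s)

def safe_eval(q: str, s: str) -> str:
    # Total by construction: falls back to a fixed string when halt says no.
    if halt(q, s):
        return eval_(q, s)
    return "farewell and thanks for all the fish"
```

The guard ensures `eval_` is only ever invoked on inputs where it is defined, which is exactly how computable `halt` would repair partial `eval` into a total interpreter.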
It follows that there is a function that grows faster than any computable function.
Theorem 3: There is a function $f: \mathbb{N} \to \mathbb{N}$
such that if $g: \mathbb{N} \to \mathbb{N}$
is a computable function, for some $n \in \mathbb{N}$
we have $f(n) > g(n)$
.
Proof:
For a natural number n
, consider all programs P
of length n
and strings s
with length n
so that the program P
with input s
terminates. Given such a P
and s
, let t(P, s)
be the number of simplification steps needed when running the program P
with input s
. Let f(n)
be the maximum of t(P, s)
over all such pairs.
We see that $f$
is as claimed. Suppose not. Then there is a computable function $g: \mathbb{N} \to \mathbb{N}$
such that $f(n)\leq g(n)\ \forall n\in\mathbb{N}$
. Then it is easy to define a computable function `halt` using $g$: to decide whether `P` halts on `s`, run `P` on `s` for $g(n)$ simplification steps, where $n$ bounds the lengths of `quote P` and `s`; by the bound on $f$, if `P` has not terminated within that many steps it never will. This contradicts Theorem 2.
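The reduction in that last step can be sketched as follows (Python, all names hypothetical; `run_for_steps` is an assumed step-bounded interpreter in the spirit of our simplification-counting `run`, mocked below only so the sketch can be exercised):

```python
# Sketch: given a computable g with f(n) <= g(n) for all n, halting becomes
# decidable. run_for_steps(q, s, k) runs the program quoted by q on input s
# for at most k simplification steps and returns (done, output_or_None).
def make_halt(g, run_for_steps):
    def halt(q: str, s: str) -> bool:
        n = max(len(q), len(s))
        done, _ = run_for_steps(q, s, g(n))
        # f(n) bounds the steps of any halting run at these lengths, so if the
        # program has not finished within g(n) >= f(n) steps, it never will.
        return done
    return halt

# Mock bounded interpreter for demonstration only: "id" halts in one step,
# "loop" never halts.
def mock_run(q, s, k):
    steps = {"id": 1, "loop": None}[q]
    if steps is not None and steps <= k:
        return (True, s)
    return (False, None)

halt = make_halt(lambda n: 10, mock_run)
```

With a real step-bounded interpreter in place of `mock_run`, `make_halt` would compute the uncomputable `halt`, which is the contradiction.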