Expanded HTML version, with added links, derived from "Gödel's theorems for minds and computers", published in the special issue MIND ≠ COMPUTER of Informatica, vol. 19, no. 4, Nov. 1995, pp. 627-34.
On the other hand, it could be argued that it is precisely because we do not know enough about ourselves and our minds that we can make comparisons with computers and try to design computational models. This is especially so because we also do not know exactly what computers are incapable of, although we have some abstract, general results about their limitations, such as Turing's theorem about the inability of an idealized computer to determine for itself whether its computation terminates or not. This theorem, and related results by Gödel and Church, are frequently used in arguments about the existence of formal models of the mind; interestingly enough, they have been used to argue both for and against that possibility. As a preliminary observation, it can be noted that the "negative" use of limitative theorems, as these meta-mathematical results are called, is less productive in the sense that the faculty by which mind is supposed to transcend "mere" computation remains essentially mysterious. The "positive" use of the theorems promotes a more definite, less exalted view of the mind as something which has its own limitations, similar to those which formal systems have. The present paper argues for this latter view, exploring the common feature of all these theorems, namely self-reference, and focusing on Gödel's theorems.
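Turing's theorem mentioned above rests on a diagonal construction that can be sketched directly in code. The sketch below is my illustration, not anything from the paper: given any candidate halting decider, one builds a program that does the opposite of whatever the decider predicts about that very program, so the decider must err somewhere.

```python
def diagonal(halts):
    """Given any claimed halting decider, build a thunk that does the
    opposite of whatever the decider predicts about that very thunk."""
    def d():
        if halts(d):
            while True:      # decider said "halts", so loop forever
                pass
        return "halted"      # decider said "loops", so halt at once
    return d

# A trivially total "decider", refuted by its own diagonal program.
def always_no(f):
    return False             # predicts that every program loops

d = diagonal(always_no)
print(d())                   # always_no predicted looping, yet d halts
# For a decider answering True, the analogous d would loop forever, so
# there the refutation is the non-termination itself.
```

The same argument defeats any total `halts`, however sophisticated, which is the content of Turing's theorem.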
Gödel's theorems are actually special, self-referential
consequences of the requirement of consistency: in a consistent
system, something must remain unprovable. One unprovable statement
is the statement of that very fact, namely the statement which
says of itself that it is
unprovable (first theorem): you cannot prove a sentence
which says that it can't be proved (and remain consistent). Another
unprovable statement in a consistent system is the statement
of consistency itself (second theorem). In addition, if the
formal system has a certain stronger form of consistency, the
sentence which asserts its own unprovability, called the Gödel
sentence, is also not refutable in the system. Rosser later
constructed a more complicated sentence for which simple consistency
is sufficient both for its unprovability and for its unrefutability.
Similar sentences were constructed by others (Rogers, Jeroslow
[Boolos 79, pp. 65-6]), showing that consistent formal systems cannot
prove many things about themselves. On the other hand, a formal
system can retain all the insight into itself that is compatible
with consistency: thus, although it cannot prove its Gödel
sentence, if it is to remain consistent, it can prove that very
fact, namely the fact that it cannot prove its Gödel sentence if
it is consistent [Robbin 69, p. 114].
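As a schematic summary (my notation, not the paper's): writing T for a formal system of the relevant kind, G for its Gödel sentence, Con(T) for its consistency statement, and Prov for its provability predicate, the results above read:

```latex
% First theorem: consistency makes G unprovable; the stronger
% omega-consistency (or Rosser's construction) also makes G irrefutable.
\mathrm{Con}(T) \;\Rightarrow\; T \nvdash G
\qquad
\omega\text{-}\mathrm{Con}(T) \;\Rightarrow\; T \nvdash \lnot G

% Second theorem: consistency itself is unprovable.
\mathrm{Con}(T) \;\Rightarrow\; T \nvdash \mathrm{Con}(T)

% The retained insight noted in [Robbin 69]: T itself proves that
% if it is consistent, it cannot prove its Gödel sentence.
T \vdash \mathrm{Con}(T) \rightarrow \lnot\mathrm{Prov}(\ulcorner G \urcorner)
```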
Gödel's theorems uncover a fundamental limitation of
formalization, but they say that this limitation could be overcome only
at the price of consistency; we might thus say that the
limitation is so fundamental as to be no limitation at all. The
theorems do not reveal any weakness or deficiency of formalization,
but only show that the supposed ideal of formalization - proving
all and only all true sentences - is self-contradictory and
actually undesirable:
Gödel's theorems show that the notions of truth and
provability cannot coincide completely, which at first appears
disturbing.
Self-reference in Gödel's theorems
The application of Gödel's theorems
to fields outside
meta-mathematics,
notably the philosophy of mind, was initiated by
Gödel himself. He had a strong philosophical bent towards
realism/platonism which also
motivated his (meta)mathematical discoveries [Feferman 88, p. 96],
[Weibel &
Schimanovich 86]. Gödel first
thought that his theorems established the superiority of mind
over machine [Wang 90, pp. 28-9]. Later, he came to a less decisive,
conditional view: if a machine can equal the mind, the fact that it
does cannot be proved [Weibel &
Schimanovich 86], [Casti 89, p. 321].
This view also parallels the logical
form of Gödel's second theorem: if a formal system of a certain
kind is consistent, the fact that it is cannot be proved within
the system. Gödel's more famous first theorem says that if a
formal system (of a certain kind) is consistent, a specific
sentence of the system cannot be proved in it.

Implications of Gödel's theorems
The fact that a particular sentence is neither provable nor
disprovable within a system only means that it is logically
independent of the axioms: they are not strong enough to either
establish or refute it - they don't say enough about it one way
or the other. Saying more, by adding additional axioms (or rules
of inference) might make the sentence provable. But in Gödel's
cases, this does not work: even if Gödel's sentence is added as
an additional axiom, the new system would contain another unprovable
sentence, saying of itself that it is not provable in the
new system. This form of self-perpetuating
incompleteness
might
be called, following [Hofstadter 79, p. 468] and [Mendelson 64,
p. 147], essential
incompleteness.
On the positive side, the theorems show that certain formal
systems have a much more intricate, reflexive structure than
formerly suspected, containing much of their own meta-theory.
As Quine says, we used to think that mathematical truth consisted
in provability [Ways of Paradox, p. 17].
Gödel's theorems undermine the customary identification of truth
with provability by connecting truth with unprovability:
the first theorem presents a case of
(*) unprovable → true
(if the sentence asserting its own unprovability is not provable,
then it is true); the second theorem presents a case of
true → unprovable
(if the sentence asserting the consistency of the system is true,
then it is not provable). However, the notion of truth has a
problem of its own, namely the liar paradox, of which Gödel's
sentence is a restatement in proof-theoretic terms. Thus,
Gödel's theorems do not actually establish any disturbing
discrepancy between provability and truth. Furthermore, the
implication (*) above is an oversimplification: assuming consistency,
Gödel's sentence is not simply true, because it is not
always true, i.e. not in all interpretations. If it were, it would
be provable, by the completeness theorem (also proved by
Gödel), as noted in [Penrose 94, p. 116, note 5]
(provability is truth in all interpretations). The first
theorem shows that if the system is consistent, it can be
consistently extended with the negation of the Gödel
sentence, which means that the sentence is actually false in some
models of the system. Intuitively, without going into details,
this could be explained by saying that in those models the
Gödel sentence acquires a certain stronger sense of
unprovability which those models do not support
[Bojadziev
95a,
p. 391]. Gödel's
theorem thus shows that there must
always exist such unusual, unintended interpretations of the
system; as
Henkin says, quoted in [Turquette 50]:
We tend to reinterpret Gödel's incompleteness result as asserting not primarily a limitation on our ability to prove but rather on our ability to specify what we mean ... when we use a symbolic system in accordance with recursive rules [Gödel & the synthetic a priori].

Similarly, Polanyi says, though only in connection with the second theorem:

we never know altogether what our axioms mean [Personal Knowledge, p. 259]. We must commit ourselves to the risk of talking complete nonsense if we are to say anything at all within any such system [p. 94].

This characterization of formal language sounds more like something that might be said about ordinary, natural language. Thus, if we take as a characteristic of ordinary language its peculiar inexhaustibility and the frequent discrepancy between intended and expressed meaning ("we never know altogether what our sentences mean; we must risk talking nonsense if we are to say anything at all"), Gödel's theorems would show that, in this respect, some formal languages are not so far removed from natural ones. Certain similarities between the self-reference in natural language and in Gödel's sentence and theorems have also been noticed at the lexical and pragmatic level (indexicals [Smullyan 84], performatives [Hofstadter 79, p. 709]). This line of thought, namely that the self-reference which leads to Gödel's theorems makes a formal system more human, so to speak, will be followed here to the conclusion that such systems are indeed suitable for modelling the mind.
The Lucas
argument, especially in the form advanced by
Penrose
[Penrose 94], now centers on how far into the transfinite a Gödel
operator can follow the mind's ability to produce the Gödel sentence
of any system in the sequence of successively extended systems.
Generalizing meta-circular interpretation, provability can be
specified in a separate meta-language, and reflection principles
defined for relating and mixing proofs in both languages. Such
meta-level architectures [Yonezawa & Smith 92] can be used
to implement reflective
or introspective systems, which also include an internal
representation of themselves and can use it to shift from normal
computation about a domain to computation about themselves
[Maes & Nardi 88] in
order to achieve greater flexibility. Meta-level architectures
are useful for knowledge representation, allowing the expression
and use of meta-knowledge, and opening the possibility of
computational treatment of introspection and self-consciousness
[Giunchiglia & Smaill 89, p. 128]. For example, Perry suggested
an architecture of
self-knowledge and self in which indexicals mediate between bottom
level representations, in which the organism is not itself
represented, and higher levels at which it is represented generically,
as any other individual [Perry 85].
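As a toy illustration of the shift these architectures allow (my sketch, not code from the cited systems; all names are invented), a system can carry a representation of its own rule table and move from computing about its domain to computing about itself:

```python
# Object-level abilities: computation about a domain (here, numbers).
rules = {
    "double": lambda x: 2 * x,
    "square": lambda x: x * x,
}

# The system also holds a representation of itself -- a description
# of its own abilities -- which meta-level computation can consult.
system = {
    "rules": rules,
    "self": {"knows": sorted(rules)},
}

def run(task, x):
    """Object-level computation about the domain."""
    return system["rules"][task](x)

def introspect(query):
    """Meta-level computation: the system answers questions about itself."""
    if query.startswith("can "):
        return query[4:] in system["self"]["knows"]
    return None

print(run("double", 21))         # ordinary computation
print(introspect("can square"))  # computation about the system itself
print(introspect("can halt"))
```

The point of the sketch is only the two levels: the same system that answers `run` queries about its domain can answer `introspect` queries about its own representation.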
It may be that, as Webb says, the phrase 'the Gödel
sentence of a man' is an implausible construction [Webb 80, p. x], but
certain interpretations might be imagined, such as
self-falsifying beliefs. On a humorous note, the Gödel sentence
for a human could work like a recipe for self-destruction,
activated in the process of its comprehension or articulation
("self-convulsive", "self-asphyxiative",
"self-ignitive", ...). A more
elaborate interpretation, as the paralysing effect of some
self-referential cognitive structure, is presented in Cherniak's story
[Hofstadter & Dennett 81, p. 269]. The history of logic itself
records lethal cases
(Philetas) and cases of multiple hospitalization (Cantor,
Gödel). Of course, this is all anecdotal, speculative and
inconclusive, but it does suggest that the apparent gap between
minds and machines could be bridged, in two related ways:
Finally, taking speculation one literal step further, the
self-reference in Gödel's sentence can be compared to a formal
way of self-recognition in the mirror,
by noticing the
parallelism between things (posture, gesture, movement) and their
mirror images. The basis for this comparison is the way the
Gödel code functions as a numerical
mirror in which
sentences can refer to, "see" themselves or other
sentences "through" their Gödel numbers.
The comparison, developed in [Bojadziev 95b], covers the stages
of construction of Gödel's
sentence and relates them to the irreflexivity of vision and the
ways of overcoming it. The comparison attempts to turn
arithmetical self-reference into an idealized formal model of
self-recognition and the conception(s) of self based on that
capacity. The motivation for this is the cognitive
significance of the capacity for self-recognition, in mirrors and
otherwise. The ability to recognize the mirror image, present in
various degrees in higher primates and human infants, has been
proposed as an objective test of self-awareness [Gregory 87, p. 493].
Self-recognition
in the mirror
is a basic, even paradigmatic case of
self-recognition, the general case being the recognition of the
effects of our own presence in the environment.
Self-recognition in this wider sense is the common theme of Dennett's
conditions for ascribing and having a self-concept and
consciousness [Hofstadter & Dennett 81, p. 267].
Self-recognition is also the common theme of
the self-referential mechanisms which, according to
[Smith 86],
constitute the self:
Non-implications of Gödel's theorems
Certain authors, especially some of those who attempt to
apply Gödel's
theorems to disciplines other than meta-mathematics, are
handicapped by a more or less severe misunderstanding of the theorems.
For example, Watzlawick, Beavin and Jackson state:

Gödel was able to show that it is possible to
construct a sentence G which ... This means that if G be
provable in the system, its
unprovability (which is what it says of itself) would
also be provable. But if both provability and unprovability
can be derived from the axioms of the system,
and the axioms themselves are consistent (which is part
of Gödel's proof), then G is undecidable in terms of
the system [Pragmatics of Human Communication, p. 269].

Of course, this is completely garbled, but the authors nevertheless
have very interesting ideas about applications of Gödel's
theorems.
Formal models of the mind
Gödel's (first) incompleteness theorem can be expressed in the
form: a sufficiently expressive formal system cannot be both
consistent and complete. With this form, the attempt to use such
formal systems as models of the mind invites the following brush-off:
Since human beings are neither complete nor consistent,
proving that computers can't be both doesn't really
help [Roger
B. Jones,
"quoted" from
sci.logic,
May 1995].
A different intuition was followed by Wandschneider: the
limitations of formalization
revealed by Gödel's theorems prevent the
use of formal systems as models of the mind [Wandschneider 75].
Most authors,
however, accept the comparison between mind and formal systems of
the kind considered by Gödel, but reach different conclusions.
For example, according to Haugeland
most people are agreed ... that [Gödel's] result does
not make any difference to cognitive science [Mind
Design, p. 23].
According to [Kirk 86], arguments against mechanism based on
Gödel's theorems are agreed to be mistaken, though for different
reasons; cf. [Dennett 72] and especially [Webb 80]. These
arguments try to establish the superiority of mind by suggesting that
mind can reach conclusions which a formal system cannot, such as
Gödel's sentence.

The basic incompleteness argument
Arguments about the relative cognitive strength of minds and
machines usually invoke only the first Gödel theorem, although
the second theorem also establishes the existence of a sentence
which, if true, is not provable. The premise of both theorems is
consistency, and it is frequently neglected in the basic
version of the argument from incompleteness:
since any formal system (of a certain kind)
contains a true but unprovable sentence, mind transcends formalism
because mind can "see"
that the unprovable sentence is true. This
conviction can be traced, in various forms, from [Penrose 94],
[Penrose 89],
through [Lucas 61] back to [Nagel & Newman 58, pp.
100-1]. For
example, Lucas says:
However complicated a machine we construct, it will ...
correspond to a formal system, which in turn will be
liable to the Gödel procedure for finding a formula
unprovable-in-that-system. This formula the machine
will be unable to produce as being true, although a
mind can see it is true. And so the machine will still
not be an adequate model of the mind [Minds, Machines
and Gödel].
The consistency premise is not very prominent here, but some
suspicious phrasing is: 'producing as being true', 'seeing to be
true', instead of the simpler and more to the point 'proving'.
This way of comparing cognitive strength in men and machines
leaves out an obvious symmetry while emphasizing a dubious
asymmetry. The symmetry is that, just as a formal system cannot
prove a sentence asserting its own unprovability unless it is
inconsistent, so a mind cannot do so either, if it is consistent;
cf. [Casti 89, p. 321].
The doubtful asymmetry between mind and machine concerns their
possession of the notion of truth. The mind is supposed to have
this notion in addition to the notion of provability, and is
supposed to have no problems with it (but it does, namely the
liar paradox). On the other hand, the machine is only supposed to be able
to prove things (as its only means of establishing truth) without
having, and apparently without being able to have, an additional
notion of truth. But this is not so: for expressing the truth of
the Gödel sentence (as opposed to proving it), even the most
restricted definition of the truth predicate true1(x), covering
sentences containing at most one quantifier, is sufficient
[Webb 80, p. 197].

Mind over machine ω:1?
A more intricate version of the argument from incompleteness
considers adding a "Gödelizing operator"
to the system. This
form of the incompleteness argument was also first
advanced by Lucas:
The procedure whereby the Gödel formula is constructed
is a standard procedure ... then a machine should be
able to be programmed to carry it out too ... This
would correspond to having a system with an additional
rule of inference which allowed one to add, as a theorem,
the Gödel formula of the rest of the formal system,
and then the Gödel formula of this new, strengthened,
formal system, and so on ... We might expect a
mind, faced with a machine that possessed a Gödelizing
operator, to take this into account, and out-Gödel the
new machine, Gödelizing operator and all [Minds, Machines
and Gödel].
The sound part of this argument is already contained in the
notion of essential incompleteness: a Gödel operator only fills
a deductive "lack" of the system by creating a new one. Adding
the Gödel sentence of a system as a new axiom extends the notion
of provability and thereby sets the stage for a new Gödel
sentence, and so on. Thus, a Gödel operator only shifts the
original "lack" of the system through a series of
displacements,
without ever completing the system. As Webb puts it,

there is not the slightest reason to suppose that ... a
machine could not model the 'ingenuity' displayed by a
mind in getting as far as it can [Mechanism, Mentalism
and Metamathematics, p. 173].
But for the purposes of this paper it is more interesting to
observe that it does not seem plausible that the argument about the
formalizability of mind should be decided by the outcome of the
race between mind and machine over remote reaches of transfinite
ordinality. And even if it makes sense to conceive of mind as
always being able to out-reflect a reflective formal model, it
would seem that the ability to perform such self-reflection is
more important than the question of how far this ability (has to) reach.

Reflexive sequences of reflexive theories
A further possibility in the direction of making reflexive
formal models is to make the progression of reflexive theories
itself reflexive. The usual ways of extending a reflexive theory
by adding its Gödel sentence, or the statement of consistency
(Turing), or other reflection principles (Feferman) are themselves
not reflexive: what is added to a theory only says something
about that theory, and nothing about the one which its
addition produces. Thus, what is usually added to a theory does
not take into account the effect of that very addition, which is to
shift the incompleteness of the original theory to the extended
one. Of course, certain things about the extended theory cannot
be consistently stated; for example, the sentence stating that
its addition to a theory produces a consistent theory would lead
to contradiction, by the second Gödel theorem. But the sentence
which is added to a theory could make some other, weaker statement
about the theory which its addition produces. If the procedure
of theory extension operated not only on the theory it is to
extend but also on a representation of itself, it could build on
its own action and improve its effects. It might thus produce in
a single step an extension which is much further down the basic
sequence of extensions, produced by linear additions of Gödel
sentences; the size of this ordinal jump could then be taken as a
measure of the reflexivity of the procedure. This kind of procedure,
operating on something which contains a representation of
that procedure itself, is already familiar from the construction
of the Gödel sentence: the process of diagonalization operates
on a formula containing a representation of (the result of)
that very process [Hofstadter 79, p. 446],
[Bojadziev 90]. Another
example of a reflexive procedure of this kind would be the Prolog
meta-circular interpreter, which can execute itself, though only
to produce statements of iterated provability [Bratko 90, p. 536].
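The diagonal construction has a well-known programming counterpart, the quine: a text operating on a quoted representation of itself. The following minimal Python version is my illustration (not code from [Bojadziev 90] or [Bratko 90]):

```python
# A two-line self-reproducing program: `src` contains a representation
# of the whole program, and applying it to itself (diagonalization)
# yields the program's own text.
src = 'src = %r\nout = src %% src'
out = src % src
print(out)

# Fixed-point check: executing the reproduced text reproduces it again,
# just as the diagonalized formula "contains" the result of the process.
ns = {}
exec(out, ns)
assert ns["out"] == out
```

Here `src` plays the role of the formula containing a representation of (the result of) the process, and the `%` substitution plays the role of diagonalization.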
Self-reference in computers
In saying of itself that it is not provable, the Gödel
sentence combines three elements: the representation of provability,
self-reference and negation. In computer science, self-reference
is more productive in a positive form, and in programs, programming
systems and languages more than in individual sentences. The
first ingredient in Gödel's sentence, the representation of
provability, corresponds to the explicit definition of the provability
predicate of a logic programming language in that same language.
In the simplest case, specifying Prolog provability in
Prolog, the definition consists of just a few clauses [Bratko 90,
p. 536],
comparable to those which express the conditions on the provability
predicate under which Gödel's theorems apply. This definition
of Prolog provability is then used as a meta-circular interpreter
to extend the deductive power of the basic interpreter,
for example by detecting loops in its proof attempts. This use of
the meta-circular interpreter could be compared to the work of
the Gödel operator on extending the basic, incomplete theory.
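In place of the Prolog clauses from [Bratko 90], here is a rough Python stand-in (the program, its predicate names, and the propositional restriction are all my simplifications): a few lines define provability over Horn clauses, and the loop check illustrates the kind of extension of deductive power described above.

```python
# Each head maps to a list of alternative bodies (conjunctions of goals).
# Ground, propositional Horn clauses only, to keep the sketch minimal.
program = {
    "mortal": [["man"]],
    "man": [["socrates"]],
    "socrates": [[]],        # a fact: empty body
    "loopy": [["loopy"]],    # a clause on which naive search never ends
}

def solve(goal, seen=frozenset()):
    """Provability: a goal holds if some clause body for it holds."""
    if goal in seen:         # loop detection -- the extra deductive power
        return False         # a meta-interpreter adds to the basic strategy
    return any(all(solve(g, seen | {goal}) for g in body)
               for body in program.get(goal, []))

print(solve("mortal"))   # provable by chaining three clauses
print(solve("loopy"))    # naive depth-first search would loop here
```

The analogy in the text is that `solve`, like the Prolog meta-circular interpreter, is itself expressible in the language whose provability it defines.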
Meta-circular interpretation is also applicable to other programming
languages, notably LISP [Smith 82].

Self-reference in minds
The basic lesson of Gödel's theorems, namely that the
ability for self-reflection has certain limits, imposed by
consistency, does not seem to be less true of minds than it is of
formal systems. Applied to minds, it would translate to some
principled limitation of the reflexive cognitive abilities of the
subject: certain truths about oneself must remain unrecognized if
the self-image is to remain consistent [Hofstadter 79, p. 696].
This formulation
recalls the old philosophical imperative which admonishes the
subject to know himself. If this were simple or possible to do
completely, there would be no point to it; the same goes for the
explicit interrogative forms: who am I, where am I going, what do
I want, ... Hofstadter rhetorically asks:
Are there highly repetitious situations which occur in
our lives time and time again, and which we handle in
the identical stupid way each time, because we don't
have enough of an overview to perceive their sameness?
[Gödel, Escher, Bach, p. 614].
Such an overview can be hard to achieve, especially in regard to
oneself, as Laing's knots in which minds get entangled show
[Laing 70].
In a similar vein, Watzlawick, Beavin and Jackson suggest that
the limitative theorems show the mathematical form of the
pragmatic paradoxes to which humans are susceptible in communication
[Watzlawick et al 67, p. 221].
The mind-machine gap could thus be reduced by emphasizing the
formal, machine-like aspects of the mind and/or by building
mind-like machines.
The comparison between formal and specular self-reference and
self-recognition might also connect these contemporary attempts
to base the formation of a self(-concept) on the capacity for
self-recognition with the long philosophical tradition of
thinking about the subject in optical terms.

Conclusion
It is not possible to see oneself completely,
in the literal,
metaphorical ("see=understand"), formal and computational sense
of the word. Gödel's theorems do not prevent the construction of
formal models of the mind, but support the conception of mind
(self,
consciousness) as something which has a special relation
to itself, marked by specific limitations.