Sunday 1 October 2017

Grist for Leibniz's Mill (2017)

From The Philosopher, Volume CV No. 2 Autumn 2017


GRIST FOR LEIBNIZ’S MILL
By Danko Antolovic



‘What do you see?’ I ask a friend who is looking at a tree. ‘I see a tree, of course,’ comes the remarkable reply. I do not actually know what my friend is seeing – I experience only my own perceptions, and those of another could be wildly different – nor do I have any experience whatsoever of my friend's ‘I’, the self-awareness of another.

And yet, the conceptual framework behind this little conversation is so universal that a child would give me the same reply. There is a world out there, and there are individual minds, all of whom see that world more or less the same way, and they all have a direct experience of themselves and of the world. On the surface of it, the minds are immersed in the world and are of one piece with it. All is well.

But how does the mind, immersed in the sights and sounds of the world, actually perceive? We know that perception is mediated by the sense organs, and we can follow the optical and acoustic images of the world into our eyes and ears. We understand how light and sound trigger neural activity in the sensor cells, and how the resulting signals travel into the brain. Once there, following the stimuli becomes rather more complex, but we know, in broad strokes, that sense stimuli give rise to somewhat permanent neural imprints called memories; they trigger bursts of chemical changes associated with emotions; they are fed into decision-making systems and compared with existing memories; and they take part in initiating motor signals which make the body act upon the world.

This modern picture of the mind corresponds very well to what is sometimes known as Leibniz’ Mill metaphor. Gottfried Wilhelm Leibniz, philosopher and distinguished mathematician, proposed this in 1714 in an essay called The Monadology. Essentially, the idea is that should a machine have a mind (or appear to have one), we should be able to scale it up in size, if needed, and walk into it, just as we would walk into a mill. Inside, we would see mechanical parts pushing and pulling at each other in mechanical ways; the machine could be arbitrarily complex and perform arbitrarily complex tasks, but in all the pushing and pulling we could never find a somebody, an entity aware of itself and of the world. And indeed, even today, all of our understanding of the workings of the nervous system has so far shed very little light on self-awareness: all we see inside are electrochemical signals running around in electrochemical ways!

And so, Leibniz argues, the mind cannot be mechanistic, since it does not arise from the causes and effects of the material world. It is not of one piece with the world at all: rather, it stands outside it. But if that is so, how can the mind perceive the world and act upon it? All interactions in the world are causes and effects, parts of the world pushing and pulling at each other, and an entity not taking part in this mechanism could not interact with the world at all. Leibniz observes, correctly,  that the material world is governed by the principles of conservation, and that the mind acting upon the world "from outside" would violate these principles.

Leibniz resolves this conundrum by postulating that God had arranged the world in such a way that every substance was created, in the beginning, with due regard for every other. In that picture, mind and matter are separate substances, following their separate laws, and in doing so they behave as if they were interacting, even though they are not. Leibniz called his primordial substances ‘monads’, and their parallel coexistence the ‘pre-established harmony’.

A ready objection to Leibniz’ Mill is that it is merely an argument from plausibility. It seems indeed very implausible that a clockwork mechanism from Leibniz' time, with its cogwheels, pegs and levers, could give rise to self-consciousness, but still: if we do not understand the nature of the goal, we cannot know whether it is attainable or not. Do we really know what to look for inside the mill?

Ostensibly scientific alternatives to the mill are occasionally proposed as phenomenal (material) explanations of self-consciousness that would resolve Leibniz’ mechanistic dilemma: quantum mechanics is a perennial favourite, as are things like emergent properties of complex systems. None of them are particularly convincing.


Quantum mechanics asserts certain non-intuitive things about the behaviour of matter, but it is fundamentally mechanistic: its causes and effects, albeit understood as probabilities rather than certainties, can be summed up in an equation of motion. It is unclear why quantum mechanics would be a more promising medium in which the conscious self could arise than the clockwork mechanisms of classical physics.
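
To attach one concrete formula to that remark: the quantum equation of motion is the Schrödinger equation, iħ ∂ψ/∂t = Ĥψ, and it evolves the state ψ every bit as deterministically as Newton's laws evolve a clockwork; probability enters only when a measurement outcome is read off from ψ.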

Complexity, on the other hand, is a vague concept, which can signify various things. There are properties that do ‘emerge’ from systems with many parts, things that are not obvious in the components but become visible in the system's behaviour as a whole. Some examples are ferromagnetism, the coherent movements of schools of fish and flocks of birds, and the self-organisation of neural nets. But all of these phenomena are explicable mechanistically, in terms of the underlying physics, and if awareness ‘emerges’ in this fashion, it must also be so explicable; without a physical theory that could account for the conscious mind (and reconcile it with the known picture of the physical world), complexity alone explains nothing. In my view, interpreted strictly as an empirical observation, Leibniz’ allegory of the mill has held up very well over time.

A remarkably factual insight into the question of the mechanistic mind was offered by Roger Penrose in an essay called ‘Setting the Scene: the Claim and the Issues’ (part of The Simulation of Human Intelligence, edited by Donald Broadbent, published by Blackwell in 1993). Penrose argues that human minds are capable of insights which must elude the machines because of the machines’ very nature. The argument goes as follows:

Suppose we wish to ascertain whether an algorithmic calculation, which we can envision as a computer program, stops and yields an answer, or goes on forever. Penrose offers two examples:
1) Find the smallest integer that is not a sum of the squares of three integers (zero included). It would be easy to construct a program which, for each integer n, goes mechanically through all triplets of squares not larger than n and tries to add them up to n. Will that search ever come to an end? If we try it, we find that the number 7 is not a sum of three squares, and, luckily, the program stops very soon.

2) Find the smallest odd integer which is a sum of two even integers. We could construct a simple searching program, along the same lines as in Example 1, but we know in advance that it would never stop: there is no such number, no matter how large an integer we check. (Both searches are sketched in the short program below.)
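
To make the two searches concrete, here is a minimal sketch in Python; the code is mine, not Penrose's, and the function names are invented for illustration:

```python
# Example 1 halts quickly; Example 2 would run forever if actually called.

def is_sum_of_three_squares(n):
    """True if n = a*a + b*b + c*c for some integers a, b, c >= 0."""
    limit = int(n ** 0.5)
    return any(a*a + b*b + c*c == n
               for a in range(limit + 1)
               for b in range(limit + 1)
               for c in range(limit + 1))

def example_1():
    """Search for the smallest integer that is NOT a sum of three squares."""
    n = 0
    while is_sum_of_three_squares(n):
        n += 1
    return n                     # halts and returns 7

def example_2():
    """Search for the smallest odd integer that IS a sum of two even integers."""
    n = 1
    while True:
        if any(a + b == n
               for a in range(0, n + 1, 2)
               for b in range(0, n + 1, 2)):
            return n             # never reached: even plus even is always even
        n += 2                   # try the next odd number, forever

print(example_1())               # prints 7
# example_2() is a perfectly well-formed program, but it never halts.
```
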
These examples show that the human mind can answer the halting question for some calculations; perhaps not for all of them, but if it can work out an answer, it can do so correctly and in finite time. Suppose now that this ability is itself implemented within the brain as an algorithm, A, which reads and processes any computation, any program C, as its input. The algorithm A is required to stop if it finds out that C does not stop, i.e. it must give a correct verdict that C does not halt. It should be able to read the program we wrote for Example 2, analyse it, and halt with the verdict that this calculation never halts.

It can be shown, with a bit of clever and fairly technical reasoning (known as Gödel's theorem, after the logician Kurt Gödel), that no matter what the actual details of A, it is always possible to find a C, with suitable inputs, such that A, applied to C, becomes the very computation C itself, and so must report on its own halting! Now, if A halts in this particular case, by assumption it has given the correct answer, and the answer is that it itself does not halt! This is a contradiction, and the only logical possibility is the opposite of our assumption: A does not halt, and therefore cannot give the answer for that case.
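
The shape of that diagonal step can be sketched schematically. In the fragment below (my own scaffolding, with A and C named as in the text), A is only a placeholder, since the whole point is that no such algorithm can actually be written:

```python
# A is assumed to be a sound halting-analyser: if A(p, x) halts at all,
# then the computation p(x) genuinely runs forever.

def A(program, argument):
    raise NotImplementedError   # placeholder for the assumed algorithm

def C(program):
    # C hands A a program together with its own text as input
    return A(program, program)

# Now consider the computation C(C).  By definition it is A(C, C): A passing
# judgement on the halting of C(C), that is, on itself.  If A(C, C) halted,
# its verdict "C(C) never halts" would be contradicted by that very run.
# So A(C, C) cannot halt -- an answer we can see, but one A can never give.
```
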

So, the proposed algorithmic calculation A inside the brain cannot answer at least that one question. But we know the answer: A does not halt. We have arrived at this answer by the reasoning of Gödel's theorem, and by deriving a contradiction by assuming that A does halt. It is very difficult to disagree with Roger Penrose’s conclusion that the mind cannot be completely reduced to an algorithmic computing machine: after all, we just did something an algorithm mathematically can't do!

Consider again Example 2: we solved its halting problem intuitively, by appealing to the concept of disjoint sets (an integer is either even or odd, never both), and to the concept of distributing multiplication over addition. We intuit these concepts as applying to all integers, and we know immediately that we are looking for odd numbers where there can't be any. The very idea of checking every individual number, in a program-like fashion, strikes us as ridiculous.
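
Written out, that intuition is a single line of school algebra: for any integers a and b, 2a + 2b = 2(a + b), which is even; no odd number 2k + 1 can ever equal such a sum, so there is nothing to search for.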

We could envision adding such higher-level concepts to our program A, to form an enhanced program A*. This program would be able to solve a broader range of halting problems, but the crucial question is: how are the enhancements implemented? If they are implemented in the same algorithmic fashion as the original A, then A* is again fully algorithmic, and cannot answer its own halting question, for the same reasons as A. The human mind can again do so, for the same reasons as before, and we are back at the original quandary.

What can we do to escape this quandary? An informal assertion, known as the Church-Turing thesis (after the mathematicians Alonzo Church and Alan Turing), says that all algorithms can be expressed and computed by means of so-called Turing machines. A Turing machine is a very simplified mock-up of a computer, capable of storing, retrieving and manipulating symbols according to pre-set rules. The important point is that it is physically realisable, and that every computing device ever made (or proposed to be made in a non-magical fashion) is functionally equivalent to a Turing machine. There is no a priori requirement that it be so, but no one has yet made a non-Turing computer that follows the known laws of nature. Any device or program A that we know how to make (or realistically imagine) will be algorithmic, and will land us in the same quandary again.
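
As an illustration of how little machinery the definition requires, here is a toy Turing-machine simulator in Python; both the simulator and the bit-flipping example machine are my own, offered only to make the notion tangible:

```python
def run_turing_machine(tape, rules, state="start", blank="_"):
    """rules maps (state, symbol) -> (new_symbol, move, new_state); 'halt' stops."""
    cells = dict(enumerate(tape))          # sparse tape: position -> symbol
    head = 0
    while state != "halt":
        symbol = cells.get(head, blank)
        new_symbol, move, state = rules[(state, symbol)]
        cells[head] = new_symbol
        head += 1 if move == "R" else -1
    return "".join(cells[i] for i in sorted(cells))

# A machine that walks right along the tape, flipping every bit, then halts.
flip_bits = {
    ("start", "0"): ("1", "R", "start"),
    ("start", "1"): ("0", "R", "start"),
    ("start", "_"): ("_", "R", "halt"),
}

print(run_turing_machine("1011", flip_bits))   # prints 0100_
```

The content of the thesis is that anything a laptop or a supercomputer computes could, in principle, be re-expressed as a (vastly longer) table of rules of this kind.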

There is a striking similarity between Leibniz’ allegory of the mill and the Church-Turing thesis. Leibniz observes that all the machines we know how to make are mechanistic devices, capable only of mechanical functions. The Church-Turing thesis observes that all the computing machines we know how to make are Turing machines, capable only of algorithmic calculations. Where Leibniz asserts that a mill cannot have a mind, the more precise analysis by Penrose says that any known computing device, which is always a Turing machine, fails to match all the capabilities of the mind. Neither the mindlessness of Leibniz’ Mill nor the Church-Turing thesis has ever been proven, but neither has ever been contradicted. They seem to hint at a fundamental limitation on what can be expected of the phenomenal world as we know it.

Where does this leave us, regarding self-awareness and subjective experience? We have shown that there exists mathematical reasoning that no Turing machine can perform, but a mind can. This reasoning is itself wrapped in a larger experience of the self – it is I, a self-aware entity, who is doing the reasoning – and it is difficult to see how an algorithmic device could generate the conscious understanding of a process which it cannot even perform. Unlike the halting problem, awareness is not a concept we know how to define in precise terms, so we cannot prove or disprove its computability. Nevertheless, the above analysis of the limits of algorithmic computation strongly suggests that awareness is not such a computation.

But let us carry this a step further. Let us suppose that (some portion of) the brain is a non-Turing machine, based on some still unknown, non-algorithmic physical laws. Penrose speculates that this may be the case, and his supposition is a rather appealing way out of the halting-problem quandary. This hypothetical machine would perform broadly conceptual reasoning, and it is what makes it possible for the human mind to solve the halting problem of program A. We don't know what its computational limitations might be, but the interesting question is: does it imply awareness?

Is self-awareness necessary in order to arrive at Gödel’s theorem and the solution of the halting problem? Or could a hypothetical non-algorithmic robot reach the same results, without any awareness of itself and of what it is doing? We are inclined to think so: after all, the self-aware human mind can run through algorithmic computations just fine, and understand what it is doing (for example, multiplication by hand), yet its self-awareness is immaterial for the computational process, and such tasks are routinely handed over to non-aware Turing machines (pocket calculators).

Furthermore, we could ask whether the emphasis on computation in the theory of mind is altogether misguided. Setting aside the mathematical arguments for a moment, we can ask ourselves what self-awareness feels like. At the risk of generalising on the flimsiest grounds of introspection, it feels like the presence of a being – my own. Likewise perception, once all its neural mechanisms and signal processing are accounted for, feels like the presence of another being, the presence of something that is.

In contrast with this immediate experience, what is computation, really? In the physical sense, it is a chain of events in which phenomenal objects undergo distinct changes, regularly and reliably. Such is a computation on a pocket calculator (or a supercomputer): electrical contacts being closed by the keypad buttons lead to switching of electrical circuits, resulting in light being emitted from the LED display. All of that is something more than a mere physical happening, such as wind or rain, only because the human mind interprets the inputs and the outputs as very good approximations of an abstraction. We think of pressed keys and displayed light patterns as numbers and arithmetic operations, and it is this interpretation that gives meaning and relevance to an otherwise indifferent physical process. It is the same interpretation that adds three pebbles to five pebbles to make eight of them, even though a pebble is not a number, but a piece of phenomenal world, immersed in the world and changing with it with every passing moment.

Physical computations of this kind need not be human-made, and can exist independently of any minds. For example, the genetic code is a fairly crisp and unambiguous mechanism by means of which intricate molecular structure is maintained and propagated within unconscious matter. It precedes the appearance of the mind, and in the physical sense it does not differ from the wind and the rain: it became a ‘code’ in the human sense, i.e. an embodiment of an abstraction, only retroactively, by being understood by the mind.

As for non-algorithmic ‘computation’, such as proving a theorem, this usually begins with a specific, outstanding question; the question may be pressed by material reasons (for example, a technological advance with the prospect of marketable products) or emotional ones (professional prestige). The entire body of accepted mathematics stands available as the starting point, and the reasoning process links mathematical concepts, according to accepted rules of logic, into the chain of proof which, hopefully, ends with an answer to the stated question.  Deciding which concepts to consider is, of course, the core of the mathematician's talent, and that selection is often guided by an ‘intuition’, that is to say, by the ability to recognise similarities with past mathematical problems. Our intuitive solution of Example 2 illustrates this process; a conceptual solution of Example 1 is also known, but is far less obvious.

There is an overall similarity between algorithmic (Turing) computation and conceptual reasoning: both manipulate given inputs according to set rules, and yield a result. We accept the possibility of a non-algorithmic (non-Turing) component of the mind, even if it is far from obvious whether and how this component could be implemented in non-aware robots. However, nothing in the conceptual reasoning seems to indicate that self-awareness is a necessary component of it. In order to solve the halting problem in Example 2, a robot would have to manipulate the concepts of arithmetic, instead of following a Turing-like program that was built on these concepts. A large step for a robot, perhaps one relying on new and still unknown physics, but it is not self-evident that the robot would have to be aware of itself in order to take that step.

In fact, it is very difficult to see how these chains of symbol- and concept-processing operations could produce or, in contemporary parlance, compute that ‘presence of a being’ which is the subjective experience. Insofar as they are of relevance to the conscious mind at all, they require purpose and interpretation that are extraneous to them, and which are given to them by the very same self-aware agency they are supposed to be the foundation of. They differ little in their nature from a mill, a machine made for the extraneous purpose of grinding wheat. And in the absence of a mind to call it a machine and envision a purpose for it, the mill, too, is just another indifferent manifestation of the phenomenal world.

With an eye on David Hume's wise admonition that all generalisations are only habits of the mind, we are reluctant to preclude in advance a phenomenal explanation of the self – perhaps there is one, hidden in the unknown depths of the phenomenal world. Still, at the present – and foreseeable – state of our understanding, all we can realistically hope to find in the neural labyrinths of the brain are Turing machines and more Turing machines. And these clockwork-like gadgets can't even match our capacity for mathematics, let alone loop back onto themselves somehow, and give rise to self-awareness.

It is possible that the Church-Turing thesis is saying something fundamental about the structure of the phenomenal world; or, perhaps it merely expresses an incompleteness of our knowledge of that world, a limitation that could be overcome in time. As things stand today, however, we cannot casually dismiss Leibniz’ insistence that the self, that one thing each of us knows truly is, must be something irreducible, a monad, and not an agglomeration of moving parts. Mindful of the questions which this leaves open, we conclude with Leibniz’ own words:
‘That which is not truly one being is not truly a being either.’



Read more:

Leibniz, Gottfried Wilhelm. The Monadology, 1714. Translated by Robert Latta, 1898, http://home.datacomm.ch/kerguelen/monadology/

About the author: Danko Antolovic is a scientist and technologist. He is the author of Whither Science? a collection of essays on the present and future of modern natural science. He has also written for The Philosopher in the past (Descartes’ Menagerie of Demons, Volume 103 No. 2, September 2015)

Address for correspondence: Danko Antolovic <dantolov@iu.edu>


2 comments:

  1. Thank you, Danko, for your superb take on this intriguing topic; you’ve woven a fascinating account.

    I do wonder if, perhaps, Penrose’s hypothesis that the human brain — its conscious thoughts — has a nonalgorithmic component that today’s computers (artificial intelligence) can’t simulate is premature and unnecessarily limiting. That is, the hypothesis may be only half right: the current state of artificial intelligence is certainly not up to the task, with that everyone probably agrees; however, the future of artificial intelligence might be able to step up. The future state of machines — quantum computing and even beyond, including other substrates, as well as greatly advanced knowledge in neuroscience — might offer all the consciousness (of self, others, and the environment), cognition, and array of other attributes one associates with the human mind. A circumstance encapsulating the range of ‘other attributes’: imagination, creativity, sentience, presence in time and space, visions of alternative futures, analysis of the past, perceptiveness, emotions, ethics, self-optimization (self-programming), opinions, empathy, ‘qualia’-like experiences, and more. Indeed, there might be a tipping point — a catalytic moment, if you will — at which such a machine, by any measure (beyond Turing-like tests, of course), rises to ‘personhood’. Perhaps we’d stop calling it a ‘machine.’ In short, I wonder if neuroscience and artificial intelligence (and correlates like cognitive science) will eventually — the timeframe unknown at this juncture — have some answers that will (disruptively!) bear on these issues regarding the fundamental nature of consciousness, cognition, and ‘personhood’.

    Replies
    1. Keith, thank you for the comment; please allow me to address the point(s) you raise.

      Penrose's argument really says only this: there are questions accessible to logical reasoning, to which no algorithm can give an answer, but the mind can. From there, it follows fairly inescapably that the mind cannot be reduced to an algorithm. Penrose allows for the possibility that the mind is some kind of non-algorithmic computation, based on some physical principle we do not yet understand.

      But here is the rub: every computation we know how to implement outside the mind is an algorithm, embodied in a device reducible to a Turing machine. Quantum computing is of Turing type; so are neural nets and self-modifying code; and so is everything we know about the functioning of the nervous system. At the present time at least, we simply do not know of any physics that can yield something other than a Turing machine. A device we'd know how to construct (i.e. comprehend in terms of known physics) could be very complex; in principle, it could in most of its aspects rise to 'personhood,' i.e. be a very good outward simulacrum of a person. But unless Penrose's argument is somehow flawed (which I don't see), there will always exist a body of reasoning that it can't perform (the halting problem), but the human mind can.

      The problem of self-awareness is more difficult because awareness appears as an immediate perception, rather than as a concept that could be analyzed; the above arguments certainly don't cover it. I would argue that both conceptual and (certainly) algorithmic computation can be done without self-awareness, and that it is the self-aware mind that is actually interested in the results. Even though this is not a proof, I find it difficult to explain self-awareness in terms of computation, or even in terms of conceptual reasoning.

      Danko

