Mediamatic Magazine Vol. 9#1 Hein Masseling 1 Jan 1998

Ford, Glymour, Hayes

Android Epistemology

Forty-six years ago, Alan Turing opened his article Computing Machinery and Intelligence by contemplating whether or not machines can think. At that time, this consideration must have seemed radical to most people, even natural scientists. Now, after more than forty years of research into the possibility of artificial intelligence, fewer people take offense at the possibility of thinking, or rather, intelligent, machines. At least in the cognitive sciences, it is more or less generally assumed that there is no essential difference between natural (usually meaning human) cognition and artificial cognition. But despite this consensus, the question of whether artificial intelligence is possible still has not been answered. There are numerous problems, both of a practical kind (how do you build a machine with consciousness?) and of a conceptual nature (what do we mean by consciousness?). Moreover, there is the question of whether we, human beings, will be able to accept and tolerate intelligent, conscious machines. And will man be able to keep these intelligent machines under control?

The editors of the book Android Epistemology, a title which means something along the lines of 'the theory of knowledge of manlike machines', have focused their attention on conceptual problems. They characterize android epistemology as the business of exploring the space of possible machines and their capacities for knowledge, beliefs, desires, and action in accordance with their mental states. It is a project to which not only research in the area of artificial intelligence can contribute, but which also draws on disciplines such as cognitive psychology, robotics, artificial life and linguistics. With this characterization, however, the editors seem particularly keen to introduce 'android epistemology' as a suggestive name for a long-standing area of cognitive-scientific study. The cognitive sciences are a multidisciplinary field shared by all the above-mentioned disciplines, and the study of conceptual problems has been part of the research from the very beginning. The name 'android epistemology' therefore seems intended to give extra emphasis to the editors' conviction that natural and artificial intelligence are basically equivalent.

The sixteen contributions to the book are varied and provide a clear insight into the various problems and standpoints that play a role in the study of cognition. Both well-known and lesser-known authors are represented.

Pioneer Herbert Simon presents an explicitly optimistic vision of the possibility of artificial intelligence. He commits himself to the traditional premise of the cognitive sciences: human consciousness, the mind, must be described as a symbol-processing system. In this description, the building blocks of the mind are not taken into consideration; a description of the human mind at the neurophysiological level is barely, if at all, relevant to an understanding of its functioning. But bear in mind: the premise remains that the mind, and therefore thought, does indeed have a physical basis. The symbols of the mind are mental representations of external matters (objects, persons, relationships, the theory of evolution) and internal elements (your own body, behaviour, thoughts), and they are processed serially. If the functioning of machines could be described analogously, as symbol processing, we could perhaps also attribute thoughts, intelligence and consciousness to these machines, even though the material from which they are built is totally different from that of our brains.

Simon is convinced that, from this point of view, artificial intelligence has already partially attained its goal. According to him, software programmes have been developed of which, however limited their field of application, we can safely say that they think, often in the same way as humans - and, still according to Simon, they have been able to do so for some 35 years. He therefore sees no reason to doubt that more far-reaching results can be accomplished within the original framework. Nowadays, this traditional approach is sometimes good-naturedly called GOFAI - Good Old-Fashioned Artificial Intelligence.

However, Simon's optimism seems a little too convenient. Firstly, before a certain aspect of cognition can be described as symbol processing, a clear definition of that aspect has to be found. Margaret Boden's contribution is a textbook example. She discusses a classic problem: can a machine be creative? After a clarifying analysis of the concept of 'creativity', Boden concludes that computers, like people, can be creative. Looking for solutions to a problem, they are able to reach beyond their conceptual scope - the context within which, and the way in which, a solution to this problem would normally be examined. According to Boden, Douglas Lenat's Automated Mathematician programme is an example of a creative programme. This programme has developed, among other things, completely new number-theoretical theorems.

A second, more important problem for GOFAI concerns those aspects of cognition which cannot be comprehensively described by means of language or linguistic symbols. Anatol Rapoport mentions, for example, the (intuitive) recognition of smell, and Ronald Chrisley, who discusses this problem at length, mentions perception in general. Chrisley therefore concludes that the only way to show what a cognitive function such as perception really is, is to produce a physical realization of such a function: we have to build robots and make these robots interact with their environment. Lynn Stein's contribution, about a robot which has to find its way through a room full of obstacles, seems to support Chrisley's view. Traditional, serially designed robots often turn out to be very limited and inflexible - Good Old-Fashioned Artificial Intelligence. With the development of neural networks - computer programmes in which representations can no longer be identified as explicit symbols, and which feature parallel instead of serial processing - artificial intelligence has been given an important impulse, and Chrisley's dreams could well be coming true.

Inspired by and based on neurophysiological research, the use of these neural networks has produced remarkable results. Paul Churchland's contribution to the book provides a good example: he deals with a neural network designed to simulate stereopsis, and shows how useful the development of neural networks can be for a better understanding of cognition. The (constructivist) learning model described in the contribution by Chris Stary and Markus Peschl clearly exposes the limitations of the traditional symbol-processing models; their views agree with those of Chrisley. In their model, Stary and Peschl differentiate between a (traditional) symbolic and a subsymbolic level, the latter being the more important. It is a neural network in which representations do not occur as symbols. For some items at the subsymbolic level it is possible to produce, as it were, a translation at the symbolic level, but this applies to only a few of them. In both language and thought, we are used to the explicit formulations which can be provided at the symbolic, but not at the subsymbolic, level; only the symbolic level is directly accessible to us. So if, in the modelling of cognition, we were to restrict ourselves to traditional symbolic models, an important part of cognition would remain beyond the reach of these models.
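The distinction between the symbolic and the subsymbolic can be made concrete with a toy example of my own (a minimal sketch, not drawn from the book): a tiny feedforward network whose weights have been set by hand to compute the logical function XOR. The network's 'knowledge' of XOR resides entirely in the numerical weights; no individual weight or unit is an explicit symbol for the rule the network as a whole implements.

```python
# Minimal sketch (hypothetical, not from the book): a two-layer
# threshold network computing XOR. The weights are set by hand here;
# a trained network would arrive at some set of numbers like these.

def step(x):
    # Threshold unit: fires (1) when its net input is positive.
    return 1 if x > 0 else 0

def xor_net(x1, x2):
    h1 = step(x1 + x2 - 0.5)    # hidden unit, roughly an OR detector
    h2 = step(x1 + x2 - 1.5)    # hidden unit, roughly an AND detector
    return step(h1 - h2 - 0.5)  # output: OR but not AND, i.e. XOR

print([xor_net(a, b) for a, b in [(0, 0), (0, 1), (1, 0), (1, 1)]])
# [0, 1, 1, 0]
```

Note that the comments 'OR detector' and 'AND detector' are precisely the kind of after-the-fact symbolic translation Stary and Peschl describe: it happens to be available for this tiny network, but for most units in a large trained network no such tidy label exists.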

Despite all the problems indicated, most of the authors show a belief in the possibility of artificial thought. The sceptics are clearly in the minority. One of them, Kalyan Basu, refers to Heidegger and Gadamer and criticizes what he considers the much too reductionist approach within the cognitive sciences. But Basu's criticism will not make the cognitive-scientific community change its mind, any more than Hubert Dreyfus did with his book What Computers Can't Do. In my opinion, Margaret Boden is right when she says that acceptance of the idea of artificial intelligence will probably be largely a matter of people getting used to it. For example, would it not be easier for us to attribute intelligence and consciousness to machines that look like us than to machines which look like present-day computers? This point is also brought to the fore in two (slightly substandard) contributions dealing with the question of whether machines are basically capable of moral consciousness and aesthetic judgement. These reflections call to mind the Turing test. Would it not be possible to develop a test which would give a decisive answer to the question of whether or not a machine is intelligent? It soon turned out that Turing's original test was not convincing, but Selmer Bringsjord shows, if at times a little contrivedly, that all kinds of variations on Turing's original idea also fail to solve anything. According to Bringsjord, even if the concept in question can be defined sharply enough, and even though we have more and more technical jargon at our disposal for the design of an accurate test, we will ultimately not be able to develop a nice, neat, empirical test.

In my opinion, possibly the most disconcerting question, pointed out by Boden among others, is whether we will be able to recognize the forms of artificial intelligence, creativity, ethics, and so on, of our own design as such. This question will certainly gain relevance when cognitive functions in models are no longer exclusively represented as the familiar symbol-processing operations. Unfortunately, this issue receives too little attention, as does the question of the degree of autonomy we will grant machines. How far will people be prepared to give up their control over intelligent machines? Should we grant those machines the same rights as people? These questions deserve attention, certainly in view of the editors' definition of android epistemology.

The conclusion is provided by Marvin Minsky. With evident mischievous pleasure, he describes a fictitious dialogue between two extraterrestrials. Clearly equipped with a form of intelligence superior to our own, these creatures discuss our shoddy cognitive constitution. Minsky demonstrates what, to me, is one of the attractions of research into artificial intelligence: it is an excellent medicine for man's inflated ego. As the editors already mention in their foreword, Minsky's contribution is in itself reason enough to buy this book.


translation OLIVIER / WYLIE