Mediamatic Magazine vol 6#1 Viktor Gerenz 1 Jan 1991

Probleme der Künstlichen Intelligenz

Oswald Wiener

Merve Verlag (pub), Berlin 1990, ISBN 3-88396-078-0, German text, 159 pp., DM 16



Oswald Wiener is a machine. The 'consciousness' of machine w is a screen. Behind this screen is a delicately circuited environment of machine modules which communicate with each other, transfer parameters, etc. Gauges in w's physical system register 'feelings' (a full stomach, etc.) and 'moods' and write symbols on the screen. 'Images of the external environment' appear in the instrumentation located on the exterior of machine w. Once these images are transformed into pictures on the screen, they can no longer be clearly distinguished from 'inner' images which appear through pressure on the 'memory'. The case is similar with the regulating impulses, which are either channelled through to the effectors or can be transformed into 'internal gestures' for the manipulation of certain screen images.

w can call upon particular screen images in order to solve problems and then search for regularities. During this process w is guided by inner mechanisms of images, signposts and markers. What is not spelled out on the screen is 'unconscious'. w is capable of limited parallel processing, but the overall impression is one of sequentiality.

When w 'verbally communicates' with another mechanism, the image-producing apparatus functions in the background, regulating an output module which finds the appropriate words. While w listens and reads, verbal expressions act as regulating signals and influence the operation of the underlying mechanism. The expressions retrieve structures and regulate the intricate interrelation of these structures, turning them into a temporary complex within the run environment. When these intervening regulating impulses cannot be routinely accepted, the main programme calls the underlying image-producing modules instead of the condensed signs.

The image elements serve w as a point of departure for manipulation during the transformation of the apparatus. This is proof of intelligence. Symbols are condensed structures, procedures put on hold which can be expanded if necessary. If a symbol is moved, the central operating system causes the image-producing structures to follow suit.
Unfortunately we do not learn much about the doubtless extremely interesting behaviour of machine w in the dream mode. We suspect, however, that at the screen level it does not differ substantially from the external chains of symbols coming from an input station (e.g. the 'eye') which have reached this same level.

Reality is a dream regulated from within the sensorium. (p. 147)
As is the case with many machines that are known to us, w primarily works on mathematical problems but does not shy away from, say, the moon, the word 'redundancy' or wiggling the left ear.

Oswald Wiener is not just a machine, he is a poet; on that he insists. Is his transformation into a machine therefore poetry? Even in his earlier work Improvement of Central Europe he proposed the 'Bio Adapter' as an epistemological thought experiment in which the preserved body is gradually replaced by cerebral prostheses. In the Bio Adapter experiment the brain is all that is left of the human being. Wiener now deals with the question of whether the brain's function can also be taken over by machines: hence the question of whether the production of thought through Artificial Intelligence is feasible.

AI and its auxiliary 'cognitive science' use behaviourism in their search for suitably operationalizable models of thought. But behaviourism, which is limited to externally registered processes, has failed to consider the internal structures of the 'black box' when attempting to explain human thought. The products of AI turn out accordingly, says Wiener: As critics have insinuated, cognitive psychology frequently resembles the kind of anthropology in which Martians (or better yet: civil servants) might engage: in any case, some of their results look as though they had been produced by their own AI programmes. (p. 10)

Disciples of AI who think they can deal with mental phenomena without considering their own complicated inner life are confronted by Wiener's view that there is no psychological, indeed no scientific, theory which is not in the end based upon self-analysis. In order to lead the theory of intelligence out of its present sterility, what is needed is precisely a self-analysis whose aim is a Turing-machine-like description of introspective processes.

So much for the method and aim of the investigation. The machine self-description above already suggests its implementation. What results does Wiener attain?
Wiener is a materialist even beyond the metaphor. He considers internal images and their manipulation to be a reality open to neurological investigation. For him, dubious expressions such as 'consciousness' or 'ego' are shorthand for mechanical processes which can be explained. Therefore there can be no fundamental obstacle to reproducing the processes found in the finite automaton man in finite automata using different hardware.
The old symbol-processing hypothesis of AI is correct in Wiener's opinion: human ideas operate — on screen — sequentially with symbols. But Wiener does not want to understand symbols as atomic, but as 'handles' on the underlying structures. Up until now, problem-solving and skill-acquisition programmes have suffered from the fact that both recognise only 'flat' symbols, not structures; not to mention their lack of an overview of the problem area. The illusion of intelligence results from the fact that its activity unfolds in the head of the programmer, where the programme clearly runs on the entire apparatus of a de facto intelligence. (p. 85)

Intelligence is reflected in the to-and-fro between symbols and their structures hidden within symbol manipulation. In this process a perception of regularities or similarities occurs which lies at the root of the creative process. The basis of this perception — one of the central problems of AI — cannot be determined by Wiener's experiment. Perception takes place outside of self-observation, behind the screen, and announces its presence with a signal that cannot be localized and which can only be described by the declaration 'Similarity'. (p. 37)

Unlike the structures behind the screen, machine w is able to investigate chains of symbols on screen for similarities in the sense of automata theory and 'fold' these similarities to create Turing machines, i.e. compress them into a programme which produces the chains of symbols. For Wiener, the construction of Turing machines under secondary conditions (the folds of a chain of symbols) constitutes the core of creativity. The prerequisite for this is that segments are cut out of the tightly meshed, difficult-to-disentangle environment of the machine, as it is always found in nature, and labelled 'machine', 'symbol' or 'module', etc. This process is, in turn, the cognition of regularities. Regularities are, however, not a characteristic of a programme description but a characteristic of the observer. Regularities in the world are structures within myself. (...) How does an organism individualize machines in its environment and within its own repertoire? (...) The organism accomplishes this case by case. (p. 68f)

Finally Wiener addresses the canonical Halting Problem, which implies that a machine is in principle unable to solve a certain class of problems. This is used as an objection to the feasibility of AI. He writes: Thereby it seems that a final judgement has been pronounced regarding the possibility of machine intelligence. But this is not the case: for human intelligence is faced with exactly the same hurdles: I have no method at my disposal that would guarantee the success of a structuring that is appropriate in every case. Even I cannot find regularities, etc., arbitrarily. (p. 87)
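The 'folding' of a chain of symbols into a programme that regenerates it can be loosely illustrated in code. The sketch below is this reviewer's own toy illustration, not Wiener's construction: it stands in for a Turing machine by compressing a chain into its shortest repeating unit plus a repetition count, and the function names (fold, run) are invented for the example.

```python
def fold(chain):
    """Compress a chain of symbols into a (unit, count) 'programme'
    by finding its shortest repeating unit -- a toy stand-in for
    constructing a machine that regenerates the chain."""
    n = len(chain)
    for k in range(1, n + 1):
        if n % k == 0 and chain == chain[:k] * (n // k):
            return chain[:k], n // k
    # No regularity found: the chain is its own 'programme'.
    return chain, 1

def run(programme):
    """Expand the folded programme back into the chain of symbols."""
    unit, count = programme
    return unit * count

print(fold("abcabcabcabc"))   # a perceived regularity, folded: ('abc', 4)
print(run(fold("abcabcabcabc")) == "abcabcabcabc")   # True
```

The point of the toy is Wiener's: whether 'abcabcabcabc' counts as regular at all depends on the observer having already cut the chain out of its environment and chosen to look for repetition; the regularity belongs to the describing apparatus, not to the chain.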

In this way Wiener's introspection has supplied us with a structural model of thought and has outlined a series of achievements which an AI machine would need to be able to perform. Proof could only be found within the framework of a mathematical epistemology, and for this reason we will always have to be satisfied with conviction. That is all I know — when someone shows me a machine which operates discernibly in processes as I find them in my introspection, then I will simply learn to live with the view that it thinks as I do. (p. 97)

I do not know whether Wiener has considerably advanced intelligence research with exclamations from the subconscious, the observation that problems must be solved 'case by case', and the demand that generative mechanisms be found which lie much deeper than in Chomsky's model. (p. 118) After studying this tedious and not exactly user-friendly text, one frequently has the impression that the evidence could have been found more easily.

Recommended target group: disciples of AI who would like to know how poets reflect upon their own field. To a limited degree it is also aimed at poets — assuming there is a suitable run environment — who have always wanted to know something about well-formed chains of symbols but were afraid to ask.

translation Kirsten Lee