Mediamatic Magazine Vol.7#1 Jules Marshall 1 Jan 1992

AI! (2)

Back to Networks

Hard ai also came under attack from other directions in the 80s. The philosopher John Searle devised his Chinese Room thought experiment. He cited examples of simplified versions of the Turing test that had already been passed, but denied that passing them indicates the possession of understanding, or that appropriate symbol manipulation by recursive rules amounts to conscious intelligence. He imagined himself locked in a room, running the test-passing algorithm by hand with pen and paper. Equipped with an instruction manual in English giving him all the information he needs to run the algorithm, he receives problems fed into the room in Chinese, manipulates them according to his rulebook and posts the answers back, in Chinese. Does he understand Chinese? The consensus is that he doesn't (Scientific American, Jan 1990).

Searle argued that the difference between brains (which can have a mind) and computers (which can't) lies in the material construction of each, and this assumption led in the mid-80s to a resurgence of interest in artificial neural networks – computers modelled on the wiring of the brain.

Neural nets had been investigated before, until Minsky and Papert's book Perceptrons appeared to demolish their theoretical base – just as the us military was making hundreds of millions of dollars available for conventional expert system research.

7 Besides the dreams of fighting robots and pilotless planes, their main goal was the automated translation of Russian technical journals and radio broadcasts into English. With the end of the Cold War, expect more and more talk about the ai Gap with Japan, which has taken on the role of stick-with-which-to-beat-dollars-out-of-Congress. In 1988, a leading neural net researcher for the government called neural nets more important than the atom bomb.


But modification of the theory and new research into the brain's biology have convinced increasing numbers of researchers that neural nets are the way forward.

Firstly, nervous systems are massively parallel. The retina, for example, processes its whole input of around 1 million distinct signals arriving at the optic nerve at once, not 16 or 32 bits at a time. Secondly, neurons are comparatively simple and analogue – i.e. their output varies continuously with their input – not digital. Thirdly, axons (the 'wire' part of a nerve cell) from one cell to another often have a complementary axon returning, which allows the brain to modulate its activity as a genuine dynamic system whose continuing behaviour is to some extent independent of the outside world.

Moreover, the brain's wiring is immensely more complicated than that of a computer. An important difference is that logic gates have few inputs and outputs, whereas nerve cells may have 80,000 excitatory synaptic endings, which are not fixed, as in a computer, but change all the time (there is evidence that changes in synaptic organisation can occur in a matter of seconds). This brain plasticity is probably responsible for laying down memories, so it can be seen as an essential feature. If the brain is a computer, it's a permanently changing one.
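
In modern terms, the connectionist abstraction of such a cell can be sketched in a few lines of Python; the names and numbers here are illustrative assumptions, not drawn from any of the systems mentioned. The unit's graded output varies smoothly with its many weighted inputs, and its 'synaptic' weights are plastic.

import math

def neuron_output(inputs, weights, bias=0.0):
    """Analogue response: output varies smoothly with the weighted
    sum of inputs, unlike a logic gate's all-or-nothing switch."""
    activation = sum(x * w for x, w in zip(inputs, weights)) + bias
    return 1.0 / (1.0 + math.exp(-activation))  # graded, between 0 and 1

def hebbian_update(inputs, output, weights, rate=0.1):
    """Plasticity: strengthen the synapses whose inputs were active
    while the cell fired, so the wiring changes with experience."""
    return [w + rate * x * output for w, x in zip(weights, inputs)]

weights = [0.2, -0.4, 0.7]                       # toy starting weights
out = neuron_output([1.0, 0.5, 0.8], weights)
weights = hebbian_update([1.0, 0.5, 0.8], out, weights)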

Artificial neural nets turn out to be very good at things conventional computers are not, such as pattern recognition, learning, tolerating faults and storing large amounts of information in a distributed fashion (thanks to their synaptic connection strengths being shaped by past learning).
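
A toy associative memory, in the spirit of a Hopfield net, shows what distributed storage means here; this is a sketch under simplifying assumptions (binary units, two hand-picked patterns), not a model of any system discussed. Each stored pattern is spread across the whole weight matrix rather than filed at a single address, which is also why damaging a unit does not destroy the memory.

import numpy as np

def train(patterns):
    """Hebbian outer-product rule: every weight carries a trace of
    every pattern, so no single location 'contains' a memory."""
    w = sum(np.outer(p, p) for p in patterns).astype(float)
    np.fill_diagonal(w, 0.0)              # no self-connections
    return w / len(patterns)

def recall(w, probe, steps=10):
    """Let the net settle: even a noisy or partial cue is pulled
    back towards the nearest stored pattern."""
    s = probe.astype(float)
    for _ in range(steps):
        s = np.sign(w @ s)
        s[s == 0] = 1.0                   # break ties
    return s

patterns = np.array([[1, -1, 1, -1, 1, -1],
                     [1, 1, -1, -1, 1, 1]])
w = train(patterns)
noisy = patterns[0].copy()
noisy[0] = -noisy[0]                      # damage one unit
print(recall(w, noisy))                   # recovers the first pattern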

Recent work by connectionists (as neural network fans are called) looks very promising. Carver Mead at Caltech has used vlsi (Very Large Scale Integration) chips to make an artificial cochlea and retina. These are not simulations, but real information-processing units responding in real time to real light and sound.
The circuitry of the chips is based on known anatomy and physiology (of a cat), and their output is strikingly similar to that of their biological counterparts. Terry Sejnowski developed a network called net-talk and linked it to a speech synthesiser so that its output could be listened to while it learned to read aloud. After a few hours of producing formless noise, the net started to babble like a baby, and overnight training improved its performance to 95% accuracy – far better than any conventional ai has managed. Intriguingly, it made the same mistakes (such as over-generalisation of rules) that children make when they learn to read.

8 See Apprentices of Wonder – Inside the Neural Network Revolution by William Allman for an excellent introduction to connectionism and more details of these and other important developments.

Finally, Carnegie Mellon University's alvinn (Autonomous Land Vehicle In a Neural Network) uses four Sun workstations to process incoming video signals and compare them to thousands of stored images. It knows to brake for a person, swerve round a dog and keep off the pavement, and it has set a driverless speed record of 55 mph over a 21-mile trip. Its primary use, of course, is intended to be military, but it is also envisaged as some sort of 'ultimate cruise control' or robo-mailman.

The drawbacks of neural nets are their slow and limited training, which usually needs thousands of trial-and-error attempts. As for replicating human intelligence, neural nets' results are more like habits than insights. In spite of this, many critics of Hard ai accept that an artificial intelligence may be developed by exploiting what is learned about the nervous system, provided this artificial mind has all the causal powers relevant to conscious intelligence – which brings us back to square one: more empirical studies are needed into the neuronal basis of memory, emotion and learning, and into how these interact with the motor system.
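
The flavour of that trial-and-error training can be sketched in a few lines; the task (learning a logical AND) and all names here are illustrative assumptions, vastly simpler than anything net-talk or alvinn does. Competence accumulates through thousands of tiny weight nudges rather than arriving as an insight.

import random

def train_and_gate(epochs=2000, rate=0.1):
    examples = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
    w = [random.uniform(-0.5, 0.5), random.uniform(-0.5, 0.5)]
    b = 0.0
    for _ in range(epochs):                      # thousands of passes
        for (x1, x2), target in examples:
            out = 1 if w[0] * x1 + w[1] * x2 + b > 0 else 0
            error = target - out                 # trial and error
            w[0] += rate * error * x1            # nudge each weight a little
            w[1] += rate * error * x2
            b += rate * error
    return w, b

w, b = train_and_gate()
print([1 if w[0] * x1 + w[1] * x2 + b > 0 else 0
       for (x1, x2) in [(0, 0), (0, 1), (1, 0), (1, 1)]])  # -> [0, 0, 0, 1]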

Creativity

Margaret Boden (The Creative Mind) challenges those who think ai can't teach us much about distinctly human processes like imagination and creativity. She too believes connectionism may give us the first significant ideas on how analogical thinking and generalisation occur in the mind. Her point is not that computers can be creative, but that there are aspects of human creativity which we can begin to understand through attempts to build computer models of it. This involves the exploration and transformation of conceptual spaces, and the notion and structure of a conceptual space – as well as its various possible transformations – can be described using computational concepts.

Recently, several programs have been written which appear to create. Jazz Improviser can, surprise, do jazz improvisations (probably well enough to pass the Turing test); lawyers for the estate of the deceased American fiction writer Jacqueline Susann are suing the author of a program which writes original stories in her style; and Aaron, a program (written by the human artist Harold Cohen) consisting of a few hundred rules on artistic style, has generated thousands of different drawings, some of them exhibited in the Tate and other galleries.

Are they creative? No more than painting-by-numbers is, or following a knitting pattern. Regarding Aaron, Boden says that since all the drawings could have been done before with the same program, it's more like an artist who has found a style and is sticking with it. A truly creative artistic program would be able to say I'm bored with this, I'll try drawing limb parts as straight-sided geometrical figures and see what happens. The program would need a way of reflecting on its own knowledge, and would have to be able to construct, inspect and change various maps of its own mind. The point is that by their failings, such programs teach us more about human creativity.
Genuine creativity requires a break with or transformation of what has gone before, and therefore some conception not only of what has gone before, but of the outer context (technological, social, political, etc) in which the work is being created.

The question of what intelligence and creativity are is subsidiary to that of what consciousness is, since (unless researchers can show us otherwise) the former cannot be present without the latter. The physicist Roger Penrose (The Emperor's New Mind, oup) argues that there is an essentially non-algorithmic ingredient to consciousness. In direct opposition to our century's assumptions about the mind, it is the mysterious black box of the unconscious that may well be governed by (horrendously complex) algorithms, while the conscious, aware 'me' – what researchers have been formally studying as the rational and therefore translucent part of the brain – is in fact the non-algorithmic, mysterious side. Penrose claims that since our brains are the result of natural selection, there must be some advantage to having a consciousness, and that is our ability to form instant judgements about fresh information (and determine its truth or beauty). Even mathematics, he points out, simply communicates those truths, and to claim that the algorithm for consciousness would itself be conscious is nonsense.

Synthetic or Applied Intelligence

Whether computers could ever think like humans is still a rather rarefied question. Getting the last 10% of verisimilitude may be of only theoretical interest, and is likely to be mega-expensive. The biggest impact of ai is likely to be in the middle ground between the theoretical and conventional applications of computers, with what has been termed applied intelligence.
This uses case-based reasoning (as opposed to rule-based reasoning), which draws inferences from thousands of actual experiences. It is this pragmatic strand of ai that will have the most impact in the coming years, both economically and socially.
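
A minimal sketch of the contrast, with assumed names and toy data: where a rule-based system chains if-then rules, a case-based one retrieves the most similar past experience and reuses its outcome.

def solve_by_cases(problem, case_library, distance):
    """Retrieve the nearest stored case and reuse its outcome,
    instead of firing a chain of hand-written rules."""
    nearest = min(case_library,
                  key=lambda case: distance(problem, case["features"]))
    return nearest["outcome"]

# Hypothetical case library: pricing a repair job from past jobs
# described by (hours of labour, number of parts).
cases = [
    {"features": (2.0, 1.0), "outcome": "quote 150"},
    {"features": (8.0, 3.0), "outcome": "quote 600"},
]

def squared_distance(a, b):
    return sum((x - y) ** 2 for x, y in zip(a, b))

print(solve_by_cases((7.5, 2.5), cases, squared_distance))  # -> quote 600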

9 Business Week magazine estimates that knowledge of how manufactured goods are built and how they work amounts to 70% of development costs, rising to 90% in the service industry. If such knowledge can be encapsulated in programs, it could be leveraged to the hilt, as they put it.

It's an idea put forward by David Bolter in Turing's Man. Bolter – a classicist – argues that the computer, as a defining technology of our age,

10 Being a technology which captures the imagination of poets and philosophers and in doing so helps redefine how an age sees itself and how we resolve the dichotomies of life/death etc, rather as the steam engine in the Industrial Revolution, the clock in the Renaissance and the potter's wheel in ancient Greece did.

is changing the way we think about time, space, humanity, history – everything. To be a Turing's Man (the up-to-date phrase would be something akin to Tim Leary's cybernaut), you don't have to agree with the Hard ai position (or be a man), merely work intimately and for extended periods of time with a computer. In doing so, you internalise the new metaphors and ways of seeing suggested by the computer. Thus modified and at ease with our new silicon partners, we will be free to enter a new age – the age of synthetic intelligence.

What will this mean for our concepts of reality and illusion, of what it means to be alive or dead, conscious or immortal? In previous ages these terms had been pure abstractions whose existence was tied to matters of semiotics and definition. Despite their huge psychological resonance, such discussions had little practical relevance until we developed the technology to keep brain-dead bodies alive, replace body parts with artificial prostheses and build ai systems that simulate features of human consciousness.

To answer, we have to look at ai as part of a western science which functions within a set of conceptual parameters that are largely set by corporate, governmental, military and scientific institutions. No formal means exists by which ordinary people can debate or even discuss the pros and cons of what is happening. We are suffering (or just waking up from) what Langdon Winner calls technological somnambulism, our willing sleepwalk through the process of reconstituting the conditions of human existence.

With each new generation of technology, we have fewer alternatives and become more immersed in technological consciousness. As Jerry Mander said in Whole Earth Review (Spring 1992): Living constantly inside an environment of our own invention, reacting solely to things we ourselves have created, we are essentially living inside our own minds. Where evolution was once an interactive process between humans and the natural, unmediated world, it is now an interaction between humans and our own artefacts. We are essentially co-evolving with ourselves in a weird kind of intraspecies incest. What kind of world are we building here? What qualities of social, moral and political life do we create in the process, and will this world be friendly to human sociability or not?

Heidegger predicted in 1956 that we might finally find a synthesis of the apparently irreconcilable dialectic between mechanism and meaning through literature revealing technology's essence, its power to enframe the phenomenal universe within structures of utility: Because the essence of technology is nothing technological but rather is a way of viewing the world, essential reflection upon technology and decisive confrontation with it must happen in a realm that is on the one hand akin to the essence of technology and on the other fundamentally different from it – i.e., art.

So far, most essential artistic reflection about ai and the other major epistemological/technological issues has come from cyberpunk fiction, film and criticism.

11 William Gibson's Sprawl trilogy (Neuromancer, Count Zero and Mona Lisa Overdrive) introduces a number of identifiable ai types. The fused elements of cyberspace (Wintermute/Neuromancer) form a vast artificial (electronic) consciousness similar to models suggested by connectionism: create a net big enough, fill it with enough information, and consciousness will spontaneously develop as an emergent property of a dynamic system. The (steam-driven) conscious ai narrator of The Difference Engine represents the same principle and illustrates the principle of Turing machine equivalence. The idea of a digitised human intelligence maintaining its sense of identity after transferal to a computer is illustrated by the Finn character, while Case's human intelligence was augmented with discrete, intelligent programs such as the one that enabled him to speak Spanish. Putting many of these programs together creates the more sophisticated intelligent agent, like the hand-held electronic amanuensis used by Kumiko in Mona Lisa Overdrive. A selection of ai in film includes humans-create-machine-smarter-than-themselves (2001: A Space Odyssey, 1968; the evil machine runs amok because, the 1984 sequel informs us, of an imperfection in its original program); the computer that links up to everything else and takes over the world (Colossus: The Forbin Project, 1970); the expert system for controlling defence, fitted with voice recognition (War Games, 1983); and the neural network brain that gradually learns to become more like a human (Terminator 2, 1991). The best popular tv handling of the topics is probably in Star Trek: The Next Generation and Red Dwarf.

Partly because of the huge technological knowledge required to make sense of them, and partly because it is science fiction's generic task to explore the cognitive mapping and poetic configuration of social relations as these are constituted by new technological modes of being-in-the-world (Vivian Sobchack).

But the spectre haunting cyberpunk is the uneasy recognition that our primal urge to replicate our consciousness and physical being (into images, words, machine replicants, computer code) is not leading us closer to the dream of immortality, but is creating merely a pathetic parody, a simulacrum of our essences that is supplanting us, taking over our physical space and our roles without the drawbacks of human error, emotions, the passions that make life so exhilarating and frightening.

Penrose makes a similar point: either artificial consciousness is impossible, or we will eventually discover what is responsible for consciousness, in which case we will probably try to replicate it. Such an artificial consciousness would have a tremendous advantage over us in being designed specifically for consciousness, rather than being simply the high point of some messy evolutionary past. Unencumbered by the useless bits of baggage we carry around (like emotions) and built whole rather than grown from a single cell, such minds might supersede humans.

On the other hand, he says, maybe there's more to consciousness than that, and all the 'evolutionary luggage' is a prerequisite. On the fringes of neo-Darwinism, there is an increasing admission that there does in fact seem to be something too perfect about evolution for blind chance to be solely responsible, that there's an apparent 'groping' towards some future purpose (Penrose).

The search for artificial intelligence will inevitably become synonymous with the search for humanity and God. If technology took them away from us, only an analysis of technology can give them back.