
Mediamatic Magazine Vol. 9#1: Jacques Servin 1998

Melanie Mitchell

An Introduction to Genetic Algorithms

In 1969 a Cambridge mathematician, John Conway, amused himself by constructing a computer game he called Life. The game's world was divided into cells, each of which had a specific, determined effect on its neighbors at every tick of the game. Given different initial configurations, the cells would affect each other to different ends, and shapes of variable liveliness would arise. Conway gave names to a few of these: blocks, loaves, beehives, blinkers, gliders, spaceships, and the R-pentomino, which would start out as a few cells, blossom into a giant assortment of everything else, and then die.
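The mechanics Conway chose are simple enough to state exactly, and a sketch makes the cells-affecting-their-neighbors idea concrete. The rules below are Conway's standard ones (a live cell survives with two or three live neighbors; a dead cell is born with exactly three); the set-based representation is just one convenient implementation choice, not anything from the article.

```python
from collections import Counter

def step(live):
    """Advance one tick of Life; `live` is a set of (x, y) coordinates."""
    # Count, for every cell adjacent to a live one, how many live neighbours it has.
    counts = Counter((x + dx, y + dy)
                     for (x, y) in live
                     for dx in (-1, 0, 1)
                     for dy in (-1, 0, 1)
                     if (dx, dy) != (0, 0))
    # Birth on exactly 3 neighbours; survival on 2 or 3.
    return {cell for cell, n in counts.items()
            if n == 3 or (n == 2 and cell in live)}

# Conway's 'blinker': a row of three cells that oscillates with period two.
blinker = {(0, 1), (1, 1), (2, 1)}
assert step(step(blinker)) == blinker
```

Run on the R-pentomino's five cells, the same function produces the long, blossoming history the article describes.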

Out of this diversion arose the field called Artificial Life, which in the 1980s completely revitalized the study of Artificial Intelligence, which had mostly petered out because of what seemed to be its congenital inability to get a computer to do things any two-year-old could, like tell a cat from a dog or know happy from sad. Artificial Life and its techniques, including what's called Genetic Algorithms, suggested a way over this hurdle: by letting programs expand and blossom on their own, rather than pre-programming machines with vast troves of knowledge that would work only so long as conditions stayed constant. It was hoped that computers would figure out how to do things we couldn't teach them, and that furthermore they might become viable in the unpredictable, mutable real world.

In 1984 Valentino Braitenberg published Vehicles, subtitled A Study in Synthetic Psychology. Through thought experiments he showed how self-determined, evolving machines could come to seem by turns alive, intelligent, original, and ultimately fully human. This is a completely brilliant book, largely because it proceeds methodically and prosaically through the most astounding ideas, and also because of its ample but subtle interlarding of humor. Braitenberg gives many convincing demonstrations of the value of considering intelligence and biology in mechanical terms, and shows, for example, how the function of the retina, in itself as well as in the context of the body, might best be explained synthetically. Braitenberg incidentally debunks much holy thinking parading as science, and leaves the mystical content of his masterful thinking for others to develop.

Hans Moravec took up this slack in 1988 with the four-times reprinted Mind Children, a giddy assortment of near-crackpot ideas supported by the flimsiest science, pickled with a brilliantly told history of computers. With a few words changed and minus the history, it could be a pitch for any of several mystical sects promising immortality, unlimited knowledge and power, and a few father figures. Like many others who live for computers, Moravec is basically religious; when he comes down to earth about one of his grand ideas - which he does assiduously, being a scientist - it becomes clear there's basically nothing that makes it any more likely than anything else.

From dust were you made, and to dust you shall return... The hallmark of powerful mythical thinking and messianic movements must be their ability to tap into cheaply common themes. The dust thing is one of many such motifs to be dug up from right below the surface of Moravec's thinking, and to be found flying majestically over the surface of much thinking like it: scientists think life came from clay, Moravec explains, when evolutionary processes dispensed with their crystalline framework; for it to end embodied in silicon, ourselves quite dispensed with, would take us full circle. ...and then M. declared the joke to be on the author, for these words would bear literal but kind fruit indeed...

So Reb M. built a creature and it was of clay, and it spoke to him thus... In an exaggerated case of exaggerated hopefulness, Moravec predicts unimaginably smart, self-engineered trillion-armed robots whose main accomplishment, besides being fabulous in every respect, will be sucking out our brains and placing them in other trillion-armed robots, without any effect on our souls except immortality. These gizmos will give us the power to outlive the universe - even the allegedly cyclic running-down of this latter, the ultimate wall for more prosaic visionaries, poses no threat: our future computer selves will make batteries powered by the decay of the universe, and plug themselves in. (It is slightly ironic to find the Big Bang, one of the scientific facts most clearly a matter of belief, among what may be Moravec's wildest theorizing.) ...and it rescued Reb M. from much loneliness, and some even called it Messiah.

And they commenced to frenziedly build a great tower within the walls of the city... and when it came crashing down because of a glut of individuality, basically, the King M. of that city vowed another go in the offing, be it twice ten thousand years in the waiting... The dream of regaining our shattered communalism, of basking in a unified, pan-human striving for some Babel or other, is a little too obvious and overworked to go into.

In the year 2000, machines will do most of our thinking... 'Year 2000'-brand scientific prophecies have been familiar to Americans since the 1930s; they are basically a lazy variation on Messiah, and so of course none of them have come true. The year 2000 has always informed lay visions of computer science - which Moravec's isn't - and is fuelled by the staggeringly swift improvement in technology we've been subjected to during these last few decades. The main flaw in this sort of 'thinking' is that it pays no heed to the overarching power of money and greed, paradoxically enough for an American phenomenon. The Romans possessed steam engines, and used them only in parades; Roman roads were apparently good enough for the Romans, and the force of their capitalism wasn't clamoring for better. The Egyptians had light bulbs, and used them only for gilding. We may have the technology to make autonomous robots, and they might even turn out benevolent, who knows - but who's going to pay for it? The airplane only expanded in power and size and comfort to a certain limit, and stopped when the upstairs cocktail lounge was invented. The Americans of the 1970s had the technological capacity to make cities in space, yet still today the moon landing (due entirely to national pride) is their most impressive achievement, and probably will remain so for decades, or until the right new threat comes along. As Moravec astutely explains, computers themselves have reached their present point largely because of threats to America - World War II, the Japanese 'Fifth Generation' effort, etc. Even just equalling the last forty years and developing computers a million times more powerful than what we have now - Moravec's baseline requirement for intelligence to start flourishing in their clay - will be possible only if there's a strong need.
But now that there is such a glut of information and speed that new books bemoaning the excess appear every month, and studies have shown that the overall productivity of American business has in fact slightly decreased because of computers, how long will the drive into cyberspace last?

It is likely that the sort of applications most soberly envisioned in the literature of Artificial Life, software equivalents of self-cleaning ovens - as opposed to the domestic golems predicted by Moravec for right about now - will trudge into commonness within the next decade or two. Is there really such a difference between a computer or word processor that does some of your thinking for you and one that has dozens of specialized, highly visible functions? The changes that this sort of thing might effect could conceivably be vaster than those caused by any other invention, but they will most likely just put a cap on the passivity engineered for us by the Industrial Revolution.

For all the cold water that needs to be thrown on some elements associated with the study of Artificial Life and Genetic Algorithms, the field is indeed fascinating for its content - there is something primordially thrilling about elaborating, in what passes for real life, the Frankenstein myth - and for its potential as well. Some of its most fascinating artistic products so far are robots that paint and the Oz Project at Carnegie Mellon, where researchers are attempting to create virtual novels which evolve according to the 'reader's' choices. Unlike hypertext (whether literary or three-dimensional), the content of Oz fiction actually changes; this really is a new kind of art, and if successful it will challenge many of criticism's assumptions.

The field of Artificial Life has also produced much research that is fascinating in and of itself. This was summed up most recently in Melanie Mitchell's An Introduction to Genetic Algorithms, which demonstrates frankly how much the field is still in its infancy. Each of her examples of a GA's success is qualified by its failures; there are countless exhortations to researchers to find out just what GAs can do, since it now seems there's much that they can't, or don't, do. (They still can't discern cats from dogs.)

Mitchell's book, while well-written and often interesting, will appeal mostly to academics who want a comprehensive view of the field and abstract directions for research. A reader who wants to get a more vital sense of this exciting and strange field would do better to peruse journals like Artificial Life, from MIT Press, and the Santa Fe Institute Studies in the Sciences of Complexity series; these will show the reader why researchers think they will succeed, within a few decades, in creating and proliferating creatures that share with natural ones - 'wetware', to use the quite common term - an at least primordial ability to adapt, evolve, and think.

Programmers interested in creating GAs of their own will find Genetic Algorithms in Search, Optimization, and Machine Learning (1989), by David Goldberg, a student of Artificial Life guru John Holland, more useful than anything else. Its many code examples make implementing a GA seem easy, and its overview of techniques is practical and sufficient.
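The classic recipe such books present - fitness-proportional selection, crossover, mutation, repeat - really is short enough to sketch in a page. Everything below is an illustrative assumption rather than code from any book under review: the bitstring length, the population size and rates, and the toy 'count the ones' fitness function are all arbitrary choices made for the demonstration.

```python
import random

random.seed(0)                    # fixed seed so the run is repeatable
LENGTH, POP, GENS = 20, 30, 60    # arbitrary toy-problem sizes

def fitness(bits):
    return sum(bits)              # 'OneMax': just count the 1s

def select(pop):
    # Fitness-proportional (roulette-wheel) selection; +1 avoids all-zero weights.
    return random.choices(pop, weights=[fitness(b) + 1 for b in pop])[0]

def crossover(a, b):
    point = random.randrange(1, LENGTH)   # single-point crossover
    return a[:point] + b[point:]

def mutate(bits, rate=0.01):
    # Flip each bit independently with a small probability.
    return [bit ^ (random.random() < rate) for bit in bits]

pop = [[random.randint(0, 1) for _ in range(LENGTH)] for _ in range(POP)]
for gen in range(GENS):
    pop = [mutate(crossover(select(pop), select(pop))) for _ in range(POP)]

best = max(pop, key=fitness)
print(fitness(best), "of", LENGTH)
```

Even this toy illustrates the qualifications Mitchell insists on: on a problem this trivial the population converges quickly, but change the fitness function to something deceptive and the same loop can stall far from the optimum.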