I’ve been following the virtual world called Second Life for some time, so it was a pleasure to read Wade Roush’s thoughtful and intelligent cover story (“Second Earth,” July/August 2007). The piece benefited greatly from the fact that your writer entered into the life of the community he was trying to understand.
I’m sure you’ll receive some splenetic, sarcastic criticism of the piece from someone disgusted by the very idea of a Second Life. Unlike Roush, though, your critic will almost certainly have spent no time acquiring one.
In his essay arguing against the possibility of producing conscious machines (“Artificial Intelligence Is Lost in the Woods,” July/August 2007), is Yale computer science professor David Gelernter arguing against artificial intelligence or artificial humanity? Intelligence does not require all the human interactions with the world or emotions that he lists, unless there is a particular need to provide those for the intended application.
Consciousness is hard to define. Maybe someone should devise a replacement for the Turing test, Alan Turing’s suggestion that if a computer can answer questions the same way a human would, it can be considered intelligent. A Helen Keller test, perhaps: it may be, after all, that there is or will be a computer that is conscious but that lacks the means of input or output it would need to signal its consciousness to us. Or maybe it’s speaking “Chinese” to an “English” world, or broadcasting radio to a television world.
I think we’d better find a more general concept of consciousness than Gelernter’s so that, at a minimum, we’ll recognize that aliens have landed if they ever do.
Stanley D. Young
Fort Collins, CO
I side with the anticognitivists (and thus David Gelernter). AI software running on von Neumann machines will never be conscious, and without consciousness there can be no experience, human or otherwise. Believing that somehow consciousness will arise like a deus ex machina on your Pentium is an article of religious faith.
Still, while AI software cannot replicate consciousness, networks of artificial neurons hold considerably more promise. Consider the machines being built by Kwabena Boahen’s group at Stanford, or those built earlier by Carver Mead’s student Misha Mahowald at Caltech.
There are also hybrids in which real neural circuits are emulated in very large-scale integration (VLSI): Paul Rhodes’s group at Evolved Machines in Palo Alto is working on that, as is Theodore Berger’s group at the University of Southern California.
Digital computers are so second millennium. As my MIT classmate Ray Kurzweil might say, “Plug that silicon retina into your optic nerve, and you won’t know the difference.”
Menlo Park, CA