Some futurists, such as Ray Kurzweil, have hypothesized that we will someday soon pass through a singularity: a period of technological change so rapid that we cannot envision the society beyond it. Most visions of this singularity focus on the creation of machines intelligent enough to devise machines even more intelligent than themselves, and so on recursively, launching a positive feedback loop of intelligence amplification. It’s an intriguing thought. (One of the first things I wanted to do when I arrived at MIT as an undergraduate was to build a robot scientist that could make discoveries faster and better than anyone else.) Even Intel’s CTO, Justin Rattner, has recently speculated publicly that we’re well on our way to this singularity, and conferences like the Singularity Summit (at which I’ll be speaking in October) are exploring how such transformations might take place.
As a brain engineer, however, I think that focusing solely on intelligence augmentation as the driver of the future leaves out a critical part of the analysis: namely, the changes in motivation that might arise as intelligence amplifies. Call it the need for “machine leadership skills” or “machine philosophy.” Without it, such a feedback loop might quickly sputter out.
We all know that intelligence, as commonly defined, isn’t enough to impact the world all by itself. The ability to pursue a goal doggedly against obstacles, ignoring the grimness of reality (sometimes even to the point of delusion, i.e., against intelligence), is also important. Most science-fiction stories prefer their artificial intelligences to be extremely motivated to do things, such as enslaving or wiping out humans, if The Matrix and Terminator 2 have anything to say on the topic. But I find just as plausible Marvin, the superintelligent robot from Douglas Adams’ The Hitchhiker’s Guide to the Galaxy, who used his enormous intelligence chiefly to sit around and complain, in the absence of any big goal.
Indeed, a really advanced intelligence, improperly motivated, might realize the impermanence of all things, calculate that the sun will burn out in a few billion years, and decide to play video games for the remainder of its existence, concluding that inventing an even smarter machine is pointless. (A corollary of this thinking might explain why we haven’t found extraterrestrial life yet: intelligences on the cusp of achieving interstellar travel might be prone to thinking that, with the galaxies boiling away in just 10^19 years, it might be better simply to stay home and watch TV.) Thus, if one is trying to build an intelligent machine capable of devising more intelligent machines, it is important to find a way to build in not only motivation but motivation amplification: the continued desire, as intelligence amplifies, to build in self-sustaining motivation. If future generations of intelligence are to possess such motivation (meta-motivation, as it were), then it’s important to discover these principles now.
There’s a second issue. An intelligent being may be able to envision many more possibilities than a less intelligent one, but that may not always lead to more effective action, especially if some possibilities distract the intelligence from the original goals (e.g., the goal of building a more intelligent intelligence). The inherent uncertainty of the universe may also overwhelm, or render irrelevant, the decision-making process of this intelligence. Indeed, for a very high-dimensional space of possibilities (with the axes representing different parameters of the action to be taken), it might be very hard to evaluate which path is the best. The mind can make plans in parallel, but actions are ultimately unitary, and given finite accessible resources, effective actions will often be sparse.