Views from the Marketplace are paid for by advertisers and select partners of MIT Technology Review.
The Future of Artificial Intelligence and Cybernetics
In this article excerpt, a British researcher discusses why AI and cybernetics are moving beyond the realm of science fiction—but warns that the technologies also raise significant ethical questions.
Science fiction has, for many years, looked to a future in which robots are intelligent and cyborgs are commonplace. The Terminator, The Matrix, Blade Runner and I, Robot are all good examples of this vision.
But until the last decade, consideration of what this might actually mean in the future was unnecessary because it was all science fiction, not scientific reality. Now, however, science has not only done some catching up; it’s also introduced practicalities that the original story lines didn’t appear to include (and, in some cases, still don’t include).
What we consider here are several different experiments linking biology and technology together in a cybernetic way, ultimately combining humans and machines in a relatively permanent merger.
When we typically first think of a robot, we regard it simply as a machine. We tend to think that it might be operated remotely by a human, or that it may be controlled by a simple computer program.
But what if the robot has a biological brain made up of brain cells, possibly even human neurons? Neurons grown under laboratory conditions on an array of non-invasive electrodes provide an attractive alternative with which to realize a new form of robot controller. In the near future, we may well see thinking robots with brains not so different from those of humans.
That development will raise many social and ethical questions. For example, if the robot brain has roughly the same number of human neurons as a typical human brain, then could it, or should it, have rights similar to those of a person? And if such robots have far more human neurons than a typical human brain contains, say, a million times more, would they, rather than humans, make all future decisions?
Many human brain–computer interfaces are used for therapeutic purposes to overcome medical or neurological problems, with one example being the deep brain stimulation (DBS) electrodes used to relieve the symptoms of Parkinson’s disease. However, even here it’s possible to consider using such technology in ways that would give people abilities that humans don’t normally possess—in other words, human enhancement. In some cases, those who have undergone amputations or suffered spinal injuries due to accidents may be able to regain control of devices via their still-functioning neural signals.
Meanwhile, stroke patients can be given limited control of their surroundings, as indeed can those who have motor neurone disease. Even in these cases, the situation isn't straightforward: patients gain abilities that other humans don't have, such as the ability to move a cursor on a computer screen using nothing but neural signals.
It’s clear that connecting a human brain with a computer network via an implant could, in the long term, open up the distinct advantages of machine intelligence, communication, and sensing abilities to the individual receiving the implant. Currently, obtaining the go-ahead for each implantation requires ethical approval from the local authority governing the hospital where the procedure is performed. But looking ahead, it’s quite possible that commercial influences, coupled with societal wishes to communicate more effectively and perceive the world in a richer form, will drive market demand.
For some, brain–computer interfaces are perhaps a step too far just now, particularly if the approach means tampering directly with the brain. As a result, the most studied brain–computer interface to date is the one involving electroencephalography (EEG). While EEG experimentation is relatively cheap, portable, and easy to set up, it’s still difficult to see its widespread future use. It certainly has a role to play in externally assessing some aspects of brain functioning for medical purposes. However, the idea of people driving around while wearing a skullcap of electrodes, with no need for a steering wheel, doesn’t seem realistic. Completely autonomous vehicles are much more likely.
Such experimental cases indicate how humans, and animals for that matter, can merge with technology. That, in turn, generates a plethora of social and ethical considerations as well as technical issues. That’s why it’s vital to reflect on these developments now, so that the further experimentation we’ll witness is guided by the informed feedback that results.
This article is excerpted from a lengthier exploration of AI and cybernetics. Read the full article on BBVA’s OpenMind site.
Kevin Warwick is deputy vice chancellor for research at Coventry University in the United Kingdom. He is a former professor of cybernetics at Reading University, also in the U.K., and the author or co-author of more than 600 research papers.