Views from the Marketplace are paid for by advertisers and select partners of MIT Technology Review.
Our Extended Sensoria: How Humans Will Connect with the Internet of Things
Mark Weiser predicted the Internet of Things in a seminal article in 1991 about how people would interact with networked computation distributed into the environments and artifacts around them.
Before the IoT moniker dominated, his vision of “ubiquitous computing” could take many names and flavors as factions tried to establish their own brand (“Things That Think” at the Media Lab, “Project Oxygen” at MIT’s Lab for Computer Science, “Pervasive Computing,” “Ambient Computing,” “Invisible Computing,” “Disappearing Computer,” etc.), but it was all still rooted in Weiser’s “UbiComp.”
The Internet of Things assumes ubiquitous sensate environments. Without these, the cognitive engines of this everywhere-enabled world are deaf, dumb, and blind, and cannot respond relevantly to the real-world events they aim to augment. The last decade has seen a huge expansion in wireless sensing, which is having a deep impact on ubiquitous computing. Advances have been rampant, and sensors of all sorts now seem to be embedded in nearly everything. A myriad of commercial products are appearing to collect athletic data for sports ranging from baseball to tennis; even the amateur athlete of the future will be instrumented with wearables that aid in automatic, self-guided, or augmented coaching. Sensors of various sorts have also crept into fabric and clothing, and, going beyond wearable systems, electronics are now attached directly to, or even painted onto, the skin.
In George Orwell’s 1984, it was the totalitarian Big Brother government that put surveillance cameras on every television; today, it is consumer electronics companies that build cameras into the common set-top box and every mobile handheld. Cameras are becoming a commodity, and they will become even more common as generically embedded sensors.
In the coming years, as large video surfaces cost less and are better integrated with responsive networks, we will see the common deployment of pervasive interactive displays. Information coming to us will manifest in the most appropriate fashion (e.g., in your smart eyeglasses or on a nearby display); the days of pulling your phone out of your pocket and running an app are numbered.
Furthermore, the energy needed to sense and process has steadily declined—sensors and embedded sensor systems have taken full advantage of low-power electronics and smart power management. Similarly, energy harvesting, once an edgy curiosity, has become a mainstream drumbeat that is resonating throughout the embedded sensor community. And the dream of integrating harvester, power conditioning, sensor, processing, and perhaps wireless on a single chip nears reality.
Moore’s Law has democratized sensor technology enormously. Ever more sensors are now integrated into common products (witness mobile phones, which have become the Swiss Army Knives of the sensor/RF world), and the DIY movement has enabled custom sensor modules to be easily purchased or fabricated through many online and crowd-sourced outlets. As a result, this decade has witnessed an explosion of real-time sensor data flowing into the network. This will surely continue in the years ahead, leaving us the grand challenge of synthesizing this information into many forms: cloud-based context engines, virtual sensors, and the augmentation of human perception, for example. These advances not only promise to usher in true UbiComp; they also hint at a radical redefinition of how we experience reality, one that will make today’s common attention-splitting between mobile phones and the real world look quaint and archaic.
We are entering a world where ubiquitous sensor information from our proximity will propagate up into various levels of what is now termed the “cloud,” then project back down into our physical and logical vicinity as context to guide the processes and applications manifesting around us.
Our relationship with computation will become much more intimate as we enter the age of wearables. Right now, all information is available on the many devices around us at the touch of a finger or the utterance of a phrase; soon it will stream directly into our eyes and ears. This information will be driven by context and attention, not direct query, and much of it will be pre-cognitive, arriving before we formulate direct questions. Indeed, the boundaries of the individual will be very blurry in this future. Humanity has pushed these edges since the dawn of society: beginning with information shared through oral history, the boundary of our minds expanded with writing and later the printing press, eliminating the need to retain information verbatim and letting us instead keep pointers into larger archives. In a future where we live and learn in a world deeply networked by wearables and eventually implantables, how our essence and individuality are brokered between organic neurons and whatever the information ecosystem becomes is a fascinating frontier that promises to redefine humanity.