
The iPhone’s Untapped Potential

Apple could do a lot more with all the sensors in the iPhone.
June 29, 2007

Apple is known for its innovative gadget design, and with the release of the iPhone, it continues to live up to the hype. But while people are fawning over features like the multitouch screen and the advanced Web browser, there is important technology under the hood that will likely go underappreciated. The iPhone has tiny, powerful sensors, including an accelerometer, an ambient light sensor, and an infrared proximity sensor, that pick up cues from the environment and adjust the phone's functions accordingly. Apple uses these sensors to rotate the screen view between portrait and landscape, to adjust the screen's brightness to match the surroundings, and to disable the touch screen when a person holds the phone to her ear.
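The portrait-to-landscape decision is simple to sketch: gravity dominates whichever accelerometer axis points down, so comparing the two in-plane readings reveals how the phone is held. The snippet below is a minimal illustration of that idea, assuming direct access to raw x- and y-axis readings in units of g, which Apple does not actually expose to outside developers (see below); the function name and threshold are hypothetical.

```python
def orientation(ax, ay, threshold=0.6):
    """Guess device orientation from accelerometer x/y readings (in g).

    A hypothetical sketch: the axis with the larger gravity component
    indicates how the phone is held. Real implementations add filtering
    and hysteresis so the screen doesn't flicker between states.
    """
    if abs(ay) > abs(ax) and abs(ay) > threshold:
        return "portrait"
    if abs(ax) > abs(ay) and abs(ax) > threshold:
        return "landscape"
    return "ambiguous"  # phone lying flat, or mid-rotation
```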

Extrasensory phone: Apple’s iPhone comes with sensors that can detect changes in the phone’s position and environment. Researchers at other companies have been developing mobile-phone applications that can employ data collected by these sorts of sensors to infer a user’s behavior.

Of course, Apple isn’t the first to put sensors such as accelerometers in phones. Nokia, for example, has a sports phone (called the 5500) that uses an accelerometer as a pedometer. When a person takes the phone jogging, the accelerometer logs the rate of vibrations and sends that data to software that determines speed and distance. The 5500 also offers an accelerometer-based game in which a user tilts the device to navigate a ball through a maze. In addition, Nokia offers a developers’ kit so that people can make their own accelerometer-based games, potentially mimicking the style of those played with Nintendo’s popular Wii controller. (See “Hack: The Nintendo Wii.”)
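The pedometer idea can be made concrete with a short sketch. One common approach is to count peaks in the magnitude of the acceleration signal and convert steps to distance with an assumed stride length; the thresholds, sample rate, and stride value below are illustrative guesses, not Nokia's actual parameters.

```python
import math

def count_steps(samples, sample_rate_hz=50, threshold=1.3):
    """Count steps in a list of (x, y, z) accelerometer samples (in g).

    A step is a magnitude peak crossing `threshold`; a short refractory
    window keeps one footfall's bounce from counting twice. All values
    here are illustrative assumptions.
    """
    steps = 0
    refractory = 0
    min_gap = int(0.3 * sample_rate_hz)  # allow at most ~3 steps/second
    for x, y, z in samples:
        magnitude = math.sqrt(x * x + y * y + z * z)
        if refractory > 0:
            refractory -= 1
        elif magnitude > threshold:
            steps += 1
            refractory = min_gap
    return steps

def distance_m(steps, stride_m=0.75):
    """Estimate distance from step count and an assumed stride length."""
    return steps * stride_m
```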

These functions, while useful and entertaining, are still pretty mundane, says Nathan Eagle, a research scientist at MIT. “These are trivial uses for what has the potential to provide a whole slew of new features and functionality,” he says. Research at MIT, Intel, and elsewhere suggests that, with the right software, built-in hardware such as accelerometers, light sensors, GPS receivers, and the phone’s own microphone could provide contextual clues about people’s activities and behaviors. A sensor-enabled phone could feasibly help monitor your exercise habits, keep track of an elderly relative’s activities, and let your friends and family know whether you’re available for a call or an instant-messaging conversation. It could even provide insight into social networks.

“If you get access to [a phone’s] accelerometer data, you can get a variety of contextual clues about how the user is living their life,” Eagle says: for instance, whether a user is riding a bike, taking the subway, walking up stairs, or sitting for a long period of time. Such data could be used to tell workers that they need a break, or to track whether a person is meeting exercise goals, he says. Eagle and Sandy Pentland, professor of media arts and sciences at MIT, have used sensor-equipped Nokia phones to study the behavior of people in groups and even, to a certain extent, predict their actions. (See “Gadgets That Know Your Next Move.”)
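One simple way to turn raw accelerometer data into contextual clues is to compute statistics over short windows of readings and map them to activity labels. The sketch below uses made-up variance cutoffs purely for illustration; the systems Eagle and Pentland describe rely on statistical models trained on labeled data rather than hand-set thresholds.

```python
import statistics

def classify_activity(window):
    """Rough activity guess from one window of accelerometer magnitudes (in g).

    Stillness produces almost no variance; walking produces a moderate,
    rhythmic amount; running or stair-climbing produces more. The cutoffs
    below are invented for illustration, not taken from any real system.
    """
    energy = statistics.pvariance(window)
    if energy < 0.01:
        return "sitting"
    if energy < 0.5:
        return "walking"
    return "vigorous (running, stairs, cycling)"
```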

To explore other possibilities, researchers at Intel use a small gadget, about the size of a pager, that amasses data from seven sensors: an accelerometer, a barometer, a humidity sensor, a thermometer, a light sensor, a digital compass, and a microphone, says Tanzeem Choudhury, a researcher at Intel Labs Seattle. Most of the sensors are used to determine location and activity, but the microphone can provide interesting insight into social networks, she says, such as whether a person is having a business conversation or a social chat. Aware of privacy concerns, the researchers designed the system to process microphone data immediately, so that all words are discarded and only information about tone, pitch, and volume is recorded. Recently, Intel researchers equipped a first-year class of University of Washington graduate students with these sensors and, based on their interactions, were able to watch social networks develop over time.
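The privacy trick can be sketched concretely: reduce each short audio frame to a handful of numbers, such as volume and an estimate of pitch, and let the raw samples be discarded before anything is stored. The code below illustrates that idea under assumed frame and sample-rate conventions; it is not Intel's actual pipeline.

```python
import math

def speech_features(frame, sample_rate_hz=8000):
    """Reduce one short audio frame (a list of a few hundred float
    samples) to volume and a crude pitch estimate, so the raw audio
    can be thrown away. A sketch of the idea Choudhury describes,
    not Intel's actual processing.
    """
    # Volume: root-mean-square energy of the frame.
    rms = math.sqrt(sum(s * s for s in frame) / len(frame))

    # Pitch: the lag with the strongest autocorrelation within the
    # typical range of speaking voices (~75-300 Hz).
    best_lag, best_score = 0, 0.0
    for lag in range(sample_rate_hz // 300, sample_rate_hz // 75):
        score = sum(frame[i] * frame[i - lag] for i in range(lag, len(frame)))
        if score > best_score:
            best_lag, best_score = lag, score
    pitch_hz = sample_rate_hz / best_lag if best_lag else 0.0
    return {"volume_rms": rms, "pitch_hz": pitch_hz}
```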

To churn through all the data the Intel sensors collect, the researchers designed software to process it in stages, explains Choudhury. “You can do some simple processing on the mobile device,” she says, such as averaging similar data points over time and discarding readings that fall below a threshold. Most mobile phones have enough processing power to do this and to extract simple actions such as walking and sitting.
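A first-stage filter of this kind fits in a few lines. The sketch below averages fixed-size windows of readings and drops anything under an assumed noise floor; the window size and threshold are placeholders, not values from the Intel system.

```python
def preprocess(readings, window=10, noise_floor=0.05):
    """First-stage, on-device processing of the kind Choudhury describes:
    average nearby samples to smooth out noise, and discard averages that
    fall below a noise floor. Parameter values are placeholders."""
    smoothed = []
    for i in range(0, len(readings), window):
        chunk = readings[i:i + window]
        avg = sum(chunk) / len(chunk)
        if avg >= noise_floor:  # throw out sub-threshold data
            smoothed.append(avg)
    return smoothed
```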

In the next stage of processing, researchers plug these actions into machine-learning models that infer more-complex behaviors. For instance, making a meal typically involves short bursts of walking, standing, and picking things up. The Intel researchers developed models that look for certain actions occurring in succession. These models can also adjust to the quirks of the user, accounting for variation in cooking behavior: some meals may require more walking than others, and some people may sit more during meal preparation than others. This sort of information could be useful, Choudhury says, in determining whether an elderly person is eating regularly. She notes that some of the modeling is currently too computationally intensive to run entirely on a cell phone, so some of the data must be uploaded to a computer or a server. However, she says, the algorithms are becoming more efficient, and the processing power in phones continues to increase.
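The flavor of this sequence matching can be shown with a toy example: look for a behavior's constituent actions in order, tolerating a few unrelated actions in between. The Intel group uses trained statistical models; this hard-coded matcher, with its invented action labels, is only meant to make the idea concrete.

```python
def matches_behavior(actions, pattern, max_gap=2):
    """Return True if the labeled action stream contains the steps of
    `pattern` in order, allowing up to `max_gap` unrelated actions
    between consecutive steps. A toy stand-in for a real sequence model."""
    i, gap = 0, 0  # i: next pattern step to match; gap: unrelated actions seen
    for action in actions:
        if action == pattern[i]:
            i, gap = i + 1, 0
            if i == len(pattern):
                return True
        elif i > 0:
            gap += 1
            if gap > max_gap:  # too many interruptions: restart the match
                gap = 0
                i = 1 if action == pattern[0] else 0
    return False

# E.g., meal preparation as short walking bursts, standing, picking things up:
making_a_meal = ["walk", "stand", "pick_up"]
stream = ["sit", "walk", "stand", "walk", "pick_up", "stand"]
print(matches_behavior(stream, making_a_meal))  # True
```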

At this point, says MIT’s Eagle, it wouldn’t be too difficult to write consumer software that could infer a person’s basic activities. These activities could then be used to update the status listed in an instant-messenger program or on a blog. Eagle notes, however, that manufacturers might be hesitant because all the required data processing would likely cut battery life.

Apple has made no announcements about whether it might include such software in future versions of the iPhone. And it’s unlikely that outside developers will be able to take advantage of the sensors at this point: Apple is limiting third-party development to applications that run within the Web browser, essentially specialized Web pages. But as more phones become equipped with sensors and phones’ processing power continues to increase, Eagle suspects that sensor-based applications will become more popular.
