
If you’ve ever accidentally shot a video sideways, or cropped the top of someone’s head out of a frame, you might be glad to know about a new cell-phone app that automatically provides shooting advice to videographers.

Beyond warning when the light is too low or the color balance is off, the app issues alerts and guidance when a person is framed poorly or the camera is being moved too jerkily. The software, which analyzes video in real time, offers a peek at features that could become standard in future video cameras.
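
To make those real-time checks concrete, here is a minimal sketch, in Python rather than the app's actual Android code, of how a low-light warning and a jerky-motion warning might be computed from grayscale preview frames. The thresholds and function names are illustrative assumptions, not values from NudgeCam.

```python
# Illustrative sketch (not NudgeCam's actual code): two of the real-time
# checks described above, expressed over 8-bit grayscale frames given as
# flat lists of pixel values. Thresholds are assumptions for illustration.

LOW_LIGHT_THRESHOLD = 40   # mean luminance below this -> "light too low"
SHAKE_THRESHOLD = 25       # mean absolute pixel change -> "camera too jerky"

def mean_luminance(frame):
    """Average brightness of a grayscale frame (0-255)."""
    return sum(frame) / len(frame)

def motion_magnitude(prev_frame, frame):
    """Mean absolute per-pixel difference between consecutive frames,
    a crude proxy for how much the camera (or the scene) is moving."""
    return sum(abs(a - b) for a, b in zip(prev_frame, frame)) / len(frame)

def check_frame(prev_frame, frame):
    """Return the list of on-screen warnings this frame would trigger."""
    warnings = []
    if mean_luminance(frame) < LOW_LIGHT_THRESHOLD:
        warnings.append("Light is too low")
    if prev_frame is not None and motion_magnitude(prev_frame, frame) > SHAKE_THRESHOLD:
        warnings.append("Hold the camera steadier")
    return warnings
```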

The new app, called NudgeCam, was developed for Android cell phones by researchers at FX Palo Alto Laboratory, a corporate research lab owned by Fuji Xerox. The app tracks faces in a video and provides on-screen tips on how best to size and position them inside the frame. It also warns if the camera is not being held level, if the image is too bright or too dark, or if the audio quality is poor.
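
Two of those warnings lend themselves to equally simple sketches: a level check driven by the phone's accelerometer and a basic audio-quality check on the microphone buffer. Again, this is an assumed, simplified outline in Python, not the lab's implementation; the tolerances are placeholders.

```python
import math

# Illustrative sketch (not FX Palo Alto's implementation): a camera-level
# check from accelerometer gravity readings and a simple audio-quality
# check on a buffer of samples. Thresholds are made up for illustration.

TILT_TOLERANCE_DEG = 5.0   # allowed roll before warning "camera not level"
QUIET_AUDIO_RMS = 0.01     # normalized RMS below this -> "audio too quiet"
CLIPPED_SAMPLE = 0.99      # |sample| above this counts as clipping

def roll_degrees(accel_x, accel_y):
    """Roll angle from the gravity components reported by the accelerometer;
    0 degrees when the phone is held upright in portrait orientation."""
    return math.degrees(math.atan2(accel_x, accel_y))

def is_level(accel_x, accel_y):
    """True if the camera is within the allowed tilt tolerance."""
    return abs(roll_degrees(accel_x, accel_y)) <= TILT_TOLERANCE_DEG

def audio_warning(samples):
    """Check one buffer of normalized audio samples (-1.0 to 1.0)."""
    rms = math.sqrt(sum(s * s for s in samples) / len(samples))
    if rms < QUIET_AUDIO_RMS:
        return "Audio level is very low"
    if any(abs(s) > CLIPPED_SAMPLE for s in samples):
        return "Audio is clipping"
    return None
```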

“This is an approach to the media overload problem,” says Scott Carter, who developed the app with colleagues John Adcock and John Doherty. “NudgeCam is intended to guide the capture of video so you don’t have to edit and review so much footage.”

The app provides the kind of standard advice taught at media schools, such as that a person's face should occupy a certain proportion of the video frame and should be positioned slightly off center. “These are well-known heuristics that are taught widely but are not integrated into the [video] capture devices we use,” says Carter. The app can also be used to make templates that guide the capture of specific types of footage: arrows, for example, direct a user to move the camera a particular way. Tags can also be added as reminders to check off during a recording, for example to make sure an interviewee’s gaze stays steady.
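
The framing heuristics themselves can be captured in a few lines. The sketch below scores a detected face bounding box against assumed versions of those rules (face size, off-center placement, headroom); the specific proportions are placeholders, not figures published by the researchers.

```python
# Illustrative sketch of the framing heuristics described above; the exact
# proportions are assumptions, not NudgeCam's published values.

def framing_advice(face_box, frame_w, frame_h):
    """face_box = (left, top, width, height) in pixels; returns a list of tips."""
    left, top, w, h = face_box
    tips = []

    # Face size: aim for roughly 10-35% of the frame height (assumed range).
    face_fraction = h / frame_h
    if face_fraction < 0.10:
        tips.append("Move closer or zoom in")
    elif face_fraction > 0.35:
        tips.append("Move back or zoom out")

    # Horizontal placement: prefer the face center near a rule-of-thirds line.
    center_x = left + w / 2
    thirds = (frame_w / 3, 2 * frame_w / 3)
    if min(abs(center_x - t) for t in thirds) > frame_w * 0.12:
        tips.append("Shift the subject toward a thirds line")

    # Headroom: the top of the head should be in frame, without excess space.
    if top < 0:
        tips.append("Tilt the camera up: the top of the head is cut off")
    elif top > frame_h * 0.25:
        tips.append("Tilt down or reframe: too much headroom")

    return tips
```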

Similar features may eventually appear in consumer cameras. “We view the app platform as a stepping stone,” says Carter. “The goal is that these ideas can one day be embedded in other sorts of higher-end cameras.”

Capturing and processing video and audio in real time is computationally intensive. Creating the prototype software on a portable device would have been a challenge without the flexibility of the Android software development kit (SDK) and the power of a phone like the Nexus One, which has a one-gigahertz processor, says Carter. It would be impossible to do the same on even a high-end digital camera, because such cameras are relatively locked down and lack powerful processors.
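
One common way to keep such analysis within reach of a one-gigahertz processor is simply to throttle it, analyzing only a fraction of the preview frames. The sketch below shows that idea; it is an assumption about a plausible approach, not a description of how NudgeCam schedules its work, and check_frame refers to the earlier illustrative function.

```python
import time

# Illustrative sketch: rate-limit per-frame analysis so real-time checks
# stay responsive on a modest processor by dropping frames that arrive
# before the analysis budget allows. The budget value is an assumption.

FRAME_BUDGET_SEC = 1.0 / 10   # analyze at most ~10 frames per second

class FrameAnalyzer:
    def __init__(self, analyze_fn):
        self.analyze_fn = analyze_fn   # e.g. check_frame from the sketch above
        self.next_allowed = 0.0

    def on_preview_frame(self, prev_frame, frame):
        """Called for every camera preview frame; skips frames that arrive
        before the previous analysis has 'paid off' its time budget."""
        now = time.monotonic()
        if now < self.next_allowed:
            return None                # drop this frame unanalyzed
        result = self.analyze_fn(prev_frame, frame)
        self.next_allowed = time.monotonic() + FRAME_BUDGET_SEC
        return result
```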


Credit: FX Palo Alto Laboratory

Tagged: Communications, Facebook, apps, cell phones, video, photography, face tracking

