If you’ve ever accidentally shot a video sideways, or cropped the top of someone’s head out of a frame, you might be glad to know about a new cell-phone app that automatically provides shooting advice to videographers.
Beyond warning when the light is too low or the color balance is off, the app issues alerts and guidance when a person is framed poorly or the camera is moving too jerkily. The software, which analyzes video in real time, offers a peek at features that could become standard in future video cameras.
The new app, called NudgeCam, was developed for Android cell phones by researchers at FX Palo Alto Laboratory, a corporate research lab owned by Fuji Xerox. The app tracks faces in a video and provides on-screen tips for how to best size and position them inside the frame. It also warns if the camera is not being held level, or if the image is too bright or dark, or if the audio quality is bad.
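The paper does not spell out how these checks are implemented, but an exposure warning of the kind described can be approximated by thresholding a frame's mean luminance. The function and threshold values below are illustrative assumptions, not NudgeCam's actual code:

```python
def exposure_warning(frame, dark_thresh=0.25, bright_thresh=0.85):
    """Flag a frame whose average luminance is too dark or too bright.

    frame: 2-D list of luminance values normalized to [0, 1].
    The thresholds are hypothetical stand-ins, not NudgeCam's values.
    """
    pixels = [p for row in frame for p in row]
    mean = sum(pixels) / len(pixels)
    if mean < dark_thresh:
        return "too dark"
    if mean > bright_thresh:
        return "too bright"
    return None  # exposure looks acceptable
```

A real implementation would run this on downsampled camera preview frames and debounce the alert over several frames so momentary lighting changes don't trigger it.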
“This is an approach to the media overload problem,” says Scott Carter, who developed the app with colleagues John Adcock and John Doherty. “NudgeCam is intended to guide the capture of video so you don’t have to edit and review so much footage.”
The app provides the kind of standard advice taught in media schools, such as that a person’s face should occupy a certain proportion of the video frame and be positioned slightly off center. “These are well-known heuristics that are taught widely but are not integrated into the [video] capture devices we use,” says Carter. The app can also be used to make templates that guide the capture of specific types of footage: arrows, for example, direct a user to move the camera a particular way. Tags can be added as reminders to be checked off during a recording, for example to ensure that an interviewee’s gaze stays steady.
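The two heuristics mentioned, face size as a proportion of the frame and off-center placement near a "third" line, are simple to express as checks on a face-detector's bounding box. This is a minimal sketch under assumed thresholds, not NudgeCam's implementation:

```python
def framing_tips(face_box, frame_w, frame_h,
                 min_frac=0.15, max_frac=0.35, third_tolerance=0.08):
    """Return composition tips for a detected face bounding box.

    face_box: (x, y, w, h) in pixels. All thresholds are hypothetical
    stand-ins for the taught heuristics the article describes.
    """
    x, y, w, h = face_box
    tips = []
    # Heuristic 1: the face should fill a certain share of frame height.
    frac = h / frame_h
    if frac < min_frac:
        tips.append("move closer")
    elif frac > max_frac:
        tips.append("move back")
    # Heuristic 2: the face should sit slightly off center,
    # near one of the vertical third lines (rule of thirds).
    cx = (x + w / 2) / frame_w
    if abs(cx - 1 / 3) > third_tolerance and abs(cx - 2 / 3) > third_tolerance:
        tips.append("shift subject toward a third line")
    return tips
```

For a 1280x720 frame, a small face dead-center at (590, 200, 100, 100) would trigger both tips, while a well-sized face near the left third line would return an empty list.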
Similar features may eventually appear in consumer cameras. “We view the app platform as a stepping stone,” says Carter. “The goal is that these ideas can one day be embedded in other sorts of higher-end cameras.”
Capturing and processing video and audio in real time is computationally intensive. Creating the prototype software on a portable device would have been a challenge without the flexibility of the Android software development kit (SDK) and the power of a phone like the Nexus One, which has a one-gigahertz processor, says Carter. It would be impossible to do the same on even a high-end digital camera, because such cameras are relatively locked down and lack powerful processors.
Carter intends to experiment with Stanford University’s Frankencamera platform for Nokia’s N900 smart phone. This platform gives developers more freedom to develop software to control a phone’s photographic hardware than they’d get from production cameras or even app SDKs like Android’s.
Developers are testing NudgeCam as part of research studies that ask participants to record video diaries or interviews. Helping people compose better video can increase the amount of useful material a study produces, says Carter.
Automated guidance on capturing video has particular potential in developing regions, Carter says. In such places, people are often unfamiliar with how to make videos. He has been discussing with Berkeley researchers how NudgeCam might aid a project that helps rural health workers in India use cell phones to capture interviews with community leaders to encourage people to visit clinics.
“Videos really helped them,” says John Canny, a human-computer interaction researcher who leads the Berkeley project, “but in order to scale the idea, we need most of the videos to be taken by health workers, most of whom are poorly educated, and roughly a quarter of whom are illiterate. This kind of technology could help them with that.”
But Canny says the app has potential for consumers, too. “Everybody could use some help,” he says, pointing out that some cameras can already warn of a shaky grip or provide simple guides to composition. “This seems like a qualitative improvement on that.”
Carter is also working with robot-building firm Willow Garage, which could use NudgeCam’s techniques to guide users of a camera on a telepresence robot.