Fitting the hardware for a high-quality camera into a slender smartphone is tricky. The smoother a camera lens, the less distortion it will produce. But if a lens is too small, the effect of any distortion is magnified. As a result, you sometimes see a bulge sticking out of handsets to accommodate a smooth yet sizable lens.
But software may offer a way around this. A Canadian startup called Algolux says that by computationally accounting for imperfections in lenses (or photographers), it can get higher-quality images out of today’s cell phones and eventually make phone cameras thinner and cheaper.
The Montreal-based company, which recently completed a $2.6 million funding round, is testing its technology on a variety of smartphones. Allan Benchetrit, Algolux’s CEO, says he expects some phone makers to add the software to handsets next year.
A series of example photos on Algolux’s website shows why phone makers would be interested: there are marked differences between “before” and “after” shots. Smartphone photos corrected for aberrations in a camera’s hardware show sharper spikes on a cactus in one photo, and sharper letters on a building’s sprinkler system hookup in another. Algolux does this by identifying the specific defects in any given camera through a calibration process and inverting them with its software.
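The article doesn’t describe Algolux’s actual algorithm, but a standard way to invert a calibrated lens defect is non-blind deconvolution: the calibration step yields a point spread function (PSF) describing how the lens smears a point of light, and the software divides that blur back out in the frequency domain. A minimal sketch using Wiener deconvolution (the PSF, the regularization constant `k`, and the function name are illustrative assumptions, not Algolux’s method):

```python
import numpy as np

def wiener_deconvolve(blurred, psf, k=0.01):
    """Invert a known blur (e.g., a measured lens aberration) via Wiener filtering.

    blurred: 2-D grayscale image degraded by the blur.
    psf:     small 2-D point spread function measured during calibration.
    k:       regularization constant; larger values suppress noise
             at frequencies the blur nearly destroyed.
    """
    # Pad the PSF to the image size and shift its center to the origin,
    # so its FFT matches a centered circular convolution
    psf_padded = np.zeros_like(blurred, dtype=float)
    ph, pw = psf.shape
    psf_padded[:ph, :pw] = psf
    psf_padded = np.roll(psf_padded, (-(ph // 2), -(pw // 2)), axis=(0, 1))

    H = np.fft.fft2(psf_padded)   # frequency response of the blur
    F = np.fft.fft2(blurred)
    # Wiener filter: approximates 1/H, but backs off where |H| is tiny
    G = np.conj(H) / (np.abs(H) ** 2 + k)
    return np.real(np.fft.ifft2(G * F))
```

The regularization is the crux: a naive inverse filter (`F / H`) explodes wherever the lens wiped out a frequency band, which is why real pipelines trade a little residual blur for noise control.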
The company also has a method for correcting for motion blur, which often occurs when you take photos in low light. To do this, Algolux uses a front-facing camera to grab high-speed video while photos are being shot with the rear camera. Data from the front camera is used for motion tracking and combined with readings from sensors on the phone, such as its accelerometer and gyroscope, to get an overall measurement of how the user moved the camera to create the blur. This information can then be used to determine how the deblurring software should go to work on images taken with the rear camera.
It could be a while before this second tactic is included in any phones, though. Using the front and rear cameras simultaneously to shoot photos and videos isn’t common with existing smartphones. It’s not even possible on Apple devices; Google says it’s up to Android smartphone makers to decide if they want to enable simultaneous camera use on handsets.
And generally, resolving either type of blur with software could eat up a lot of processing power and battery life, says Bruce Hemingway, a senior lecturer in computer science and engineering at the University of Washington who teaches a course on the science and art of digital photography. Already, he says, smartphone camera apps that do heavy computation sometimes pause between shots, which users hate.
Still, he says, the company’s sample images look good.
“I think it is feasible,” he says. “We’re on the edge of where this is really effective in, say, cell phones.”