
Color Editing for Dummies

A new Xerox prototype aims to let people use simple natural-language commands to tweak photos and documents, avoiding complex color-editing tools.

You’re editing a great digital photo and want to enrich the yellowness of the scene. On your computer desktop you’re confronted with three color-editing sliders: one each for red, green, and blue. You have two choices, neither of which is obvious to most users: reduce the blue, or increase red and green together. (The latter approach, however, will also make the whole image lighter.)
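A minimal sketch of the arithmetic behind that trade-off, using NumPy on a hypothetical RGB image array (the slider values here are illustrative, not taken from any particular editing tool):

```python
import numpy as np

# Hypothetical 8-bit RGB image (a flat mid-gray patch, for illustration).
img = np.full((100, 100, 3), 128, dtype=np.float64)

# Option 1: reduce blue. Yellow is opposite blue in RGB,
# so lowering the blue channel shifts the image toward yellow.
less_blue = img.copy()
less_blue[..., 2] -= 40

# Option 2: raise red and green together. This also shifts toward yellow,
# but it adds light to two channels instead of removing it from one.
more_red_green = img.copy()
more_red_green[..., 0] += 40
more_red_green[..., 1] += 40

# Rough lightness estimate (mean of the channels): option 2 comes out brighter.
for name, out in [("original", img), ("less blue", less_blue), ("more red+green", more_red_green)]:
    print(name, np.clip(out, 0, 255).mean())
```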

Color editing: In the prototype technology from Xerox, a series of simple written commands changes color characteristics in an original landscape shot (top) to a modified version (bottom) without the user having to understand complex color-editing software.

Researchers at the Xerox Research Center Webster, in Webster, NY, say they have developed a prototype that makes color editing intuitive by letting people simply type commands such as “Make the background more yellow” or “I want the sky a darker blue.”

It’s color editing for an age when millions of people are trying to make holiday cards, calendars, and framed photos from their digital snapshots. The Xerox technology could also help small businesses avoid having to get documents refined by expensive pre-press consultants who use complex professional tools before printing.

While the Xerox technology is still just a prototype (it was announced yesterday at a conference in Kansas City, MO), the company’s goal is to give people a simple tool to translate their written (and perhaps, eventually, spoken) descriptions of colors into numerical codes for shades and colors that printers, whether home models or commercial presses, use to print color documents.


“Color-editing tools were designed by engineers, with interfaces that are engineering oriented,” says Geoffrey Woolfe, principal scientist at the Xerox Research Center, who developed the technology. “The underlying model of the controls does not match the users’ cognitive understanding. But everybody can describe color using language. They can explain how the colors need to change based on words.”

The technology starts with natural-language recognition. The prototype currently recognizes some 1,800 words that people use to describe color, such as “teal,” “azure,” or even “carnation pink,” along with another 15 perceptual attributes of color such as “deeper” or “brighter,” plus modifiers like “more” or “less.”
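A rough sketch of how such a lexicon-driven command might be parsed (the vocabularies and categories below are illustrative stand-ins, not Xerox’s actual word lists):

```python
import re

# Illustrative fragments of the three vocabularies the article describes:
# color names, perceptual attributes, and modifiers.
COLOR_NAMES = {"teal", "azure", "carnation pink", "yellow", "blue", "green", "red", "sky blue"}
ATTRIBUTES  = {"deeper", "brighter", "darker", "lighter", "more saturated"}
MODIFIERS   = {"more", "less", "slightly", "much"}

def parse_command(text: str) -> dict:
    """Pick out the color term, attribute, and modifier from a free-form request."""
    text = text.lower()
    found = {"color": None, "attribute": None, "modifier": None}
    # Match multi-word color names first so "carnation pink" beats "pink".
    for name in sorted(COLOR_NAMES, key=len, reverse=True):
        if re.search(r"\b" + re.escape(name) + r"\b", text):
            found["color"] = name
            break
    for attr in ATTRIBUTES:
        if attr in text:
            found["attribute"] = attr
            break
    for mod in MODIFIERS:
        if re.search(r"\b" + re.escape(mod) + r"\b", text):
            found["modifier"] = mod
            break
    return found

print(parse_command("I want the sky a darker blue"))
# {'color': 'blue', 'attribute': 'darker', 'modifier': None}
```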

Based on the command, the technology finds the colors in the image that most closely match the color the user named. It then creates a digital “mask” over those parts of the image, and finally applies the appropriate changes only to the areas inside the mask, according to Woolfe’s paper describing the technology.
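A minimal sketch of that mask-and-adjust idea, assuming a NumPy RGB array and a simple color-distance threshold (Xerox’s actual color model and masking method are not described in this level of detail):

```python
import numpy as np

def adjust_named_color(img, target_rgb, shift_rgb, tolerance=60.0):
    """Change only the pixels near a named color, leaving the rest untouched.

    img        -- H x W x 3 array of 8-bit RGB values
    target_rgb -- the RGB value the color name maps to (e.g. sky blue)
    shift_rgb  -- per-channel change to apply inside the mask
    tolerance  -- how far (Euclidean RGB distance) a pixel may be from the
                  target and still count as "that color"
    """
    img = img.astype(np.float64)
    # 1. Find pixels whose color is close to the named color.
    distance = np.linalg.norm(img - np.array(target_rgb, dtype=np.float64), axis=-1)
    mask = distance < tolerance
    # 2. Apply the change only inside the mask.
    out = img.copy()
    out[mask] += np.array(shift_rgb, dtype=np.float64)
    return np.clip(out, 0, 255).astype(np.uint8)

# Example: "I want the sky a darker blue" -- nudge sky-blue pixels toward a darker blue.
sky_blue = (135, 206, 235)                                            # stand-in RGB for "sky blue"
photo = np.random.randint(0, 256, size=(64, 64, 3), dtype=np.uint8)   # placeholder image
edited = adjust_named_color(photo, sky_blue, shift_rgb=(-20, -20, -10))
```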

The same kinds of general written commands can result in very different actions at a technical level. With today’s color-editing tools, achieving goals like “Make the sky darker blue,” “Make the greens more yellow,” and “Make the reds slightly more saturated” would require manipulating controls that involve colors, saturation, and contrast, Woolfe says.

“We are democratizing color, in a way, trying to put high-quality color in the hands of ordinary people,” he says. The technology reflects preliminary research that is several years away from commercialization.
