
The Challenges of Virtual Hairstyling

New system uses haptics to allow designers to comb hair just like they would in real life.

Imagine that you had to design a virtual hairstyle by painstakingly defining the position and shape of every single hair on a character’s head. It sounds like a joke, but modeling hair in this way - with a heaping helping of post-processing for added realism - remains the industry standard for creating virtual 3D hairstyles.

If you were an animator at Pixar or DreamWorks, for your own sanity you might avoid hair altogether - say, by putting your characters in hats. How else to explain the higher-than-expected frequency of baldness among recognizable computer-animated heroes?

Work from Ugo Bonanni and colleagues at MIRALab at the University of Geneva promises to redress the sorry state of virtual hairstyling in the most intuitive and natural way possible: by using touch-sensitive, force-feedback technology - a.k.a. haptics - to allow animators and even real-life hairstylists to style the coifs of virtual characters in the same way they would style the hair of a real person.

This isn’t the first time researchers have tried to simulate cutting, wetting, drying and even moussing hair - that distinction belongs to Kelly Ward and colleagues, whose 2007 Interactive Virtual Hair Salon managed to transform the helmet-head that had previously afflicted one virtual character into a more-realistic (but not necessarily improved) frizzy mop of overbleached split ends appropriate for whichever mid-’80s music video they planned to drop her into. However, Ward’s Virtual Hair Salon didn’t offer either combing or brushing - less of an issue than you would think, given that the simulation also didn’t offer virtual bed-head or, given enough computer cycles, gigantic fused dreadlocks.

Bonanni’s solution is to create virtual hair that not only responds to virtual combing in real time but also delivers appropriate force feedback to the virtual hairbrush used to style it. This is, as Bonanni points out, a non-trivial computing challenge:

“Displaying a physically based simulation of a hairstyle with real-time animated hair strands in contact with a styling tool returning haptic interaction forces is a very ambitious endeavor which calls for a highly efficient and accurately synchronized multithreaded application. […]

Fundamental requirements to the visuo-haptic hair simulation models underlying a 3D styling application include the ability to define the precise placement of hair strands, the handling of both interaction forces and torques, and an appropriate force/torque accumulation and propagation mechanism synchronizing the visual and haptic modalities in a consistent way.”
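
To make the synchronization problem concrete, here is a minimal sketch of the kind of two-rate architecture the quote describes: a fast haptics thread that polls the styling tool and sends back forces at roughly 1 kHz, alongside a slower thread that steps the hair simulation and redraws the screen, the two exchanging data through shared state. The rates, function names, and data structures below are illustrative assumptions, not details from Bonanni’s implementation.

```python
import threading
import time


# Placeholder hooks standing in for a real haptics SDK and hair solver.
def read_stylus_position():
    return (0.0, 0.0, 0.0)  # a real device driver would report the stylus pose here


def send_force_to_stylus(force):
    pass  # a real device driver would render this force on the stylus motors


def step_hair_simulation(tool_position):
    return (0.0, 0.0, 0.0)  # a real solver would advance the strands and return contact forces


def render_frame():
    pass  # a real application would redraw the hairstyle here


class SharedState:
    """Tool pose and feedback force exchanged between the two threads."""

    def __init__(self):
        self.lock = threading.Lock()
        self.tool_position = (0.0, 0.0, 0.0)
        self.feedback_force = (0.0, 0.0, 0.0)


def haptic_loop(state, stop, rate_hz=1000):
    """Fast loop (~1 kHz): poll the stylus, return the most recent force."""
    period = 1.0 / rate_hz
    while not stop.is_set():
        pos = read_stylus_position()
        with state.lock:
            state.tool_position = pos
            force = state.feedback_force
        send_force_to_stylus(force)
        time.sleep(period)


def simulation_loop(state, stop, rate_hz=60):
    """Slow loop (~60 Hz): step the hair simulation and refresh the display."""
    period = 1.0 / rate_hz
    while not stop.is_set():
        with state.lock:
            pos = state.tool_position
        force = step_hair_simulation(pos)
        with state.lock:
            state.feedback_force = force
        render_frame()
        time.sleep(period)


if __name__ == "__main__":
    state, stop = SharedState(), threading.Event()
    threads = [threading.Thread(target=haptic_loop, args=(state, stop)),
               threading.Thread(target=simulation_loop, args=(state, stop))]
    for t in threads:
        t.start()
    time.sleep(1.0)  # let both loops run briefly, then shut down
    stop.set()
    for t in threads:
        t.join()
```

In practice the haptic side cannot afford to wait on a slow simulation step, which is why the quote stresses a force/torque accumulation and propagation mechanism that keeps the visual and haptic rates consistent.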

Indeed, one could argue this particular application is one of the more important uses of the incredible power of today’s so-called “desktop supercomputers,” which, let’s face it, are for the most part sitting idle, their designers constantly on the hunt for a task big enough to warrant their brand of awesome.

This interface is the result.

By adding the keyboard to the mix, users can expand the power of their virtual hairbrush - which, unlike a regular hairbrush, is not bound by conventional notions of time, space or, given the look sported by the researchers’ test model, beauty.

And, while we’re at it, the paper also spells out the equation of motion for a single strand of hair.
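The paper’s exact formulation isn’t reproduced here, but a generic mass-spring model common in the hair-simulation literature gives the flavor (the symbols below are illustrative, not Bonanni’s notation). Each strand is a chain of point masses, and node i obeys

m_i \ddot{\mathbf{x}}_i = \sum_{j \in \mathcal{N}(i)} k_s \big( \lVert \mathbf{x}_j - \mathbf{x}_i \rVert - \ell_{ij} \big) \, \frac{\mathbf{x}_j - \mathbf{x}_i}{\lVert \mathbf{x}_j - \mathbf{x}_i \rVert} + \mathbf{F}_i^{\mathrm{bend}} + m_i \mathbf{g} + \mathbf{F}_i^{\mathrm{tool}},

where the first term is the stretch force pulling each node toward its neighbors’ rest spacing \ell_{ij}, \mathbf{F}_i^{\mathrm{bend}} resists changes in curvature, and \mathbf{F}_i^{\mathrm{tool}} is the contact force from the comb - the same force that, with its sign flipped, is what the haptic stylus pushes back into the user’s hand.
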
In their conclusion, the researchers look forward to a revolution in virtual hairstyling enabled by the unstoppable march of Moore’s Law:

“Application of the proposed interaction metaphors to a complete hairstyling interface relying on a simulation model enabling the explicit positioning of arbitrary centerline nodes […] together with appropriate parallelization schemes exploiting multicore architectures, could lead to a more robust implementation of haptics-based hairstyling applications and a breakthrough in 3D hair modeling.”

Certainly we can all agree that such a breakthrough cannot arrive soon enough.
