
An AI Dreamed Up Street Scenes, and They’re Surprisingly Good

You’re looking at pure fiction: this image was created by an AI trained on the kind of driver’s-eye labeled images typically used to teach self-driving cars. Usually, humans annotate which parts of a picture are, say, cars or sidewalks, and those labeled images are used to train neural networks to recognize what they’re looking at. Qifeng Chen, of Stanford University and Intel, ran the process in reverse: he got a similar neural network to take those labels and render new street scenes from them. It puts a road somewhere down the middle, trees along the sides, cars on the road … and the results are surprisingly good.
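Chen’s published system is far more elaborate (a cascaded refinement network trained with a perceptual loss), but the core idea is a network that runs labeling backwards: it takes a semantic label map as input and emits pixels. The sketch below is a minimal, hypothetical illustration of that input-to-output contract in PyTorch; the class list, network size, and toy L1 reconstruction loss are assumptions made for clarity, not the model described above.

```python
# Minimal, illustrative sketch (NOT Chen's actual model): a tiny
# convolutional network that maps a one-hot semantic label map
# (hypothetical classes: road, car, tree, sidewalk) to an RGB image.
import torch
import torch.nn as nn

NUM_CLASSES = 4  # assumed toy class count for illustration


class LabelToImage(nn.Module):
    """Maps an (N, NUM_CLASSES, H, W) label map to an (N, 3, H, W) image."""

    def __init__(self, num_classes: int = NUM_CLASSES, width: int = 64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(num_classes, width, kernel_size=3, padding=1),
            nn.LeakyReLU(0.2),
            nn.Conv2d(width, width, kernel_size=3, padding=1),
            nn.LeakyReLU(0.2),
            nn.Conv2d(width, 3, kernel_size=3, padding=1),
            nn.Tanh(),  # RGB values in [-1, 1]
        )

    def forward(self, labels: torch.Tensor) -> torch.Tensor:
        return self.net(labels)


if __name__ == "__main__":
    model = LabelToImage()

    # Fake one-hot label map standing in for a hand-labeled street scene.
    class_ids = torch.randint(0, NUM_CLASSES, (1, 128, 256))
    labels = nn.functional.one_hot(class_ids, NUM_CLASSES).permute(0, 3, 1, 2).float()
    target = torch.rand(1, 3, 128, 256) * 2 - 1  # stand-in "real" photo

    # One toy training step with a plain L1 reconstruction loss.
    optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
    optimizer.zero_grad()
    fake = model(labels)
    loss = nn.functional.l1_loss(fake, target)
    loss.backward()
    optimizer.step()
    print(fake.shape, float(loss))
```

In practice such a model is trained on many pairs of label maps and real photographs (Chen used labeled driving imagery of this kind), and the realism of the output depends on the loss function and network capacity far more than on this bare scaffolding.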

Chen tells New Scientist that the software could be used to great effect in video games, where it could create realistic virtual worlds on the fly. The games industry is big business, and investors have noticed: Twitter famously acquired Magic Pony, a startup that also uses machine learning to generate high-quality computer-game graphics, in a deal reported to be worth around $150 million.
