
Deep Learning Creates Earth-like Terrain by Studying NASA Satellite Images

Video games could soon be set in realistic worlds generated on demand.

The landscapes in video games and artificial worlds can be generated in two ways. The first is to hand-craft the terrain and populate it with appropriate colors and textures such as rocks, grass, trees, snow and so on. This produces high-quality results but is expensive because of the human labor involved.

The second method is to generate the landscape algorithmically, a process that is much quicker and cheaper. This is how players in the game Minecraft enter an entirely new landscape every time they play.
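Minecraft's actual generator is considerably more elaborate, but the core trick behind most procedural terrain is layered noise: sum several grids of random values at increasing frequencies and decreasing amplitudes. Here is a minimal sketch in Python with NumPy; the function names and parameter values are illustrative, not any particular game's code:

```python
import numpy as np

def value_noise(size: int, cells: int, rng: np.random.Generator) -> np.ndarray:
    """One octave of value noise: random values on a coarse grid,
    bilinearly interpolated up to a size x size array."""
    grid = rng.random((cells + 1, cells + 1))
    coords = np.linspace(0, cells, size, endpoint=False)
    i = coords.astype(int)       # integer cell index for each output pixel
    t = coords - i               # fractional position inside the cell
    ty, tx = t[:, None], t[None, :]
    g00 = grid[np.ix_(i, i)]
    g01 = grid[np.ix_(i, i + 1)]
    g10 = grid[np.ix_(i + 1, i)]
    g11 = grid[np.ix_(i + 1, i + 1)]
    return (g00 * (1 - ty) * (1 - tx) + g01 * (1 - ty) * tx
            + g10 * ty * (1 - tx) + g11 * ty * tx)

def fractal_terrain(size=256, octaves=5, persistence=0.5, seed=0):
    """Sum octaves of noise at doubling frequency and shrinking amplitude."""
    rng = np.random.default_rng(seed)
    height = np.zeros((size, size))
    amplitude, total = 1.0, 0.0
    for octave in range(octaves):
        height += amplitude * value_noise(size, 2 ** (octave + 2), rng)
        total += amplitude
        amplitude *= persistence
    return height / total        # heightmap normalized to roughly [0, 1]

heightmap = fractal_terrain()    # 256x256 array of terrain heights
```

Low-frequency octaves give the broad hills; high-frequency octaves add the fine roughness on top.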


The algorithms behind this process are well developed, and programmers have fine-tuned them over the years to produce different climates, textures, height variations and so on. But new landscape-generating algorithms are themselves time-consuming and expensive to write. So a way to automate their creation would be a significant advance.


Today Christopher Beckham and Christopher Pal at the Montreal Institute for Learning Algorithms in Canada say they have trained a deep-learning machine to generate realistic landscapes using satellite images of Earth as a training set. In effect, the machine writes its own algorithm. The work promises to significantly change the way artificial landscapes can be generated on the fly.

The system that Beckham and Pal exploit is called a generative adversarial network. It consists of two deep-learning machines that work together to tackle a problem, in this case generating realistic terrain.

The first machine generates new terrain while the second evaluates the results and provides feedback. The first machine uses this feedback to produce a better set of landscapes, which the second machine evaluates in turn, and so on. The idea is that the first machine gradually learns to produce landscapes that the second machine judges to be realistic.
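In code, this feedback loop amounts to two networks taking turns at gradient updates. Below is a minimal sketch of a standard GAN training loop in PyTorch; the `G`, `D`, and `loader` objects, the learning rate, and the loss formulation are generic assumptions rather than Beckham and Pal's exact setup:

```python
import torch
import torch.nn.functional as F

def train_gan(G, D, loader, epochs=50, z_dim=100, lr=2e-4, device="cpu"):
    """Alternate updates: D learns to tell real tiles from fakes,
    while G learns to produce fakes that D scores as real."""
    opt_g = torch.optim.Adam(G.parameters(), lr=lr, betas=(0.5, 0.999))
    opt_d = torch.optim.Adam(D.parameters(), lr=lr, betas=(0.5, 0.999))
    for _ in range(epochs):
        for real in loader:
            real = real.to(device)
            b = real.size(0)
            z = torch.randn(b, z_dim, device=device)
            fake = G(z)

            # Discriminator step: push real tiles toward 1, generated toward 0.
            d_loss = (F.binary_cross_entropy_with_logits(
                          D(real), torch.ones(b, 1, device=device))
                      + F.binary_cross_entropy_with_logits(
                          D(fake.detach()), torch.zeros(b, 1, device=device)))
            opt_d.zero_grad()
            d_loss.backward()
            opt_d.step()

            # Generator step: make the discriminator score the fakes as real.
            g_loss = F.binary_cross_entropy_with_logits(
                D(fake), torch.ones(b, 1, device=device))
            opt_g.zero_grad()
            g_loss.backward()
            opt_g.step()
```

The generator never sees the training images directly; the only signal it gets is the gradient of the discriminator's verdict.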

Clearly, an important part of this process is teaching the second machine what realistic terrain looks like. This kind of task has become straightforward in machine learning whenever there is a large database of images to learn from, as there is for face recognition or object recognition. But it has not yet been done for terrain generation in this way.

So Beckham and Pal’s first goal was to create a database of images for training.

It turns out that exactly this kind of data is available thanks to NASA’s Visible Earth program, which has created a detailed map of our home planet. This includes data on height, shape, and color.

NASA’s images are huge: 21,600 by 10,800 pixels. They show the entire planet, with each pixel representing a square kilometer of the surface. Beckham and Pal take random 512×512-pixel crops from these images to create a large database of image samples for training. They discard any crops that are largely black (i.e., that show open ocean) so that the training set is not dominated by trivial examples. “The textures in the collection can correspond to various biomes such as jungle, desert, and arctic,” they say.
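A crop-and-filter pipeline along these lines is straightforward to sketch. The number of crops per image, the near-black pixel threshold, and the discard fraction below are assumptions for illustration; the paper's exact preprocessing may differ:

```python
import numpy as np

def extract_tiles(image, tile=512, per_image=1000, max_black=0.5, seed=0):
    """Take random tile x tile crops from a channel-last satellite image
    (H, W, C uint8 array) and keep those that aren't mostly black ocean."""
    rng = np.random.default_rng(seed)
    h, w = image.shape[:2]
    kept = []
    for _ in range(per_image):
        y = rng.integers(0, h - tile + 1)
        x = rng.integers(0, w - tile + 1)
        crop = image[y:y + tile, x:x + tile]
        black_fraction = np.mean(crop.max(axis=-1) < 10)  # near-black pixels
        if black_fraction <= max_black:
            kept.append(crop)
    return kept
```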


They then use this data set to train a deep-learning machine to recognize realistic Earth terrains of various types. Next they set up another deep-learning machine to generate 512×512-pixel images at random. It sends these maps to the trained machine, which evaluates them and sends its feedback.
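The two machines are typically convolutional networks arranged as mirror images: the evaluator downsamples an image to a single realism score, while the generator upsamples a noise vector to a full image. A DCGAN-style sketch in PyTorch, with layer counts and channel widths chosen only to reach 512×512, not taken from the paper:

```python
import torch.nn as nn

def make_discriminator(channels=3, base=64):
    """Strided conv stack: a 512x512 tile shrinks to one realism logit."""
    layers, c_in = [], channels
    for i in range(6):                       # 512 -> 8 after six halvings
        c_out = base * 2 ** min(i, 3)
        layers += [nn.Conv2d(c_in, c_out, 4, stride=2, padding=1),
                   nn.LeakyReLU(0.2, inplace=True)]
        c_in = c_out
    layers += [nn.Conv2d(c_in, 1, kernel_size=8),  # collapse 8x8 map to 1x1
               nn.Flatten()]                       # -> shape (batch, 1)
    return nn.Sequential(*layers)

def make_generator(z_dim=100, channels=3, base=64):
    """Mirror image: a noise vector is upsampled to a 512x512 image."""
    layers = [nn.Unflatten(1, (z_dim, 1, 1)),
              nn.ConvTranspose2d(z_dim, base * 8, kernel_size=8),  # 1 -> 8
              nn.ReLU(inplace=True)]
    c_in = base * 8
    for i in range(6):                       # 8 -> 512 after six doublings
        c_out = channels if i == 5 else base * 2 ** max(2 - i, 0)
        layers.append(nn.ConvTranspose2d(c_in, c_out, 4, stride=2, padding=1))
        layers.append(nn.Tanh() if i == 5 else nn.ReLU(inplace=True))
        c_in = c_out
    return nn.Sequential(*layers)
```

The final `Tanh` keeps generated pixel values in [-1, 1], the usual convention when the training images are rescaled to that range.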

At first, of course, the generated landscapes are poor representations of Earth terrain. But over many iterations, the machine learns how to produce landscapes that receive good evaluations. And once it has done this, it can generate new Earth-like terrains continually.
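Once training converges, generating fresh terrain is just a forward pass over new noise vectors. A usage sketch, assuming the generator from the earlier sketches:

```python
import torch

@torch.no_grad()
def sample_terrain(G, n=4, z_dim=100, device="cpu"):
    """Draw fresh noise vectors and decode them into new terrain tiles."""
    G.eval()
    z = torch.randn(n, z_dim, device=device)
    tiles = G(z)                          # (n, 3, 512, 512), values in [-1, 1]
    return ((tiles + 1) / 2).clamp(0, 1)  # rescale to [0, 1] for display/export
```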

But the images are not perfect. They can contain artifacts of the learning process that do not correspond to real-world features. These could be mitigated with deeper network configurations or by blurring the images, say the researchers.
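Blurring, at least, is cheap to try as a post-process. A one-step sketch with SciPy; the researchers don't specify a filter or kernel width, so the Gaussian and its sigma here are assumptions:

```python
import numpy as np
from scipy.ndimage import gaussian_filter

tile = np.random.rand(512, 512, 3)  # stand-in for a generated terrain tile
# Smooth each color channel independently; larger sigma removes more
# artifacts but also more legitimate fine detail.
smoothed = gaussian_filter(tile, sigma=(1.5, 1.5, 0))
```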

There is clearly more work to be done, but the pair seem happy with this outcome. “We have achieved a reasonable first step toward procedural generation of terrain based on real-world data,” they say.

That’s interesting work with a wide range of other applications. For a start, the training database doesn’t have to be Earth-based. NASA has detailed images of the moon, Mars, Titan, and various other places in the solar system that could be used to train similar networks. So games like Minecraft could easily take on a distinct lunar or Martian feel with little human input.

And the training database needn’t even be terrain-based. “One can imagine the same scheme being applied to synthesise 3D meshes which are then textured (e.g. faces),” say Beckham and Pal.

That’s something that could be of interest to a wide range of game makers and others. “These kinds of possibilities serve to not only promote richer entertainment experiences, but to also provide useful tools to aid content producers (e.g. 3D artists) in their work,” say Beckham and Pal.

Ref: arxiv.org/abs/1707.03383: A Step Towards Procedural Terrain Generation With GANs
