
How Moss Helped Machine Vision Overcome an Achilles’ Heel

There are some things machine vision just cannot recognize well. Now a research project to identify moss has found a way to overcome this limitation.

In recent years, deep-learning algorithms have revolutionized the way machines recognize objects. State-of-the-art algorithms easily outperform humans in identifying ordinary things such as tables, chairs, cars, and even faces.

But these algorithms have an Achilles’ heel: there are some things they just cannot see. For example, machine vision is not good at recognizing things like grasses and herbs, because they have amorphous forms that are hard to define.

A table generally has four legs and a flat surface, features that machine learning is good at identifying. By contrast, grasses and herbs of the same species can be different sizes and have different numbers of leaves, seeds, and so on, depending on the growing conditions. That makes it hard for machine vision to recognize them, particularly if they aren’t in flower.

Machines find it similarly hard to identify trees from aerial imagery or crops from satellite images. What’s needed is a new approach that can train deep-learning algorithms to work their magic on objects with ambiguous form.

Enter Takeshi Ise and pals at Kyoto University in Japan. These guys have developed a simple technique that helps deep-learning machines to recognize these amorphous plants. They’ve put the new technique through its paces by teaching it to recognize different types of moss, a plant with a hard-to-define form.

The team is well placed to study moss, given Kyoto’s famously warm and wet climate, which promotes its growth. Ise and co began by photographing moss at a traditional Japanese garden in Kyoto, called Murin-An, where it is cultivated.

They identified three kinds of moss and photographed each individually, as well as in mixed scenes where all three grow alongside other, non-mossy plants and features. Each picture was taken with a digital camera, such as an Olympus OM-D E-M5 Mark II, with a 50mm lens (or equivalent) from a distance of 60 centimeters directly above the moss mats. These images measure 4608 x 3456 pixels.

The goal for their deep-learning algorithm is to identify the different types of moss in a single image and to distinguish the moss from other objects and plants.

Their method is straightforward. To train the algorithm, the team divides each image of a specific moss into much smaller regions of 56 x 56 pixels, with 50 percent overlap. In this way, the original photographs generate some 90,000 smaller images, of which the team uses 80 percent for training the algorithm and the rest for testing it.
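This "chopped picture" step amounts to a simple sliding window over each photograph. Here is a minimal sketch in Python: the tile size and 50 percent overlap come from the paper, but the function itself is illustrative, not the team's code.

```python
import numpy as np

def chop_image(img, tile=56, overlap=0.5):
    """Chop an image array of shape (height, width, channels) into
    square tiles, stepping by a fraction of the tile size."""
    stride = int(tile * (1 - overlap))  # 28-pixel step for 50 percent overlap
    tiles = []
    for y in range(0, img.shape[0] - tile + 1, stride):
        for x in range(0, img.shape[1] - tile + 1, stride):
            tiles.append(img[y:y + tile, x:x + tile])
    return np.stack(tiles)
```

Because the tiles overlap, each 4608 x 3456 photo yields tens of thousands of 56 x 56 crops, which is how a handful of photographs becomes a usable training set.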

Although the training images were taken of uniform mats of a specific type of moss, these mats can contain small patches of other mosses. So the team examined all the training images and removed the images of alien mosses by hand. That left images of three types of moss—Polytrichum, Trachycystis, and Hypnum—as well as non-moss features. All of the training images could then be labeled as one of these types and fed into the deep-learning machine.
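Once the tiles are hand-cleaned and labeled, the 80/20 train/test split mentioned above can be made per class, so every moss type is represented in both sets. A sketch, assuming labeled tiles are grouped by class (the class names come from the paper; the splitting function is illustrative):

```python
import random

CLASSES = ["Polytrichum", "Trachycystis", "Hypnum", "non-moss"]

def split_dataset(tiles_by_class, train_frac=0.8, seed=0):
    """Split labeled tiles into training and test sets, 80/20 within each class."""
    rng = random.Random(seed)
    train_set, test_set = [], []
    for label, tiles in tiles_by_class.items():
        tiles = tiles[:]          # copy so the caller's list is untouched
        rng.shuffle(tiles)
        k = int(len(tiles) * train_frac)
        train_set += [(t, label) for t in tiles[:k]]
        test_set += [(t, label) for t in tiles[k:]]
    return train_set, test_set
```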

The results are impressive. Using this method, the algorithm quickly learned to recognize each type of moss with good accuracy. When the researchers let the algorithm loose on a single image showing various types of moss, it was able to accurately identify the mosses in different areas of the image. “The model correctly classified test images with accuracy more than 90%,” they say.
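Classifying a mixed scene works the same way in reverse: slide the window across the image and record the predicted label for each tile, producing a coarse map of which moss grows where. A sketch, where `predict_tile` is a stand-in for any trained tile classifier (the team's actual model is not shown here):

```python
import numpy as np

def classify_map(img, predict_tile, tile=56, stride=28):
    """Slide a window over a scene and record the predicted class
    for each tile, yielding a grid of labels."""
    rows = (img.shape[0] - tile) // stride + 1
    cols = (img.shape[1] - tile) // stride + 1
    out = np.empty((rows, cols), dtype=object)
    for i in range(rows):
        for j in range(cols):
            patch = img[i * stride:i * stride + tile,
                        j * stride:j * stride + tile]
            out[i, j] = predict_tile(patch)
    return out
```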

The algorithm does better for some types of moss than others. “The estimated performance for Polytrichum is 99% [recognition accuracy], Trachycystis is 95%, and Hypnum is 74%,” say Ise and co.
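Per-species figures like these are simply recognition accuracy computed class by class: the fraction of each moss's test tiles that were labeled correctly. A minimal illustration:

```python
from collections import defaultdict

def per_class_accuracy(y_true, y_pred):
    """For each true class, the fraction of its examples labeled correctly."""
    correct, total = defaultdict(int), defaultdict(int)
    for truth, guess in zip(y_true, y_pred):
        total[truth] += 1
        if truth == guess:
            correct[truth] += 1
    return {c: correct[c] / total[c] for c in total}
```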

The lower accuracy for Hypnum is because this plant is more amorphous than the others, with less well-defined forms of growth. By contrast, Polytrichum has a distinctive, well-defined shape.

The team say there are various ways of improving the accuracy, such as building a training set of photographs taken at different times of the year when the Hypnum moss, in particular, can look more distinctive. Or the white balance on the digital camera could be standardized to get more accurate color rendition for each moss.

In any case, the results show significant promise for the future. The technique could be applied to aerial imagery to better identify trees and plants in images taken from above. That would be hugely useful for stock-taking in the wild or in large managed areas such as farms and forests.

In the meantime, Ise and co say they plan to develop an app that allows people to identify moss using a smartphone. That could prove popular for gardeners. 

Ref: arxiv.org/abs/1708.01986: Identifying 3 Moss Species By Deep Learning, Using The “Chopped Picture” Method
