Sponsored

Computer vision in AI: The data needed to succeed

There is nothing more important for artificial intelligence initiatives than acquiring contextually labeled, high-quality training data and developing a reliable data pipeline. Here’s how to get there.

April 29, 2021

Provided by Alegion

Developing the capacity to annotate massive volumes of data while maintaining quality is a function of the model development lifecycle that enterprises often underestimate. It’s resource intensive and requires specialized expertise.

At the heart of any successful machine learning/artificial intelligence (ML/AI) initiative is a commitment to high-quality training data and a proven, well-defined pathway to producing it. Without this quality data pipeline, the initiative is doomed to fail.

Computer vision or data science teams often turn to external partners to develop their training data pipeline, and these partnerships drive model performance.

There is no one definition of quality: “quality data” is completely contingent on the specific computer vision or machine learning project. However, there is a general process all teams can follow when working with an external partner, and this path to quality data can be broken down into four prioritized phases.

Annotation criteria and quality requirements

Training data quality is an evaluation of a data set’s fitness to serve its purpose in a given ML/AI use case.

The computer vision team needs to establish an unambiguous set of rules that describe what quality means in the context of their project. Annotation criteria are the collection of rules that define which objects to annotate, how to annotate them correctly, and what the quality targets are.
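
In practice, annotation criteria are often captured in a machine-readable form so that labeling tools, annotators, and reviewers all work from one definition. The sketch below is a hypothetical example in Python; the classes, geometry, and rules are invented for illustration and are not specific to any project or platform.

```python
# Hypothetical annotation criteria for a street-scene object detection project.
# Every value here is an illustrative assumption, not a recommendation.
annotation_criteria = {
    "classes": ["car", "pedestrian", "cyclist"],    # which objects to annotate
    "geometry": "bounding_box",                     # how to annotate them
    "rules": {
        "min_object_size_px": 20,                   # skip objects smaller than 20x20 pixels
        "occluded_objects": "label",                # label even if partially occluded
        "truncated_objects": "label_visible_part",  # box only the visible portion
        "crowd_regions": "single_group_box",        # one box per dense crowd region
    },
}
```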

Accuracy or quality targets define the lowest acceptable result for evaluation metrics such as accuracy, recall, precision, and F1 score. Typically, a computer vision team will have quality targets for how accurately objects of interest were classified, how accurately objects were localized, and how accurately relationships between objects were identified.
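
As a concrete illustration of classification targets, the minimal sketch below scores worker labels against gold labels with scikit-learn and checks them against thresholds. The labels, the macro averaging choice, and the target values are all assumptions made for the example.

```python
# Minimal sketch: checking classification quality against agreed targets.
# Labels and thresholds below are hypothetical examples.
from sklearn.metrics import accuracy_score, precision_score, recall_score, f1_score

gold_labels   = ["car", "car", "pedestrian", "cyclist", "car", "pedestrian"]
worker_labels = ["car", "car", "pedestrian", "car",     "car", "pedestrian"]

metrics = {
    "accuracy":  accuracy_score(gold_labels, worker_labels),
    "precision": precision_score(gold_labels, worker_labels, average="macro", zero_division=0),
    "recall":    recall_score(gold_labels, worker_labels, average="macro", zero_division=0),
    "f1":        f1_score(gold_labels, worker_labels, average="macro", zero_division=0),
}

# Hypothetical quality targets agreed on in the annotation criteria.
targets = {"accuracy": 0.95, "precision": 0.90, "recall": 0.90, "f1": 0.90}

for name, value in metrics.items():
    status = "PASS" if value >= targets[name] else "FAIL"
    print(f"{name}: {value:.2f} (target {targets[name]:.2f}) -> {status}")
```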

Workforce training and platform configuration

Platform configuration. Task design and workflow setup require time and expertise, and accurate annotation requires task-specific tools. At this stage, data science teams need a partner with expertise to help them determine how best to configure labeling tools, classification taxonomies, and annotation interfaces for accuracy and throughput.

Worker testing and scoring. To accurately label data, annotators need a well-designed training curriculum so they fully understand the annotation criteria and domain context. The annotation platform or external partner should ensure accuracy by actively tracking annotator proficiency, both against gold data tasks and whenever a judgment is modified by a higher-skilled worker or admin.
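
One simple way to track annotator proficiency is to keep a running score per worker, counting a task as correct when it matches the gold answer or an expert's corrected answer. The sketch below is a bare-bones illustration; the worker IDs and answers are hypothetical, and a real platform would track much more (per-class scores, drift over time, and so on).

```python
# Minimal sketch: tracking per-annotator accuracy against gold or expert-corrected answers.
from collections import defaultdict

scores = defaultdict(lambda: {"correct": 0, "total": 0})

def record_result(worker_id, worker_answer, reference_answer):
    """The reference answer is either a gold label or an expert's correction."""
    scores[worker_id]["total"] += 1
    if worker_answer == reference_answer:
        scores[worker_id]["correct"] += 1

# Hypothetical results from a qualification round.
record_result("annotator_07", "pedestrian", "pedestrian")
record_result("annotator_07", "car", "cyclist")
record_result("annotator_12", "car", "car")

for worker, s in scores.items():
    print(f"{worker}: {s['correct'] / s['total']:.0%} correct on {s['total']} tasks")
```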

Ground truth or gold data. Ground truth data is crucial at this stage of the process as the baseline to score workers and measure output quality. Many computer vision teams are already working with a ground truth data set.

Sources of authority and quality assurance

There is no one-size-fits-all quality assurance (QA) approach that will meet the quality standards of all ML use cases. Specific business objectives, as well as the risk associated with an under-performing model, will drive quality requirements. Some projects reach target quality using multiple annotators. Others require complex reviews against ground truth data or escalation workflows with verification from a subject matter expert.

There are two primary sources of authority that can be used to measure the quality of annotations and that are used to score workers: gold data and expert review.

  • Gold data: The gold data or ground truth set of records can be used both as a qualification tool for testing and scoring workers at the outset of the process and as the measure for output quality. When you use gold data to measure quality, you compare worker annotations to your expert annotations for the same data set, and the difference between these two independent, blind answers can be used to produce quantitative measurements like accuracy, recall, precision, and F1 scores (a minimal scoring sketch follows this list).
  • Expert review: This method of quality assurance relies on review by a highly skilled worker, an admin, or an expert on the customer side, and sometimes all three. It can be used in conjunction with gold data QA. The expert reviewer looks at the answer given by the qualified worker and either approves it or makes corrections as needed, producing a new correct answer. Initially, an expert review may take place for every single instance of labeled data, but over time, as worker quality improves, expert review can shift to random sampling for ongoing quality control.
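
The following sketch shows one way to quantify localization quality against gold data for bounding boxes: match each worker box to a gold box by intersection-over-union (IoU) and compute precision and recall. The box format, the greedy matching, and the 0.5 IoU threshold are assumptions for illustration, not a prescribed method.

```python
# Minimal sketch: scoring worker bounding boxes against gold boxes with IoU.
# Boxes are (x1, y1, x2, y2); the 0.5 threshold is an illustrative assumption.

def iou(box_a, box_b):
    """Intersection-over-union of two axis-aligned boxes."""
    x1, y1 = max(box_a[0], box_b[0]), max(box_a[1], box_b[1])
    x2, y2 = min(box_a[2], box_b[2]), min(box_a[3], box_b[3])
    inter = max(0, x2 - x1) * max(0, y2 - y1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    return inter / (area_a + area_b - inter) if inter else 0.0

def score_against_gold(worker_boxes, gold_boxes, iou_threshold=0.5):
    """Greedy one-to-one matching of worker boxes to gold boxes."""
    matched_gold = set()
    true_positives = 0
    for wb in worker_boxes:
        best_idx, best_iou = None, 0.0
        for i, gb in enumerate(gold_boxes):
            if i in matched_gold:
                continue
            overlap = iou(wb, gb)
            if overlap > best_iou:
                best_idx, best_iou = i, overlap
        if best_idx is not None and best_iou >= iou_threshold:
            matched_gold.add(best_idx)
            true_positives += 1
    precision = true_positives / len(worker_boxes) if worker_boxes else 0.0
    recall = true_positives / len(gold_boxes) if gold_boxes else 0.0
    return precision, recall

# Hypothetical example: the worker found one of two gold objects.
gold_boxes = [(10, 10, 50, 50), (60, 60, 100, 100)]
worker_boxes = [(12, 11, 49, 52)]
print(score_against_gold(worker_boxes, gold_boxes))  # (1.0, 0.5)
```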

Iterating on data success

Once a computer vision team has successfully launched a high-quality training data pipeline, it can accelerate progress to a production-ready model. Through ongoing support, optimization, and quality control, an external partner can help the team:

  • Track velocity: To scale effectively, measure annotation throughput. How long is it taking data to move through the process? Is the process getting faster? (A minimal throughput sketch follows this list.)
  • Tune worker training: As the project scales, labeling and quality requirements may evolve. This necessitates ongoing workforce training and scoring.
  • Train on edge cases: Over time, training data should include more and more edge cases to make the model as accurate and robust as possible.
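
As a rough illustration of velocity tracking, the sketch below counts completed annotation tasks per day from timestamps. The record format is a hypothetical example, not a specific platform's export; in practice, teams also track time per task and queue depth.

```python
# Minimal sketch: measuring annotation throughput (tasks completed per day).
# The task records below are hypothetical examples.
from collections import Counter
from datetime import datetime

completed_tasks = [
    {"task_id": 1, "completed_at": "2021-04-01T10:15:00"},
    {"task_id": 2, "completed_at": "2021-04-01T11:02:00"},
    {"task_id": 3, "completed_at": "2021-04-02T09:40:00"},
]

tasks_per_day = Counter(
    datetime.fromisoformat(t["completed_at"]).date() for t in completed_tasks
)
for day, count in sorted(tasks_per_day.items()):
    print(f"{day}: {count} tasks completed")
```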

Without high-quality training data, even the best-funded, most ambitious ML/AI projects cannot succeed. Computer vision teams need partners and platforms they can trust to deliver the data quality they need and to power life-changing ML/AI models for the world.

Alegion is the proven partner to build the training data pipeline that will fuel your model throughout its lifecycle. Contact Alegion at solutions@alegion.com.

This content was produced by Alegion. It was not written by MIT Technology Review’s editorial staff.
