Who has time to read every article they see shared on Twitter or Facebook, or every document that’s relevant to their job? As information overload grows ever worse, computers may become our only hope for handling a growing deluge of documents. And it may become routine to rely on a machine to analyze and paraphrase articles, research papers, and other text for you.
An algorithm developed by researchers at Salesforce shows how computers may eventually take on the job of summarizing documents. It uses several machine-learning tricks to produce surprisingly coherent and accurate snippets of text from longer pieces. And while it isn’t yet as good as a person, it hints at how condensing text could eventually become automated.
The algorithm produced, for instance, the following summary of a recent New York Times article about Facebook trying to combat fake news ahead of the U.K.’s upcoming election:
- Social network published a series of advertisements in newspapers in Britain on Monday.
- It has removed tens of thousands of fake accounts in Britain.
- It also said it would hire 3,000 more moderators, almost doubling the number of people worldwide who scan for inappropriate or offensive content.
The Salesforce algorithm is dramatically better than anything developed previously, according to a common software tool for measuring the accuracy of text summaries.
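The standard automated yardstick here is the ROUGE family of metrics, which score a machine summary by counting how many of its words and phrases overlap with a human-written reference summary. Assuming that is the kind of tool in question, here is a minimal sketch of the simplest variant, ROUGE-1 recall (the function name and example sentences are illustrative, not from the research):

```python
from collections import Counter

def rouge1_recall(reference: str, candidate: str) -> float:
    """ROUGE-1 recall: fraction of reference unigrams also found in the candidate."""
    ref_counts = Counter(reference.lower().split())
    cand_counts = Counter(candidate.lower().split())
    # Clipped overlap: each reference word counts at most as often as it
    # appears in the candidate.
    overlap = sum(min(n, cand_counts[w]) for w, n in ref_counts.items())
    return overlap / sum(ref_counts.values())

score = rouge1_recall(
    "facebook removed fake accounts in britain",
    "facebook removed tens of thousands of fake accounts",
)
print(round(score, 3))  # 4 of 6 reference words recovered
```

Real evaluations use longer n-grams (ROUGE-2) and longest-common-subsequence variants (ROUGE-L), but the principle is the same: more overlap with the human reference, higher score.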
“I don’t think I’ve ever seen such a large improvement in any [natural-language-processing] task,” says Richard Socher, chief scientist at Salesforce. Socher is a prominent name in machine learning and natural-language processing, and his startup, MetaMind, was acquired by Salesforce in 2016.
The software is still a long way from matching a human’s ability to capture the essence of a document, and many of the summaries it produces are sloppier and less coherent than the example above. Indeed, summarizing text perfectly would require genuine intelligence, including commonsense knowledge and a mastery of language.
Parsing language remains one of the grand challenges of artificial intelligence (see “AI’s Language Problem”). But it’s a challenge with enormous commercial potential. Even limited linguistic intelligence—the ability to parse spoken or written queries, and to respond in more sophisticated and coherent ways—could transform personal computing. In many specialist fields—like medicine, scientific research, and law—condensing information and extracting insights could have huge commercial benefits.
Caiming Xiong, a research scientist at Salesforce who contributed to the work, says his team’s algorithm, while imperfect, could summarize daily news articles, or provide a synopsis of customer e-mails. The latter could be especially useful for Salesforce’s own platform.
The team’s algorithm uses a combination of approaches to achieve its improvement. The system learns from examples of good summaries, an approach called supervised learning, but also employs a kind of artificial attention to the text it is ingesting and outputting. This helps ensure that it doesn’t produce too many repetitive strands of text, a common problem with summarization algorithms.
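One common way to implement this kind of anti-repetition attention is to penalize input positions the decoder has already attended to heavily, by dividing each step's attention scores by the accumulated scores of previous steps. This is a simplified numerical sketch of that idea, not the researchers' exact formulation (the function name and toy numbers are mine):

```python
import numpy as np

def temporal_attention(scores_history):
    """Normalize the latest attention scores, penalizing input positions
    that earlier decoding steps already attended to heavily.

    scores_history: list of raw score vectors, one per decoding step so far,
    each of shape [input_len]. Returns attention weights for the latest step.
    """
    scores = np.exp(scores_history[-1])
    if len(scores_history) > 1:
        # Divide by the accumulated exponentiated scores of past steps, so a
        # token that dominated earlier receives proportionally less weight now.
        scores = scores / np.sum(np.exp(np.array(scores_history[:-1])), axis=0)
    return scores / scores.sum()

# Token 0 dominated step 1; at step 2 the identical raw scores yield a much
# flatter distribution, steering the decoder away from repeating itself.
history = [np.array([5.0, 0.0]), np.array([5.0, 0.0])]
print(temporal_attention(history))  # roughly even: [0.5 0.5]
```

Without the penalty, the second step would again put almost all its weight on token 0, which is exactly the behavior that produces repetitive output.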
The system also experiments to generate summaries of its own, using a process called reinforcement learning. Inspired by the way animals seem to learn, this involves providing positive feedback for actions that lead toward a particular objective. Reinforcement learning has been used to train computers to do impressive new things, like playing complex games or controlling robots (see “10 Breakthrough Technologies 2017: Reinforcement Learning”). Those working on conversational interfaces are increasingly looking at reinforcement learning as a way to improve their systems.
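In summarization, this feedback loop is typically a policy-gradient update: the model generates a summary, a quality score (such as ROUGE against a reference) serves as the reward, and outputs that beat a baseline score are made more likely. Here is a toy sketch of that mechanism as a bandit over three fixed candidate "summaries" with made-up quality scores; it stands in for the idea, not the team's actual training setup:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy setup: the "policy" picks one of three candidate summaries; the reward
# stands in for a summary-quality score such as ROUGE (values are made up).
rewards = np.array([0.2, 0.9, 0.4])
logits = np.zeros(3)  # policy parameters

for _ in range(500):
    probs = np.exp(logits) / np.exp(logits).sum()
    action = rng.choice(3, p=probs)
    baseline = probs @ rewards  # expected reward as a simple baseline
    # REINFORCE: raise the log-probability of actions that beat the baseline,
    # lower it for actions that fall short.
    grad = -probs
    grad[action] += 1.0
    logits += 0.1 * (rewards[action] - baseline) * grad

print(np.argmax(logits))  # the policy settles on the best-scoring summary
```

In the real system, the "actions" are word-by-word generation decisions and the reward only arrives once the whole summary is scored, but the update rule has the same positive-feedback shape.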
Kristian Hammond, a professor at Northwestern University, and the founder of Narrative Science, a company that generates narrative reports from raw data, says the Salesforce research is a good advance, but it also shows the limits of relying purely on statistical machine learning. “At some point, we have to admit that we need a little bit of semantics and a little bit of syntactic knowledge in these systems in order for them to be fluid and fluent,” says Hammond.
Hammond says the use of an attention mechanism mimics, at a very simple level, the way a person pays attention to what he or she has just said. “When you say something, the details of how you say it are driven by the context of what you have said before,” he says. “This work is a step in that direction.”
Improving the language skills of computers may also prove important in the quest to advance artificial intelligence. A startup called Maluuba, which was acquired earlier this year by Microsoft, recently produced a system capable of generating relevant questions from text. The Maluuba team also used a combination of supervised learning and reinforcement learning.
Adam Trischler, senior research scientist at Maluuba, says asking relevant questions is an important part of learning, so it is important to create inquisitive machines, too. “The ultimate goal is to use question-and-answering in a dialogue,” Trischler says. “What if a machine could go out and gather information and then ask its own questions?”