Vast quantities of data are freely available on the Web, and they can be a treasure trove for many businesses, provided those businesses can figure out how to use the data effectively.
A company can, for example, comb through data from the U.S. Patent and Trademark Office and court records before acquiring another company, to see whether any of the target's intellectual property is tied up in legal action. In practice, however, going through so much information is time-consuming and difficult to orchestrate.
IBM hopes that a new tool, called BigSheets, will help users analyze Web data more easily. The company has developed a test version of the software for the British Library.
“The ability of any user to do their own types of interesting analytics is coming of age,” says Rod Smith, vice president of emerging Internet technologies for IBM.
BigSheets is built on top of another piece of software called Hadoop. This is an open-source platform for processing very large amounts of Web data by splitting up tasks and handing them off to a cluster of different computers. Hadoop is often used to analyze large amounts of unstructured Web data.
BigSheets uses Hadoop to crawl through Web pages, parsing them to extract key terms and other useful data. BigSheets organizes this information in a very large spreadsheet, where users can analyze it using the sort of tools and macros found in desktop spreadsheet software. Unlike ordinary spreadsheet software, however, there’s no limit to the size of a spreadsheet created through BigSheets.
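The crawl-parse-aggregate flow described above follows the classic map-reduce pattern that Hadoop popularized: a map step extracts terms from each page, and a reduce step merges the per-page results into global counts. The following Python sketch is illustrative only, using made-up sample pages and a crude tag-stripping regex; it is not BigSheets' actual code, which runs the same idea distributed across a cluster.

```python
import re
from collections import Counter
from itertools import chain

TAG_RE = re.compile(r"<[^>]+>")         # crude HTML tag stripper
WORD_RE = re.compile(r"[A-Za-z']{3,}")  # treat runs of 3+ letters as terms

def map_terms(page_html):
    """Map step: emit the lowercase terms found in one page's text."""
    text = TAG_RE.sub(" ", page_html)
    return [w.lower() for w in WORD_RE.findall(text)]

def reduce_counts(term_lists):
    """Reduce step: merge per-page term lists into one global tally."""
    return Counter(chain.from_iterable(term_lists))

# Hypothetical crawled pages standing in for a real repository of URLs
pages = [
    "<html><body><p>Patent filings and court records</p></body></html>",
    "<html><body><p>Court records mention the patent again</p></body></html>",
]

counts = reduce_counts(map_terms(p) for p in pages)
# counts now maps each term to its frequency across all pages,
# the raw material for a spreadsheet row or a tag cloud
```

In a real deployment the map and reduce functions would run as separate Hadoop tasks over millions of pages; the point here is only that the per-page work is independent, which is what lets the cluster split it up.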
To use BigSheets, a user would point the tool at a set of URLs or a repository of data. Lists of terms can be used to organize the data into rows and tables, and these can be adjusted later.
Smith says that IBM chose the spreadsheet as the model for organizing data because most users are already familiar with such software. If users want to represent the data in more complex ways, the tool will work with an IBM visualization tool called Many Eyes, as well as other visualization software.
BigSheets has “a level of integration that I haven’t seen,” says Ben Lorica, a senior analyst in the research group at the technical publishing company O’Reilly Media. Traditionally, Lorica says, companies have split the functions that BigSheets performs into three separate tasks: Web crawling, data analysis, and visualization. Because BigSheets is built on Hadoop, which is fundamentally designed to work on enormous quantities of data, Lorica says, “scale is not a problem” for BigSheets.
He cautions, however, that BigSheets is at an early stage and needs to be tested with other data. Since the technology is being developed in conjunction with particular partners of IBM, it’s unclear how easy it would be for a company to start using it, he says. Setting up a Hadoop cluster can be a demanding task, he says, and if BigSheets isn’t packaged well, companies may find themselves needing an army of consultants to prepare the way for the tool.
The first test for BigSheets came at the British Library, which has been working since 2004 to create an archive of the UK’s roughly eight million websites. At regular intervals, the Library takes snapshots of Web pages, converts them to an archival file format, and stores them. But searching and analyzing this data is another challenge, and that’s where BigSheets came in.
In less than eight hours, Smith says, his team took 4.5 terabytes of archive files and processed them using a Hadoop cluster of four machines. With guidance from British Library researchers, the team used BigSheets to extract keywords, author information, and other metadata from these unstructured Web pages. They experimented with term frequency analysis and ran tag clouds and other visualizations.
The British Library researchers were able to adjust the kinds of metadata they were interested in over the course of the first day, focusing more on who had authored pages than they originally intended. Visualizations provided new insights. For example, using a tag cloud, the researchers discovered that the name of British political figure and writer Alastair Campbell was often misspelled as “Alistair,” surfacing large numbers of relevant records that could easily have been overlooked.
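The misspelling discovery above is the kind of result that simple string analysis over aggregated metadata can surface. As an illustrative sketch, not the British Library's actual workflow, Python's standard-library `difflib` can flag author names that are close spelling variants of one another (the author list below is invented for the example):

```python
from collections import Counter
from difflib import get_close_matches

# Hypothetical author metadata extracted from archived pages
authors = (["Alastair Campbell"] * 5
           + ["Alistair Campbell"] * 3
           + ["Tony Blair"] * 4)

counts = Counter(authors)
names = list(counts)

# For each distinct name, look for near-identical spellings among the rest
variants = {}
for name in names:
    others = [n for n in names if n != name]
    close = get_close_matches(name, others, n=3, cutoff=0.9)
    if close:
        variants[name] = close

# "Alastair Campbell" and "Alistair Campbell" surface as variants of each
# other; "Tony Blair" matches nothing at this similarity cutoff
```

A high cutoff like 0.9 keeps the matcher conservative, so only one-letter slips like “Alastair”/“Alistair” are grouped, while genuinely different names stay separate.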
Eytan Adar, an assistant professor of information and computer science at the University of Michigan, who researches Internet-scale systems, text mining, and visualization, says that the tool could have a big impact. “Although the British Library’s content seems restricted to a few snapshots for each page, this still translates to a ton of data, and simply dumping search results in response to a query isn’t useful,” Adar says.
Adar has designed his own tool, called Zoetrope, for analyzing how Web pages have changed over time. BigSheets brings new insights, he says, by comparing data from many different pages as well as over time. Adar says that effective visualizations are “crucial for letting users quickly understand large collections of data.”
After further testing, IBM hopes to incorporate BigSheets into its existing services and products.
MIT Technology Review