Determining the most influential users of Twitter is probably not what the creators of the Cray XMT supercomputer had in mind when they designed their machine. But when you’re packing this much computational heat, you go where the hard problems are. Twitter, Facebook and the rest of the social Web have become the modern-day equivalent of the water cooler, albeit with an automatic transcriptionist present. And processing all the data that conversation generates turns out to be a very hard problem.
For example, as of February 2010, Facebook included 400 million active users with an average of 120 “friend” connections each, all of whom collectively shared 5 billion pieces of information in a single month.
Figuring out who the “influencers” are in such a massive social network requires building a gigantic social graph, in which each user is a vertex and the connections between users are edges. Ranking users within such a graph requires determining their “centrality”: how many other people are connected to them, how many people are connected to those people, and so on, until you reach the trunk of the tree structure underlying connectedness on a service like Twitter.
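The recursive idea behind that ranking – your importance depends on the importance of the people connected to you – can be sketched with a simple power-iteration centrality in plain Python. This is a toy illustration, not GraphCT's actual algorithm; the graph, the user names, and the damping factor are all hypothetical, and real systems run this over hundreds of millions of edges.

```python
def centrality(edges, iters=50, damping=0.85):
    """Rank vertices of a directed graph by iterated in-link importance.

    edges: list of (follower, followed) pairs -- a hypothetical example
    of the "who connects to whom" graph described above. Each round, a
    user's score is redistributed to everyone they point at, so a user
    followed by well-connected users ends up ranked highly.
    """
    nodes = sorted({v for e in edges for v in e})
    n = len(nodes)
    score = {v: 1.0 / n for v in nodes}
    for _ in range(iters):
        # small uniform base score keeps isolated users from going to zero
        nxt = {v: (1 - damping) / n for v in nodes}
        for src, dst in edges:
            nxt[dst] += damping * score[src]  # incoming links raise your rank
        total = sum(nxt.values())
        score = {v: s / total for v, s in nxt.items()}  # renormalize
    return score

# Hypothetical follower graph: most users follow one "media" account.
follows = [("alice", "media"), ("bob", "media"), ("carol", "media"),
           ("carol", "bob"), ("dave", "carol")]
ranking = sorted(centrality(follows).items(), key=lambda kv: -kv[1])
```

Run on this tiny graph, the heavily followed "media" vertex rises to the top of the ranking, mirroring the pattern the Georgia Tech team found at Twitter scale.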
It turns out this is not the sort of problem that is readily handled even by the usual go-to machines of the scientific supercomputing world – the GPGPU-powered supercomputers that leverage the graphics chips ordinarily used to render lush 3D environments in video games. These GPGPU systems simply don’t allow enough control over how many processes are running in parallel to efficiently churn through social graphs as big as the ones represented by Twitter or Facebook.
That’s why David Ediger of Georgia Tech, with the help of a long list of collaborators, turned to the 128-CPU Cray XMT housed at the Pacific Northwest National Laboratory. The XMT is a favorite of supercomputing hot-rodders and uber-geeks who appreciate its fine-grained massively multithreaded tunability. This machine is usually pressed into service for solving problems like “Hierarchical Bayesian Modeling for Text Analysis” or analyzing the stability of America’s power grid, but Ediger had it cogitating on every stray thought from a single day’s worth of the Twitter firehose.
The Cray made short work of Twitter, disposing of an entire day’s worth of connections in under an hour. The results will surprise no one – on Twitter, a tiny fraction of sources are retweeted widely, mostly government and media, while the rest of the service is either people talking in small groups or literally talking to themselves.
The point, though, is that throwing a finely-tuned Cray running Ediger’s custom software – GraphCT – at Twitter allowed the researchers to digest the service in something like real time. Which is exactly the sort of capability that intelligence agencies, marketers and perhaps even Twitter itself might want to have.