The Speed of Online Conversation

Data shows how web discussions develop around content, and how sites like Twitter now get most of the chatter.
November 12, 2009

Everyone knows that online conversations happen fast, but Ilya Grigorik, CTO and founder of PostRank, a company that tracks online conversations around pieces of content, shared some interesting concrete numbers this afternoon at Defrag 2009, a technology conference taking place in Denver. Grigorik randomly selected 100,000 posts that the company had tracked and calculated when conversation happened around them.

It’s no surprise that 80 percent of engagement around a post happens on day one, and that 60 percent of that happens within the first hour. What was surprising, however, is that this is actually a decrease from the numbers Grigorik has for 2007. According to his data from two years ago, 95 percent of engagement happened on the first day, and 90 percent of that was within the first hour.

These numbers seem strange considering that the Web appears to be operating at a faster pace. Grigorik’s numbers show, for example, that on average about 66 percent of the conversation around a post happens on “chatter” channels such as Twitter, which is nearly the opposite of the trend two years ago, when most conversation happened on the site where a post was published.

Grigorik said he thinks the explanation lies in the effect of the strength of weak ties. He believes that online conversation has become so distributed that it takes time for information to filter out to every social group that’s going to talk about it. If he’s right, it’s a ray of hope for the real-time Web.

On the surface, it might appear that more real-time streams will lead to a flood of data that appears and disappears, leaving no time to ponder the meaning of any of it. If Grigorik is right, however, real-time streams and the social infrastructure around them may help information find its way to more people who would be interested in discussing it.
