
Why the Popularity of Some Web Pages Doesn’t Fall Over Time

Unlike that of most Web pages, the traffic to some sites doesn’t fall over time. A new model of Web traffic shows why.

Back in 2005, a group of computer scientists carried out a now-famous study of the way visits to a website fall over time. These guys looked at the traffic to a Hungarian news site and found that it decayed as a power law.

This, they said, has a straightforward explanation: the amount of traffic is simply a reflection of people’s browsing habits. 

They speculated that during each visit, surfers access all the new articles that appeared since their last visit. And since the time between visits follows a power law, the number of people who have not yet seen a story also follows a power law. This explains the observed pattern of traffic. 
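The mechanism can be checked with a quick simulation. The sketch below (illustrative only, with made-up parameters, not the original study's code) gives each surfer power-law distributed gaps between visits; a story published at time zero is read at each surfer's first visit after publication. Daily readership then falls off as a power law of the story's age.

```python
import math
import random

random.seed(0)

n_surfers = 20_000   # assumed population size, for illustration
alpha = 1.5          # assumed tail exponent of the inter-visit gap distribution
burn_in = -200.0     # start visits well before publication at t = 0

# Each surfer reads the story at their first visit after publication.
first_visit = []
for _ in range(n_surfers):
    t = burn_in
    while t < 0.0:
        t += random.paretovariate(alpha)  # power-law gap, minimum length 1
    first_visit.append(t)

# Daily readership: surfers who first see the story on day d.
days = range(1, 31)
readers = [sum(1 for t in first_visit if d - 1 <= t < d) for d in days]

# A power law is a straight line on a log-log plot; the fitted slope
# is the decay exponent.
xs = [math.log(d) for d in days]
ys = [math.log(max(r, 1)) for r in readers]
n = len(xs)
mx, my = sum(xs) / n, sum(ys) / n
slope = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / \
        sum((x - mx) ** 2 for x in xs)
print(f"fitted decay exponent ~ {slope:.2f}")
```

With these assumptions the fitted exponent comes out clearly negative: most readers see the story in its first day or two, and the trickle of rare visitors produces the long power-law tail.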

However, there are a number of problems with this model. Not the least of these is that a fraction of surfers visit a site very rarely, perhaps once a year. Is it likely that these visitors access all the news stories since their last visit? If not, then the model doesn’t quite work.

Today, Mikhail Simkin and Vwani Roychowdhury at the University of California, Los Angeles put forward another idea. They point out that the traffic to some websites does not fall with time and that the previous theory cannot explain this. Their evidence comes from traffic to one of their own sites, which follows a kind of punctuated equilibrium, rising and falling sharply over time as other sites point to theirs.

This phenomenon clearly flies in the face of the earlier theory but Simkin and Roychowdhury say it can be easily explained.

Their idea is that the popularity of a webpage is simply a function of how easily it can be accessed: pages near the top of a website are easier to find than those further down or on other pages.

This, they say, perfectly explains why news stories drop in popularity over time: other stories simply replace them at the top of the news agenda, making it harder to access older ones. Simkin and Roychowdhury present data showing that the popularity of a page falls as a power law with how far down the list it sits.
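This is how a power law in list position becomes a power law in time: if new stories arrive at a roughly constant rate, a story's position grows linearly with its age, so clicks proportional to position^(-beta) imply clicks proportional to age^(-beta). A toy sketch with invented numbers (the exponent and arrival rate below are assumptions, not figures from the paper):

```python
beta = 1.2         # assumed exponent: clicks fall as position ** (-beta)
new_per_day = 10   # assumed number of new stories published per day

def daily_clicks(position, front_page_clicks=1000.0):
    """Clicks a story gets per day as a power law of its list position."""
    return front_page_clicks * position ** (-beta)

# A story's position grows linearly with its age, so its traffic
# decays as age ** (-beta): doubling the age halves-and-then-some the clicks.
for age_in_days in (1, 2, 4, 8, 16):
    position = age_in_days * new_per_day
    print(age_in_days, round(daily_clicks(position), 2))
```

Doubling a story's age here always multiplies its traffic by the same factor, 2^(-1.2), which is the signature of a power law.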

There is another factor of course: the attractiveness of the story. This determines the initial interest in the story but not the way this interest falls over time, they say.

Simkin and Roychowdhury say this idea explains why their web pages have not fallen in popularity over time: the pages have not fallen down a news list and are just as easily accessed now as they were when they were published. 

But significantly, their idea also explains why other news stories do fall in popularity over time. 

That’s an interesting idea, not least because it’s straightforward to test. 

It may be that many news sites and blogs already experience the effect Simkin and Roychowdhury suggest. Long after publication, readers can only find most stories through a Google-type search, and this makes them all equally accessible. 

So a study of the traffic at this stage, when their accessibility is essentially equal, might well reveal the kind of punctuated equilibrium that Simkin and Roychowdhury observe on their own website. 

It also suggests that organising content to make it all easier to access for longer periods of time could pay off in terms of long term traffic. Only one way to find out!

Ref: arxiv.org/abs/1202.3492: Why Does Attention To Web Articles Fall With Time?
