
When Google launched its Nexus One smartphone on Tuesday, the event provided an indirect demonstration of another rapidly advancing technology: real-time search.

Google unveiled real-time search as an enhancement to its search returns in December, recognizing that Web users increasingly produce and demand data faster than Google's existing technology could index and serve it. Other search engines are also offering various kinds of real-time search returns.

As the Nexus One announcement unfolded, knots of Google employees participated in a huge feedback loop: they eyed real-time search returns about the Nexus One, in many cases on the device itself, as the bloggerati and Twitterati pecked madly away on their own smartphones.

“As we were announcing the phone, these real-time [search] results were pointing out all the highlights of the phone,” Google Fellow Amit Singhal says. “All the important things that I needed to know… were available to me right on Google’s results page.”

I met Singhal in Mountain View on Wednesday. He explained that just a few months ago, the gap between a blog or microblog post and its discovery via a Google search would have been five to fifteen minutes; now it can be less than ten seconds. This is thanks to new agreements between Google and Twitter (as well as other sources of real-time data), the details of which have not been officially disclosed, and to new algorithms for sifting through the incoming data to discern its relevance.

In explaining the technology to me, Singhal sat down to check out the latest search returns on the Nexus One. “Someone just Tweeted about a certain link on the phone, someone else Tweeted on what the prices are,” he noted excitedly. “As this content is created, we are getting it, bringing it to our users, passing through our relevance filters.”
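Google has not disclosed how its ingestion pipeline or relevance filters actually work. Purely to illustrate the shape of the process Singhal describes, here is a toy sketch in Python: posts stream in, each is timestamped and scored against a query by a crude keyword-overlap heuristic (a stand-in for Google's undisclosed relevance signals), and only posts above a threshold are surfaced. Every name and scoring rule below is hypothetical, not Google's method.

```python
import time
from dataclasses import dataclass

# Hypothetical sketch only: Google's real ingestion and relevance
# systems are undisclosed. This illustrates the general shape of a
# real-time search pipeline: stream in, score, surface or discard.

@dataclass
class Post:
    author: str
    text: str
    created_at: float  # Unix timestamp of when the post was written

def relevance_score(post: Post, query: str) -> float:
    """Crude keyword-overlap score; a stand-in for real relevance signals."""
    query_terms = set(query.lower().split())
    post_terms = set(post.text.lower().split())
    if not query_terms:
        return 0.0
    return len(query_terms & post_terms) / len(query_terms)

def surface_relevant(stream, query: str, threshold: float = 0.5):
    """Yield posts from a live stream that pass the relevance filter."""
    for post in stream:
        latency = time.time() - post.created_at
        if relevance_score(post, query) >= threshold:
            # In a real system this would be pushed into live results.
            yield post, latency

# Simulated stream standing in for a real-time feed such as Twitter's.
now = time.time()
stream = [
    Post("blogger", "Nexus One price announced: $529 unlocked", now - 4),
    Post("random", "what should I have for lunch today", now - 2),
    Post("reporter", "hands-on with the Nexus One at Google HQ", now - 1),
]

for post, latency in surface_relevant(stream, "Nexus One"):
    print(f"[{latency:.0f}s old] @{post.author}: {post.text}")
```

In this sketch the lunch post is filtered out while both Nexus One posts surface within seconds of creation; the real engineering challenge Singhal alludes to is doing that scoring at firehose scale without sacrificing the sub-ten-second latency.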
