Larry Page’s Oddly Backward Comments on Mobile

Google’s CEO says we shouldn’t design for mobile devices. Say what now?
January 24, 2013

It was an offhanded, qualified remark made in passing during Google’s latest earnings call, but Larry Page’s comments on mobile design had all the grace of a gong being hit. “I’d almost say that we shouldn’t be designing for mobile,” he said, as reported by Quartz. “The kind of mobile phones we have now, the state of the art, are a little bit beyond, and those experiences [i.e., full websites] should work on those devices as well.”

Really? Now that Android phablets are a fad, mobile design is irrelevant, since we can just stuff desktop websites onto waffle-sized screens? I can’t believe Page is really that dumb–this is the guy who led a company-wide overhaul of Google’s products in favor of a more design-savvy direction.

What Page’s comments about mobile (unintentionally) illustrate is not that mobile design doesn’t matter, but that it matters now more than ever, as the very term “mobile device” spreads ever thinner over a proliferation of different devices and form factors. According to mobile-design expert Luke Wroblewski, we’ve only just barely begun to figure out what UI designs and user experiences make sense for iPhone-sized screens–a five-year-old device–much less tablets, phablets, wearables, and the rest. “Most people aren’t even making mobile stuff yet,” he told me. “ ‘Shoving my thing on the smaller screen’– that’s where the whole tech industry is. After four years of mobile apps, I’m only now starting to see examples of native mobile experiences with no laptop-based GUI artifacts.”

The “experiences” that Page casually mentioned were all designed for a decades-old hardware use case–accessing the web through a keyboard-and-mouse-driven desktop device–that is increasingly on the wane. He’s right that most mobile sites are still frustrating to use. That’s because new mobile devices get invented faster than we can figure out how people actually tend to use them, and then design effective experiences around those uses. But the solution isn’t to continue “shoving” the old, pre-mobile web onto our handheld devices–or, as Page’s remark implies, to simply keep making our mobile screens bigger and bigger to accommodate “experiences” originally designed for desktop computers.

Responsive design–Page’s catch-all solution that supposedly stands in for mobile design–can be a trap, too. The trendy term doesn’t just mean “rearranging your legacy desktop site to fit better on a Nexus 4.” Every new mobile form factor drives subtle but significant differences in how people use the device. Pinching or tapping to zoom a “full” website on your smartphone is an elegant workaround for the fact that few websites are mobile-native experiences, but a better solution would be to design those websites to be mobile-native in the first place. Not crippled or stunted versions of the “real” website, mind you. Just rethought, so that information and services are presented in ways that make more sense in a mobile context. (I don’t care how enormous your phablet is; interacting with the desktop version of NYTimes.com in a mobile browser is just unwieldy.)

What Page really means when he says “we shouldn’t be designing for mobile” is that we shouldn’t be designing poorly for mobile. But as with anything else, practice makes perfect. 

Illustration by Rose Wong
