
Why The New Gawker.com is Not The Future of the Web

The only good thing about the redesign is the apoplectic reactions it inspired on Facebook and Twitter.
February 9, 2011

Let’s talk about information density. No medium has ever demanded as much of it as the Web – and why? Hold up a piece of A4 paper to your “generous” widescreen laptop monitor. Oops – same size. Websites are built to fill that much screen territory, if we’re lucky. Developers are constrained by a sort of lowest common denominator when it comes to available space.

In light of these constraints, you’d think the redesigned Gawker don’t-call-it-a-blog empire would make sense. Here it is, perhaps most gorgeously realized on Nick Denton’s paean to his own geek past, io9.com.

The main article slot allows for a borderline-cinematic realization of the blog form. It makes other blogs, even the Gawker network’s old design, look ho-hum by comparison.

But once you get past the giant images and new galleries (video galleries, even!), it’s not very useful or user-friendly. Gawker.com’s giant, every-page-is-a-homepage blog posts eat up all the territory that used to be devoted to something humble and necessary: the dek.

Deks are the bits of text that appear below headlines. Old-style newspapers used them all the time, back when they were necessitated by the hopeless jumble of columns dictated by the nature of movable type.

Gawker.com used to use them all the time, too – on the homepage, right below or right next to the headlines. They made it easy to scan the day’s stories and see what was worth reading. It’s the same solution people who lay out news have been using to conquer the problem of scannability and information density for a century. Because it works.
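
If you’ve never built one, the pattern is easy to sketch in HTML. The markup below is a hypothetical illustration, not Gawker’s actual code: a homepage is just a list of headline-plus-dek pairs, which is exactly what makes it scannable.

    <ul class="stories">
      <li>
        <h2><a href="/story-one">Headline Goes Here</a></h2>
        <!-- the dek: one line of context beneath the headline -->
        <p class="dek">A sentence telling readers what the story is and why it matters.</p>
      </li>
      <li>
        <h2><a href="/story-two">Another Headline</a></h2>
        <p class="dek">Enough context to decide, at a glance, whether to click.</p>
      </li>
    </ul>

Delete the dek paragraphs and you get the new design’s sidebar: bare headlines, with no way to judge what’s behind them.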

The new Gawker.com is inviting, and it elevates the article page to the status of homepage, which is smart in an age when readers reach content through anything but a site’s homepage. But those little headlines in the right margin? They desperately need some context.
