
Video-speed Electronic Paper

October 2, 2003

Researchers at Philips have demonstrated a new technology for electronic paper that will support real-time video. It looks like we might have usable, lightweight, very cheap electronic paper in the next few years. Now that low-cost flat-panel 15” LCDs sell for less than $300 at your local shopper’s club, the 3-D shape of everyone’s desktop has changed: it’s a single flat-panel display and a keyboard. But the functionality is really no different from what we got at the 1984 Super Bowl when the Macintosh was first announced. PDAs and cell phones are the same thing too (and some of them have more pixels than those old Macs). Tablet computers have added the ability to scribble and sketch on top of a bitmap display, but that’s not much different from sketching with a mouse in the old MacDraw program.

Really cheap electronic paper might finally lead to something radically new in the way we interact with our machines. It might bring back the real desktop, rather than the virtual one we’ve all learned to work in. Imagine for a moment (and then just wait a handful of years until it’s true) that with electronic paper you can have an ultra-thin, semi-rigid display about a millimeter thick that includes a built-in flat rechargeable battery, video RAM, and a wireless receiver. Your computer has the wireless transmitter, but it only needs a range of three or four feet.

Now a single piece of electronic paper is going to look an awful lot like a standard LCD display of today, only a lot thinner. But you could have lots of them. You could stack them like a pile of papers on your desk, lay them out, shuffle them around, or even pin them up on the wall of your cubicle. You have your email on one; you put it over there on the left. You have the PowerPoint presentation you are working on on another one; it’s in the pile on the right, along with two or three spreadsheets for different projects you’re working on. And the one on your cubicle wall (pinned right next to the induction recharger, so the battery stays charged and the display can change continuously) might be connected to a web page using a new standard for continuously updating content that provides an HTMLized version of CNN, mixing live video and hyperlinks.

When you want to work on something, you shuffle around and pull out the right piece of electronic paper. The MEMS accelerometer in it notices you have picked it up and notifies your computer over a low-bandwidth reverse channel. If you use the keyboard with the sheet propped up on the stand behind it, your input goes to that application. If you scribble on the electronic paper, that is used as input, and if you have a speech interface, that sheet is the default context. So now, instead of overlapping windows on a virtual desktop, you get to physically manipulate your applications and files by moving around pieces of electronic paper. You put things down when you are not using them and they stay there, perhaps for weeks, until you are ready to come back to them. Your natural instincts in the physical world help you navigate around your own personal cyberspace, which is embodied in your own physical workspace.
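The input-routing idea in that last paragraph can be made concrete. Below is a minimal sketch in Python, under assumed names (Sheet, Desk, on_pickup, route_input; none of these come from the post or any real device API): the accelerometer’s pickup event, arriving over the reverse channel, makes that sheet the focus target, and subsequent keyboard, pen, or speech input is routed to whatever application the sheet displays.

```python
from dataclasses import dataclass

@dataclass
class Sheet:
    sheet_id: str       # identifies one piece of electronic paper
    application: str    # the application it currently displays

class Desk:
    """Hypothetical focus router: focus follows physical pickup."""

    def __init__(self, sheets):
        self.sheets = {s.sheet_id: s for s in sheets}
        self.focused = None  # no sheet has been picked up yet

    def on_pickup(self, sheet_id):
        # Called when the low-bandwidth reverse channel reports that
        # a sheet's MEMS accelerometer detected it being lifted.
        self.focused = self.sheets[sheet_id]

    def route_input(self, event):
        # Keyboard, pen, or speech input goes to the focused sheet's app.
        if self.focused is None:
            return None
        return (self.focused.application, event)

# Usage: pick up the email sheet, and subsequent keystrokes go to email.
desk = Desk([Sheet("s1", "email"), Sheet("s2", "slides")])
desk.on_pickup("s1")
print(desk.route_input("keystroke: a"))  # ('email', 'keystroke: a')
```

The design point is that focus follows physical manipulation rather than window z-order, so there is no window manager to fight with at all.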
