Amazon is apparently testing a system that lets you pay by scanning your hand
The news: The company is trying out scanners that can identify people by their hand and let them pay for purchases in stores, according to the New York Post. If successful, the system could be rolled out in branches of Whole Foods, which Amazon owns. The report—which Amazon has so far declined to comment on—says the technology, code-named “Orville,” is being tested at vending machines in the company’s offices in New York.
How it works: Users hold their hand over a scanner, which uses computer vision and depth geometry to identify each hand’s shape and size, the report says. Amazon Prime customers will need to go into stores for their hands to be captured and then linked to their accounts before they can use the payment system.
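The report doesn’t say how the matching works under the hood, but hand-geometry systems typically reduce each scan to a fixed-length feature vector (finger lengths, palm width, and so on) and compare it against an enrolled template with a distance threshold. Here is a minimal, purely illustrative sketch of that idea—the feature names, measurements, and threshold are assumptions, not details of Amazon’s system:

```python
import math

def match_hand(template, scan, threshold=5.0):
    """Return True if the scanned feature vector is close enough to the
    enrolled template (Euclidean distance below the threshold)."""
    if len(template) != len(scan):
        raise ValueError("feature vectors must have the same length")
    distance = math.sqrt(sum((t - s) ** 2 for t, s in zip(template, scan)))
    return distance < threshold

# Enrollment captures a template; later scans are matched against it.
# Values are hypothetical finger lengths plus palm width, in millimeters.
enrolled   = [72.1, 68.4, 75.0, 58.2, 84.9]
same_hand  = [72.4, 68.0, 74.7, 58.5, 85.1]  # small scan-to-scan noise
other_hand = [65.0, 61.2, 70.3, 52.8, 78.4]  # a different person
```

In this toy version, `match_hand(enrolled, same_hand)` succeeds while `match_hand(enrolled, other_hand)` fails; a real deployment would use far richer depth features and a tuned decision model rather than a single hand-set threshold.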
Handy, but risky: Hand geometry has been used as a biometric for longer than fingerprints or face recognition. But while people are comfortable unlocking their phones with biometrics, it’s unclear whether they’d be willing to pay for things that way. And there’s another big obstacle: if someone steals your credit card, it’s relatively easy to report the theft and order a new one. If someone hacks into the database and steals your “hand print,” good luck changing that.
This story first appeared in our daily newsletter The Download. Sign up here to get your dose of the latest must-read news from the world of emerging tech.