A 3D-printed prosthetic hand controlled using a new AI-based approach could significantly lower the cost of bionic limbs for amputees.
Real need: There are approximately 540,000 upper-limb amputees in the United States, but sophisticated “myoelectric” prosthetics, controlled by muscle contractions, are still very expensive. Such devices cost between $25,000 and $75,000 (not including maintenance and repair), and they can be difficult to use because it is hard for software to distinguish between different muscle flexes.
Handy invention: Researchers in Japan came up with a cheaper, smarter myoelectric device. Their five-fingered, 3D-printed hand is controlled using a neural network trained to recognize combined signals—or, as they call them, “muscle synergies.” Details of the bionic hand are published today in the journal Science Robotics.
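To make the idea concrete, here is a minimal illustrative sketch of the general technique, not the researchers' implementation: classifying finger motions from EMG-like feature vectors with a tiny softmax classifier. All channel counts, class counts, and data here are made up for demonstration.

```python
# Illustrative sketch only: a tiny softmax classifier on synthetic
# "muscle synergy" feature vectors. Not the published system; the
# channel/class counts and data are invented for demonstration.
import numpy as np

rng = np.random.default_rng(0)
n_classes, n_features = 5, 8  # e.g. 5 finger motions, 8 EMG channels

# Synthetic data: each motion is a distinct noisy mix of channels.
centers = rng.normal(size=(n_classes, n_features))
X = np.vstack([c + 0.3 * rng.normal(size=(40, n_features)) for c in centers])
y = np.repeat(np.arange(n_classes), 40)

# One-layer softmax classifier trained by gradient descent.
W = np.zeros((n_features, n_classes))
b = np.zeros(n_classes)
onehot = np.eye(n_classes)[y]
for _ in range(300):
    logits = X @ W + b
    p = np.exp(logits - logits.max(axis=1, keepdims=True))
    p /= p.sum(axis=1, keepdims=True)
    grad = p - onehot                      # softmax cross-entropy gradient
    W -= 0.1 * X.T @ grad / len(X)
    b -= 0.1 * grad.mean(axis=0)

accuracy = ((X @ W + b).argmax(axis=1) == y).mean()
print(f"training accuracy: {accuracy:.2f}")
```

The real system uses a neural network on raw muscle signals; the point of the sketch is only that motion recognition reduces to mapping multichannel signal features to a small set of motion classes.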
Nimble-fingered: The team tested their setup on seven people, including one amputee. The participants were able to perform 10 different finger motions with around 90% accuracy. What’s more, the device needed to be trained on only five motions per finger before participants reached that accuracy. The amputee participant was able to perform tasks including picking up and putting down bottles, and holding a notebook.
Hold on: It isn’t clear how much these technologies might reduce the cost of prosthetics, and there are still significant challenges to overcome, like muscle fatigue and the complications that will inevitably come with getting the software to recognize a wide variety of real-world movements. Still, it’s a promising approach that might someday change the lives of those who rely on dumb or hugely expensive prosthetic limbs.
This tiny, solar-powered, bee-like robot could be the future of drones. One day, anyway.
Flying machine: The RoboBee X-Wing, developed at the Harvard Microrobotics Laboratory, is a remarkable feat of microengineering. It is the first insect-size aerial vehicle to fly without requiring a tether, and it uses recent advances in materials and engineering to achieve new power efficiency. A paper describing it appears in the journal Nature today. You can also watch a video of it in action here.
Why wings? Flapping wings have several potential advantages over the propeller blades that give lift to conventional drones. Wings allow for greater agility and maneuverability, and they are both quieter and safer than propellers.
Intelligent design: Flapping aircraft have been built before now, and you can even buy a few toys that flap through the air. But these machines lack any real control, and they have nothing like the power efficiency of a real bird or insect. Indeed, most tiny drones require a tether connected to an external power source in order to fly. The RoboBee instead collects its own power from several tiny solar panels perched above its wings.
Work to do: The RoboBee looks a bit awkward, and it certainly isn’t ready to be commercialized. It requires an intense light source (three times the strength of regular sunlight), and it can only fly for a few seconds at a time. Still, it points to a future when winged drones might weave through buildings and busy urban areas with unnerving ease.
Cast your mind back to the internet in 2016. Do you have hazy memories of the Mannequin Challenge? Well, the viral YouTube trend has now been used to train a neural network in understanding 3D scenes.
The context: We are naturally good at interpreting 2D videos as 3D scenes, but machines need to be taught how to do it. It’s a useful skill to have: the ability to reconstruct the depth and arrangement of freely moving objects can help robots maneuver in unfamiliar surroundings. That’s why the challenge has long captivated computer-vision researchers, especially in the context of self-driving cars.
The data: To approach this problem, a team at Google AI turned to an unexpected data set: thousands of YouTube videos of people performing the Mannequin Challenge. (If it happened to pass you by at the time, this involved standing as still as possible while someone moved around you, filming the pose from all angles.) These videos also happen to be a novel source of data for understanding the depth of a 2D image.
The method: The researchers converted 2,000 of the videos into 2D images with high-resolution depth data and used them to train a neural network. It was then able to predict the depth of moving objects in a video at much higher accuracy than was possible with previous state-of-the-art methods. Last week, the researchers were awarded a best paper honorable mention at a major computer vision conference.
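One reason Mannequin Challenge footage is useful is that depth recovered from a single moving camera is only defined up to an unknown global scale. A common way to handle this, used widely in monocular depth estimation research (this is a standard metric, not necessarily the exact loss in Google's paper), is a scale-invariant error computed in log-depth space:

```python
# Illustrative sketch: scale-invariant log-depth error, a standard
# metric in monocular depth estimation. Not claimed to be the exact
# loss used in the paper; the depth maps below are synthetic.
import numpy as np

def scale_invariant_error(pred, target):
    """Mean squared log-depth error minus the squared mean, so any
    global rescaling of the prediction is not penalized."""
    d = np.log(pred) - np.log(target)
    return np.mean(d ** 2) - np.mean(d) ** 2

rng = np.random.default_rng(1)
depth = rng.uniform(1.0, 10.0, size=(64, 64))  # synthetic ground truth

# A prediction that differs only by a global scale scores ~zero error...
err_scaled = scale_invariant_error(2.5 * depth, depth)
# ...while a structurally unrelated prediction does not.
err_random = scale_invariant_error(rng.uniform(1.0, 10.0, size=(64, 64)), depth)
print(err_scaled, err_random)
```

Because a uniformly rescaled prediction shifts every log-depth by the same constant, the variance-style formula cancels it out, which is exactly the invariance monocular training data demands.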
Unknowing participants: The researchers also released their data set to support future research, meaning that thousands of people who participated in the Mannequin Challenge will unknowingly continue to contribute to the advancement of computer vision and robotics research. While that may come as an uncomfortable surprise to some, this is the rule in AI research rather than the exception.
Many of the field’s most foundational data sets, including Fei-Fei Li’s ImageNet, which kicked off the deep-learning revolution, were compiled from publicly available data scraped from Twitter, Wikipedia, Flickr, and other sources. The practice is motivated by the immense amount of data required to train deep-learning algorithms and has only been exacerbated in recent years as researchers produce ever bigger models to achieve breakthrough results.
Data privacy: As we have written before, this data-scraping practice is neither obviously good nor bad but calls into question the norms around consent in the industry. As data becomes increasingly commoditized and monetized, technologists should think about whether the way they’re using someone’s data aligns with the spirit of why it was originally generated and shared.