Yesterday at EmTech’s “From the Labs: Cool Innovations” session, Holly Yanco, a professor of computer science at the University of Massachusetts Lowell, discussed her robotic wheelchair project. She first demonstrated the difficulty of using a standard robotic-arm attachment for wheelchairs by showing a screenshot of complicated joystick instructions, which, she pointed out, many people don’t want to have to learn just to command a robot to reach for an object. Instead, she is combining camera vision with touch-screen technology: a camera takes a shot of objects in front of a shelf, for example, and displays them on a touch screen. The user simply touches the object she wants on the screen, and Yanco’s software directs the robot to reach for it. This intuitive approach, she says, will make robotic assistants more useful for people. “My students are very inspired by video games,” says Yanco. Just as in video games, an interface more intuitive than the joystick tends to be more successful and to give the user a more enjoyable experience.
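To make the touch-to-reach idea concrete, here is a minimal sketch of such a pipeline. It assumes the camera image has already been segmented into labeled objects with on-screen bounding boxes; all class and function names below are hypothetical illustrations, not Yanco’s actual software or any specific robotics API.

```python
# Minimal sketch: map a user's touch on the screen to a detected object,
# then hand that object to a (placeholder) reaching routine.
from dataclasses import dataclass
from typing import List, Optional


@dataclass
class DetectedObject:
    label: str
    x: int       # bounding-box top-left corner, in screen pixels
    y: int
    width: int
    height: int

    def contains(self, touch_x: int, touch_y: int) -> bool:
        """True if a touch point falls inside this object's bounding box."""
        return (self.x <= touch_x <= self.x + self.width
                and self.y <= touch_y <= self.y + self.height)


def pick_touched_object(objects: List[DetectedObject],
                        touch_x: int, touch_y: int) -> Optional[DetectedObject]:
    """Return the displayed object under the user's touch, if any."""
    for obj in objects:
        if obj.contains(touch_x, touch_y):
            return obj
    return None


def reach_for(obj: DetectedObject) -> None:
    # Placeholder: a real system would convert the object's image position
    # into an arm trajectory and send it to the wheelchair-mounted arm.
    print(f"Planning reach toward '{obj.label}'")


if __name__ == "__main__":
    # Objects the camera "saw" on a shelf, as displayed on the touch screen.
    shelf = [
        DetectedObject("cereal box", x=40, y=60, width=120, height=200),
        DetectedObject("water bottle", x=220, y=80, width=60, height=180),
    ]
    target = pick_touched_object(shelf, touch_x=250, touch_y=150)  # user taps the bottle
    if target is not None:
        reach_for(target)
```

The point of the design is that the user never reasons about joint angles or joystick modes: selecting a target on the screen is the entire command, and the software handles the rest.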