The way in which humans interact with computers has been dominated by the mouse since it was invented in the 1960s by Doug Engelbart. A mouse uses a flat two-dimensional surface as a proxy for a computer screen. Any movements of the mouse over the surface are then translated into movements on the screen. These days, a mouse also has a number of buttons, and often a scroll wheel, that allow interaction with on-screen objects.
The mouse is a hugely useful device, but it is also a two-dimensional one. What of the three-dimensional world and the long-standing, but growing, promise of virtual reality? What kind of device will take the place of the mouse when we begin to interact in three dimensions?
Today, we get to see one idea developed at the University of Wyoming in Laramie by Anh Nguyen and Amy Banic. These guys have created an intelligent thimble that can sense its position accurately in three dimensions and respond to a set of preprogrammed gestures that allow the user to interact with objects in a virtual three-dimensional world.
The problem of interacting in three dimensions is by no means new. It has been possible for some time to buy a computer mouse that senses its position in three dimensions. However, these tend to have limited resolution and application.
Anybody who has a modern games console, such as an Xbox with Kinect or a Nintendo Wii, will be aware of the way these devices capture three-dimensional movements and translate them onto a 2-D screen. The problem here is that these devices are locked to a particular technology and cannot be transferred to a PC or Mac, for example.
Then there is the Leap Motion, which measures the movement of an entire hand in three-dimensional space. It was launched to great fanfare and anticipation last year but has so far failed to live up to expectations.
Nguyen and Banic have instead aimed to create a cheap device that works as a universal input for more or less any computing device. And they want to make it as small and unobtrusive as possible so that it can be easily transported.
The result is the 3DTouch, a thimble-like device that sits on the end of a finger, equipped with a 3D accelerometer, a 3D magnetometer and a 3D gyroscope. That allows the data from each sensor to be compared and combined to produce a far more precise estimate of orientation than any single measurement alone. In addition, the 3DTouch has an optical flow sensor that measures the movement of the device against a two-dimensional surface, exactly like the one inside an ordinary mouse.
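To get a feel for how this kind of sensor fusion works, here is a minimal sketch of a complementary filter, one common way of combining gyroscope and accelerometer readings into an orientation estimate. The paper does not specify the 3DTouch's fusion algorithm; the function names, the 0.98 weighting, and the sample values below are all illustrative assumptions.

```python
import math

def complementary_filter(angle_prev, gyro_rate, accel_angle, dt, alpha=0.98):
    """Blend the gyroscope's integrated angle (precise over short intervals
    but prone to drift) with the accelerometer's absolute angle estimate
    (noisy but drift-free). alpha controls how much to trust the gyro."""
    return alpha * (angle_prev + gyro_rate * dt) + (1 - alpha) * accel_angle

def accel_pitch(ax, ay, az):
    """Estimate pitch (radians) from gravity as seen by the accelerometer."""
    return math.atan2(-ax, math.sqrt(ay * ay + az * az))

# Illustration: the device lies still and level, so the accelerometer keeps
# reporting a pitch of 0, while the gyro reports a small spurious drift.
# The filter keeps the estimate bounded instead of integrating the drift.
pitch = 0.0
for _ in range(100):
    pitch = complementary_filter(pitch, gyro_rate=0.01, accel_angle=0.0, dt=0.01)
print(round(pitch, 4))
```

A pure gyro integration of the same stream would drift without bound; the accelerometer term continuously pulls the estimate back toward the gravity reference, which is the point of fusing the two sensors.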
For the moment, the device is hooked up by wire to an Arduino controller, which combines the data from all the sensors. The fused data is then streamed to a conventional laptop. However, Nguyen and Banic recognise the bulkiness of this setup. “This wired connection later could be replaced by a wireless solution using a pair of XBee modules,” they say.
But the ability to know its orientation in space is only one part of this device’s spec. Nguyen and Banic have also built in a number of mouse-like gestures that allow a user to interact with 3-D objects, by selecting and dragging them, for example. These gestures include a finger tap, a double tap and a press gesture. And having more than one 3DTouch on different fingers allows multitouch interaction.
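One plausible way to recognise the tap and double-tap gestures described above is to look for spikes in accelerometer magnitude and group spikes that arrive close together in time. This is a hedged sketch, not the authors' algorithm; the thresholds and the 0.3-second double-tap window are assumptions.

```python
TAP_THRESHOLD = 2.5      # g; a spike well above the resting ~1 g counts as a tap
DOUBLE_TAP_WINDOW = 0.3  # seconds between taps to count as a double tap

def classify_taps(samples):
    """samples: list of (timestamp_s, accel_magnitude_g) tuples, in time order.
    Returns a list of ('tap' | 'double_tap', timestamp) events."""
    events = []
    last_tap = None
    for t, mag in samples:
        if mag > TAP_THRESHOLD:
            if last_tap is not None and t - last_tap <= DOUBLE_TAP_WINDOW:
                events[-1] = ("double_tap", t)  # upgrade the previous tap
                last_tap = None
            else:
                events.append(("tap", t))
                last_tap = t
    return events

# One isolated tap at t=0.1 s, then two quick taps at t=1.0 s and t=1.2 s.
stream = [(0.0, 1.0), (0.1, 3.0), (0.5, 1.0), (1.0, 3.1), (1.2, 2.9), (1.5, 1.0)]
print(classify_taps(stream))  # → [('tap', 0.1), ('double_tap', 1.2)]
```

A real implementation would also need to debounce the spike itself (a single physical tap spans several samples) and delay reporting a single tap until the double-tap window has expired, but the grouping logic is the same.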
Nguyen and Banic have tested their new device to measure its pointing accuracy and say that it is reasonably good. They say it is possible to move a three-dimensional object within an 84 x 84 mm target area with a positioning error of only about 1 mm.
And they say they know what modifications could easily improve it, such as a more reliable optical sensor. Given the components used, the 3DTouch should be relatively cheap but Nguyen and Banic do not say just how much it might cost.
Overall, these folks have an interesting device on their hands that could be coming to fruition at precisely the right time. Nguyen and Banic say it will work with existing devices such as a desktop PC or a Cave Automatic Virtual Environment.
But in recent months, a number of practical virtual reality devices have begun to emerge, such as the Oculus Rift and Google Cardboard. A cheap and easy way of interacting with these new virtual reality devices could turn out to be hugely useful.
It’s too early to say whether the 3DTouch will fulfil this role but there’s certainly a gap in the market.
Ref: arxiv.org/abs/1406.5581 : 3DTouch: A Wearable 3D Input Device with an Optical Sensor and a 9-DOF Inertial Measurement Unit