To register, click this link just before 4 p.m. ET. We also recommend you download or enable the Zoom application. You will be muted when you enter the Q&A.

In this episode of Radio Corona, Gideon Lichfield, editor in chief of MIT Technology Review, speaks with Tomas Pueyo, whose Medium post “Coronavirus: Why You Should Act Now” has become one of the defining explainers on the internet about the coronavirus outbreak (it has been viewed more than 40 million times, and translated into at least 30 languages).

In interviews, Pueyo is quick to point out that he is not an epidemiologist. He is the vice president of growth at Course Hero, an online learning platform. Even so, his post synthesized the available data about the outbreak into a clear and compelling argument that influenced many people's thinking.

Pueyo and Lichfield will be discussing how to find and communicate trustworthy information in the midst of a pandemic. They will also be taking your questions.   

You can watch our previously recorded episodes here. For more news about coronavirus and how it's changing our world, sign up for the Coronavirus Tech Report, a free newsletter from Technology Review.


Google researchers are using imitation learning to teach autonomous robots how to pace, spin, and move in more agile ways.

What they did: Using motion-capture data recorded from sensors attached to a real dog, the researchers taught a quadruped robot named Laikago several movements that are hard to achieve through traditional hand-coded robotic controls.

How they did it: First, they used the motion data from the real dog to construct simulations of each maneuver, including a dog trot, a side-step, and … a dog version of the classic ’80s dance move, the running man. (The last one was not, in fact, performed by the real dog. The researchers manually animated the simulated dog to see whether the move would translate to the robot as well.) They then mapped key joints on the simulated dog to those on the robot so that the simulated robot would move the same way as the animal. Using reinforcement learning, the simulated robot then learned to stabilize the movements and compensate for differences in weight distribution and design. Finally, the researchers ported the resulting control policy to a physical robot in the lab, though some moves, like the running man, weren’t entirely successful.
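At the heart of this kind of motion imitation is a reward that scores how closely the simulated robot's pose tracks the retargeted reference motion at each timestep. Below is a minimal, hypothetical sketch of such a tracking reward in Python; the function names, joint count, and weighting are illustrative assumptions, not the researchers' actual implementation.

```python
import numpy as np

def pose_tracking_reward(robot_pose, reference_pose, scale=5.0):
    """Reward the policy for matching the retargeted reference motion.

    Both arguments are hypothetical per-frame joint-angle vectors; the real
    paper combines several reward terms (pose, velocity, end-effectors) with
    weights that may differ from this sketch.
    """
    error = np.sum((robot_pose - reference_pose) ** 2)
    return np.exp(-scale * error)  # approaches 1 as the robot tracks the clip

# Toy rollout: at every simulation step, the reinforcement-learning agent is
# rewarded for how closely its pose matches the motion-capture clip's frame.
reference_clip = np.random.uniform(-1, 1, size=(100, 12))  # 100 frames, 12 joints
policy_poses = reference_clip + np.random.normal(0, 0.05, size=reference_clip.shape)

episode_return = sum(
    pose_tracking_reward(pose, ref)
    for pose, ref in zip(policy_poses, reference_clip)
)
print(f"Episode imitation return: {episode_return:.2f}")
```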

Why it matters: Teaching robots the complex, agile movements needed to navigate the real world has been a long-standing challenge in the field. Rather than hand-coding each movement, imitation learning of this kind lets such machines borrow the agility of animals, and even humans.

Future work: Jason Peng, the lead author on the paper, says there are still a number of challenges to overcome. The robot's weight limits its ability to learn certain maneuvers, like big jumps or fast running. And capturing motion-sensor data from animals isn't always possible: it can be incredibly expensive and requires the animal's cooperation. (A dog is friendly; a cheetah, not so much.) The team plans to try using animal videos instead, which would make the technique far more accessible and scalable.

To have more stories like this delivered directly to your inbox, sign up for our Webby-nominated AI newsletter The Algorithm. It's free.
