Facebook’s digital currency plans took a major hit today as Visa, Mastercard, eBay, and Stripe followed PayPal’s example and backed out of the nonprofit Facebook set up to manage the currency....
And then there were 23: When Facebook unveiled its plans in June for the currency it calls Libra, it also revealed that 27 other firms—including big names like Visa, Mastercard, PayPal, Uber, and Spotify—had signed on to participate in the Libra Association, a Switzerland-based nonprofit Facebook has set up to develop and maintain the currency. The company said it would have 100 members on the list by the time of Libra’s launch, which is still planned for next year.
But last week, after rumors surfaced that some members were considering backing out, PayPal became the first to announce its departure. Now it’s beginning to look like a mass exodus.
Political pressure points: The proposal for Libra has received a chilly reception from policymakers and central bankers around the world, which may be contributing to the departures. In July, David Marcus, Facebook’s lead on the project, was grilled by skeptical US lawmakers. France and Germany have vowed to block Libra altogether.
Just this week, US Senators Brian Schatz of Hawaii and Sherrod Brown of Ohio sent letters to the CEOs of Visa, Stripe, and Mastercard expressing “deep concerns” about Libra and warning them that their companies could face stricter regulatory scrutiny because of their involvement in the project.
Onward? The Libra Association’s founding members (what remains of them) are scheduled to meet on October 14 in Geneva, Switzerland, where the group will review a charter and appoint a board of directors, according to the Wall Street Journal. And Facebook CEO Mark Zuckerberg is scheduled to testify about Libra in front of the House Financial Services Committee on October 23.
In a message posted to Twitter on Friday evening, Marcus said “the pressure has been intense,” and that he respects Visa and Mastercard’s decision to “wait until there’s regulatory clarity for Libra to proceed.” He added: “I would caution against reading the fate of Libra into this update.”
Researchers have shrunk state-of-the-art computer vision models to run on low-power devices....
Growing pains: Visual recognition is deep learning’s strongest skill. Computer vision algorithms are analyzing medical images, enabling self-driving cars, and powering face recognition. But training models to recognize actions in videos has grown increasingly expensive. This has fueled concerns about the technology’s carbon footprint and its increasing inaccessibility in low-resource environments.
The research: Researchers at the MIT-IBM Watson AI Lab have now developed a new technique for training video recognition models on a phone or other device with very limited processing capacity. Typically, an algorithm will process video by splitting it up into image frames and running recognition algorithms on each of them. It then pieces together the actions shown in the video by seeing how the objects change over subsequent frames. The method requires the algorithm to “remember” what it has seen in each frame and the order in which it has seen it, which is inefficient.
In the new approach, the algorithm instead extracts basic sketches of the objects in each frame, and overlays them on top of one another. Rather than remember what happened when, the algorithm can get an impression of the passing of time by looking at how the objects shift through space in the sketches. In testing, the researchers found that the new approach trained video recognition models three times faster than the state of the art. It was also able to quickly classify hand gestures with a small computer and camera running only on enough energy to power a bike light.
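To make the overlay idea concrete, here is a minimal sketch of one way frame information can be blended without a separate temporal memory: shifting a small slice of each frame’s feature channels one step forward and backward in time, so that a per-frame classifier sees motion cues directly. This is an illustrative NumPy toy (the function name, shapes, and shift fraction are assumptions for the example), not the researchers’ exact implementation.

```python
import numpy as np

def temporal_shift(features, shift_fraction=0.125):
    """Blend information across frames by shifting a slice of
    feature channels one step backward and forward in time.

    features: array of shape (time, channels), one feature row per
    video frame. A real model would operate on (time, C, H, W) maps.
    """
    t, c = features.shape
    n = max(1, int(c * shift_fraction))  # channels to shift each way
    out = np.zeros_like(features)
    out[:-1, :n] = features[1:, :n]            # pull n channels from the next frame
    out[1:, n:2 * n] = features[:-1, n:2 * n]  # pull n channels from the previous frame
    out[:, 2 * n:] = features[:, 2 * n:]       # remaining channels stay put
    return out

# Toy example: 4 frames, each with an 8-channel feature vector
frames = np.arange(32, dtype=float).reshape(4, 8)
shifted = temporal_shift(frames)
```

After the shift, each row mixes features from its neighbors, so recognizing an action becomes a per-frame task rather than one that requires remembering the whole sequence in order.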
Why it matters: The new technique could help reduce lag and computation costs in existing commercial applications of computer vision. It could, for example, make self-driving cars safer by speeding up their reaction to incoming visual information. The technique could also unlock new applications that previously weren’t possible, such as by enabling phones to help diagnose patients or analyze medical images.
Distributed AI: As more and more AI research gets translated into applications, the need for tinier models will increase. The MIT-IBM paper is part of a growing trend to shrink state-of-the-art models to a more manageable size.