During a distributed denial-of-service (DDoS) attack, an attacker overwhelms a target's servers with traffic until they collapse. The traditional way of fending off such an attack is to stockpile bandwidth so the server under attack always has more capacity than the attacker can flood. But as hackers become capable of attacks with bigger and bigger data volumes, this is no longer feasible.
Since the target of DDoS attacks is a website’s IP address, Hanqing Wu, the chief security scientist at Alibaba Cloud, devised a defense mechanism through which one Web address can be translated into thousands of IP addresses. This “elastic security network” can quickly divert all benign traffic to a new IP address in the face of a DDoS attack. And by eliminating the need to pile up bandwidth, this system would greatly reduce the cost of keeping the Internet safe.
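The details of Alibaba Cloud's system are not public, but the core idea, one name mapping onto a large rotating pool of addresses, can be sketched in a few lines of Python. The class, method names, and addresses below are all hypothetical (the IPs come from a documentation-reserved range):

```python
import random

# Toy sketch of an "elastic security network": one domain maps onto a large
# pool of IP addresses, and benign traffic is shifted to a fresh address
# when the current one comes under attack.
class ElasticResolver:
    def __init__(self, pool):
        self.pool = list(pool)
        self.active = self.pool[0]

    def resolve(self, domain):
        # Return the address currently serving the domain.
        return self.active

    def report_attack(self):
        # Retire the attacked address and divert benign traffic to a new one.
        self.pool.remove(self.active)
        self.active = random.choice(self.pool)
        return self.active

resolver = ElasticResolver([f"203.0.113.{i}" for i in range(1, 255)])
old = resolver.resolve("example.com")
new = resolver.report_attack()
print(old, "->", new)
```

The attacker keeps flooding a dead address while legitimate users follow the name to the new one, which is what removes the need to out-provision the attack with raw bandwidth.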
Traditional filmmaking techniques often don’t work in virtual reality. So for the past few years, first as the principal filmmaker for virtual reality at Google and now as an independent filmmaker, Jessica Brillhart has been defining what will.
Brillhart recognized early on that in VR, the director’s vision is no longer paramount. A viewer won’t always focus where a filmmaker expects. Brillhart embraces these “acts of visitor rebellion” and says they push her to be “bold and audacious in ways I would never have been otherwise.” She adds: “I love how a frame is no longer the central concept in my work. I can build worlds.”
Joshua Browder is determined to upend the $200 billion legal services market with, of all things, chatbots. He thinks chatbots can automate many of the tasks that lawyers have no business charging a high hourly rate to complete.
“It should never be a hassle to engage in a legal process, and it should never be a question of who can afford to pay,” says Browder. “It should be a question of what’s the right outcome, of getting justice.”
Browder started out small in 2015, creating a simple tool called DoNotPay to help people contest parking tickets. He came up with the idea after successfully contesting many of his own tickets, and friends urged him to create an app so they could benefit from his approach.
Browder’s basic “robot lawyer” asks for a few bits of information—which state the ticket was issued in, and on what date—and uses it to generate a form letter asking that the charges be dropped. So far, 375,000 people have avoided about $9.7 million in penalties, he says.
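The mechanics are simple enough to sketch. The template and fields below are invented for illustration; DoNotPay's actual letters are not public:

```python
from string import Template

# Illustrative pattern only: a form letter populated from a few
# structured answers, as a ticket-contesting bot might generate.
LETTER = Template(
    "To the $state parking adjudication office:\n\n"
    "I am writing to contest the parking ticket issued to me on $date.\n"
    "$grounds\n\n"
    "I respectfully request that the charges be dropped.\n"
)

def contest_letter(state, date, grounds):
    return LETTER.substitute(state=state, date=date, grounds=grounds)

print(contest_letter("California", "March 4, 2017",
                     "The signage at the location was obscured from view."))
```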
In early July, DoNotPay expanded its portfolio to include 1,000 other relatively discrete legal tasks, such as lodging a workplace discrimination complaint or canceling an online marketing trial. A few days later, it introduced open-source tools that others—including lawyers with no coding experience—could use to create their own chatbots. Warren Agin, an adjunct law professor at Boston College, created one that people who have declared bankruptcy can use to fend off creditors. “Debtors have a lot of legal tools available to them, but they don’t know it,” he says.
Browder has more sweeping plans. He wants to automate, or at least simplify, famously painful legal processes such as applying for political asylum or getting a divorce.
But huge challenges remain. Browder is likely to run into obstacles laid down by lawyers intent on maximizing their billable hours, and by consumers wary of relying too heavily on algorithms rather than flesh-and-blood lawyers.
Five years ago, when Phillipa Gill began a research fellowship at the University of Toronto’s Citizen Lab, she was surprised to find that there was no real accepted approach for empirically measuring censorship. So Gill, now an assistant professor of computer science at the University of Massachusetts, Amherst, built a set of new measurement tools to detect and quantify such practices. One technique automatically detects so-called block pages, which tell a user if a site has been blocked by a government or some other entity. In 2015, Gill and colleagues used her methods to confirm that a state-owned ISP in Yemen was using a traffic-filtering device to block political content during an armed conflict.
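Block-page detectors in this literature typically combine fingerprint matching with response-size heuristics. The fingerprints and threshold below are illustrative, not Gill's actual values:

```python
# Two common heuristics for spotting block pages: match known fingerprint
# strings, and flag responses far smaller than the page normally served.
BLOCK_FINGERPRINTS = (
    "this site has been blocked",
    "access to this website is restricted",
)

def looks_like_block_page(body, expected_length):
    text = body.lower()
    if any(fp in text for fp in BLOCK_FINGERPRINTS):
        return True
    # A response much smaller than the uncensored page is suspicious.
    return len(body) < 0.3 * expected_length

print(looks_like_block_page("<html>This site has been blocked.</html>", 48000))
```

In practice the size test is what makes such tools scale: fingerprints catch known censorship products, while anomalously small responses surface new or unlabeled block pages for manual review.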
Problem: Complex microprocessors, like those at the heart of autonomous driving and artificial intelligence, can overheat and shut down. When that happens, the fault usually lies with an internal component on the scale of nanometers. But for decades, chip designers had no way to measure temperatures down to the scale of such minuscule parts.
Solution: Fabian Menges, a researcher at IBM Research in Zurich, Switzerland, has invented a scanning probe method that measures changes to thermal resistance and variations in the rate at which heat flows through a surface. From this he can determine the temperature of structures smaller than 10 nanometers. This will let chipmakers come up with designs that are better at dissipating heat.
Volodymyr Mnih, a research scientist at DeepMind, has created the first system to demonstrate human-level performance in almost 50 Atari 2600 video games, including Pong and Space Invaders. Mnih's system was the first to combine the playful trial and error of reinforcement learning with the rigor of deep learning, which mirrors the way the human brain learns by example. His software learned to play the games much as a human would, experimenting freely while using the game score as the measurement by which to hone its technique for each game.
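The tabular core of that recipe, Q-learning driven by the game score, fits in a few lines; Mnih's contribution was replacing the lookup table with a deep neural network reading raw pixels. The toy "game" below is invented for illustration:

```python
import random

random.seed(1)
# Toy game: walk along positions 0..4; reaching position 4 scores a point.
ACTIONS = (-1, +1)
Q = {(s, a): 0.0 for s in range(5) for a in ACTIONS}
alpha, gamma, epsilon = 0.5, 0.9, 0.2

def step(state, action):
    nxt = max(0, min(4, state + action))
    return nxt, (1.0 if nxt == 4 else 0.0)   # reward is the "game score"

for episode in range(2000):                  # learn by trial and error
    s = 0
    while s != 4:
        if random.random() < epsilon:        # explore a random move
            a = random.choice(ACTIONS)
        else:                                # exploit current estimates
            a = max(ACTIONS, key=lambda act: Q[(s, act)])
        nxt, r = step(s, a)
        best_next = max(Q[(nxt, b)] for b in ACTIONS)
        Q[(s, a)] += alpha * (r + gamma * best_next - Q[(s, a)])
        s = nxt

# The greedy policy now moves toward the scoring state from everywhere.
print([max(ACTIONS, key=lambda act: Q[(s, act)]) for s in range(4)])
```

No rules of the game are ever supplied; the score alone shapes the value estimates, which is exactly the property that let the same algorithm span dozens of different Atari titles.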
Most driverless cars use laser sensors, or lidar, to map surroundings in 3-D and spot obstacles. But some cheap new sensors may not be accurate enough for high-speed use. “They’re more suited to a Roomba,” says Austin Russell, who dropped out of Stanford and set up his own lidar company, Luminar. “My biggest fear is that people will prematurely deploy autonomous cars that are unsafe.”
Luminar’s device uses longer-wavelength light than other sensors, allowing it to spot dark objects twice as far out. At 70 miles per hour, that’s three extra seconds of warning.
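That figure is easy to sanity-check. Only the factor-of-two range improvement comes from the text; the 100-meter baseline for dark, low-reflectivity objects is an assumed round number:

```python
# Back-of-envelope check of the "three extra seconds" claim.
MPH_TO_MS = 0.44704
speed = 70 * MPH_TO_MS                   # about 31.3 m/s
baseline_range = 100.0                   # meters (assumed)
extended_range = 2 * baseline_range      # "twice as far out"
extra_seconds = (extended_range - baseline_range) / speed
print(f"{extra_seconds:.1f} s of extra warning")   # about 3.2 s
```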
Safety never used to be much of a concern with machine-learning systems. Any goof made in image labeling or speech recognition might be annoying, but it wouldn’t put anybody’s life at risk. But autonomous cars, drones, and manufacturing robots have raised the stakes.
Angela Schoellig, who leads the Dynamic Systems Lab at the University of Toronto, has developed learning algorithms that allow robots to learn together and from each other in order to ensure that, for example, a flying robot never crashes into a wall while navigating an unknown place, or that a self-driving vehicle never leaves its lane when driving in a new city. Her work has demonstrably extended the capabilities of today’s robots, enabling self-flying and self-driving vehicles to fly or drive along a predefined path despite uncertainties such as wind, changing payloads, or unknown road conditions.
As a PhD student at the Swiss Federal Institute of Technology in Zurich, Schoellig worked with others to develop the Flying Machine Arena, a 10-by-10-by-10-meter space for training drones to fly together in an enclosed area. In 2010, she created a performance in which a fleet of UAVs flew synchronously to music. The “dancing quadrocopter” project, as it became known, used algorithms that allowed the drones to adapt their movements to match the music’s tempo and character and coordinate to avoid collision, without the need for researchers to manually control their flight paths. Her setup decoupled two essential, usually intertwined components of autonomous systems—perception and action—by placing, at the center of the space, a high-precision overhead motion-capture system that can precisely locate multiple objects at rates exceeding 200 frames per second. This external system enabled the team to concentrate resources on the vehicle-control algorithms.
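One staple of this kind of repeated-trajectory work is iterative learning control: fly the route, measure the tracking error, and correct the next trial's input. A toy one-dimensional version, with the plant, gains, and disturbance all invented:

```python
# Toy iterative learning control (ILC): over repeated trials, the input is
# corrected by the observed tracking error, so a repeatable disturbance
# (say, a steady wind bias) is gradually learned away.
reference = [0.0, 1.0, 2.0, 3.0]     # desired position at each time step
disturbance = 0.4                    # unknown repeatable bias
u = [0.0] * len(reference)           # feedforward input, refined per trial
gain = 0.8                           # learning gain, 0 < gain <= 1

for trial in range(20):
    output = [ui + disturbance for ui in u]            # trivial plant model
    error = [r - y for r, y in zip(reference, output)]
    u = [ui + gain * e for ui, e in zip(u, error)]     # ILC update

final_error = max(abs(r - (ui + disturbance)) for r, ui in zip(reference, u))
print(f"worst tracking error after 20 trials: {final_error:.2e}")
```

Each trial shrinks the error by a factor of (1 - gain), which is why a drone can track a choreographed path accurately after a handful of rehearsals even under a disturbance it was never told about.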
A sizable percentage of hospital patients end up with an infection they didn’t have when they arrived.
Among the most lethal of these is Clostridium difficile. The bacterium, which spreads easily in hospitals and other health-care facilities, was the source of almost half a million infections among patients in the United States in a single year, according to a 2015 report by the Centers for Disease Control and Prevention. Fifteen thousand deaths were directly attributable to the bug.
Jenna Wiens, an assistant professor of computer science and engineering at the University of Michigan, thinks hospitals could learn to prevent many infections and deaths by taking advantage of the vast amounts of data they already collect about their patients.
“I think to really get all of the value we can out of the data we are collecting, it’s necessary to be taking a machine-learning and a data-mining approach,” says Wiens.
Wiens has developed computational models that use algorithms to search through the data contained in a hospital’s electronic health records system, including patients’ medication prescriptions, their lab results, and the records of procedures that they’ve undergone. The models then tease out the specific risk factors for C. difficile at that hospital.
“A traditional approach would start with a small number of variables that we believe are risk factors and make a model based on those risk factors. Our approach essentially throws everything in that’s available,” Wiens says. It can readily be adapted to different types of data.
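The flavor of that approach, regularization taming a model fed every available indicator, can be sketched on synthetic data. The feature counts, rates, and coefficients below are all made up, and this is a generic regularized model, not Wiens's actual one:

```python
import math, random

# Synthetic sketch of "throw everything in": a regularized logistic
# regression over many binary EHR-style indicators (medications, labs,
# procedures), only a few of which actually carry risk.
random.seed(0)
N_FEATURES = 100                 # stand-in for thousands of indicators
TRUE_RISK = {3, 17, 42}          # truly predictive features, hidden from model

def make_patient():
    x = [1 if random.random() < 0.1 else 0 for _ in range(N_FEATURES)]
    logit = -2.0 + 2.5 * sum(x[i] for i in TRUE_RISK)
    y = 1 if random.random() < 1 / (1 + math.exp(-logit)) else 0
    return x, y

data = [make_patient() for _ in range(1000)]
w, b = [0.0] * N_FEATURES, 0.0
lr, l2 = 0.1, 0.001              # learning rate and L2 penalty

for epoch in range(20):          # plain stochastic gradient descent
    for x, y in data:
        p = 1 / (1 + math.exp(-(b + sum(wi * xi for wi, xi in zip(w, x)))))
        g = p - y
        b -= lr * g
        w = [wi - lr * (g * xi + l2 * wi) for wi, xi in zip(w, x)]

# The weights on the truly predictive features stand out from the rest.
risk_avg = sum(w[i] for i in TRUE_RISK) / len(TRUE_RISK)
print(f"avg weight on risky features: {risk_avg:.2f}")
```

The penalty term shrinks the weights of irrelevant variables toward zero, so the model can safely ingest everything the records system holds and let the data decide which indicators matter at a given hospital.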
Aside from using this information to treat patients earlier or prevent infections altogether, Wiens says, her model could be used to help researchers carry out clinical trials for new treatments, like novel antibiotics. Such studies have been difficult to do in the past for hospital-acquired infections like C. difficile—the infections come on fast so there’s little time to enroll a patient in a trial. But by using Wiens’s model, researchers could identify patients most vulnerable to infections and study the proposed intervention based on that risk.
At a time when health-care costs are rising exponentially, it’s hard to imagine hospitals wanting to spend more money on new machine-learning approaches. But Wiens is hopeful that hospitals will see the value in hiring data scientists to do what she’s doing.
“I think there is a bigger cost to not using the data,” she says. “Patients are dying when they seek medical care and they acquire one of these infections. If we can prevent those, the savings are priceless.”