A spy may have used an AI-generated face to deceive and connect with targets on social media....
The news: A LinkedIn profile under the name Katie Jones has been identified by the AP as a likely front for AI-enabled espionage. The persona is networked with several high-profile figures in Washington, including a deputy assistant secretary of state, a senior aide to a senator, and an economist being considered for a seat on the Federal Reserve. But what’s most fascinating is the profile image: it bears all the hallmarks of a deepfake, according to several experts who examined it.
Easy target: LinkedIn has long been a magnet for spies because it gives easy access to people in powerful circles. Agents will routinely send out tens of thousands of connection requests, pretending to be different people. Only last month, a retired CIA officer was sentenced to 20 years in prison for leaking classified information to a Chinese agent who made contact by posing as a recruiter on the platform.
Weak defense: So why did “Katie Jones” use an AI-generated face? Because a synthetic photo has no original circulating online, it defeats an important line of defense for detecting impostors: running a reverse image search on the profile picture. It’s yet another way that deepfakes are eroding our trust in truth as they rapidly advance into the mainstream.
A group called Xenotime, which began by targeting oil and gas facilities in the Middle East, now has electrical utilities in the US and Asia in its sights.
The news: Industrial cybersecurity firm Dragos says it has uncovered evidence that Xenotime has been laying the early groundwork for potential attacks on power companies in the US and elsewhere. The hackers have been testing password defenses and trying to steal login credentials from employees since the end of 2018.
Safety threat: Xenotime is the group behind Triton—code that can disable safety systems that are the last line of defense against serious industrial accidents. The malware was discovered in a Saudi petrochemical plant in 2017 before it could cause any damage. Cybersecurity experts say it can be used to attack safety controls in everything from dams to nuclear power plants.
The good news: Dragos believes the probing of US and Asian targets is still at a very early stage, and the firm hasn’t found any sign—so far—that the Xenotime group has been able to penetrate systems and introduce the Triton malware.
The not-so-good news: The hackers, who some security experts suspect may be linked to the Russian government, are patient and persistent. They spent more than a year worming their way into the Saudi plant’s systems and putting the Triton malware in place.
Researchers at Facebook have created a number of extremely realistic virtual homes and offices so that their AI algorithms can learn how the real world works....
Real deal: A team at Facebook Reality Labs created 18 “sample spaces” through a program known as Replica. The idea is for AI agents to learn about real-world objects through exploration and practice. In theory, this could make chatbots and robots smarter, and it could enable powerful new ways of building and interacting with virtual reality. But the virtual spaces need to be extremely lifelike for the learning to transfer to the real world.
Mirror world: The environments were created by mapping real offices and homes using a high-definition 3D camera rig. The researchers also developed new software to deal with reflections, which can easily confuse such scanning systems. Whereas other simulation engines run at around 50 to 100 frames per second, Facebook says AI Habitat runs at over 10,000 frames per second, which makes it possible to test AI agents rapidly.
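A back-of-the-envelope calculation shows why that frame rate matters for training. The 100 fps and 10,000 fps figures come from the article; the one-billion-frame training budget is an illustrative assumption, not from the source:

```python
FRAMES = 1_000_000_000  # hypothetical training budget: one billion simulated frames

def wall_clock_hours(fps: int, frames: int = FRAMES) -> float:
    """Hours of wall-clock time needed to simulate `frames` at `fps`."""
    return frames / fps / 3600

conventional = wall_clock_hours(100)   # typical simulation engine (~50-100 fps)
habitat = wall_clock_hours(10_000)     # AI Habitat (over 10,000 fps, per Facebook)

print(f"conventional engine: {conventional:,.0f} h (~{conventional / 24:.0f} days)")
print(f"AI Habitat:          {habitat:,.0f} h (~{habitat / 24:.1f} days)")
```

Under these assumptions, the same workload drops from months of simulation to roughly a day, which is what makes rapid iteration on AI agents practical.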
Home alone: These virtual spaces can be loaded into a new environment called AI Habitat, inside which AI programs can explore and learn. The algorithms will first be trained to recognize objects in different settings. But over time they should build some common-sense understanding about the conventions of the physical world—like the fact that tables typically support other objects.
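The explore-and-learn loop described above can be sketched in miniature. Everything below — the grid “home,” the action names, and the observations — is a hypothetical toy illustration, not the actual AI Habitat API:

```python
import random

class ToyScene:
    """Minimal 3x3 grid 'home' where some cells contain a labeled object."""
    def __init__(self):
        self.objects = {(0, 1): "table", (2, 2): "chair", (1, 0): "lamp"}
        self.pos = (0, 0)

    def step(self, action):
        """Move one cell (clamped to the grid) and observe any object there."""
        dx, dy = {"north": (0, 1), "south": (0, -1),
                  "east": (1, 0), "west": (-1, 0)}[action]
        x, y = self.pos
        self.pos = (min(2, max(0, x + dx)), min(2, max(0, y + dy)))
        return self.objects.get(self.pos)  # None if the cell is empty

def explore(env, steps=100, seed=0):
    """Random exploration; tally how often each object type is observed."""
    rng = random.Random(seed)
    seen = {}
    for _ in range(steps):
        obs = env.step(rng.choice(["north", "south", "east", "west"]))
        if obs:
            seen[obs] = seen.get(obs, 0) + 1
    return seen

print(explore(ToyScene()))
```

In a real system the observations would be rendered camera frames rather than labels, and the tallies would be replaced by a learned recognition model — but the structure is the same: act, observe, and accumulate experience of which objects appear where.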
Uncommon sense: A lack of common sense is a glaring problem for today’s AI systems. Unlike a person, a chatbot or robot cannot rely on an understanding of the world—things like physics, logic, and social norms—to figure out the intent of an ambiguous command. The complexity and ambiguity of language make this situation all too common.