New Class of Malware Attacks Specific Chips

Computer scientists reveal malware that attacks specific processors rather than the operating system that runs on them.

Computer malware is insidious and dangerous, but there are well-known limits to the kinds of attacks it can be used to mount. One of the most obvious is that the malware has to be targeted at a weakness in a specific operating system.

So there’s no shortage of malware targeting the Windows operating system, for example, but this is easy enough to avoid by using a Mac.

But Anthony Desnos and friends at the École Supérieure d’Informatique, Électronique, Automatique (ESIEA) in Paris say it ought to be possible to make malware much more insidious. Today, they show how to create malware that targets a specific processor rather than the operating system that runs on it. That kind of attack is much harder to protect against.

The first step in such an undertaking is to work out how to identify a processor, a task that is by no means straightforward but not impossible.

One clue comes from the 1994 bug in Intel’s Pentium (P5) chip that caused it to return incorrect results for certain floating point divisions. A simple way to discover whether anybody is using such a chip would be to carry out a calculation that the P5 is known to screw up.
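
To make that concrete, here is a minimal Python sketch of such a check (an illustration, not anything from the paper), using the widely cited constants from the Pentium FDIV bug: dividing 4195835 by 3145727 and multiplying back should recover the original number, but the flawed divider was reported to miss it by about 256.

```python
# A division the flawed 1994 Pentium (P5) was reported to get wrong.
# On a correct FPU the residue below is essentially zero; the flawed
# divider was reported to leave a residue of roughly 256.
x = 4195835.0
y = 3145727.0

residue = x - (x / y) * y
print("FDIV residue:", residue)   # ~0 on a correct divider
```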

Desnos and co point out that all chips have mathematical limitations that are determined by the standards they use for encoding numbers and carrying out floating point arithmetic. Some of these are well known.

For example, many processors use the IEEE 754 standard’s 32-bit single-precision format for storing numbers and carrying out basic mathematical operations. Here, the first bit represents the sign of the number, the next 8 bits represent the exponent and the final 23 bits represent the mantissa.

(One way to represent a number is to write down its digits and then indicate where the decimal point should go. So the number 123.45 can be written as 12345 x 10^-2, where 12345 is the mantissa and -2 is the exponent.)
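
To illustrate that layout, here is a small Python sketch (again, an illustration rather than anything from the paper) that unpacks a number stored in the 32-bit single-precision format into its three fields:

```python
import struct

def decode_float32(x):
    """Split a number, stored as an IEEE 754 single-precision float,
    into its 1-bit sign, 8-bit exponent and 23-bit mantissa fields."""
    bits = struct.unpack(">I", struct.pack(">f", x))[0]
    sign = bits >> 31
    exponent = (bits >> 23) & 0xFF
    mantissa = bits & 0x7FFFFF
    return sign, exponent, mantissa

# The three raw fields of 123.45: the sign bit, the biased exponent
# and the 23 mantissa bits.
print(decode_float32(123.45))
```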

This standard has various known limitations. Consider, for example, the expression:

F(X,Y) = (1682XY^4 + 3X^3 + 29XY^2 - 2X^5 + 832)/107751

When X = 192119201 and Y = 35675640, the exact answer is 1783. But a processor using the IEEE 754 standard will calculate that F(X,Y) = −7.18056 x 10^20, because the enormous terms in the numerator nearly cancel and rounding error swamps the result. A dead giveaway.
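
The discrepancy is easy to reproduce in spirit. The sketch below (Python, not the authors’ code) evaluates the same expression once with exact rational arithmetic and once with ordinary floating point numbers. The exact result is 1783, while the floating point result is wildly off; the precise wrong value depends on the precision and hardware used.

```python
from fractions import Fraction

def f(x, y):
    # The expression from the article, exactly as written.
    return (1682*x*y**4 + 3*x**3 + 29*x*y**2 - 2*x**5 + 832) / 107751

X, Y = 192119201, 35675640

print("exact:", f(Fraction(X), Fraction(Y)))   # 1783
print("float:", f(float(X), float(Y)))         # nowhere near 1783
```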

The problem for Desnos and co is to find a set of floating point calculations like this that can uniquely identify any processor.

And they’ve gone some way toward finding them, using tasks such as calculating sin(10^10 pi) for various numerical approximations of pi. They can’t yet spot specific processors, but they can use this technique to identify families of them. It’s then just a question of running some code that does the damage.
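
The flavour of the technique can be sketched like this (a hedged Python illustration, not the authors’ code): evaluate sin(10^10 pi) with pi truncated to different numbers of digits. The true value is zero, so whatever comes back is rounding noise, and that noise depends on the approximation of pi used and on how the platform evaluates sine for very large arguments.

```python
import math

pi_digits = "3.14159265358979323846264338327950288"

for n in (8, 12, 16):
    # Truncate pi after n decimal digits ("3." plus n digits).
    pi_approx = float(pi_digits[:n + 2])
    # sin(1e10 * pi) should be 0; the residue reflects the precision of
    # pi and the platform's sine implementation for huge arguments.
    print(n, "digits of pi:", math.sin(1e10 * pi_approx))
```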

Desnos and co say this kind of approach would allow much more specific cyberattacks than are possible today. “If such an approach is possible, this would enable far more precise and targeted attacks, at a finer level in a large network of heterogeneous machines but with generic malware,” they say.

That’s a worrying new addition to the armoury of malice. Highly targeted cyberattacks have obvious value, as demonstrated recently by the Stuxnet worm, which was aimed at computer systems used to control industrial machines and apparently targeted at Iran and China.

The only question now is how long till we see processor-dependent malware in the wild.

Ref: arxiv.org/abs/1011.1638: Processor-Dependent Malware… And Codes
