Don’t throw out your CPUs just yet, but there may be a new way to run your neural networks.

In the regular world of computing—whether you’re running exotic deep-learning algorithms or just using Excel—calculations are usually performed on a processor while data is passed back and forth to memory. That works perfectly well, but some researchers have argued that performing calculations in the memory itself would save the time and energy usually spent moving data around.

And that’s exactly the concept that a team from IBM Research in Zurich has now applied to some AI algorithms. The team has used a grid of one million memory devices, pictured above, which are all based on a phase-change material called germanium antimony telluride. The alloy’s special trick is that, when it’s hit by an electrical pulse, its state can be changed—from amorphous, like glass, to crystalline, like metal, or vice versa.
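To make that mechanism a little more concrete, here is a minimal software sketch of a single phase-change cell. It is only a toy model under invented assumptions: the state variable, pulse parameters, and conductance values are made up for illustration rather than taken from IBM's devices. It shows the basic idea that gentle "SET" pulses gradually crystallize the material and raise its conductance, while a strong "RESET" pulse returns it to the amorphous state.

```python
# Toy software model of a single phase-change memory cell.
# Illustrative sketch only: the state variable, pulse parameters, and
# conductance mapping are invented here, not IBM's device physics.

class PhaseChangeCell:
    def __init__(self):
        # 0.0 = fully amorphous (glass-like, low conductance)
        # 1.0 = fully crystalline (metal-like, high conductance)
        self.crystal_fraction = 0.0

    def apply_set_pulse(self, amplitude, duration):
        """A moderate 'SET' pulse nudges the cell toward crystalline.

        Larger or longer pulses crystallize more material, which is what
        lets one cell hold many levels instead of just 0 and 1.
        """
        self.crystal_fraction = min(1.0, self.crystal_fraction + amplitude * duration)

    def apply_reset_pulse(self):
        """A strong, short 'RESET' pulse melts and quenches the material,
        returning it to the amorphous state."""
        self.crystal_fraction = 0.0

    def conductance(self):
        # Read-out: conductance grows with the crystalline fraction (arbitrary units).
        return 1e-6 + self.crystal_fraction * 1e-4


cell = PhaseChangeCell()
for _ in range(5):
    cell.apply_set_pulse(amplitude=0.05, duration=1.0)  # partial crystallization
print(cell.conductance())  # an intermediate level between "0" and "1"
```

Reading the conductance is how a stored level is retrieved and applying pulses is how it is updated, so storage and a simple form of computation can happen in the same place.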

By varying the size and duration of the electrical pulses, it’s possible to control how much of the material crystallizes. That, in turn, lets each device represent a range of states, not just the usual 0s and 1s, and those states can be used to perform calculations rather than merely store data. By exploiting that quirk across enough memory devices, the IBM researchers have shown that they can perform machine-learning tasks such as finding correlations in unknown data streams. The work is published in Nature Communications.
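As a rough illustration of that correlation-finding idea, the sketch below mimics the logic in plain Python. The stream counts, firing probabilities, and pulse rule are invented for this example, and everything the real chip does in analog memory is done here with an ordinary list of accumulators: streams that tend to fire together push their "cells" further toward crystallization, so ranking the accumulated state flags the correlated ones.

```python
# Rough software analogy of correlation detection with accumulating memory cells.
# Each data stream gets one accumulator (standing in for a phase-change cell);
# whenever a stream fires at the same moment as many others, its cell is nudged
# a little further. Correlated streams end up with visibly higher state.
import random

random.seed(0)
N_STREAMS, N_CORRELATED, T = 20, 5, 2000

def step():
    """One time step of binary events: the first N_CORRELATED streams
    mostly fire together; the rest fire independently."""
    shared = 1 if random.random() < 0.1 else 0
    return [shared if i < N_CORRELATED else (1 if random.random() < 0.1 else 0)
            for i in range(N_STREAMS)]

state = [0.0] * N_STREAMS  # stands in for each cell's crystalline fraction

for _ in range(T):
    events = step()
    momentum = sum(events) / N_STREAMS  # how collectively "active" this step is
    for i, fired in enumerate(events):
        if fired:
            # pulse strength grows with collective activity, so streams that
            # fire together reinforce each other's cells
            state[i] += momentum

# Cells of correlated streams accumulate noticeably more "crystallization".
ranked = sorted(range(N_STREAMS), key=lambda i: state[i], reverse=True)
print("streams flagged as correlated:", sorted(ranked[:N_CORRELATED]))
```

In the experiment itself, each stream drives a physical memory device, so that accumulation happens in the material rather than in software.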

This is, admittedly, a small, niche, lab-based study. But the team reckons that, scaled up, the approach could yield computing systems that perform some AI tasks 200 times faster than conventional devices. Even if it delivers just a fraction of that boost, in-memory AI may have a future.