Artificial intelligence

Silicon Photonic Neural Network Unveiled

Neural networks using light could lead to superfast computing.

Neural networks are taking the world of computing by storm. Researchers have used them to create machines that are learning a huge range of skills that had previously been the unique preserve of humans—object recognition, face recognition, natural language processing, machine translation. All these skills, and more, are now becoming routine for machines.

So there is great interest in creating more capable neural networks that can push the boundaries of artificial intelligence even further. The focus of this work is in creating circuits that operate more like neurons, so-called neuromorphic chips. But how to make these circuits significantly faster?

Today, we get an answer of sorts thanks to the work of Alexander Tait and pals at Princeton University in New Jersey. These guys have built an integrated silicon photonic neuromorphic chip and show that it computes at ultrafast speeds.

Optical computing has long been the great dream of computer science. Photons have significantly more bandwidth than electrons and so can process more data more quickly. But the advantages of optical data processing systems have never outweighed the additional cost of making them, and so they have never been widely adopted.

That has started to change in some areas of computing, such as analog signal processing, which requires the kind of ultrafast data processing that only photonic chips can provide.

Now neural networks are opening up a new opportunity for photonics. “Photonic neural networks leveraging silicon photonic platforms could access new regimes of ultrafast information processing for radio, control, and scientific computing,” say Tait and co.

At the heart of the challenge is producing an optical device in which each node has the same response characteristics as a neuron. The nodes take the form of tiny circular waveguides carved into a silicon substrate, in which light can circulate. When released, this light modulates the output of a laser working at threshold, a regime in which small changes in the incoming light have a dramatic effect on the laser’s output.

Crucially, each node in the system works with a specific wavelength of light—a technique known as wavelength division multiplexing. The light from all the nodes can be summed by total power detection before being fed into the laser. And the laser output is fed back into the nodes to create a feedback circuit with a non-linear character.
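In rough terms, summation by total power detection means a single photodetector adds up the weighted optical power arriving on every wavelength channel at once. A minimal numerical sketch of that idea, with entirely illustrative values (the paper's actual scheme uses tunable filter banks, and signed weights require balanced photodetection):

```python
import numpy as np

# Each node emits on its own wavelength; a filter bank attenuates each
# channel by a weight, and one photodetector sums the total optical power.
signals = np.array([0.8, 0.3, 0.5])   # optical power from each node (arbitrary units)
weights = np.array([0.9, 0.4, 0.2])   # per-wavelength filter transmissions

# Total power detection collapses all channels into one electrical signal:
detected = np.sum(weights * signals)
print(detected)  # weighted sum of the three channels
```

The point is that the weighted addition at the core of a neural network layer happens in a single physical detection step, rather than as a sequence of multiply-accumulate operations.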

An important question is just how closely this non-linearity mimics neural behavior. Tait and co measure the output and show that it is mathematically equivalent to a device known as a continuous-time recurrent neural network. “This result suggests that programming tools for CTRNNs could be applied to larger silicon photonic neural networks,” they say.
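A continuous-time recurrent neural network evolves each node's state according to a differential equation of the form τ dx/dt = −x + W·σ(x) + u, where σ is a saturating nonlinearity. A minimal simulation sketch, with illustrative weights and parameters not taken from the paper (only the 49-node count comes from the demonstration described below):

```python
import numpy as np

def ctrnn_step(x, W, u, tau=1.0, dt=0.01):
    """One Euler-integration step of tau * dx/dt = -x + W @ sigmoid(x) + u."""
    sigma = 1.0 / (1.0 + np.exp(-x))   # sigmoid nonlinearity at each node
    dx = (-x + W @ sigma + u) / tau
    return x + dt * dx

rng = np.random.default_rng(0)
n = 49                                  # matches the 49-node photonic demo
W = rng.normal(scale=0.5, size=(n, n))  # recurrent weight matrix (illustrative)
u = rng.normal(size=n)                  # constant external input
x = np.zeros(n)                         # initial node states

for _ in range(1000):                   # integrate for 10 time units
    x = ctrnn_step(x, W, u)
```

Because the photonic device is mathematically equivalent to these dynamics, weight matrices designed with standard CTRNN tools could, in principle, be mapped onto the optical hardware.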

That’s an important result because it means the device that Tait and co have made can immediately exploit the vast range of programming nous that has been gathered for these kinds of neural networks.

They go on to demonstrate how this can be done using a network consisting of 49 photonic nodes. They use this photonic neural network to solve the mathematical problem of emulating a certain kind of differential equation and compare it to an ordinary central processing unit.

The results show just how fast photonic neural nets can be. “The effective hardware acceleration factor of the photonic neural network is estimated to be 1,960 × in this task,” say Tait and co. That’s a speed up of three orders of magnitude.

That opens the doors to an entirely new industry that could bring optical computing into the mainstream. “Silicon photonic neural networks could represent first forays into a broader class of silicon photonic systems for scalable information processing,” say Tait and co.

And others are working in this area too. Earlier this year, Yichen Shen at MIT and a few pals proposed the architecture behind a fully optical neural network and demonstrated elements of it using a programmable nanophotonic processor.

Of course much depends on how well the first generation of electronic neuromorphic chips perform. Photonic neural nets will have to offer significant advantages to be widely adopted and will therefore require much more detailed characterization. Clearly, there are interesting times ahead for photonics.

Ref: arxiv.org/abs/1611.02272: Neuromorphic Silicon Photonics

This story was updated on November 22 to include additional work done by researchers at MIT.
