Biologists have long sought a cheap way to simultaneously detect different types of biological molecules in a sample, such as the several malarial proteins that might be present in a patient’s blood. One approach uses polymer tags with bar code-like lines that glow different colors when receptors on the tags bind to specific molecules. But making such tags on a large scale has been prohibitively expensive, as each extra bar line adds another step to the manufacturing process.
Now a group of MIT researchers has created a microfluidic printing press that can produce such tiny particles in a single step. In addition to biotags, the method can turn out all kinds of shapes, from keys to cylinders to swirls, that could be used to make everything from microelectromechanical machines to optical devices, fabrics, and even the miniature stirring bars and valves used in microfluidics. "This is a beautiful piece of work for continuous synthesis of particles, with great flexibility in the shapes that can be produced," says Howard Stone, a professor of engineering at Harvard University.
The process, developed by an MIT group led by chemical engineer Patrick Doyle, begins with one or several closely spaced, parallel, 100-micrometer-scale streams of liquid. The liquids contain the polymers' precursors, some of which may be bound to proteins that can serve as receptors on a biotag. A flash of ultraviolet light projected through a stencil causes the polymers to solidify in specific shapes. The resulting particles can have several "stripes," each created from a separate stream of fluid.