
Revealing the Source of Ritalin’s Brain Boosting Benefits

The ADHD drug improves attention by enhancing neural plasticity.
March 8, 2010

New research in animals sheds light on how Ritalin, the stimulant drug prescribed to millions of children each year in the United States for attention deficit hyperactivity disorder (ADHD), works. The drug appears to boost attention and speed learning by increasing the activity of the chemical messenger dopamine, according to a study in Nature Neuroscience.

Rats given Ritalin more quickly learned that a combination of signals (a flash of light and a sound) meant they could get a sugar-water reward. But when the rats were also given a drug that blocked one type of dopamine receptor, the effect was lost. Treated animals also focused more intently on the task at hand, engaging in less unrelated behavior. Another drug, designed to block a second type of dopamine receptor, eliminated Ritalin's ability to increase focus.

Researchers also found that drug-treated animals had enhanced neural plasticity, or changes in the strength of the connections between nerve cells. The ability of our neural circuits to change strength in response to new information underlies our ability to learn.
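To see why stronger plasticity can translate into faster learning, a minimal sketch helps: in a textbook Rescorla-Wagner-style update, a cue-reward association strengthens by a fraction (the learning rate) of the prediction error on each trial. The model, the learning-rate values, and the threshold below are illustrative assumptions, not the study's method; they simply show that a larger update step, a crude stand-in for enhanced plasticity, reaches the same association in fewer trials.

```python
# Illustrative Rescorla-Wagner-style simulation (not the study's model):
# associative strength V moves toward the reward value by a fraction
# (the learning rate, alpha) of the prediction error on each trial.

def trials_to_learn(alpha, reward=1.0, threshold=0.9, max_trials=500):
    """Count trials until associative strength V exceeds the threshold."""
    v = 0.0
    for trial in range(1, max_trials + 1):
        v += alpha * (reward - v)  # update by a fraction of the prediction error
        if v >= threshold:
            return trial
    return max_trials

# A higher learning rate (loosely, more plastic connections) learns faster.
for alpha in (0.05, 0.10, 0.20):
    print(f"learning rate {alpha:.2f}: association learned in {trials_to_learn(alpha)} trials")
```

Running this prints roughly 45, 22, and 11 trials for the three hypothetical learning rates, illustrating the qualitative point that faster-changing connections support faster cue-reward learning.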

“Since we now know that Ritalin improves behavior through two specific types of neurotransmitter receptors, the finding could help in the development of better targeted drugs, with fewer side effects, to increase focus and learning,” said Antonello Bonci, MD, principal investigator at the Ernest Gallo Clinic and Research Center and professor of neurology at UCSF, in a statement from the university. The Gallo Center is affiliated with the UCSF Department of Neurology.

While Ritalin is mostly prescribed for children with ADHD, it also boosts cognitive function in healthy people. Studies suggest that a growing number of healthy adults and teens are taking Ritalin and similar drugs as an aid to studying or work performance.
