
Putting Technology in Its Place

Kentaro Toyama went to India with noble intentions for using technology to improve people’s lives. Now he’s wrestling with why the impact was so small.

Kentaro Toyama calls himself “a recovering technoholic”—someone who once was “addicted to a technological way of solving problems.” Five years in India changed him. After getting his PhD in computer science and working on machine vision technologies at Microsoft, Toyama moved to Bangalore in 2004 to help lead the company’s new research center there. He and his colleagues launched dozens of projects that sought to use computers and Internet connectivity to improve education and reduce poverty. But early successes in pilot projects often couldn’t be replicated; in some schools, computers made things worse. In a book being released this spring, Geek Heresy: Rescuing Social Change from the Cult of Technology, Toyama argues that technologists undermine efforts at social progress by promoting “packaged interventions” at the expense of more difficult reforms. Toyama, who is now an associate professor in the School of Information at the University of Michigan, spoke to MIT Technology Review’s deputy editor, Brian Bergstein.

When you went to India, technological optimism was flourishing there. Bangalore was being described as the next great tech hub.

Yes, absolutely. We [reasoned] that here's this technology sector, which is incredibly successful. Isn't there some way that we can take the sheen of this sector and spread it around not just to people who are well educated and middle class, but also to people who are poorer and who don't have the same kind of educational advantages—basically, to the 80 percent of the country that really is considered poor by any standard? At that time, there were barely even mobile phones. It was mostly Internet-connected PCs. I thought there was some way to use them to support the health-care system, agriculture, or education.

What was your success rate?

I ultimately took stock of 50-odd projects that I had either been directly involved with or supervised. Very few were the kind where we felt, “This is working so well that we should really expand it.” Very often, it was because there were just limits to the human and institutional capacity on the ground that could take advantage of the technology.

For example, in education, one of the most difficult things to overcome is the way in which education is done—everything from how the public school system is managed and administered to how the government interacts with it. In India, we found many instances where teachers were called away by the government. The government feels that they're government employees and can therefore be called upon to help with other government tasks.

Another example is the health-care system. If you go to a typical rural clinic, it’s not the kind of place that anybody from the United States would think of as a decent place to get health care. Bringing along a laptop, connecting it to wireless, and providing Internet so you can do telemedicine is just an incredibly thin cover. It’s a thin, superficial change.

As an example of a project that has made a difference, your book cites Digital Green. It makes and shows videos in which farmers in India share advice about planting techniques or how to handle animals. What makes that successful?

We were very careful that the technology didn't replace the existing agriculture extension system. It merely amplifies whatever system is already there: human beings who have a lot of agricultural expertise, who are willing to talk to farmers, and who have some rapport with them.


We do these sessions in villages where somebody who is in touch with an agriculture extension person will call together the villagers and then do a screening using small projectors. On the one hand, it's basically just a video screening. [But] the mediators are trained to provoke discussion, which is a critical part of the learning process. If you don't do the mediation, it's just like watching TV. Many of the farmers have TVs in their homes. They see agricultural programs, but that information doesn't register; they don't end up implementing it, for various reasons. Whereas when they have discussions together, or when they see farmers who are just like them, they're much more likely to believe the content of the videos and adopt the farming practice.

What do you think of One Laptop per Child, which may be the poster child for the idea of a technologically driven intervention?

There are already several randomized controlled trials of schools with and without One Laptop per Child. Generally, what most of these studies show is that schools with laptops did not see their children gain anything in terms of academic achievement: grades, test scores, attendance, or supposed engagement with the classroom.

That might surprise people who have seen anecdotal success stories.

It’s the anecdotes that really keep the technology sector going in the [economic] development context. It is so easy to get an interesting story if you take some gadget and give it to a child. I have done this myself multiple times. The first thing that you see is kids just overjoyed that they have this new gadget in their hands. It’s a new toy, and they love it. You can’t not take a photograph of a smiling kid holding a laptop.

The reality is, that joy is the same joy that you see when you peek over the shoulder of a kid who has a smartphone in their hands in the developed world, which is to say they’re overjoyed because they’re playing Angry Birds. On the one hand, I do think that a certain amount of educational toys and play is important, but I just don’t think that K through 12 education of any serious kind can be based entirely on that kind of play.

I think it’s perfectly sensible for parents to want a certain amount of exposure to technology for their children, both as a form of explorative play and as a way to get them used to technology that they’ll undoubtedly encounter later in their life. I think the fundamental error people make is that, therefore, we should have the computer be the primary instrument of education for all children. That, to me, is a major leap. It’s not clear to me why people seem to make that leap all the time. I think one of the issues is we tend to think of education as being the content. We overemphasize the importance of content, as opposed to emphasizing the part that’s really difficult in any good education, which is adult-supervised motivation—the motivation of the child to learn something.

Why do many technologists fail to see that technology on its own is so limited in the changes it can bring?

What I see is a society-wide confusion of correlation and cause. We see this incredible success of Silicon Valley and the technology industry overall. On a daily basis, especially those of us who can afford the technology, we see it in our own lives: here's this technology that just seems to be making everything more convenient, everything better, and so on and so forth. So we assume that it's the technology that is directly responsible, when in fact it's a whole bunch of other stuff that already has to be there in the first place. If you're lacking that other stuff to begin with, then the technology by itself doesn't cause all of those benefits.

The tricky thing is that you can be very scientific about these things and still come up with the wrong conclusion. Multiple times in my lab, we'd run trials comparing a control situation with a treatment situation; the treatment situation gets some kind of technology. If you measure some positive benefit in the technology case, your conclusion is that the technology helped. But what made the difference was always the people we worked with: the partners we chose and the people on the ground who interacted with the people we wanted to support. All of those human factors were required for the technology itself to have an impact; whether the technology helped or not was really up to people.
