MIT Technology Review


Many of my gamer friends are abuzz about this blog entry written by a young college student about his experiences participating in some lab-based experiments into the effects of violent video games. Not sure what the news value is but it certainly vividly illustrates some of the concerns many of us have about this kind of research.

As an MIT faculty member, I have tremendous respect for the authority which real science commands, but the “media effects” researchers misuse that authority in applying lab techniques to address what are essentially cultural questions. This research reduces complex cultural processes to simple variables which can be tested in the laboratory and in the process strips away most of what it would need to examine if it actually wanted to address the questions it poses. Ultimately, what these researchers put forward is not the data – which often shows minor effects at best – but the interpretation of that data – which makes much larger claims – and those conclusions, qualified in the original research documents, get expanded further at each subsequent step in the reporting process – by journalists, activists, politicians, and finally your Aunt Agatha talking over the back fence to her neighbors.

What’s wrong with this picture? First, let’s start with the fact that unlike real lab research, this lab rat CAN talk back and in fact, seems to know more about the phenomenon being researched than the researchers. There is no effort here to tap what he knows or to ask him what the experience means. He is assumed to be inarticulate and his interpretations are measured only indirectly. His own insights about the experience here are richer than the binary codes they will be given as the researchers mulch down their aggregate data.

Second, the researchers seem to care very little about the most basic distinctions between different types of games or different kinds of representations of violence which would be part of the way gamers think about and respond to the media. “Violence” is assumed to be one simple thing which produces specific and predictable results and not a theme which gets dealt with in many different ways.

Third, the researchers have removed him from any context where he would ever actually be playing games, creating an arbitrary situation which strongly colors his response. Ask yourself whether playing in a laboratory booth is more like playing at home alone or in an arcade and you can see how inadequate it is as a substitute for either experience.

Fourth, even giving the researchers the benefit of the doubt, they introduce other variables into the experiment – such as the potential frustration caused by asking someone to play with a badly tuned controller – which taint the results they get. In the end, the measures being applied aren’t fine-tuned enough to distinguish between aggression stirred up by violence and frustration created by being thrust into a situation over which you have so little control.

Fifth, the research from the very beginning has an agenda – in this case, perhaps, it is already linked with a lawsuit. There is enormous pressure to find negative effects. If they don’t find those effects, the experiment will be regarded as a failure and its results will not be reported. But an experiment which finds no effects in this case isn’t really a failed experiment. It may demonstrate that such media does not have the kinds of effects hypothesized.

Sixth, most often, such methods expose people to kinds of media they would not otherwise consume. Often the subjects have little of the preparation – in terms of knowledge, skill, or emotional readiness – which actual consumers in real-life situations would bring to such content. So, what are we measuring – the impact of such media on casual consumers, or its impact on people who spend a great deal of their lives interacting with games? Is there a difference between what is actually measured here and the way those results will be interpreted once they are released to the public?

Food for thought next time you read a newspaper story reporting on this kind of research finding.
