
There Is No Digital Divide

A concept that animates hundreds of millions in Federal spending needs to be retired.

We all know poor people are on the wrong side of an uncrossable technological chasm known as the “digital divide.” Their lack of iPads and data plans and broadband is just one more way they’re doomed to stay poor right up until they become the shock troops of the zombie apocalypse, am I right?

Indeed, a recent New York Times piece, “Wasting Time Is New Divide in Digital Era” (or, as Gawker put it, “Poor People Are Wasting Time on the Internet!”) asserts that while all kids are spending more time with media, those with lower socio-economic status are spending even more of it, and on activities like Facebook that aren’t exactly conducive to learning. In other words: even when you give poor people access to technology, they don’t know what to do with it! Might as well give a paleolithic tribe access to a chip fab, pffft.

Jessie Daniels, Associate Professor of urban public health at Hunter College and CUNY and author of a forthcoming book on Internet propaganda, tweeted her displeasure at the piece. (There’s even a Storify of all her comments on it.)

Given her background in the subject, I asked Daniels to expand on her concerns about how the Times piece on the “new digital divide” missed the point. The result is an intriguing take on how our cultural biases shape the way we look at access to technology, one with implications for the hundreds of millions the Federal government plans to spend on closing the “digital divide.”

1. Many technology writers (myself included) take for granted that there is a “digital divide.” In your tweet, were you saying that there isn’t a “digital divide,” or that the framing of it is harmful? (Or both?)

Daniels: “Right, I think we’ve all sort of accepted the “digital divide” framework, but there are some real problems with that. First of all, saying there is a “digital divide” presumes a shared understanding of that term, and there isn’t one. The original NTIA report from 1998 defined the “digital divide” in terms of who had a desktop computer with (dial-up) Internet access. Since that time, those technologies have faded, yet the terminology has persisted. What many people have done is to talk about “multiple divides” or, as the New York Times did today, “new divides.” But I find this framing problematic.

“I would argue, and lots of others have too, that the framing of “digital divide” treats lots of complex ideas about access to and use of Internet technologies in a simplistic, “either/or” kind of way. Following that 1998 NTIA report I mentioned, there was a lot of research and popular-press coverage that talked about “technology haves and have-nots.” That’s far too simplistic for adequately understanding what’s happening with technology access and use. And, it leads people - both researchers and journalists - to start asking the wrong kinds of questions, such as: “what’s wrong with the technology have-nots?” And, “why can’t the technology have-nots behave more like the technology haves?” Given that in the original research the middle and upper classes, whites, and men were more likely to have access to technology, those sorts of questions about the characteristics of the “have-nots” just point us to old ways of thinking about class, about race, and about gender.”

2. It sounds like you’re saying that, because we’ve framed the internet / technology habits of (to simplify) white men as “normal” or desirable, we’re in some sense missing the point. While that makes us look like dopes for making such assumptions, does that suggest anything hopeful about the “digital divide” or whatever we should call it?

Daniels: “Yes, that’s definitely part of it. Once again, affluent white men (to vastly simplify) and their habits of access and use end up being the standard against which everyone else is measured, so that when there’s any difference from that pattern, it ends up getting read as “bad” or somehow pathological.

“The framework of “digital divide” also encourages us to assume that certain categories of people (everyone other than white males) are somehow less technologically adept.  

“So, for example, some of the work I do is with homeless LGBT youth, most of whom are Black or Latina/o. These young people are struggling with some big life challenges, and they are - like other people their age - completely wired. My research finds that Black and Latina/o LGBT youth who are homeless - in other words, the very people who should be on the “other side” of the so-called “digital divide” - are, in fact, quite adept with technology, and most have smartphones. They use this technology to survive - to find work and social services, and to avoid police or report police misconduct. And, they use their smartphones like everyone else does: to listen to music, to connect with friends, lovers, family. But, the “digital divide” framework has no way to explain this.

“Instead of “digital divide,” other scholars have talked about “digital fluency,” or even “digital entitlements,” which I like better. Of course, these metaphors carry their own symbolic weight, but how we talk about these issues matters.”

3. It sounds like the Feds are going to drop $200 million on putting digital educators in schools. Given that kids are already so plugged in (to their own uses for these technologies), do you think this is a good idea?

Daniels: “Yes, I actually think this is a good idea and there are lots of ways to make this work well.  

“In my own research with adolescents surfing the web, I found that while they were very adept at some things (opening multiple browser windows, locating things online quickly), they weren’t very good at other important tasks. For example, they weren’t good at distinguishing “cloaked” sites from legitimate ones. Cloaked sites are ones that disguise a political agenda by hiding authorship (e.g., www.martinlutherking.org looks like a civil rights site but is, in fact, run by white supremacists; www.teenbreaks.com looks like a ‘reproductive health’ site but is a pro-life site). The good news is that it’s fairly easy and straightforward to teach the skills necessary to parse the good from the bad online.

“Here, I’d point to the work of my friend Howard Rheingold and his new book “Net Smart,” an excellent guide to being a digitally fluent user of all the technologies we have available to us now. I think the FCC should include it in their plan for training the digital educators going into schools!”
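Daniels’s point about teaching kids to “parse the good from the bad” can be made concrete. Below is a minimal sketch of one heuristic a digital educator might demonstrate: looking up a domain’s public WHOIS registration record to see who actually operates a site. This is an illustration, not anything drawn from Daniels’s curriculum; it assumes Python and the standard Unix whois command-line tool, and the function name, keyword list, and example domains are choices of convenience.

```python
import subprocess


def registration_summary(domain: str) -> list[str]:
    """Return WHOIS output lines that hint at who registered a domain."""
    # `whois` is the standard Unix lookup tool; check=False means a missing
    # record doesn't raise an exception, it just yields empty output.
    result = subprocess.run(
        ["whois", domain], capture_output=True, text=True, check=False
    )
    keywords = ("registrant", "organization", "creation date")
    return [
        line.strip()
        for line in result.stdout.splitlines()
        if any(k in line.lower() for k in keywords)
    ]


if __name__ == "__main__":
    # Example domains from the interview; what you see depends on the registry.
    for site in ("martinlutherking.org", "teenbreaks.com"):
        print(site)
        for line in registration_summary(site):
            print("   ", line)
```

The mismatch between what a site looks like and who registered it is exactly the kind of signal students can be taught to check before trusting a source.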
