
This App May Be the Future of Bedtime Stories

A startup is using voice recognition during story time to bridge the digital and real worlds.
August 21, 2017

The other day, I was reading The Very Hungry Caterpillar to my daughter. When I got to the part where the caterpillar ate through one apple, I paused, surprised by an unmistakable munching sound coming from my coffee table.

The sound was actually emitted by an app called Novel Effect that uses voice-recognition technology to add sound effects and music to books as you read them aloud, with the goal of making read-aloud time more engaging for kids at home or in the classroom.

“You still get engagement, you still get interactivity,” says Matt Hammersley, Novel Effect’s CEO and one of its four cofounders. “But they’re not staring at a screen and you’re actually encouraging face-to-face personal communication.”

A free beta version of the app is currently available for the iPhone and iPad (an Android app is coming). It’s stocked with sound effects for a handful of books like The Very Hungry Caterpillar and Where the Wild Things Are.

For now, Novel Effect makes money when users buy books from Amazon via the app. In the early fall, the Seattle-based company plans to roll out a more polished version of its app with sound effects for over 100 different books; then it will start charging a $5 monthly fee to use it. The company already has a partnership with publisher Hachette Book Group, and several others are in the works, Hammersley says.

Novel Effect’s effort to bridge the real world of books with the digital world makes a lot of sense right now: we’ve quickly gotten comfortable with voice recognition in our daily lives, from Siri on Apple’s iPhones to Alexa on Amazon’s Echo speakers. A little under a fifth of the population will use a digital assistant at least once a month this year, according to research firm eMarketer. And the people using these things the most are between the ages of 25 and 34. Chances are many of them have kids to read with.

Novel Effect knows this well: in Seattle, it is participating in the Amazon Alexa Accelerator, which is backed by Amazon’s $100 million Alexa Fund and is being run with accelerator Techstars.

To use Novel Effect, you open the app's version of whichever real-world book you're reading. The app then listens for the book's text, analyzing your speech to figure out where you are in the story and synchronizing all kinds of noises, from stomach-ache groans to neighs. You don't have to read the book straight through from start to finish; you can start on page 10 and jump around if you want. It also doesn't matter how much time you spend on one page, or whether you interrupt the reading to talk about things other than what's on the page (Hammersley stresses that the app doesn't record what users say, and listens only for the book's text).
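The company hasn't published how its speech matching works, but the basic idea, aligning a stream of recognized words against the book's text and firing cues attached to specific words, can be sketched roughly. Everything below (the `StoryTracker` class, its method names, the window-matching approach) is an illustrative assumption, not Novel Effect's actual algorithm:

```python
# Hypothetical sketch (NOT Novel Effect's real implementation): track a
# reader's position in a known book text from recognized speech, and fire
# sound cues attached to specific word positions.

def normalize(text):
    """Lowercase and strip punctuation so spoken words match printed ones."""
    return [w.strip(".,!?;:\"'").lower() for w in text.split()]

class StoryTracker:
    def __init__(self, book_text, cues, window=4):
        self.words = normalize(book_text)
        self.cues = cues          # {word_index: sound_name}
        self.window = window      # how many recent words must match in a row
        self.recent = []          # sliding window of recognized words
        self.position = None      # last confirmed word index, None at start

    def hear(self, spoken_words):
        """Feed recognized words; return any sound cues triggered."""
        fired = []
        for w in normalize(" ".join(spoken_words)):
            self.recent.append(w)
            self.recent = self.recent[-self.window :]
            pos = self._locate()
            if pos is not None:
                # Fire every cue between the old position and the new one.
                start = 0 if self.position is None else self.position + 1
                for i in range(start, pos + 1):
                    if i in self.cues:
                        fired.append(self.cues[i])
                self.position = pos
        return fired

    def _locate(self):
        """Scan the whole text (not just ahead) for the recent words, so a
        reader who skips to page 10 can still be re-acquired."""
        n = len(self.recent)
        if n < self.window:
            return None
        for i in range(len(self.words) - n + 1):
            if self.words[i : i + n] == self.recent:
                return i + n - 1
        return None

# Demo with a short excerpt: a "munch" cue attached to the word "apple".
book = "On Monday he ate through one apple but he was still hungry"
tracker = StoryTracker(book, cues={6: "munch"})
tracker.hear(["On", "Monday", "he", "ate"])
print(tracker.hear(["through", "one", "apple"]))
```

A real system would need fuzzy matching to tolerate recognition errors and off-script chatter; requiring several consecutive words to match is what lets this toy version ignore speech that isn't the book's text.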

By late this year or early next, the company also plans to offer a tool that lets anyone create new sound-enhanced stories or add sounds to existing ones. 

My experiences with the current version of the app were mixed. I was charmed by the munching and a variety of other sounds in The Very Hungry Caterpillar; the timing was decent and the noises were appropriate. Yet when I tried reading Brown Bear, Brown Bear, What Do You See?, it didn't work nearly as well: tweeting and croaking interrupted my speech, spoiling the surprise of which animal would be revealed next.
