What Does the World Look Like to a Dolphin? We Still Don’t Know

A viral image claiming to show what a dolphin’s echolocation perceives is backed by suspect science.
December 10, 2015

Bottlenose dolphins, in addition to their remarkable intelligence and agility, are able to “see” their underwater world with sound. This ability—echolocation—is often referred to as a “sixth” sense, but it’s more like a hyperdeveloped version of hearing.

Now a team of researchers from the nonprofit SpeakDolphin.com has released an image they claim shows a diver as perceived through a dolphin’s echolocation. As we’ll see, that claim should be taken with about as many grains of salt as there are in the sea.

You may well have seen this story already—it’s been making the rounds for its undeniable “wow” factor. If not, here’s the short version: the SpeakDolphin team, led by Jack Kassewitz, recorded the echoes produced when a bottlenose directed her echolocation beam (high-frequency sound pulses) over a submerged diver. They sent the recordings to vibration-imaging company CymaScope, which used the series of audio “snapshots” to reconstruct a three-dimensional representation of the diver. In a final step, these images were passed along to 3D Systems, the inventor of 3-D printing, which created a 3-D printout of “what the dolphin saw.”

It all sounds very, very cool. But it’s time to break out my naturalist curmudgeon hat.

First point: dolphins can see, in the traditional sense, and can see quite well. (A fact apparently lost in much of the coverage so far.) Their color vision is crap, but they have good low-light vision, and their eyes are specially adapted to let them see clearly both above and below the surface. Unlike some bats, which rely on echolocation so heavily that it’s reasonable to say they perceive their world almost exclusively through sound, dolphins use a mix of sound and sight. As for how their brains process this unique mix of sensory information … well, there’s no mention of brains anywhere in the press release or in the credulous first wave of coverage.

Before I go further, credit where credit is due: Rachel Feltman at the Washington Post published her own takedown of the release while I was working on this post, and her article is worth reading in its entirety for a thorough breakdown of the problems with its credibility.

There are also more general issues with the science behind the viral image … in that there doesn’t seem to be much. Or, more specifically, the usual ways of vetting research have been ignored. The results, as Feltman notes, are not published in any journal. SpeakDolphin founder Jack Kassewitz told Tech Insider’s Rebecca Harrington that he prefers to make his research publicly available in book form instead of through journals, a sentiment I empathize with greatly even as it tickles my skeptic bone.

Even more troubling, Feltman points out, Kassewitz seems to see himself as a rebel for the dolphin-human cause, calling mainstream science “bigoted” and “racist” in its attitudes toward dolphins in a statement on his site. While it’s certainly true that science has been slow to appreciate the full intelligence of nonhuman animals, his stance seems more passionate than reasoned. Not to put too fine a point on it, but people who view themselves as maverick geniuses tend to be only half-right. Good science is communal for a reason.

Harrington asked for clarification about how CymaScope generates its images, but did not get any. Instead, she received the following response from Kassewitz: “I am totally open to criticism, but from physicists, because that’s what is going on here.”

This is undeniably a great idea for an experiment. If we can figure out how dolphins perceive the world with a mix of echolocation and vision, it will almost certainly point to new and interesting techniques for sonar and underwater imaging more generally. For now, the images may represent an interesting piece of the puzzle of how dolphins use sound and sight to navigate their world, but even if the research methodology is solid, it would be a gross overstatement to claim this is “what the dolphin saw.”
