
New Image Database Could Help Explain Evolution of Human Eye

Images from the birthplace of the human race could help explain some mysteries of human vision, say scientists

The human eye is an amazing piece of machinery. It can distinguish some ten million colours thanks to the remarkable light-sensitive rod and cone cells that populate the back of the eye.

These cells neatly divide up the process of vision. The rods, some 90 million of them, have a peak sensitivity to blue-green light and work best in low light, providing our night vision.

The cones, on the other hand, some 5 million of them, come in three types. These are sensitive to long wavelengths (ie red), medium wavelengths (green) and short wavelengths (blue), producing colour vision. They are designated L, M and S cones respectively.

But here’s the puzzle: S cones are rare, making up less than 10 per cent of the total. The L and M cones are much more common, but their ratio can vary dramatically: people with otherwise normal colour vision can have L:M ratios of anywhere between 1:4 and 15:1.
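To get a feel for how different these mosaics can be, here is a minimal sketch that randomly assigns cone types in a patch of retina for two observers at the extremes of the reported L:M range. The 5 per cent S-cone share and the patch size are illustrative assumptions, not measured values.

```python
import random

def sample_cone_mosaic(n_cones, lm_ratio, s_fraction=0.05, seed=0):
    """Randomly assign cone types for a simulated patch of retina.

    lm_ratio is the L:M ratio among the non-S cones; s_fraction is the
    assumed share of S cones (illustrative, not measured data).
    """
    rng = random.Random(seed)
    p_l = (1 - s_fraction) * lm_ratio / (1 + lm_ratio)  # probability of an L cone
    p_m = (1 - s_fraction) / (1 + lm_ratio)             # probability of an M cone
    counts = {"L": 0, "M": 0, "S": 0}
    for _ in range(n_cones):
        r = rng.random()
        if r < p_l:
            counts["L"] += 1
        elif r < p_l + p_m:
            counts["M"] += 1
        else:
            counts["S"] += 1
    return counts

# Two observers with otherwise normal colour vision but mosaics
# at opposite ends of the reported range:
print(sample_cone_mosaic(10000, lm_ratio=0.25))  # L:M of 1:4
print(sample_cone_mosaic(10000, lm_ratio=15.0))  # L:M of 15:1
```

Both mosaics support normal colour vision, which is exactly what makes the variation puzzling.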

(Other primates have different ratios although the ratio in new world monkeys is similar to ours.)

The question that leaves biologists scratching their heads is why.

One idea is that this distribution of cone cell types is the result of an adaptation to the environment in which the human eye evolved.

So if we can work out what that environment was like, we could get a handle on the forces that shaped our visual system.

Today, Gašper Tkačik at the University of Pennsylvania in Philadelphia and several pals reveal an interesting approach to solving this problem. Their idea is to find a place like the one in which humans evolved and to measure the lighting conditions found there.

And by comparing a big enough sample with measurements from elsewhere, it should be possible to work out how and why the human eye evolved with its curious ratios of cone cell types.

So where to look? The consensus view is that humans diverged from other hominids about 3 million years ago in Africa. One place thought to be representative of the conditions that existed then is the Okavango Delta in Botswana, where the Okavango River empties into a swamp at the edge of the Kalahari desert, forming the world’s largest inland river delta. (Most of the water evaporates.)

If humans evolved in conditions like this, then it’s possible that the lighting conditions there might give us a clue about mysteries like the cone cell ratio.

So Tkačik and buddies travelled to Botswana and took 5000 six-megapixel images of the area using a Nikon D70 digital SLR. They then carefully calibrated the images to accurately capture the statistics of the light reaching the camera’s sensor and put them all on the web.

Today, they describe the various ways in which they’ve built this database and announce that it is publicly available under a Creative Commons licence for research in computer vision, the psychophysics of perception and visual neuroscience. The image above is one example.

The idea is that comparing the statistics associated with these images with those from other areas will produce some insight into the evolution of the visual system.
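The comparison the team has in mind rests on low-order image statistics. As a hedged illustration of the kind of summary one might compute, this sketch reduces an image to its per-channel colour means and the covariance between channels; the random arrays standing in for scenes are placeholders, since real use would load the calibrated photographs from the database.

```python
import numpy as np

def channel_statistics(image):
    """Summarise the colour statistics of an H x W x 3 image array.

    Returns the per-channel mean colour and the 3x3 covariance of
    pixel colours: simple low-order statistics of the kind one might
    compare across natural-scene datasets.
    """
    pixels = image.reshape(-1, 3).astype(float)
    return pixels.mean(axis=0), np.cov(pixels, rowvar=False)

# Hypothetical stand-ins for two scenes (random data for illustration;
# real use would load calibrated photographs instead).
rng = np.random.default_rng(0)
scene_a = rng.integers(0, 256, size=(64, 64, 3))
scene_b = rng.integers(0, 256, size=(64, 64, 3))

mean_a, cov_a = channel_statistics(scene_a)
mean_b, cov_b = channel_statistics(scene_b)
print(np.abs(mean_a - mean_b))  # how far apart the average colours are
```

If a visual system is tuned to its environment, statistics like these should differ systematically between the Okavango images and photographs taken elsewhere.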

There is, of course, another possibility: that the peculiar characteristics of the human eye are the result of some much more dramatic incident, such as the Toba supervolcano eruption 70,000 years ago, which may have reduced the global human population to fewer than 15,000. Other bottlenecks at other times are thought to have reduced the number of humans to fewer than 2000.

Perhaps the human visual system is optimised to survive during one of these catastrophes. Finding these lighting conditions on Earth today might be significantly harder.

Whatever the cause, the images from Botswana are an interesting first step in rediscovering the conditions in which we evolved and working out why we are the way we are.

Ref: arxiv.org/abs/1102.0817: Natural Images From The Birthplace Of The Human Eye
