Probe Sees Unused Internet

A survey shows that addresses are not running out as quickly as we’d thought.
October 15, 2008

In a little more than two years, the last Internet addresses will be assigned by the international group tasked with managing the 4.3 billion numbers. And yet, while most Internet engineers are looking to Internet Protocol version 6 (IPv6), the next-generation Internet addressing scheme, a research team has probed the entire Internet and found that the problem may not be as bad as many fear. The probe reveals millions of Internet addresses that have been allocated but remain unused.

Far and wide: This map was created using data from the researchers’ census. About a quarter of the address space is still unassigned (blue), a quarter appears to be relatively densely populated (green), and nearly half of the space has few servers or did not respond to queries (red).

In a paper to be presented later this month at the ACM Internet Measurement Conference, a team of six researchers has documented what they claim is the first complete census of the Internet in more than two decades. They discovered a surprising number of unused addresses and conclude that plenty will still be lying idle when the last numbers are handed out in a few years’ time. The problem, they say, is that some companies and institutions are using just a small fraction of the many millions of addresses they have been allocated.

“People are very concerned that the IPv4 address space is very close to being exhausted,” says John Heidemann, a research associate professor in the department of computer science at the University of Southern California (USC) and the paper’s lead author. “Our data suggests that maybe there are better things we should be doing in managing the IPv4 address space.”

The census, carried out every quarter since 2003 but only recently published, is the first comprehensive map of the Internet since David Smallberg, then a computer-science student at the University of California, Los Angeles, canvassed the Internet’s first servers–all 300-plus of them–following the switchover from the ARPANET in early 1983.

Internet Protocol version 4 (IPv4) addresses are typically managed as network blocks consisting of 256 addresses (known as a C block), 65,536 addresses (known as a B block), or approximately 16.8 million addresses (known as an A block). About a quarter of the A block addresses–the largest segments of the Internet–were given out in the first days of the Internet to early participants and to companies and organizations including Apple, IBM, and Xerox.
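The block sizes quoted above follow directly from how many of an IPv4 address’s four octets are fixed; a quick sketch of the arithmetic (illustrative only, not from the paper):

```python
# Sizes of the classful IPv4 blocks described above.
# Each block frees one more octet (8 bits) of the 32-bit address.
C_BLOCK = 2 ** 8    # 256 addresses: last octet free
B_BLOCK = 2 ** 16   # 65,536 addresses: last two octets free
A_BLOCK = 2 ** 24   # 16,777,216 addresses: last three octets free

TOTAL_IPV4 = 2 ** 32  # about 4.3 billion addresses in all

print(A_BLOCK)                # 16777216
print(TOTAL_IPV4 // A_BLOCK)  # 256 -- only 256 A blocks exist
```

With only 256 possible A blocks, handing a quarter of them to early participants accounts for a large share of the total space.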

Today, A blocks are issued by an organization called the Internet Assigned Numbers Authority (IANA) to large Internet service providers or to regional registrars to which the A blocks are resold. But because accelerating use of the Internet is quickly eating up the remaining free blocks of network addresses, the last blocks will likely be given out between the end of 2010 and 2011.

The next-generation Internet address scheme, IPv6, solves the shortage by vastly increasing the number of addresses available. While IPv4 offers about 4.3 billion addresses for the earth’s 6.7 billion people, IPv6 will offer about 51 thousand trillion trillion addresses per person. However, the move to IPv6 has progressed slowly because of cost and complexity, even with recent mandates for use of IPv6 within the U.S. government.
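The per-person figure can be checked with back-of-the-envelope arithmetic (IPv4 uses 32-bit addresses, IPv6 uses 128-bit):

```python
# Rough check of the address-space figures quoted above.
ipv4_total = 2 ** 32   # ~4.3 billion IPv4 addresses
ipv6_total = 2 ** 128  # IPv6 address space
population = 6.7e9     # world population circa 2008

per_person = ipv6_total / population
print(f"{ipv4_total:,}")    # 4,294,967,296
print(f"{per_person:.2e}")  # ~5.1e+28, i.e. ~51 thousand trillion trillion
```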

The new map of the Internet suggests that there is room for more hosts even if addresses are running out. The map reveals that, while roughly a quarter of all blocks of network addresses are heavily populated and therefore efficiently used, about half of the Internet is either used lightly or is located behind firewalls blocking responses to the survey. The last quarter of network blocks consists of addresses that can still be assigned in the future.

The USC research group used the most innocuous type of network packet to probe the farthest reaches of the Internet. Known as the Internet Control Message Protocol, or ICMP, this packet is typically used to send error messages between servers and other network hardware. Sending an ICMP packet to another host (an action known as pinging) is generally not seen as hostile, Heidemann says. “There are certainly people who misunderstand what we are doing,” and interpret it as the prelude to an attack, he says. “By request, we remove them from the survey, but it’s fewer people than you might think. Pings are pretty innocuous.”
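The probe packet itself is simple. As a minimal illustration (not the researchers’ actual tooling), the following builds an ICMP echo request, the same message type `ping` sends, with the standard Internet checksum:

```python
import struct

def icmp_checksum(data: bytes) -> int:
    """Standard Internet checksum: one's-complement sum of 16-bit words."""
    if len(data) % 2:
        data += b"\x00"  # pad odd-length data to a whole word
    total = sum(struct.unpack(f"!{len(data) // 2}H", data))
    total = (total >> 16) + (total & 0xFFFF)  # fold carries back in
    total += total >> 16
    return ~total & 0xFFFF

def build_echo_request(ident: int, seq: int, payload: bytes = b"ping") -> bytes:
    """ICMP type 8 (echo request), code 0; checksum covers header + payload."""
    header = struct.pack("!BBHHH", 8, 0, 0, ident, seq)  # checksum field zeroed
    csum = icmp_checksum(header + payload)
    return struct.pack("!BBHHH", 8, 0, csum, ident, seq) + payload

pkt = build_echo_request(ident=1, seq=1)
# Sanity check: recomputing the checksum over a valid packet yields 0.
assert icmp_checksum(pkt) == 0
```

Sending such a packet requires a raw socket (and usually root privileges); a host that is up and not firewalled replies with an echo reply (ICMP type 0).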

The researchers found that ICMP pings stack up well against another method of host detection: probing with the Transmission Control Protocol, or TCP, the Internet’s main means of transmitting data. TCP probing is a common technique used by network scanners, but it tends to take longer and is considered more aggressive than ICMP pinging, so it is more likely to be blocked. To compare the effectiveness of the two techniques, the team probed a million random Internet addresses using both ICMP and TCP, finding a total of 54,297 active hosts. ICMP pings elicited a response from approximately three-quarters of visible hosts, while TCP probes garnered a response slightly less than two-thirds of the time.
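Illustrative arithmetic only, assuming the fractions quoted above apply to the combined pool of visible hosts:

```python
# Rough estimate of how many of the visible hosts each probe type detected.
visible_hosts = 54_297                     # hosts that answered either probe
icmp_est = round(visible_hosts * 0.75)     # roughly three-quarters answered ICMP
tcp_est = round(visible_hosts * 0.66)      # slightly under two-thirds answered TCP

print(icmp_est, tcp_est)  # ICMP finds several thousand more hosts than TCP
```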

In total, the researchers estimate that there are 112 million responsive addresses, with between 52 million and 60 million addresses assigned to hosts that are contactable 95 percent of the time.

The survey may miss computers behind firewalls or computers that do not respond to pings, but the overall conclusion–that the Internet has room to grow–is spot on, says Gordon Lyon, a security researcher who created the popular network scanning tool Nmap.

“There are huge chunks of IP space which are not allocated yet, and also giant swaths which are inefficiently allocated,” Lyon says. “For example, Xerox, GE, IBM, HP, Apple, and Ford each have more than 16 million IP addresses to themselves because they were allocated when the Internet was just starting.”
