For as little as $0.12 per record, data brokers in the US are selling sensitive private data about active-duty military members and veterans, including their names, home addresses, geolocation, net worth, and religion, and information about their children and health conditions.
In an unsettling study published on Monday, researchers from Duke University approached 12 data brokers in the US and asked what would be necessary to buy this kind of information. They ultimately purchased thousands of records about American service members, finding that many brokers offered to sell the data with minimal vetting and were willing to deal with buyers using email domains based in both the US and Asia.
The year-long study, which was funded in part by the US Military Academy at West Point, highlights the extreme privacy and national security risks created by data brokers. These companies are part of a shadowy multibillion-dollar industry that collects, aggregates, buys, and sells data, practices that are currently legal in the US. Many brokers advertise that they have hundreds of individual data points on each person in their database, and the industry has been criticized for exacerbating the erosion of personal and consumer privacy.
The researchers say they were “shocked” at the ease with which they were able to obtain highly sensitive data about members of the military. “In practice, it seems as though anyone with an email address, a bank account, and a few hundred dollars could acquire the same type of data that we did,” Hayley Barton, a coauthor of the study and a graduate student researcher, says.
The authors hope the study serves as a warning to US lawmakers and are calling on Congress to pass a comprehensive privacy law that restricts the data broker industry.
“What we really need is regulation of this ecosystem,” the report’s lead author, privacy researcher Justin Sherman, says. “At the end of the day, this is a congressional problem—because we need new legal authorities to deal with these risks, and regulatory agencies need more resources.”
Senator Elizabeth Warren, who has reviewed the report and serves on the US Senate Armed Services Committee, broadly agrees. “Data brokers are selling sensitive information about service members and their families for nickels without considering the serious national security risks,” Warren, a Massachusetts Democrat, said in a statement to MIT Technology Review. “This report makes clear that we need real guardrails to protect the personal data of service members, veterans, and their families.”
Selling sensitive information
The danger posed by commercially available data about active-duty military members is not a new problem. For example, in 2018, data about running routes recorded in the fitness tracking app Strava revealed the location of US military bases and patrol routes overseas.
The Duke researchers had previously come across data brokers advertising the sale of information about military personnel, says Sherman, so they wanted to evaluate the national security risks of this industry.
Sherman also notes that data brokers have claimed to have strong vetting processes that prevent data from being sold to criminal or otherwise dangerous parties and to ensure that the data they sell is used responsibly. But their research showed this to be the exception, not the rule.
The team first scraped the web to get a view of how many of the thousands of data brokers in the US advertise the availability of personal data on the country’s service members. It found “7,728 hits for the word ‘military’ and 6,776 hits for the word ‘veteran’ across 533 data brokers’ websites,” according to the paper. Major data brokers including Oracle, Equifax, Experian, CoreLogic, LexisNexis, and Verisk all advertised military-related data.
Next, the researchers contacted 12 of those brokers about purchasing the data. They “found a lack of robust controls when asking some data brokers about buying data on the U.S. military and when actually purchasing data from some data brokers such as identity verification, background checks, or detective controls to ascertain our intended uses for the purchased data.” (The researchers do not identify the brokers they contacted but say that they adhered to all research compliance policies at Duke.)
While some brokers did have controls in place—two of the 12 refused to make the sale because they weren’t convinced the researchers had a verified company—many of the firms did not. In fact, one broker they contacted said that the researchers could avoid a background check if they paid for the data by wire transfer rather than by credit card.
In one particularly disturbing finding, one of the brokers even sold the researchers data about active-duty military members living in Washington, DC, Maryland, and Virginia, including whether children lived in their homes and those children's ages and sex. This data set, which also included the members' home addresses, was sold to the researchers when they inquired from both US- and Asia-based domains.
Both Sherman and Sarah Lamdan, a law professor at the City University of New York and author of Data Cartels, a book about the industry, say the practices the researchers observed appear legal and the selling of data about children does not violate the Children’s Online Privacy Protection Act, commonly known as COPPA, a law addressing data about minors’ online activity.
Several brokers also requested that the researchers sign nondisclosure agreements. “Forcing customers to sign NDAs isn’t merely a lack of transparency about what is happening with our data,” says Lamdan. “Rather, it’s a veil of secrecy that the data brokers are drawing around their practices and the huge volumes of our data that they are selling to whoever wants to buy it.”
In the end, the researchers purchased eight data sets from three different brokers, each containing between 4,951 and 15,000 identifiable records, via email addresses with US- and Asia-based domains. The final cost was $0.12 to $0.32 per record for each service member. The researchers did not sign any nondisclosure agreements.
The potential threat from abroad
To determine the scope of the national security risk, the researchers specifically wanted to test whether brokers would sell data to buyers outside the US.
Using a .asia domain name and email address and a Singaporean IP address, the researchers were able to obtain individually identified information on active-duty service members, and data about their marital status, homeowner/renter status, ethnicity, language, religion, and credit rating, among many other data points.
According to the paper, the brokers largely failed to vet the researchers when they inquired from an Asia-based email, just as they neglected to do when the researchers inquired from a US-based domain. One broker did restrict some data fields when the request came from the .asia domain as opposed to the US domain, but most of the brokers responded similarly regardless of where the inquiry originated.
“We were able to buy data from brokers without any vetting, even though it pertained to members of the military, even though we were using a .asia domain, even though we wanted data sent out of the country,” says Sherman—a finding he calls “really concerning.”
The US Department of Defense did not respond to our multiple requests for comment.
In a statement to MIT Technology Review, Senator Ron Wyden, who serves on the US Senate Intelligence Committee, echoed Sherman. “The researchers’ findings should be a sobering wake-up call for policymakers that the data broker industry is out of control and poses a serious threat to US national security,” he said.
“The United States needs a comprehensive solution to protect Americans’ data from unfriendly nations rather than focusing on ineffective Band-Aids like banning TikTok,” added Wyden, an Oregon Democrat. “And not to sound like a broken record, but our country desperately needs a comprehensive consumer privacy law here, to limit the collection, retention, and sale of sensitive personal information from the start.”