Why it matters:
Companies and governments have mishandled our data time and again. Data trusts could help us reclaim greater agency over it.
Key players:
• Digital Public
• Open Data Institute
• National governments
• European Commission
Availability: 2 to 3 years
Do you simply click “Yes” whenever a company asks for your data? If so, you’re not alone. We can’t be expected to read the lengthy terms and conditions or evaluate all the risks every time we use a service. That’s like asking each of us to assess whether the water we drink is safe every time we take a sip. So we hit “Yes” and hope for the best.
Even if you’ve done your research, though, your decision could affect other people in ways you didn’t account for. When you share your DNA with services like 23andMe, that data reveals a lot about your family’s genetic make-up. What you share on social media could influence your friends’ insurance premiums. Your income statements could affect your neighbor’s ability to obtain a loan. Should sharing this information be solely up to you?
If this model of individual consent is broken, then what’s left? Should we leave it to our politicians to regulate data collection? Perhaps. Governments around the world have implemented data protection regimes (such as Europe’s GDPR) that force companies to ask for our consent before collecting data. They could go further and prohibit the most harmful uses of data. But given the numerous ways in which data might be collected or used, it’s hard to imagine that broad regulations would be enough.
What if we had something to stand up for our data rights the way a trade union stands up for labor rights? And the data equivalent of a doctor to make smart data decisions on our behalf? Data trusts are one idea for how we could get just that.
Data trusts are a relatively new concept, but their popularity has grown quickly. In 2017, the UK government first proposed them as a way to make larger data sets available for training artificial intelligence. A European Commission proposal in early 2020 floated data trusts as a way to make more data available for research and innovation. And in July 2020, India’s government came out with a plan that prominently featured them as a mechanism to give communities greater control over their data.
In a legal setting, trusts are entities in which some people (trustees) look after an asset on behalf of other people (beneficiaries) who own it. In a data trust, trustees would look after the data or data rights of groups of individuals. And just as doctors have a duty to act in the interest of their patients, data trustees would have a legal duty to act in the interest of the beneficiaries.
So what would this approach look like in practice? As one example, groups of Facebook users could create a data trust. Its trustees would determine under what conditions the trust would allow Facebook to collect and use those people’s data. The trustees could, for example, set rules about the types of targeting that platforms like Facebook could employ to show ads to users in the trust. If Facebook misbehaved, the trust would retract the company’s access to its members’ data.
While it’s hard for any of us to assess how sharing our data might affect others, data trustees could weigh individual interests against collective benefits and harms. In theory, because the data trust would represent a collective, it could negotiate terms and conditions on our behalf. Thus, it could allow us to exercise our rights as producers of data in much the same way trade unions allow workers to exercise their rights as providers of labor.
Data trusts sound good, but is this vision realistic? It’s hard to imagine that Facebook would ever agree to deal with one. And we, the users, have few ways to force its hand. We could form a data trust, but unless we’re all willing to leave the platform together, or unless governments provide us with greater enforcement mechanisms, that trust would have very little leverage.
All is not lost, though, because data trusts have many other useful applications. They could allow people to pool their data and make it available for uses, such as medical research, that benefit everyone. Companies that want to show they’re privacy-aware could hand over the reins on key data decisions to a trust and instruct it to protect customers’ data rights instead of the company’s bottom line.
For example, in 2017, Google sister company Sidewalk Labs procured the rights to develop Toronto’s Quayside waterfront into a sensor-laden smart neighborhood. But what was hailed by some as a utopia was seen by others as yet another case of a large tech company encroaching on the public domain and hoovering up residents’ data in the process.
Sidewalk Labs suggested the creation of a civic data trust to guarantee that data collected and used in Quayside would benefit the public. The proposal was that any entity wishing to place a sensor in Quayside would have to request a license to both collect and use data. A review board, made up of community members, would monitor and enforce that collection and use.
The plan itself was flawed, and Sidewalk Labs abandoned the Quayside project in May 2020, but the company’s proposal showcased the promise of data trusts. The idea of creating them to govern data collected in a public context (such as in smart cities, or for public health initiatives) lives on.
The problems data trusts aim to tackle are as urgent as ever. In the coming year, as funding becomes more widely available, we’ll see further research, more experiments, and more policy proposals.
Certainly, data trusts aren’t the only solution to growing privacy and security concerns. Other possible mechanisms, including data cooperatives and data unions, would tackle similar problems in different ways. Together, these new data governance models could help us regain control of our data, enforce our rights, and ensure that data sharing benefits us all.