Algorithms can change the course of children’s lives. Kids are interacting with Alexas that can record their voice data and influence their speech and social development. They’re binging videos on TikTok and YouTube pushed to them by recommendation systems that end up shaping their worldviews.
Algorithms are also increasingly used to determine what their education is like, whether they’ll receive health care, and even whether their parents are deemed fit to care for them. Sometimes this can have devastating effects: this past summer, for example, thousands of students lost their university admissions after algorithms—used in lieu of pandemic-canceled standardized tests—inaccurately predicted their academic performance.
Children, in other words, are often at the forefront when it comes to using and being used by AI, and that can leave them in a position to get hurt. “Because they are developing intellectually and emotionally and physically, they are very shapeable,” says Steve Vosloo, a policy specialist for digital connectivity at Unicef, the United Nations Children’s Fund.
Vosloo led the drafting of a new set of guidelines from Unicef designed to help governments and companies develop AI policies that consider children’s needs. Released on September 16, the nine new guidelines are the culmination of several consultations held with policymakers, child development researchers, AI practitioners, and kids around the world. They also take into consideration the UN Convention on the Rights of the Child, a human rights treaty adopted in 1989.
The guidelines aren’t meant to be yet another set of AI principles, many of which already say the same things. In January of this year, a Harvard Berkman Klein Center review of 36 of the most prominent documents guiding national and company AI strategies found eight common themes—among them privacy, safety, fairness, and explainability.
Rather, the Unicef guidelines are meant to complement these existing themes and tailor them to children. For example, AI systems shouldn’t just be explainable—they should be explainable to kids. They should also consider children’s unique developmental needs. “Children have additional rights to adults,” Vosloo says. They’re also estimated to account for at least one-third of online users. “We’re not talking about a minority group here,” he points out.
In addition to mitigating AI harms, the goal of the principles is to encourage the development of AI systems that could improve children’s growth and well-being. If they’re designed well, for example, AI-based learning tools have been shown to improve children’s critical-thinking and problem-solving skills, and they can be useful for kids with learning disabilities. Emotional AI assistants, though relatively nascent, could provide mental-health support and have been demonstrated to improve the social skills of autistic children. Face recognition, used with careful limitations, could help identify children who’ve been kidnapped or trafficked.
Children should also be educated about AI and encouraged to participate in its development. It isn’t just about protecting them, Vosloo says. It’s about empowering them and giving them the agency to shape their future.
Unicef isn’t the only one thinking about the issue. The day before those draft guidelines came out, the Beijing Academy of Artificial Intelligence (BAAI), an organization backed by the Chinese Ministry of Science and Technology and the Beijing municipal government, released a set of AI principles for children too.
The announcement comes a year after BAAI released the Beijing AI principles, understood to be the guiding values for China’s national AI development. The new principles outlined specifically for children are meant to be “a concrete implementation” of the more general ones, says Yi Zeng, the director of the AI Ethics and Sustainable Development Research Center at BAAI, who led their drafting. They closely align with Unicef’s guidelines, also touching on privacy, fairness, explainability, and child well-being, though some of the details are more specific to China’s concerns. A guideline to improve children’s physical health, for example, includes using AI to help tackle environmental pollution.
While the two efforts are not formally related, the timing is also not coincidental. After a flood of AI principles in the last few years, both lead drafters say creating more tailored guidelines for children was a logical next step. “Talking about disadvantaged groups, of course children are the most disadvantaged ones,” Zeng says. “This is why we really need [to give] special care to this group of people.” The teams conferred with one another as they drafted their respective documents. When Unicef held a consultation workshop in East Asia, Zeng attended as a speaker.
Unicef now plans to run a series of pilot programs with various partner countries to observe how practical and effective its guidelines are in different contexts. BAAI has formed a working group with representatives from some of the largest companies driving the country’s national AI strategy, including education technology company TAL, consumer electronics company Xiaomi, computer vision company Megvii, and internet giant Baidu. The hope is to get them to start heeding the principles in their products and to influence other companies and organizations to do the same.
Both Vosloo and Zeng hope that by articulating the unique concerns AI poses for children, the guidelines will raise awareness of these issues. “We come into this with eyes wide open,” Vosloo says. “We understand this is kind of new territory for many governments and companies. So if over time we see more examples of children being included in the AI or policy development cycle, more care around how their data is collected and analyzed—if we see AI made more explainable to children or to their caregivers—that would be a win for us.”