Generative AI: Risks and Opportunities for Children
How can we empower and protect children in the face of Artificial Intelligence?
Generative AI, now front and centre in our digital experiences, is seeing unprecedented uptake. ChatGPT has been one of the fastest growing digital services of all time. Some children use it daily, for example when doing their homework or choosing what to wear. Its capabilities and adoption set off a storm of reactions around the impact and future of AI broadly: from open letters demanding a pause on AI development to calls to protect children against its harmful content. In the face of increasing pressure for the urgent regulation of AI, and schools around the world banning chatbots, governments are asking how to navigate this dynamic landscape in policy and practice.
The known issues with AI systems, such as algorithmic bias and a lack of transparency into how they work, are equally applicable to generative AI. It may even amplify some issues – like unpredictable outputs – or introduce new ones. We don’t presume to have the answers. But given the pace of AI development and adoption, there is a pressing need for research, analysis and foresight to begin to understand the impacts of generative AI on children. Here, we explore some of these current opportunities, risks and questions.
What is generative AI and how is it being taken up?
Generative AI is a subset of AI that uses machine learning to discover patterns and structure from training data and then generate new data that have similar characteristics. It has a wide range of uses, such as producing content that mimics human-generated output (including text, images, audio and computer code), completing complex planning tasks and exploring the development of new medicines.
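The "learn patterns, then generate similar new data" idea can be illustrated with a deliberately tiny sketch. This toy Markov-chain text generator is not a real generative AI model (which would use large neural networks trained on vast datasets), but it shows the same two-step principle: record patterns from training text, then produce new text that follows those patterns. All names and the sample corpus here are illustrative.

```python
import random
from collections import defaultdict

def train(text):
    """Learn a pattern: record, for each word, the words observed to follow it."""
    words = text.split()
    transitions = defaultdict(list)
    for current, nxt in zip(words, words[1:]):
        transitions[current].append(nxt)
    return transitions

def generate(transitions, start, length=8):
    """Generate new text by walking the learned transitions."""
    word = start
    output = [word]
    for _ in range(length):
        followers = transitions.get(word)
        if not followers:
            break  # no observed follower: stop generating
        word = random.choice(followers)
        output.append(word)
    return " ".join(output)

corpus = "the cat sat on the mat and the dog sat on the rug"
model = train(corpus)
print(generate(model, "the"))
```

Real systems differ enormously in scale and technique, but the shape is the same: statistical structure extracted from training data drives the creation of new, similar-looking output.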
Data on how children and youth around the world are using generative AI tools are limited, but initial studies suggest they are using it far more than adults. A small online poll in the US found that only 30 per cent of parents said they had used ChatGPT, compared with 58 per cent of children aged 12–18. Many children reported hiding their use from parents or teachers. In another US survey, young adults (aged 18–29) who had heard of ChatGPT reported using it more than their older counterparts, doing so for entertainment, to learn something new and for work tasks.
AI is already central to children’s digital environment, through recommendation algorithms or automated decision-making systems, for example. Generative AI will no doubt take an even greater hold of their digital experiences, and at a rapid pace. First, it is being embedded as chatbots, digital assistants, personal assistants and search engines into software and apps already used by children. Snapchat introduced an ‘AI friend’ chatbot to its 375 million daily active users. Snapchat is very popular with children: almost 60 per cent of 13-17-year-olds in the US and almost half of 3-17-year-olds in the UK use it (it is the fourth most popular app in the UK after YouTube, WhatsApp and TikTok). Meta has announced plans to introduce AI agents into every product used by over 3 billion people every day, including Instagram and WhatsApp.
Second, generative AI systems are being fine-tuned within specific sectors, potentially enabling more targeted and powerful capabilities. For example, Khan Academy has introduced a personal tutor in its platform that was honed from a general purpose chatbot to one particularly strong in learning conversations.
What are some key opportunities and risks for children?
With the increasing power and prevalence of AI, it is easy to see the many opportunities that generative AI can potentially offer children and young people directly or indirectly. For example:
- Personalized learning systems that can explain difficult concepts and help children develop skills, tailored to a child’s specific needs. Additionally, such systems can adapt to a child’s learning style and pace to maximize the learning experience.
- Tools to support children’s play and creativity in new ways, like generating stories, artwork, music or software (with no or low coding skills).
- Enhanced accessibility for children and young people with disabilities who would benefit from new ways to interface and co-create with digital systems.
- In healthcare, early detection of health and developmental issues as children use the systems directly. Indirectly, generative AI systems can provide insights into medical data to support advances in healthcare.
More broadly, the analysis and generative capabilities can potentially be applied in a range of sectors to improve efficiencies and develop innovative solutions that can positively impact children. For example, in the public sector governments could augment citizen engagement channels to allow for additional languages and a mix of text, images or audio exchanges (for people with low levels of literacy).
At the same time, there are clear risks that the technology could be used by bad actors, or inadvertently cause harm or society-wide disruptions at the cost of children’s well-being and future prospects.
Persuasive disinformation and harmful and illegal content at scale and lower cost. Generative AI can instantly create text-based disinformation that is indistinguishable from, and more persuasive in swaying people’s opinion than, human-generated content. In a test by the Center for Countering Digital Hate, Google’s Bard generated misinformation without disclaimers when prompted on 78 out of 100 false and potentially harmful narratives, including on climate, vaccines, LGBTQ+ hate and sexism. AI-generated images of faces are now indistinguishable from – and, in some cases, deemed more trustworthy than – real ones.
Looking ahead, synthetic content (for example, misleading deepfake images, audio, video and text) can potentially be personalized to individual users and, as a result, become harder for automated and human moderators to combat. Future generative AI chatbots could be programmed to impersonate humans and adapt in real-time conversations to attempt to persuade real people about issues or urge them to act. A particularly malicious case might be one-on-one chats, where unwitting individuals would be targeted and the chatbot would attend to the person’s concerns or counterarguments directly, thus likely increasing the odds of persuasion.
When used by propagandists, generative AI models will likely transform online influence operations, posing a threat to democratic processes. By mimicking human-generated text, generative AI can instantly flood regulatory and legislative systems with automatically composed messages – effectively lobbying for certain interests – and drown out any human input. Meta has reported exponential growth since 2019 in the use of fake profile photos generated by GANs (generative adversarial networks).
The deleterious effect of an internet awash with misleading content is an erosion of trust, as the authenticity of all content is questioned – including information from UNICEF. With their cognitive capacities still developing, children are particularly vulnerable to the risks of mis/disinformation, and a more uncertain, corroded and harmful information ecosystem is of great concern.
Regarding illegal content, generative AI is being used to create photo-realistic child sexual abuse material (CSAM). The AI image generation models, which are open-source and can thus be operated with no protective guardrails, are trained on existing CSAM and photos of children from public social media accounts. While the volume of such content is still relatively small, given how quickly generative AI tools are developing, researchers predict the number of cases will only grow. This would flood law enforcement with new ‘fake’ CSAM, which would not only detract from their ability to handle CSAM depicting real children, but also complicate victim-identification and rescue operations.
In addition, a recent FBI alert noted an uptick in reports of sextortion, including of minors, using AI-generated images. Bad actors use existing photos (for example, from social media accounts) to generate explicit, sexually themed images (i.e., “deepfakes”) with which to harass and blackmail victims. Reports of other types of scams are also emerging: synthetic voice models are being used to con victims by impersonating real people, such as a relative requesting money.
Given the human-like tone of chatbots, where the line between animate and inanimate blurs, what are the impacts on children’s development – and privacy – when they interact with these systems? Research indicates they may influence children’s perceptions and attributions of intelligence, their cognitive development, and social behaviour – especially during different developmental stages.
When generative AI confidently makes up false information, what impact does this have on children’s understanding and education, especially if they become increasingly reliant on chat-enabled tools? Generative AI has produced dangerous content: Amazon Alexa advised a child to stick a coin in an electrical socket. Snapchat’s AI friend gave inappropriate advice to reporters posing as children. Snap, the creator of Snapchat, has since put in place tools that attempt to detect “non-conforming” language, including references to violence, sexually explicit terms, illicit drug use, child sexual abuse, bullying and hate speech. However, it remains an issue that many generative AI systems are already live and accessible to children while continuing to produce misleading and harmful content or interactions. More broadly, as children interact with generative AI systems and share their personal data in conversation and interactions, what are the implications for children’s privacy?
Modification of children’s behaviour and worldviews – intentionally or not. AI systems already power much of the digital experience, largely in service of business or government interests. Microtargeting used to influence user behaviour can limit and/or heavily influence a child’s worldview, online experience and level of knowledge. As UNICEF has noted, “children are highly susceptible to these techniques which, if used for harmful goals, are unethical and undermine children’s freedom of expression,” freedom of thought and right to privacy. Whereas current algorithms often promote sensationalist content to maximize attention, generative AI chatbots posing as real people could first gain children’s trust and, over time, influence them in more subtle ways for commercial or political gains. The online battleground could thus “shift from attention to intimacy,” according to historian and philosopher Yuval Noah Harari. He is concerned that bad actors could use AI bots to manipulate real users in hidden and persuasive ways.
Another professor, Seth Lazar, warns that we can expect the same challenges with search engines and social media platforms shaping people’s worldview, social interactions and experiences, only now pitched as an “agent that really knows you, and can engage with you in natural language.” Even when there isn’t bad intent, research on the use of generative AI systems to write essays shows that the biases in the models nudge users towards or against certain views in the writing process. The result is that students might be unconsciously pushed to take on certain views when using AI systems to help them learn.
Disruption to the future of work upending education provision today. AI combined with robotics first impacted blue-collar work in factories through automation of predictable and repetitive tasks. In recent years, the scope of labour market exposure to AI systems has widened to some professional spheres. Now it is predicted that the daily work of teachers, telemarketers and professionals in legal services and securities, commodities and investments – amongst others – will see high exposure to generative AI tools. More research and foresight are needed to anticipate the impacts of generative AI in the workplace. Its potential disruption has great relevance for children’s future work life, as well as how and what education is provided to them today.
Aside from the future, work practices in today’s AI supply chain are questionable. The quality of AI systems is dependent on accurately labelled and annotated training data, which is prepared by humans. Gig workers, often based in developing countries, assess video clips for sexual content, rate chatbot responses, identify objects within images, and upload selfies of various facial expressions, amongst other tasks. These jobs are low-paid, stressful, unpredictable, unrecognized and shrouded in secrecy to protect the Big Tech companies that run the AI systems, bringing into question whether they constitute ethical and decent work.
Aggravation of existing inequalities around the digital divide, with some children more at risk from the harmful effects of generative AI and unable to help shape its development and access its benefits. We know that since emerging technologies are not evenly distributed within and between countries, they have the potential to aggravate existing inequalities. Generative AI is no exception. Equitable, inclusive and responsible generative AI needs to cater to the different contexts and developmental stages of all children, especially those from marginalized communities, and be available to every child when beneficial. If children’s data are used to shape generative AI systems, the collection and processing of those data must be done responsibly, with clearly defined purposes, and safely, regardless of where the data come from. This will not happen through market forces alone, but only through regulation and by actively addressing power imbalances and digital exclusion. UN Secretary-General António Guterres believes that guardrails, grounded in human rights, transparency and accountability, are needed to ensure that AI development “benefits all.”
While this list of opportunities and risks is far from exhaustive, it illustrates that the impacts of generative AI will be wide-ranging and demand a pro-active response from those who regulate and develop AI – and related data – systems so that these empower and protect children. It also highlights the critical need for a strong advocacy and education investment to ensure that communities, children, parents, teachers, policymakers and regulators are able to engage in meaningful conversation with tech companies towards child-centred AI.
Children will experience an increasingly high level of exposure to AI systems over the course of their lives, with impacts in childhood having long-term effects. Addressing their needs is critical. Looking further ahead, the way AI is shaped today will have significant bearings on future generations. As noted by Guterres, “present generations have a responsibility to ‘halt and prevent developments that could threaten the survival of future generations … [including] new technologies’”. Concerns around the future of AI-enabled warfare are particularly crucial to address.
As a starting point, existing AI resources provide much direction for responsible AI today. For example, UNICEF’s Policy Guidance on AI for Children has nine requirements to uphold children’s rights in AI policies and practices and the World Economic Forum’s AI for Children toolkit provides advice to tech companies and parents. But advances in generative AI mean existing policies must be interpreted in novel contexts, and new guidance and regulations may need to be developed.
Policymakers, tech companies and others working to protect children and future generations need to act urgently. They should support research on the impacts of generative AI and engage in foresight – including with children – for better anticipatory governance responses. There needs to be greater transparency, responsible development from generative AI providers and advocacy for children’s rights. Global-level efforts to regulate AI, as called for by UN Secretary-General António Guterres, will need the full support of all governments.