The AI Safety Handbook curates resources for parents, educators, and policymakers navigating children’s use of artificial intelligence.
For Parents & Families
UNICEF — Parenting in the AI Age
UNICEF’s practical guide encourages parents and children to approach AI as co-learners rather than positioning adults as supervisors of technology they may not fully understand themselves. It covers when to start conversations about AI (earlier than most parents think), how to talk about data privacy in age-appropriate ways, and how to recognize signs of over-reliance or emotional attachment to AI tools. Updated in 2026.
Children and Screens
A research institute examining how digital media — including AI — affects child development. Their resource “Youth and Generative AI: A Guide for Parents and Educators” offers evidence-based guidance on AI literacy skills families should develop together, including how AI functions, what risks it poses, how to identify bias, and how to verify information.
For Educators & Schools
Code.org — AI Curriculum
Interactive AI and computer science activities for students of all ages, from the organization behind the Hour of Code movement. Code.org contributed to the development of the OECD-EC AI Literacy Framework and offers classroom-ready activities that balance technical understanding with ethical reasoning.
Policy & Frameworks
Foundational Legal Instruments
The following international instruments form the legal and normative backbone for children’s rights in AI governance. Many of the resources on this page draw directly from these documents.
UN Convention on the Rights of the Child (CRC)
Adopted in 1989, the CRC is the most widely ratified human rights treaty in history (196 States parties). It establishes that in all actions concerning children, the best interests of the child shall be a primary consideration. The CRC is the foundational reference point for virtually all international AI governance instruments that address children.
General Comment No. 25 (2021) on Children’s Rights in Relation to the Digital Environment
Issued by the UN Committee on the Rights of the Child, this General Comment is the authoritative international statement on how the CRC applies in digital contexts. Informed by consultation with over 700 children across 28 countries, it explains States’ obligations regarding children’s privacy, safety, freedom of expression, education, and protection from exploitation in the digital environment.
UN General Assembly Resolution on the Rights of the Child in the Digital Environment (2023)
Adopted by consensus by all 193 UN member states, this resolution represents a global political endorsement of the requirements set out in General Comment No. 25. It calls on States to review national legislation, calls on the private sector to conduct child rights due diligence, and emphasizes data protection and privacy.
UN Global Digital Compact (September 2024)
Adopted as an annex to the Pact for the Future, the Global Digital Compact commits member states to strengthen legal and policy frameworks to protect the rights of the child in the digital space, in line with the CRC. It establishes an Independent International Scientific Panel on AI and a Global Dialogue on AI Governance, and calls on the private sector to apply the UN Guiding Principles on Business and Human Rights.
OECD AI Principles
Adopted in 2019 and subsequently endorsed by the G20, the OECD AI Principles promote AI that is innovative, trustworthy, and respects human rights and democratic values. They provide a policy framework widely referenced by national governments designing AI governance, and inform the OECD-EC AI Literacy Framework listed below.
UN Guiding Principles on Business and Human Rights
The authoritative global standard on the responsibilities of businesses to respect human rights, including children’s rights, across their operations and value chains. These principles underpin the Global Digital Compact’s call on the private sector to conduct human rights due diligence in AI development and deployment.
Children’s Rights and Business Principles
Developed jointly by UNICEF, Save the Children, and the UN Global Compact, these are the first comprehensive set of principles guiding companies on actions to respect and support children’s rights in the workplace, marketplace, and community. Increasingly relevant as AI companies and EdTech providers interact directly with children as users.
International Governance
Although geopolitical competition has complicated international AI governance, a growing body of international law and policy now addresses AI’s impact on children specifically. The United Nations maintains resources on AI ethics, and UNESCO’s Recommendation on the Ethics of AI provides a global ethical framework that includes provisions for education and vulnerable populations.
UNESCO has also published widely referenced AI competency frameworks for teachers and students.
Joint Statement on AI and the Rights of the Child (January 2026)
A landmark document co-led by the ITU, the UN Committee on the Rights of the Child, and UNICEF, and co-signed by more than a dozen UN bodies and international organizations including ILO, UNESCO, and OHCHR.
Developed through a year-long effort involving over 60 institutions and children from all UN regions, the Joint Statement sets out a unified global position on designing, deploying, and governing AI to uphold children’s rights. It translates international law into eleven actionable pillars covering child rights impact assessments, safety-by-design, data protection, and meaningful child participation in AI governance. This is the most comprehensive multilateral commitment on children and AI to date.
UNICEF Guidance on AI and Children 3.0 (December 2025) | Full PDF
Updated to reflect the generative AI era, this guidance offers 10 requirements for child-centered AI, drawing on the Convention on the Rights of the Child. Version 3.0 was informed by a twelve-country study with children and caregivers and addresses emerging concerns including AI companions, AI-generated child sexual abuse material, the environmental impacts of AI systems on children, and AI in armed conflict.
The guidance is structured around three pillars: protection (do no harm), provision (do good), and participation (include all children). A practical implementation checklist is also available.
EU AI Act — Child-Specific Provisions
The EU AI Act, which entered into force in August 2024 and will apply fully from August 2027, is the world’s first comprehensive AI law and includes several provisions directly relevant to children. It bans outright any AI system that exploits age-related vulnerabilities (Article 5), classifies AI systems used in education as high-risk requiring rigorous oversight (Annex III), and mandates watermarking of deepfakes and disclosure when users are interacting with AI. The Act explicitly recognizes children’s rights under the UN Convention on the Rights of the Child and the EU Charter.
Council of Europe Framework Convention on AI and Human Rights, Democracy and the Rule of Law
Opened for signature in September 2024, this is the first legally binding international treaty on AI. It covers the entire AI lifecycle and requires parties to conduct risk and impact assessments, provide transparency and accountability, and ensure access to remedies. The Convention is mindful of the UN Convention on the Rights of the Child and has been signed by the EU, the United States, the UK, Japan, Israel, and others. It complements regional instruments like the EU AI Act and establishes a common baseline in international law for rights-respecting AI governance.
OECD-EC AI Literacy Framework for Primary & Secondary Education
Released in May 2025 by the OECD and European Commission, with support from Code.org and an international expert group, this framework defines global AI literacy standards for school-aged children. It is structured around four core domains — Engage with AI, Create with AI, Manage AI, and Design AI — and sets benchmarks for policy, curriculum, teaching, and assessment. This is arguably the most comprehensive international framework for K-12 AI literacy to date.
National Approaches
Both the United States and China are actively shaping AI education policy, though through different political and economic lenses. The April 2025 U.S. Executive Order on AI Education directs federal agencies to promote AI literacy from kindergarten through postsecondary education and to establish public-private partnerships for K-12 AI resources. China has integrated AI education into its national curriculum from primary school, while approaching AI governance through a regulatory framework that balances innovation with state oversight.
The European Union’s approach remains comparatively cautious, embedding AI education within its broader AI Act framework. Whether the EU approach will influence other states — as GDPR did for data privacy — remains to be seen.
Japan’s Ministry of Education (MEXT) has introduced AI and data science into its curriculum guidelines, with an emphasis on “Society 5.0” readiness, though implementation varies significantly across schools and regions.
Research & Academic Resources
ACM ICER 2025 — Integrative Review of AI Literacy
An academic review synthesizing 124 studies on AI literacy published since 2020. It identifies three ways to conceptualize AI literacy (functional, critical, and indirectly beneficial) and three perspectives on AI (technical detail, tool, and sociocultural), forming a useful matrix for anyone designing or evaluating AI education programs.
The Alan Turing Institute — Children and AI Programme
The UK’s national institute for data science and AI runs a dedicated Children and AI research programme that bridges theoretical considerations with empirical research into children’s actual experiences with AI.
In collaboration with Children’s Parliament and the Scottish AI Alliance, the team has engaged children aged 7-12 across Scotland in rights-based workshops exploring generative AI. Their research is funded by the LEGO Group and has produced reports on generative AI’s impact on children’s wellbeing, free classroom resources for primary schools, and policy recommendations. A notable finding: only 11 of 87 children surveyed were confident they understood what AI is and how it appears in their daily lives.
Digital Futures for Children / EU Kids Online — RIGHTS.AI Project
The Digital Futures for Children centre (based at LSE) and the EU Kids Online research network are conducting the RIGHTS.AI project, a child-rights-focused study exploring children’s experiences with generative AI across 20 European countries and four Global South nations (including Kenya, Brazil, and India). Their February 2026 report, published for Safer Internet Day, draws on survey responses from over 25,000 children and qualitative interviews with 244 young people aged 13-17. It provides the first large-scale comparative evidence of how children access, use, and understand generative AI across Europe, highlighting both emerging opportunities and growing concerns about safety, inequality, and children’s rights.
Influencers & Voices
The following individuals regularly share the latest academic literature and trends in AI safety, AI literacy, and children’s AI education. They engage with questions from educators and practitioners, and often host public events that are free to attend.
Ethan Mollick — Professor at Wharton studying AI, innovation, and startups; focused on democratizing education through technology.
Fengchun Miao — UNESCO HQ; AI and education; PhD and professor.
Amanda Bickerstaff — Educator; founder of AI for Education; keynote speaker; researcher; LinkedIn Top Voice in Education.
Dr. Nina Vasan — Founder and director of the Stanford Brainstorm Lab for Mental Health Innovation; leading researcher on the psychological effects of AI companions on adolescents.
This page is maintained by the AI Safety Handbook team. Resources are reviewed periodically for accuracy and relevance. If you know of a resource that should be included, contact us.
Last updated: March 2026
