Published: 15 July 2025, 16:53
The Alan Turing Institute has released a report titled “Understanding the Impacts of Generative AI Use on Children,” revealing a critical gap: children’s rights and interests are largely ignored in the development of generative AI systems.
Based on the RITEC framework (developed by UNICEF and LEGO Foundation), the study combines surveys and school-based workshops to assess how AI affects children’s well-being, creativity, and digital literacy.
The survey involved 780 children aged 8–12, their parents, and 1,001 teachers across the UK, providing a representative view of how children interact with generative AI.
About 25% of children reported using AI tools. Of these, 58% used ChatGPT, while others preferred Gemini or Snapchat My AI. Common uses include creating images, playing games, and learning. Twelve-year-olds often seek homework help.
Parents are generally supportive but concerned: 82% worry about harmful content, and 77% about inaccurate responses. This highlights the need for built-in safeguards and content moderation.
Teachers see benefits: 85% noted increased productivity. However, 76% are concerned about reduced critical thinking, and 57% have observed plagiarism. Educational strategies must adapt accordingly.
Workshops with 9–11-year-olds revealed children’s sensitivity to identity, inclusion, and environmental impact. They called for transparent, diverse, and safe algorithms.
Children also raised concerns about AI’s energy consumption, emphasizing the need for sustainable and responsible design.
The report urges developers to consider children as a key user group, improve digital literacy, and ensure equitable access to technology. These steps are vital for building a safe and inclusive digital future.