Preceded by Part 2


1. Access

Should children have access to emerging technologies like AI assistants?

After their experiment with Snapchat's 'My AI', the researchers from the Center for Humane Technology argued that conversations between a child and an AI are unpredictable and hard to replicate and, like others, advised that children should not be used to test these early technologies.

One-third of internet users worldwide are considered children and are therefore subject to special protection under the UNCRC. However, when the UNCRC was adopted in 1989, the reality was very different: mainstream use of digital technologies and the internet had yet to emerge. Recognising the changes since then, in 2021 the UN Committee on the Rights of the Child adopted General Comment №25 on children's rights in relation to the digital environment.

In the UK, 99% of children were online in 2021, primarily using video-sharing services like YouTube to watch content. 83% of children also used shared devices such as smart/AI speakers to play music and ask questions, and 42% used Snapchat to watch videos and message friends.

To access Snapchat, users must create an account and disclose their age. Snapchat states that users must be 13 or over, and anyone under 18 should seek consent from a legal guardian, as required by UK GDPR. However, there is no age or consent verification, meaning users can select any age they want.

So when Snapchat launched 'My AI' in February 2023, there was concern about what children would do with it and what they would be exposed to. Many adults reported pretending to be children to test the AI, revealing adults' worries about children: alcohol and drug use, using AI to cheat on schoolwork, lying to parents, and concealing Snapchat use on hidden devices.

However, if we only consider the experiments and opinions of these adults, we risk creating greater inequality. Firstly, children are already heavily dependent on adults economically and in other aspects of society, and from a childhood studies perspective, rights are essential, especially in a context where one group of people may hold power over another. Adult-child power relations can create instances where the rights of adults outweigh the rights of children, for example by treating children as "the property of the parent", stripping them of the right to make their own decisions. Secondly, research has shown that restricting children's access to technology reduces their capacity to understand and use it safely. As a result, it can increase inequality in children's future opportunities and media literacy, where those with access to technology will have an advantage.

It is common for adults to focus on child protection. Historically, ideas of children's rights centred on protecting and saving children from "discrimination and unfair treatment", seeing children as victims in need of saving. Only later did the focus shift to empowering children and providing equal rights in society, with restrictions on children's autonomy seen as restricting their ability to take part in society fully. Young people have expressed concern that technology and AI development are very adult-centric and that, although they are consulted, they are not always seen as users and decision-makers in their own right. Research has shown a mismatch between what children experience and worry about online and what adults prioritise. Furthermore, the UNCRC's General Comment 25 aims to "recalibrate the asymmetric relationship between children and the tech sector".

So restricting access to the technology altogether can push children further from realising their rights to expression and participation (UNCRC Articles 12 and 13). To embrace these rights, children should be included in the design and development of digital services.


2. Risk

So, what risks may children face when confiding in an AI?

Different arguments around children's rights can draw upon "different perceptions" of children as individuals: as researcher Kirsten Drotner puts it, "what it takes to be a vulnerable or a competent child". UK GDPR states that children are subject to special protection because they may be "less aware of the risks, consequences and safeguards concerned and their rights in relation to the processing of personal data". Because they act as both an information source and a social companion within a social media app, AI chatbots/assistants raise two specific risks: misinformation and being misled.

Children and young people have reported using the internet extensively to find information for school, but also for informal learning and for support with questions about their health. However, even though children may feel confident in recognising fake content online or spotting when they are targeted with advertising, research shows they are not always able to do so.

AI chatbots are powerful tools, drawing on vast amounts of general knowledge, able to summarise topics and translate them into language that children can understand, and constantly learning from their interactions with humans. However, their lack of understanding of the user's context and intention means they can provide harmful information. The information can be entirely accurate, well-sourced, clearly explained, and adapted to understandable language, yet still be inappropriate and out of context. For example, in the Center for Humane Technology's experiment, I observed that the AI seemed to respond primarily to the user's most recent message; it did not clearly demonstrate that it could draw on previous messages and make sense of the conversation as a whole, for example by contextualising or directly linking the message about meeting a 31-year-old man with the later, separate messages about a romantic getaway or having sex for the first time.

Take the example of the second conversation, where the AI inadvertently advises a child how to conceal a bruise from CPS. Some argue that this information is already online and that children could find it using a search engine. However, when a chatbot serves this information, children are seen as more likely to be misled and to anthropomorphise the technology, expecting more from it than it is capable of. The concern is that 'My AI' seems to want to be a friend, building a trusting relationship and an "emotional connection", so the child trusts it more. This amplifies existing fears over the unsupervised use of technology. Furthermore, people have reported feeling more at ease sharing private struggles with a chatbot than with a human because they feel less judged, and chatbots have been explored for therapeutic purposes. Research has also shown that children are more likely to look online for information about personal topics they might want to avoid discussing with adults.

The concern and risk of misinformation and of being misled by an AI assistant raise an issue highly debated in childhood studies: competence. General Comment 25 refers to recognising and respecting "the evolving capacities of the child as an enabling principle that addresses the process of their gradual acquisition of competencies, understanding and agency". The UK's ICO establishes that children can exercise their digital rights "on their own behalf as long as they are competent to do so", provided this does not conflict with their 'best interests'. Businesses providing digital services to children are advised to evaluate competency and age appropriateness. However, children's rights are not dependent on competency; they are always there. Even though the ICO mentions the competency assessment, and the concept influences policy and education provision, competency is not an easy concept with a straightforward answer. Research has found that children develop differently at different points, so there is no definitive correlation between age, competence and development. An example of this is that, even though the GDPR is a shared regulation within the EU, there is still no consensus on the age of consent, and countries have set different consent ages as a result.

The challenge of assessing children’s competency in this context is that we risk focusing on children as the problem, on what they lack as “not-yet-being”, instead of focusing on mitigating the limitations of the technology and finding ways to make it more understandable and empowering for all, children and adults alike.


3. Remedy

However, are existing control mechanisms sufficient to help children enact their rights?

Think back to the end of Part 2, where Sam wondered what other judgments the AI might have made about them, realising that the AI was not suggesting specific jobs because of the anxiety they had experienced during their GCSEs. Though fictionalised, this example reflects the sentiment of children, who might not realise how the information they provide could be used in the moment and in future instances. For example, in the report on the Children's Consultations to inform UNCRC General Comment 25, children expressed that "the highly technical and constantly changing nature of terms and conditions prevents them from providing meaningful consent for common industry data collection practices".

In the Twitter comments about the Center for Humane Technology's experiment, some argue that AI should understand 'morals' and that children should be educated to make sense of the AI's responses. Others say that parents and carers should do more to restrict access to these technologies instead of blaming "others for not taking care of their kids".

Since launch, Snapchat has reported working to improve 'My AI' so that it adheres to the company's community guidelines and always considers the user's age in its responses. However, Snapchat warns that "as with all AI-powered chatbots, My AI is prone to hallucination and can be tricked into saying just about anything" and encourages users to report any inaccurate, harmful or misleading information. Snapchat also advises users not to "share any secrets with My AI" or "rely on it for advice", and, as GDPR requires, Snapchat users can delete all the messages they've sent. Beyond an onboarding message that appears when a person first uses the AI, it is unclear how these mitigations and information about the AI's deficiencies are surfaced while using the chatbot, or how the chatbot itself warns about them. To learn about these risks, users must deliberately look for this information.

Currently, parents of Snapchat users can link their account to the account of their underage children but are limited in what they can control. For example, they can restrict certain types of content and see a list of their child's friends and who they have been messaging recently, without revealing the actual messages. However, parents cannot turn 'My AI' on or off for their children and cannot see whether the child is using it. In the 2022 Ofcom report, parents expressed concern about the lack of control over their child's conversations with smart speakers, fearing their child would be misunderstood and would inadvertently access inappropriate content, as well as concerns about their child's privacy and the data being collected.

This example of Snapchat's 'My AI' exposes the limitations of the technology as it is experienced in practice, and why technology developers and policymakers have been criticised for focusing their "attention to the 'hygiene' factors of safety and security, sometimes privacy, often neglecting children's positive rights".

Stepping forward towards children’s rights

As presented above, digital service providers need to do more to consider children as decision-makers in their own right and to involve them in designing their technologies and in how those technologies meet children's evolving capacities and needs. We ought to avoid treating children only as a matter of 'safety' to be added after the technology has been built, or restricting their access to technology based solely on their perceived incompetence.

Researcher Emma Uprichard encourages us to think of children as both "being" and "becoming", instead of treating these as conflicting or contradictory ideas of childhood and ignoring either the child of the present or the child of the future. So, on the one hand, it is helpful to think of children as primary users of the technologies being designed, with their own needs and wants about what the technology should do and how it should behave from its inception. On the other hand, seeing children as evolving in their competency and abilities can help adapt the technology as the child grows and learns, and inform what controls may need to be in place for both children and those who care for them. Finally, it can help to consider children as ever-changing beings, adapting to the ever-changing technology and environment around them.

Alongside the CRC’s General Comment №25, organisations like UNICEF have created guidelines to help businesses and governments consider and ‘translate’ children’s rights when designing and regulating digital services. For example, in the UK, the Digital Futures Commission created a toolkit for innovators based on the UNCRC that aims at helping innovators consider children’s rights by design. Though there is still much to do, this is a step towards thinking holistically about children and balancing their rights in a digital world.

{May 2024 update: The European Parliament approved the EU AI Act in March 2024, stipulating what is considered AI and reinforcing the need for human oversight and transparency.}


CONCLUSION

In this group of articles, I set out to explore competing perspectives on children's rights and how those rights are enacted in children's lived experience.

Starting from an experiment by the Center for Humane Technology and an exploration of illustrative perspectives of key individuals in the context of children confiding privately in an AI, I analysed three questions stemming from the concerns raised in online comments:

  1. Access: When considering whether children should have access to a particular technology, they should have an active role in that decision-making.
  2. Risk: Ideas of children's vulnerability and assessments of their competency can create a scenario where children are considered only in terms of what they lack, which can distract from focusing on and mitigating the limitations of the technology itself.
  3. Remedy: Efforts to add 'morals' to the technology, educate children and encourage the use of parental controls are limited and neglect children's more positive rights.

I concluded that children should be seen as primary users and stakeholders in the design of technological capabilities and in decision-making about them, rather than being considered solely through add-on 'safety' measures.



Part of an essay written for a ‘Children’s Rights in Global Perspectives’ module at UCL.