This is part of a series exploring care-experienced young people's perspectives on AI in UK children's social care. Read the introduction and overview here.


This article examines how children participate in decisions about their own lives in care—and what care-experienced young people want AI to do differently when information shapes their futures. In this article, you'll explore:

  1. Why participation matters — a care-experienced young person's testimony reveals what's at stake when young people make decisions without understanding their consequences
  2. How children participate in decisions in care — how age, competence, and adult assumptions determine who gets information and who gets heard
  3. AI as a facilitator — how AI tools may be used in care conversations, and the risk scores and automation shaping decisions about children
  4. What care-experienced young people and social workers told me — six key findings from my research on AI's potential and risks in support conversations
  5. What this means for AI product development and social care practitioners — practical steps to build AI that genuinely enables participation rather than replacing it

Why participation matters: Lamar's testimony

In February 2025, Lamar, a 19-year-old care-experienced young person, testified before the House of Commons Education Committee. As an involuntary service user of children's social care, Lamar tried to take control where possible, for example using Maps to navigate unfamiliar environments after placement changes. Without adequate information, however, Lamar made choices without understanding their consequences, which ultimately damaged their sibling relationships and triggered a deterioration in their mental health.

I open this article with Lamar's testimony because it illustrates what happens when children cannot participate meaningfully in decisions affecting their lives. Lamar made a choice with unintended consequences: deciding to stop seeing their parents unexpectedly severed contact with their siblings too. This raises fundamental questions: what determines whether children can participate in care decisions, and how should they be informed? When are they considered competent enough, who decides, and what enables meaningful participation? And as AI systems increasingly inform care decisions, what happens to practitioners' discretion and children's involvement?


The protected child: how children participate in decisions in care

Children in care have a legal right to be heard. Article 12 of the United Nations Convention on the Rights of the Child is unambiguous: no child is too young to have their views taken seriously. There is no minimum age. And yet, studies spanning over a decade reveal a fundamental disconnect—practitioners acknowledge children should be involved, but barriers persist. Age and the risk of causing distress remain the most common reasons given for keeping children out of the room when decisions about their lives are made.

The result is that participation becomes a box to tick rather than something that genuinely changes what happens to a child. Children continue to report feeling powerless, confused about why decisions were made, and unable to find out what is in their own records.

Practitioners prioritise protection over participation, often excluding children from information and decision-making to shield them from distress. Protection becomes the reason not to involve—rather than the reason to find better ways to do it. Despite evidence that participation builds confidence and resilience, and successful examples in councils in the UK, practitioners fear causing further harm. This protectionist approach assumes children can't handle information, rather than asking how information could be made more accessible.

Children's participation remains constrained by adult perceptions of childhood and assumptions about ability, with adults determining not only whether children are consulted but also which subjects are deemed appropriate. Practitioners consistently use age as a proxy for competence, viewing younger children as lacking it. This contradicts children's rights: Section 53 of the Children Act 2004 requires practitioners to ascertain children's wishes and feelings regardless of age.

In practice, many children in care struggle to access information about why they are in care, what processes are involved, and what their rights are. Many leave care without ever fully understanding why they were there. Informed and involved children experience better mental health and feel valued—but care processes operate in crisis mode with severe time constraints, creating fundamental barriers. Time pressures mean practitioners feel unable to adjust their approach to meet individual children's needs. Without resources or guidance for doing things differently, familiar, convenient approaches win out.


AI as a facilitator

Debates around AI decision-making in public services reveal contradictory perspectives. Some argue AI reduces human bias, while critics contend it amplifies existing inequalities—technology applications are not neutral and can inadvertently target marginalised groups due to biases in training data. AI systems remain prone to false positives and negatives, with questionable accuracy, and struggle with contextual comprehension, becoming blind to individual needs that deviate from standard patterns.

What Works for Children's Social Care found that predictive technology failed to identify "four out of every five children at risk"—and yet, despite UK GDPR requiring human oversight for significant automated decisions, Sylvain Duranton's research found that people tend to view algorithms as superior to their own judgment. Virginia Eubanks revealed in Automating Inequality that children's caseworkers questioned themselves when they disagreed with algorithmic scores, risking eventual blind adherence to machine rationality.


What care-experienced young people and social workers told me

About this research and its participants

Through qualitative research — one-to-one interviews and a focus group — I engaged seven care-experienced individuals (aged 19-39), two social workers, and three advocates to understand their experiences, perceptions, and hopes regarding AI tools in care workflows. These participants brought diverse stances to our conversations about AI in children's social care — from pragmatic optimism about administrative efficiency to deep scepticism about whether technology would genuinely benefit young people.

I showed them real screenshots of AI tools being used or trialled in children's social care, from transcription and assessment writing to case record exploration and automated risk scoring.

Then I asked: What do you think? What are your fears? What are your hopes?

All participant names have been changed for privacy. Given the sensitivity of participants' experiences, this research was conducted under ethical approval from University College London, with particular care taken around informed consent, participant wellbeing and the writing of findings.

Individual participant stances:

Sam (care leaver, one-to-one interview) — Saw AI's potential for reducing admin, but strongly emphasised children should not be reduced to "AI summaries". AI should help social workers think long-term and empower young people to understand their rights.

Jessie (care leaver, one-to-one interview) — Viewed AI as useful but emphasised "it's only as good as the person using it" and should enable "better tailored help". Their key concern was young people feeling processed rather than cared for.

Safeguarding SW (social worker, one-to-one interview) — Initially worried AI felt like "an intrusion" on relationship-based practice, but became enthusiastic about its admin potential, emphasising the crucial need for human analysis—the "So what?"

Independent SW (social worker, one-to-one interview) — Thought AI could handle everything, but "shouldn't be the be-all, and end-all", cautioning that social workers still need to source check and apply their own analysis.

Sceptically Suspicious group (care-experienced young people and advocates, focus group) — Were deeply sceptical AI would benefit young people, calling it "completely pointless" and worried it would make young people feel terrible and undermine relationships with social workers.

Cautiously Curious group (care-experienced young people and advocates, focus group) — Were open to AI improving care processes, but highlighted the gap between professionals' focus on time-saving and young people's lived reality of needing individual understanding: "It's about someone's life."

What participants understood about AI:

All participants had used ChatGPT, with some also mentioning Copilot, MidJourney, and transcription tools like Otter. They found AI helpful for 'quick answers' and research. Participants had varying levels of understanding about how AI works — Sam explained Large Language Models and how they mimic language, while others thought of AI as a search tool that synthesises information and talks like a human. All participants perceived AI as intelligent because of its ability to make choices and construct responses to human prompts.

Importantly, all participants were well aware AI makes mistakes. They showed strong caution, mainly because they'd witnessed errors themselves — transcription mishearing words, producing nonsensical text, and being particularly prone to number errors. Participants didn't view mistakes as reason to abandon the technology but recognised the need to check outputs and edit them.

Strong support for time-saving, but time saved must be used well

All participants responded positively to AI transcribing meetings and connecting information across organisations. The Safeguarding SW explained: "That's what I would definitely need. For me to do using AI to transcribe, summarise and the assessment." The Independent SW was enthusiastic: "The amount of time we spend copy and pasting information from one form to another... Having a tool that can do that for you with information that you've already determined. Not AI has determined. That's wonderful, and would be time saving, and that time could then be better spent with children and families doing real social work."

Sam felt "it is beyond past time for this kind of tool to exist" and Jessie saw an opportunity for "one source of truth" that would enable sharing transcripts and assessments so young people could sense-check them.

However, there's a tension embedded in this enthusiasm. Whilst social workers focused on the time they'd gain, care-experienced participants found it harder to immediately see direct benefits for those in care beyond helping practitioners. And the Safeguarding SW raised something that complicates the time-saving narrative entirely: they worried they'd become "conscious about what I say, because I know I'm being recorded"—shifting from genuine relationship-building to performing for a transcript. AI transcription may free up attention in one direction whilst introducing self-consciousness in another. Whether that changes the quality of care depends on how the technology is implemented and how social workers are supported to use it.

AI risk scores mirror existing processes, but some decisions are too grave to automate

Social workers explained that families are already manually screened against threshold levels to determine whether referrals progress. The Safeguarding SW saw AI scoring as similar, whilst the Independent SW agreed screening tools weren't inherently problematic: "it's not a huge leap from a manual screening tool that we would use anyway."

However, all participants recognised that scores depend on how information is described, and that life is ever-changing, not deterministic. Sam was "very concerned about big decisions being made automatically by an AI... because situations are flexible and constantly changing." Jessie felt: "you're dealing with human emotions. You're not dealing with facts... it could be a tool of saying. Yeah, maybe that should be flagged. But ultimately the risk assessment should be the social workers jurisdiction."

The Independent SW emphasised the need to not "be dependent on AI to be screening out at the highest end and at the lowest end, which is going to mean automatic intervention at the highest end, and nothing being done at all at the lowest end, without having a person check that." And they added a point about accountability that cuts through the debate: "when things go wrong, somebody is still going to have been responsible."
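The Independent SW's point can be made concrete. Below is a minimal sketch, in Python, of a screening tool designed the way participants described: it suggests a band but never decides, and every case, at the highest end and the lowest end alike, is routed to a practitioner for review. The thresholds, field names, and bands are hypothetical illustrations, not drawn from any real local-authority system.

```python
from dataclasses import dataclass

# Hypothetical thresholds -- illustrative only, not from any real system.
HIGH_THRESHOLD = 0.8
LOW_THRESHOLD = 0.2

@dataclass
class ScreeningResult:
    score: float
    suggested_band: str          # what the tool suggests
    requires_human_review: bool  # always True: the tool flags, it never decides

def screen_referral(score: float) -> ScreeningResult:
    """Turn a model score into a *suggestion* for a practitioner.

    No band triggers automatic intervention or automatic closure:
    every result is routed to a social worker, reflecting participants'
    insistence that the risk assessment remains the social worker's
    responsibility.
    """
    if score >= HIGH_THRESHOLD:
        band = "suggest-priority-review"
    elif score <= LOW_THRESHOLD:
        band = "suggest-no-further-action"
    else:
        band = "suggest-standard-review"
    return ScreeningResult(score=score, suggested_band=band,
                           requires_human_review=True)
```

The design choice here mirrors the accountability point: because a person reviews every band, responsibility stays with the practitioner rather than drifting to the tool.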

Local authority AI tools are perceived as safer than publicly available ones

All participants had a strong intuitive sense that AI used within a local authority would treat data very differently from publicly available tools. The Independent SW explained that a tool "trialled by a local authority... would give me confidence to think that the tech people have addressed any of those issues." Jessie drew a clear line: "I wouldn't be concerned about the privacy with this kind of tool, because I know that risks exist with ChatGPT, because it has access to the entirety of the Internet. Whereas if I knew that this tool didn't, and it was just kind of fed on just a set information... there's no risk there."

This distinction matters beyond privacy. It shapes how young people and social workers think about consent—if a tool feels bounded and purposeful, rather than a general-purpose internet-connected AI, people are more willing to engage with it and to have honest conversations about its use. Building that trust requires transparency about what data local authority AI tools are trained on, how information is stored, and who has access.

Checking AI outputs is an integral part of the workflow

There was clear expectation from both social workers and young people that checking AI outputs isn't optional—it's a fundamental part of working with the technology. Jessie placed responsibility firmly on the person using the tool: "If I don't go over the Transcript myself and make sure it's got everything correct, then there is a higher chance of getting things wrong", and "AI it's a tool, right?... I would view it as the social worker's responsibility to make sure it's got things right."

The Independent SW was equally clear that human oversight is non-negotiable: "I think for me it doesn't matter which end of the scale it is, or anywhere then on that scale. It's still going to need human intervention to oversee what has been done by AI and check that."

Explaining AI to young people requires progressive, honest transparency—not a one-off disclosure

All participants recognised that explaining AI use to young people isn't straightforward—it requires balancing legal constraints, young people's emotional states, and their varying levels of understanding. Care-experienced participants emphasised that understanding AI is impossible without first understanding the care process itself, which most said was never adequately explained to them.

Sam explained how to position transcription correctly: "a way of getting accurate what was actually said" so that "the social worker can engage more in the conversation." However, the Cautiously Curious worried that most young people wouldn't understand, especially in stressful situations, because of complex language: "they won't be listening, they're wondering where their family is."

The Cautiously Curious noted: "the confidentiality thing that we all say, and so many times... they've not remembered it because they thought different things. And it might take... every single time that you meet." They emphasised Article 12 of the UNCRC: "people should have a choice, we shouldn't be forced upon them... and that choice should be incorporated into what AI can and can't do."

Jessie emphasised "the social worker would have to be more transparent about how they're actually using it [AI], and kind of their method of how they've used it" and that "the child should be able to access everything said about them. Because in a vulnerable situation like that you want to know what's going on with you."

What should be delegated to an AI?

Participants described a clear division of labour between social workers and AI: specific tasks could be delegated to AI, whilst others required human expertise.

The fear: social workers become AI complacent and overreliant

Care-experienced participants feared social workers might rely only on scores without actually seeing the child. Jessie explained: "it's all dependent on the social worker not being lazy, basically... it's not the AI going too far." The Independent SW shared this concern: "if social workers become complacent enough to depend on those judgments."

These fears revolve around AI replicating biases and lacking cultural nuance. The Independent SW reacted: "straight away what's come in my head is how this is going to disadvantage black people." The Cautiously Curious asked: "how do we balance for cultural differences when we're using AI to pick up on elements of risk. How do you tell if it's just a cultural difference, or if it's risk?"

The combination presents a particular danger—overreliance on AI with embedded biases could produce worse decisions, with the added challenge of believing them more accurate than human judgment.


What this means

The dominant narrative around AI in children's social care is one of efficiency: less time on admin, more capacity, faster processing. But that's not what any participant in this research was asking for—not the social workers, and not the care-experienced young people. What everyone described wanting is time for better conversations, the kind that are unhurried, relational, and genuinely focused on the child. The real tension isn't between social workers and young people's expectations of AI. It's between the efficiency narrative that drives AI adoption in public services and what every person in this research—practitioner and care-leaver alike—said they actually needed.

Participants were not opposed to predictive risk scores as an aid—these were welcomed if they helped social workers care for young people. Their concerns centred on complacency and dehumanisation: the risk that AI becomes a convenient substitute for knowing children directly, standardising support rather than enhancing individual care.

What this means for AI product development and care practitioners

AI tools help scrape, connect, and explore information, but social workers must apply their unique expertise.

For designers and developers:

For children's social care practitioners:

For local authorities and policymakers:

The structural conditions that shape how time is actually used—caseloads, underfunding, crisis response culture—are the subject of another article in this series, AI within children's social care systemic constraints—from 'case' to care. One point is critical here: AI integration must foster higher-quality conversations leading to tailored, responsive support—not simply process more children more quickly. Ask: do young people feel heard in conversations with their social workers? Do they understand why decisions are being made? Are their wishes being reflected in their care plans? Those are the right measures of whether AI is working.


Continue exploring this series on AI in UK children's social care and what care-experienced young people actually want: AI as a guiding storyteller in care—uncovering stories and understanding | AI within children's social care systemic constraints—from 'case' to care

References & Further Reading

Children's participation and decision-making

Cossar, J., Brandon, M., & Jordan, P. (2011). Don't make assumptions: Children's and young people's views of the child protection system and messages for change. Office of the Children's Commissioner.

Dillon, J. (2021). 'Wishes and feelings': Misunderstandings and missed opportunities for participation in child protection proceedings. Child & Family Social Work, 26(4), 664-676.

Friel, C., Symonds, J., & Cartwright, M. (2025). Views and experiences of children and social workers about communication in social work exchanges: A mixed methods systematic review. Child & Family Social Work, 1-21.

Grauballe, A. (2025). Facilitating children's perspectives: Three dimensions for customizing participatory methodologies. Child & Family Social Work, 1-13.

Juul, R., Husby, I., Kaalvik, H., & Salkauskiene, I. (2025). Characteristics of the qualitative research on inclusion of the youngest children in child welfare and protection work processes: A qualitative systematic review. Child & Family Social Work.

Muench, K., Diaz, C., & Wright, R. (2017). Children and parent participation in child protection conferences: A study in one English local authority. Child Care in Practice, 23(1), 49-63.

Nolas, S.-M. (2015). Children's Participation, Childhood Publics and Social Change: A Review. Children & Society, 29, 157-167.

O'Keefe, R., Geddes, E., Vincent, S., & Davies, P. (2025). Enabling child-centred case recording in children's social work: The voice of practitioners. Child & Family Social Work.

Race, T., & Frost, N. (2022). Hearing the voice of the child in safeguarding processes: Exploring different voices and competing narratives. Child Abuse Review, 31(6), e2779.

Roy, J., Staines, J., & Stone, B. (2025). Is it a positive or a negative? Children's participation in discharge of care order proceedings. European Journal of Social Work, 28(4), 743-756.

Stoilova, M., Livingstone, S., & Nandagiri, R. (2020). Digital by Default: Children's Capacity to Understand and Manage Online Data and Privacy. Media and Communication, 8(4), 197-207.

Toros, K. (2021). A systematic review of children's participation in child protection decision-making: Tokenistic presence or not? Children & Society, 35, 395-411.

UK Parliament. (2025, February 11). [Children's social care / Education Committee] [Video]. Parliament Live TV.

UNICEF UK. (2019). A summary of the UN Convention on the Rights of the Child.

van Bijleveld, G. G., Bunders-Aelen, J. F. G., & Dedding, C. W. M. (2020). Exploring the essence of enabling child participation within child protection services. Child & Family Social Work, 25, 286-293.

AI, technology, and decision-making

Duranton, S. [TED]. (2020). How humans and AI can work together to create better businesses [Video]. YouTube.

Eubanks, V. (2018). Automating Inequality: How High-Tech Tools Profile, Police and Punish the Poor. St Martin's Press.

ICO. (n.d.). What does the UK GDPR say about automated decision-making and profiling?

What Works for Children's Social Care (WWCSC). (2020). Machine Learning in Children's Services.


Composed with the help of AI, drawing on my dissertation in Sociology of Childhood and Children's Rights (UCL, 2025).