This is part of a series exploring care-experienced young people's perspectives on AI in UK children's social care. Read the introduction and overview here.


This article examines the structural forces shaping children's experiences in care, and what care-experienced young people fear when AI is positioned as the solution to a system that already makes them feel like burdens. In this article, you'll explore:

  • Why systemic context matters: Lamar's testimony
  • How children's social care's systemic challenges shape care experiences
  • AI within systemic constraints
  • What care-experienced young people and social workers told me
  • What this means for AI product development and care practitioners


Why systemic context matters: Lamar's testimony

In February 2025, Lamar, a 19-year-old care-experienced young person, testified before the House of Commons Education Committee. Their account contributed to a report recommending that young people be consulted to improve services. Their testimony reveals why that recommendation—like so many before it—is urgently needed.

Lamar described a childhood defined by instability: frequent placement changes, learning to navigate four different areas in six months, making uninformed choices with lifelong consequences. Speaking to Parliament, they reflected: "For foster carers and for social workers too, we have all been let down by the system." And despite doing everything asked of them—A-levels, a part-time job, navigating benefits and housing at 18—they described "a pit in my stomach that is absolutely terrified for the future."

I open this article with Lamar's testimony because it reveals why these problems endure despite repeated policy interventions. The systemic challenges Lamar describes—inadequate information, fractured relationships, administrative chaos—persist across decades of reviews, from the Munro Review of Child Protection (2011) to the MacAlister Review (2022). They raise fundamental questions: what structural forces shape care experiences? And as AI is positioned as the solution to social work deficiencies, might it risk exacerbating the problems it aims to solve?


How children's social care's systemic challenges shape care experiences

The House of Commons Education Committee's 2025 report on the state of children's social care in England reiterated systemic challenges documented over the past 20 years, with minimal signs of improvement: increased numbers of children and families needing support; reduced budgets despite rising costs; social worker turnover and lack of training preventing sustained relationships with service users; and detrimental impacts on children's experiences and futures.

Against these pressures, the UK government has positioned technology adoption, particularly AI, as essential for addressing service demand while reducing costs. Organisations pursue automation driven by the promises of efficiency, an ideal that envisions technological systems that operate without error and deliver predictable outcomes. However, critics warn that AI may reproduce bias and inequality, and amplify existing challenges for service users.

The MacAlister Review (2022) highlights a system that prioritises bureaucratic and formal structures over authentic personal relationships and community networks. These processes create significant barriers to children's participation, with time constraints severely impacting trust and relationship-building. When social workers focus solely on collecting evidence for decision-making rather than on building relationships, their practice sits in direct tension with relationship-based values.

Research by O'Keefe and colleagues shows that 56% of practitioners agree case recording takes time away from direct work with children. High caseloads, low resourcing, and administrative demands significantly impact professionals' ability to build meaningful relationships with children. Research has documented that current technology diverts social workers' attention from meeting children's needs to satisfying forms, processes, and inefficient systems—practitioners experience software that is difficult to use and time-consuming, requiring repetitive data entry across systems and preventing timely access to information from other professionals.


AI within systemic constraints

Administrative work, including form-filling and record-keeping, consumes 60-80% of social workers' time and is often conducted outside official working hours. Technology can deliver significant time savings: the MacAlister Review found a 48% reduction, while AI tools like Magic Notes show the potential to save 8 to 10 hours a week.
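For a rough sense of scale, the sketch below combines these figures for a single practitioner's week. The 37-hour week is my illustrative assumption, not a figure from the research, and real savings will depend heavily on how tools are deployed and checked:

```python
# Rough, illustrative arithmetic only: the percentages come from the figures
# quoted above, but the 37-hour working week is a hypothetical assumption.
weekly_hours = 37.0          # assumed full-time week (hypothetical)
admin_shares = (0.60, 0.80)  # 60-80% of time reportedly spent on administration
reduction = 0.48             # the 48% reduction found by the MacAlister Review

for share in admin_shares:
    admin_hours = weekly_hours * share
    saved_hours = admin_hours * reduction
    print(f"{share:.0%} admin -> {admin_hours:.1f} h/week on admin, "
          f"~{saved_hours:.1f} h/week potentially freed")
# 60% admin -> 22.2 h/week on admin, ~10.7 h/week potentially freed
# 80% admin -> 29.6 h/week on admin, ~14.2 h/week potentially freed
```

Even this generous arithmetic lands in the same broad range as the 8 to 10 hours reported for Magic Notes; as participants argue below, the harder question is what any freed time is actually used for.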

Research identifies opportunities in digital communication tools: enhanced accessibility, more direct contact between children and social workers, and increased flexibility through asynchronous communication giving young people choice over when to respond. However, significant challenges exist: not all service users have access to technology; social workers struggle to assess risk through digital channels; and tools risk shifting from relationship-building to surveillance. Practitioners express concern that efficiency gains can come at the cost of the relational work that makes care meaningful.

This tension—between efficiency promise and relational reality—is precisely what care-experienced young people in my research had most to say about.


What care-experienced young people and social workers told me

About this research and its participants

Through qualitative research — one-to-one interviews and a focus group — I engaged seven care-experienced individuals (aged 19-39), two social workers, and three advocates to understand their experiences, perceptions, and hopes regarding AI tools in care workflows. These participants brought diverse stances to our conversations about AI in children's social care — from pragmatic optimism about administrative efficiency to deep scepticism about whether technology would genuinely benefit young people.

I showed them real screenshots of AI tools being used or trialled in children's social care — from transcription and assessment writing to case record exploration and automated risk scoring.

Then I asked: What do you think? What are your fears? What are your hopes?

All participant names have been changed for privacy. Given the sensitivity of participants' experiences, this research was conducted under ethical approval from University College London, with particular care taken around informed consent, participant wellbeing and the writing of findings.

Individual participant stances:

Sam (care leaver, one-to-one interview) — Saw AI's potential for reducing admin, but strongly emphasised children should not be reduced to "AI summaries". AI should help social workers think long-term and empower young people to understand their rights.

Jessie (care leaver, one-to-one interview) — Viewed AI as useful but emphasised "it's only as good as the person using it" and should enable "better tailored help". Their key concern was young people feeling processed rather than cared for.

Safeguarding SW (social worker, one-to-one interview) — Initially worried AI felt like "an intrusion" on relationship-based practice, but became enthusiastic about its admin potential, emphasising the crucial need for human analysis—the "So what?"

Independent SW (social worker, one-to-one interview) — Thought AI could handle everything, but "shouldn't be the be-all, and end-all", cautioning that social workers still need to source check and apply their own analysis.

Sceptically Suspicious group (care-experienced young people and advocates, focus group) — Were deeply sceptical AI would benefit young people, calling it "completely pointless" and worried it would make young people feel terrible and undermine relationships with social workers.

Cautiously Curious group (care-experienced young people and advocates, focus group) — Were open to AI improving care processes, but highlighted the gap between professionals' focus on time-saving and young people's lived reality of needing individual understanding: "It's about someone's life."

What participants understood about AI:

All participants had used ChatGPT, with some also mentioning Copilot, Midjourney, and transcription tools like Otter. They found AI helpful for 'quick answers' and research. Participants had varying levels of understanding about how AI works: Sam explained Large Language Models and how they mimic language, while others thought of AI as a search tool that synthesises information and talks like a human. All participants regarded AI as intelligent because of its ability to make choices and construct responses to human prompts.

Importantly, all participants were well aware that AI makes mistakes. They showed strong caution, mainly because they had witnessed errors themselves: transcriptions that misheard words, outputs that were nonsensical, and a particular proneness to errors with numbers. Participants didn't view mistakes as a reason to abandon the technology, but recognised the need to check and edit outputs.

Values-first: caution around how time-saving will actually be used

The Safeguarding SW initially worried AI felt like "an intrusion" on relationship-based practice: "I put my values around relationship building... using MI technique to build that relationship so that the person felt listened to and understood, and that is the core value of how I feel as a safeguarding social worker."

The Independent SW was clear that time-saving wouldn't be as significant as hoped: "So there will be some time saving but I don't think it's going to be as great as people perhaps think it might be... I couldn't with a clear conscience let a machine do the judgments in isolation... Any time saved should be spent doing direct work with the client always... but I also think we need to be careful around assumptions around time saving, because you are still going to have to go back and read all of that and make sure you know that it matches with your analysis."

Time-saving alone addresses symptoms, not underlying problems. What matters for care-experienced participants is how that time gets used. Despite their caution, all participants saw potential in AI enabling more informed decisions and a more comprehensive understanding.

Care-experienced participants expected AI to be available to them—not just social workers

When it came to accessing transcripts, recordings, and assessments, browsing records, and understanding the regulations and policies affecting their rights, care-experienced individuals wanted the same AI tools as social workers. While those tools benefit professionals, participants saw young people's own access as potentially groundbreaking and empowering: not just a way to make existing systems more efficient, but a means to fundamentally reshape the quality of care and redistribute power.

Jessie imagined: "So this is the kind of tool that would also be accessible to younger people as well, if they want to see records about themselves... Because it's going to help the social worker to do their job. But it's going to help the child to live more comfortably, if they have access to the information about what's going on with them."

Sam saw incredible potential in AI creating equity in navigating policies and understanding rights: "when I was in care there was like a charter for Care leavers... it was like 30 plus pages long... if I had a question that was like, 'I'm going to college. Can I get money for the bus?' I would want AI to be able to say, 'yes, [the council] provides up to 20 pounds a week for bus.'" They described manually searching the Children Act for hours to find one sentence to challenge their local authority.

The Cautiously Curious agreed, acknowledging it would help with "leveling, how to understand your rights, like, it's really complicated if you just go and read through your rights, but if you can go in and ask, like, a certain topic... What rights do I have that would be really, really useful."

How should time saved actually be used?

If AI does save time, participants had clear visions for how it should be spent, none of which involved simply processing more cases. Five priorities emerged, and underneath all of them sat one fear.

The fear: AI reinforces feeling like a case to be processed

Sam described something that shaped everything else in their interview: "my entire time in foster care all I ever heard from workers was how high their caseload was, and that really impacted my mental health... they never had time for meetings, and everything was always very rushed." This constant message about high caseloads didn't just explain rushed meetings—it made young people feel like burdens.

As Jessie named it directly: "I'm not actually being cared about... they're just kind of processing me." The Sceptically Suspicious feared AI would intensify this: "AI taking away human connection... Can take away from the real meaning of what's happening." They worried about risk scoring: "scoring them is going to make them feel horrible about themselves."

And they didn't believe saved time would actually reach young people: "They would be reliant on AI + not want to have connection... because I feel like they'll look at the AI stuff and be like great. We just need this." The Cautiously Curious worried local authorities "would just add more people to their case load"—that AI becomes justification for not hiring more social workers.


What this means

Focusing solely on time saved, though easier to measure, misses what matters most for care-experienced participants: the quality of social workers' practice and young people's lived experiences. Sam's words cut to it clearly: constantly hearing about high caseloads "really impacted my mental health... everything was always very rushed." The problem isn't just efficiency. It's that the pressures of an under-resourced system communicate to children that they are a burden, not a priority.

Crucially, participants fundamentally reframed the question from 'how can AI help social workers?' to 'how can AI empower us to understand our rights and participate meaningfully in our own lives?' Young people didn't just fear AI would make them feel 'like a case' — they explained they already feel this way, and AI risks intensifying it unless intentionally designed otherwise.

The Sceptically Suspicious believed social workers wouldn't actually spend saved time with young people — but would instead take on more cases. This scepticism isn't unfounded pessimism. It reflects young people's lived experience of being deprioritised despite rhetoric about their importance. It challenges the assumption in current AI adoption discourse that benefits will automatically trickle down to young people if social workers save time.

Instead, care-experienced participants in my research call for AI intentionally designed as a shared resource, what Jessie called the difference between "it's your job" and "it's my life." Resisting AI's tendency to deepen dehumanisation requires design that prioritises the quality of caring and being in care over time savings. That means redistributing power: giving young people access to the same tools as social workers, designing AI that sees them as whole people with futures beyond their case files, and measuring success not by how many cases are processed, but by whether young people feel genuinely cared for.

What this means for AI product development and care practitioners

Without addressing underlying systemic constraints—underfunding, high caseloads, crisis response culture—AI risks making social workers more efficient at processing more children rather than supporting them. Measuring success through efficiency metrics alone fundamentally misses the point.

For designers and developers:

  • Design AI as a shared resource: give care-experienced young people access to the same tools as professionals, for example to explore their own records or ask plain-language questions about their rights and entitlements.
  • Keep humans in the loop by design: outputs should invite checking, editing, and the analytical "So what?" rather than standing in for professional judgment.
  • Avoid features, such as automated risk scoring, that reduce young people to scores or summaries rather than whole people with futures beyond their case files.

For children's social care practitioners:

  • Treat any time saved as time for direct, relational work with children, not as capacity for more cases.
  • Check and edit AI outputs against your own analysis; as the Independent SW put it, AI "shouldn't be the be-all, and end-all".
  • Be alert to how tools change what young people feel: being processed efficiently is not the same as being cared for.

For local authorities and policymakers:

  • Do not treat AI-driven efficiency as a substitute for adequate staffing and funding; participants feared saved time would simply mean higher caseloads.
  • Measure success by young people's experience of care, not by throughput or time saved alone.
  • Involve care-experienced young people in commissioning and evaluating AI tools, in line with the Education Committee's recommendation that they be consulted to improve services.

Continue exploring this series on AI in UK children's social care and what care-experienced young people actually want: AI as a guiding storyteller in care—uncovering stories and understanding | AI as a facilitator of information in care—encouraging better support conversations

References & Further Reading

Systemic challenges and policy reviews

  • Booth, R. (2025, January 27). AI prototypes for UK welfare system dropped as officials lament 'false starts'. The Guardian.
  • Education Committee. (2025, July 10). Children's social care (HC 430). House of Commons, UK Parliament.
  • HM Government. (2025, January 13). AI Opportunities Action Plan: Government response (CP 1242). Department for Science, Innovation and Technology.
  • MacAlister, J. (2022). The independent review of children's social care—Final report. The Independent Review of Children's Social Care.
  • Munro, E. (2011). The Munro Review of Child Protection: Final Report—A child-centred system (Cm 8062). Department for Education.
  • Murphy, C. (2023). How learning from the lived experiences of child protection social workers can help us to understand the factors underpinning workforce stability within the English child protection system. Journal of Social Work Practice, 37(2), 263-276.
  • O'Keefe, R., Geddes, E., Vincent, S., & Davies, P. (2025). Enabling child-centred case recording in children's social work: The voice of practitioners. Child & Family Social Work.
  • UK Parliament. (2025, February 11). [Children's social care / Education Committee] [Video]. Parliament Live TV.
  • Winter, K., Cree, V., Hallett, S., Hadfield, M., Ruch, G., Morrison, F., & Holland, S. (2017). Exploring communication between social workers, children and young people. The British Journal of Social Work, 47(5), 1427-1444.

Technology, AI, and participation

  • Beam. (2025, February 3). Magic Notes external evaluation—Key findings [PDF].
  • Duranton, S. (2020). How humans and AI can work together to create better businesses [Video]. TED, via YouTube.
  • Eubanks, V. (2018). Automating inequality: How high-tech tools profile, police, and punish the poor. St Martin's Press.
  • Friel, C., Symonds, J., & Cartwright, M. (2025). Views and experiences of children and social workers about communication in social work exchanges: A mixed methods systematic review. Child & Family Social Work, 1-21.
  • Graeber, D. (2016). The utopia of rules: On technology, stupidity, and the secret joys of bureaucracy. Melville House.
  • Henze-Pedersen, S., & Kirkegaard, S. (2025). Digital communication and child participation in child welfare services: A scoping review of potentials and challenges. European Journal of Social Work, 28(2), 300-312.
  • Juul, R., Husby, I., Kaalvik, H., & Salkauskiene, I. (2025). Characteristics of the qualitative research on inclusion of the youngest children in child welfare and protection work processes: A qualitative systematic review. Child & Family Social Work.
  • Wilkins, D., & Benett, M. (2025). Can AI be a better decision-maker than social workers? British Journal of Social Work.

Composed with the help of AI, drawing on my dissertation in Sociology of Childhood and Children's Rights (UCL, 2025).