← Preceded by Part 1 — The ideal machine
Before moving on to analyse the ‘automated decision-making policy’, I will pause here to take a deeper look at a sketch from the TV series ‘Little Britain’. The sketch clearly illustrates the main topics I will explore in the next Part: the characteristics and risks of Weber’s bureaucracy, and the limitations of technology.
In this sketch, a child and their carer arrive at a hospital for an appointment, scheduled on a previous occasion, to remove the child’s tonsils. The pair interact with the receptionist, Carol, who promptly asks a series of questions provided by her computer and informs them that the child is instead scheduled for a hip replacement.
I wonder whether what Weber meant by “mechanized petrification embellished with a sort of convulsive self-importance” is captured by this Little Britain sketch, and by the bureaucracy Carol is part of.
- Diligently following the questions on the computer: Carol, the receptionist, acts as an intermediary between the computer system and the patients; she doesn’t even start talking before tapping on her computer first. It’s clear that Carol has a set procedure and rules that she follows without deviation, regardless of who walks through the door. This ensures that Carol, and any other receptionist in the hospital, delivers exactly the same questions. At first sight, these actions seem to lead to equal treatment, but what might be the unintended consequences? How might this dehumanise a patient’s experience and lead to negative outcomes?
- Showing no emotion or sympathy for the specific needs of this patient: Carol doesn’t adapt the way she asks questions to how a 5-year-old may interpret the world. When she asks, “Where do you live?” and the little girl responds, “with mummy and daddy”, Carol acts bothered by the child’s incompetence in answering the question. So instead of blaming the system, the procedure it follows, and the way its questions are asked, Carol blames the user. Carol is also unable to see the absurdity of the system saying the child is booked for a hip replacement, even though what she needs is to have her tonsils removed.
- Willingness and motivation to help: When the carer says, “there must be some kind of mistake”, Carol replies, “computer says no”, and goes on to give the same reply to all of the carer’s pleas. When asked to help, perhaps to “speak to someone” about this scheduling error, Carol shrugs her shoulders, implying that she could, but that it wouldn’t change much. Maybe Carol knows that she might be blamed for not following the rules, and so she sticks to them dutifully. Maybe she would get into trouble, and lose her job, if she questioned the system. So what in this organisation motivates Carol to act this way, even in the face of ridicule and when the ultimate goal of helping the patient is being compromised? What aspects of Carol’s relationship with both her computer and her employer are at play? What forms of structural violence or stupidity may be happening?
- Acting like a machine, Carol can eventually be replaced by a robot: Ritzer argues that “when human robots are found, mechanical robots cannot be far behind.” If Carol is not afforded the ability to think creatively about how to solve the problems that arise from the uncertainty and changes of real life, then she might as well be a computer. We can imagine that, in trying to reduce discrimination and cater for large numbers of users, the hospital rationalised and formalised its reception process into a set of questions and steps that everyone ought to follow. But in making its process more efficient, the hospital may have forgotten about its interactions with the realities of its patients and an outside world that is not in its control. So how might the hospital consider Carol’s evolving role in this interaction as the computer replaces her? How might we weigh the implications of Carol’s humane characteristics against the computer’s calculated precision? In what ways will they influence each other?
- Reviewing Carol, not the outcome: Eventually, the child’s carer gives up and starts to leave. Carol promptly asks them to rate ‘her helpfulness’ on a set scale. I wonder how this rating motivates Carol to do her job in the way that she does. She did everything the computer and the organisation asked of her, so in Weber’s ideal bureaucracy, Carol would have completed her task successfully. There are two important considerations here. Firstly, the focus of the rating is on Carol and her performance. Doesn’t that mean Carol is the only one to blame if something goes wrong? The individual is thus blamed for systemic or social challenges. Secondly, it shows how creating an impersonal experience leads to a state of dehumanisation in which the goal of the service is not achieved: in this case, the patient did not have their tonsils removed, which may ultimately have detrimental consequences for their health.
It’s true that this sketch is not the account of a real interaction between a patient and a hospital. However, it is a caricature that allows us to illustrate and interpret the social situations people may experience when interacting with public services, helping us identify what we need to address so that the service truly achieves its aim: helping patients, not serving itself.
→ Continues in Part 3 — The ‘human-in-the-loop’.
Part of an essay written for a ‘Social Theory and the Study of Contemporary Social Problems’ module at UCL.