THE POLICY
In the EU and UK, many governments have been through a process of optimising the way they provide services to the public, through digitalisation and the use of technology. This is driven both by criticism of lengthy and bureaucratic ways of interacting with public services and by a need to reduce costs and provide more while spending less. For example, the UK’s Government Digital Service aims to “make digital government simpler, clearer and faster for everyone. Good digital services are better for users, and cheaper for the taxpayer.”
But technological evolution and digitalisation haven’t happened just within governments; they have reached all other aspects of life, through social media, digital banking, ride-hailing services, and everything in between. This led the EU to reconsider its regulations and to protect the rights of people both as citizens and as consumers. The General Data Protection Regulation (GDPR) was adopted by the EU in 2016, with a transition period until 2018, when it came fully into force. Although the UK has since left the EU, it retained the regulation in domestic law as the UK GDPR. GDPR protects the personal data rights of individuals when using services online, e.g. apps, software, and social media. The UK GDPR defines an individual’s rights regarding how and why organisations collect and process their data.
The previous legislation, the 1995 Data Protection Directive, was seen as outdated because it had been created prior to mainstream public use of the Internet. GDPR was created to address “new challenges” arising from internet use and technological development, where people can now share a high amount of information with each other and with businesses, at a scale never seen before. The EU recognised the risks to personal privacy in this information sharing, acknowledging how difficult it may be to regulate and inspect these digital processes to ensure people’s rights are being met (Opinion on EC Communication ‘A comprehensive approach on personal data protection in EU’, 2010).
Around the time the GDPR was introduced and implemented, public visibility around data privacy increased. Many large organisations made the news for mishandling personal data. This increased the pressure on policymakers to enforce the regulation and on businesses to implement it, and led to fines for organisations that mishandled personal data.
Some have criticised the regulation as overly cautious and therefore a hindrance to innovation, while others celebrate it as forward-thinking and one of the first and most distinctive regulations of its kind in the world.
One provision that could be seen to limit innovation is the UK GDPR’s right not to be subject to legal decisions that are solely automated, because it constrains the types of products and services businesses can provide.
On automated decision-making
Automated decision-making is when the process of considering, selecting and executing an outcome is carried out by a machine without the review or reasoning of a human being. These decisions can be convenient, e.g. topping up a travel card automatically, or very annoying, e.g. flight price increases depending on demand. Nonetheless, automated decisions can have highly impactful effects, legal or otherwise, so they are subject to specific restrictions under the UK GDPR.
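To make this concrete, here is a minimal, hypothetical sketch in Python (not drawn from any real system; the function name and amounts are invented for illustration) of a fully automated decision: a travel card that tops itself up when the balance runs low, with no human reviewing the outcome before it takes effect.

```python
# Hypothetical illustration of a fully automated decision: no human
# reviews or approves the outcome before it takes effect.

def auto_top_up(balance_pence: int,
                threshold_pence: int = 500,
                top_up_pence: int = 2000) -> int:
    """Top up a travel card automatically when the balance runs low."""
    if balance_pence < threshold_pence:
        # In a real system this would trigger a payment; here we simply
        # return the new balance to keep the sketch self-contained.
        return balance_pence + top_up_pence
    return balance_pence

print(auto_top_up(320))   # 2320: the machine decided to charge and top up
print(auto_top_up(1200))  # 1200: no top-up needed
```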
The policy governs how personal data is used to make life-changing decisions about an individual, for example, using machine learning to identify children at risk, and it requires a human to be involved in making that decision.
Despite not being legally binding on UK businesses, some of the EU guidance is still quoted and recommended by the UK’s ICO, as is the case with guidelines for identifying which decisions can be fully automated. Decisions can be fully automated when, for example, they are required or mandated by law, or when they are made by a third party and used for a contractual reason, for example, the use of a credit score in a loan application.
If we think back to the example of Little Britain, Carol had already reduced the process of taking in a patient to a rigid routine and could have been replaced entirely by a computer. But what types of technologies does this policy relate to? What are the opportunities that arise from these technologies? What are the issues and risks that the policy hopes to address? And what might Weber’s ideal of bureaucracy help us consider in this context?
THE MACHINE
“Today, we have ceded much of that decision-making power to sophisticated machines. Automated eligibility systems, ranking algorithms, and predictive risk models control which neighbourhoods get policed, which families attain needed resources, who is short-listed for employment, and who is investigated for fraud.” (Virginia Eubanks, Automating Inequality: How High-Tech Tools Profile, Police and Punish the Poor)
Many processes can benefit from the efficiency of automation, and it’s easy to imagine how the procedure followed by Carol, the receptionist from Little Britain, could be replaced by a computer. Still, it’s precisely the outcome, the decision being made, that concerns the UK GDPR. When a decision dramatically impacts a person, then the outputs of any automated processing need to be reviewed by a human before they affect the individual.
An opportunity to reduce the uncertainty of humans
“There is a broad body of literature that would suggest that humans are not particularly good crystal balls. Instead, what we are saying is, let’s train an algorithm to identify which of those children fit a profile where the long-arc risk would suggest future system involvement.” (Emily Putnam-Hornstein, Director, Children’s Data Network)
Ritzer argues that people will always act in unpredictable ways, and so the rationalisation and formalisation of processes are a way to overcome this unpredictability and reduce the risk of unknown results. Replacing humans with machines may therefore seem like the most rational next step.
In an aim to reduce the wastefulness of their bureaucratic processes, companies and governments are attracted by the promise of efficiency afforded by automation and artificial intelligence (AI). Technology is presented as the means to achieve the ultimate efficiency promised by Weber’s bureaucracy: automation that functions without error and produces the most accurate and predictable results. However, some warn that AI and automation work just like a bureaucracy and can lead to similar limitations and amplified harmful effects.
Just like with Weber’s bureaucracy, today’s conversation around technology focuses on how to make it faster and more efficient, as opposed to discussing what outcome we want and how it could be better achieved. And technology, in the way it is applied, is not neutral: it carries behind it the interests of those making a profit from it.
This technology presents risks and limitations:
- targeting and higher surveillance of certain disadvantaged or marginalised groups of people;
- augmentation of historical bias and inequalities already present in the decisions and processes it is meant to automate;
- blindness to the specific needs of individuals who deviate from the norm;
- the collection of the data itself, and how it is standardised, can have a negative impact on the result produced;
- the technology produces many false positives and false negatives, so its accuracy is still in question.
Furthermore, for a machine to reach a decision, it needs to understand the underlying rules, policies or social norms at play. It needs to understand the context in which the decision is being made, what the right decision is, what the wrong decision is, and which parameters to take into consideration or not. However, it is hard for machines to understand context, and this can lead to unintended consequences. For example, should the machine make a decision based on a history of decisions made previously by humans? Are we confident those are the norms we want to project into our future? And if not, how do we decide what they should be?
Automation can lead to unintended consequences
In 2015 Virginia Eubanks studied the impacts of technology being used in US public services to automate decision-making or aid it in some form. One of the systems she looked into was a risk model that aimed to identify children at risk of abuse and neglect. Launched in 2016, the Allegheny Family Screening Tool looks through a collection of diverse administrative records, across departments, to inform a decision. The tool was implemented as a way to manage the high number of referrals received for children at risk.
Because it looks through past data on a family’s interactions with public services, and that data is kept forever, it has discouraged some parents from seeking help when they most need it. Parents are afraid that using public services, such as mental health services, might mark them out, and that the automated system will make it more likely that their children will be removed from them. But the poorest, who cannot afford to pay for alternative services, have no other option. Hence it increases discrimination against the poor, as opposed to others who might not have any traceable data in the system.
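The details of the AFST’s model are not public in full, so the following Python sketch uses a made-up scoring rule purely to illustrate the dynamic described above: because every recorded interaction with public services adds to a family’s administrative history, families who seek help can score higher than families with no traceable data, all else being equal.

```python
# Toy illustration (NOT the real Allegheny Family Screening Tool): a
# made-up scoring rule in which every recorded interaction with public
# services adds weight, so families with longer administrative histories
# score higher even when nothing adverse has happened.

from dataclasses import dataclass, field
from typing import List

@dataclass
class FamilyRecord:
    referrals: int = 0                                          # past referrals into the system
    service_contacts: List[str] = field(default_factory=list)  # e.g. "mental_health"

def toy_risk_score(record: FamilyRecord) -> int:
    """Return a score clipped to the 1-20 scale used by the real tool."""
    score = 1
    score += 3 * record.referrals               # assumed weight, for illustration only
    score += 2 * len(record.service_contacts)   # assumed weight, for illustration only
    return min(score, 20)

# A family that sought help (and so left a data trail) scores higher than
# one with no traceable records, even in otherwise identical circumstances.
helped = FamilyRecord(referrals=2, service_contacts=["mental_health", "housing_support"])
untraced = FamilyRecord()
print(toy_risk_score(helped))    # 11
print(toy_risk_score(untraced))  # 1
```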
Just like with bureaucracy, the need to make the technology better might distract from looking at what goal we are actually trying to achieve and the negative consequences of its current use.
Faced with this uncertainty and the limitations of the technology, the UK GDPR’s policy requires a human to be involved in all legal or highly impactful decisions about an individual. Is this enough to address the limitations of the technology? And what other social implications might arise?
THE HUMAN-IN-THE-LOOP
Many have suggested that humans should be placed in the loop of powerful technologies. However, this human is placed in a context where the algorithm is perceived as superior in many ways to human judgement, and people tend to trust intelligent machines more than they trust other humans.
In her study, Eubanks found that the caseworkers who use the Allegheny Family Screening Tool mostly work in cooperation with it. When caseworkers get a referral, they do their own research to understand whether the case should be flagged for investigation. At the end of that research, and after making a recommendation, they run the algorithm. The system produces a score between 1 and 20, visualised on a graph similar to a thermometer, from green to red. Caseworkers feel compelled to compare their response to the response of the algorithm and, when there are large discrepancies, look for ways to understand what has happened. Did they miss or overlook some information? Naturally, they hope to learn from the model because they feel it may be more accurate and less fallible than they are. Will this eventually lead them to ignore their own judgement and blindly follow the rationalisation of the machine, ignoring negative outcomes, just as David Graeber warned?
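Eubanks describes this workflow in prose; the sketch below is a hypothetical rendering of it in Python (the threshold, the discrepancy rule and the function name are assumptions, not details of the real tool), showing how a caseworker’s own recommendation is recorded first and then compared with the model’s 1–20 score, with large gaps prompting a second look.

```python
# Hypothetical sketch of the human-in-the-loop workflow described above.
# The caseworker records their own recommendation BEFORE running the model,
# then compares it with the 1-20 score; a large gap prompts a second look.
# The threshold and discrepancy rule are assumptions, for illustration only.

def screen_referral(caseworker_recommends_investigation: bool,
                    model_score: int,
                    high_risk_threshold: int = 15,
                    discrepancy_threshold: int = 8) -> dict:
    model_recommends = model_score >= high_risk_threshold
    # Map the caseworker's yes/no view onto the same 1-20 scale purely so
    # that a numeric gap between the two judgements can be reported.
    caseworker_as_score = 20 if caseworker_recommends_investigation else 1
    gap = abs(caseworker_as_score - model_score)
    return {
        "caseworker_recommends": caseworker_recommends_investigation,
        "model_recommends": model_recommends,
        "model_score": model_score,
        "needs_second_look": gap >= discrepancy_threshold,  # "did I miss something?"
    }

# A caseworker who saw no grounds for investigation, faced with a score of 17,
# is nudged to re-examine their own judgement rather than the model's.
print(screen_referral(caseworker_recommends_investigation=False, model_score=17))
```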
Furthermore, caseworkers have realised that families who have been referred to the system more times, or who have been recorded in it for longer, are more likely to be flagged by the system and to receive a higher score. There is a risk that once the algorithm’s behaviour is learnt, it can be misused and manipulated to produce certain outcomes. For example, this can affect how social workers record information about families, in order to steer the algorithm towards an intended result.
Simply placing a human in the loop might not address the limitations of the technologies, because of the way people may perceive the technology’s judgement as superior. So we ought to consider how humans and technologies can work together and influence each other in ways that create positive outcomes for people using public services. Many of the questions and critiques of Weber’s bureaucracy apply in this context and can be helpful in raising questions and in understanding social implications and relationships. What happens in these moments of disagreement between humans and machines? How much discretion should the human or the machine be afforded? Where and with whom does the accountability for errors lie? What are the qualities of the process of disputing and amending or altering the machine’s decision? And what forces and values might influence the motivations of the humans working with the machines?
CONCLUSION
Weber’s ideal of bureaucracy was a way of rationalising the functioning of an office, in order to coordinate large numbers of people and make the organisation work efficiently. Hierarchy, rules and procedures, and specialised tasks and workers were some of the key characteristics that made bureaucracy work. However, in the pursuit of efficiency and the removal of the uncertainty of human emotion, the bureaucracy became impersonal and dehumanised, blind to the specific needs of individuals, which led to unintended consequences.
Replacing humans with full technological automation may seem like the next stage of bureaucracy. However, the current technology risks augmenting inequality and reinforcing the bias that exists in the historic decisions it learns from, disproportionately harming some groups of people.
The UK GDPR gives people the right not to be subject to legal decisions that are solely automated, requiring a human to be part of these types of decisions. But considering the effects of the structures these humans may be integrated into, and how machines are perceived to have more accurate and rational judgement, this might affect how freely these humans make decisions. So it is important to consider how humans are placed in the loop, and what may affect their motivations and their ability to decide against a machine.
REFERENCES
- Classical and contemporary sociological theory: Text and readings [Book, 2016]
- Three years of GDPR: the biggest fines so far [Article, 2021]
- Cambridge Dictionary: ‘bureaucracy’ [Dictionary entry, n.d.]
- Machine Learning in Children’s Services: Does it work? [Report, 2020]
- How humans and AI can work together to create better businesses | Sylvain Duranton [YouTube, 2020]
- The History of the General Data Protection Regulation
- Opinion on EC Communication ‘A comprehensive approach on personal data protection in EU’ [2010]
- Automating Inequality: How High-Tech Tools Profile, Police and Punish the Poor [Book, 2018]
- A GDS Story [Article]
- The utopia of rules: On technology, stupidity, and the secret joys of bureaucracy [Book, 2016]
- DAY II: David Graeber — Re-Thinking Resistance: Smashing Bureaucracies and Classes (2017) [YouTube Video, 2017]
- The 15 biggest data breaches of the 21st century [Article, 2022]
- What does the UK GDPR say about automated decision-making and profiling?
- When can we carry out this type of processing?
- The Trials of Gabriel Fernandez [Netflix Series, 2020]
- How Cambridge Analytica Sparked the Great Privacy Awakening [Article, 2019]
- Little Britain USA: Episode #1.1 (Season 1, Episode 1) [TV series, 2008]
- How Data Protection Regulation Affects Startup Innovation [Paper, 2019]
- Maybe it is time to rediscover bureaucracy [Article, 2006]
- The ‘McDonaldization’ of Society [Book, 1983]
- Review of The McDonaldization of Society: 20th Anniversary Edition, by George Ritzer [2013]
- 141 | The Problem with Political Leaders [Podcast, 2019]
- The banality of evil on trial, Future perspectives on international criminal justice [Paper, 2009]
- From Max Weber: Essays in Sociology [Book, 2009]
- Episode 144 … Max Weber — Iron Cage [Podcast, 2020]
- The Age of Surveillance Capitalism: The Fight for a Human Future at the New Frontier of Power [Book, 2019]
Part of an essay written for a ‘Social Theory and the Study of Contemporary Social Problems’ module at UCL.