Automated decision-making within a service is when the process of considering, selecting and executing an outcome is carried out without the review or reasoning of a human being. These decisions can be convenient, like when your travel card runs out of money and automatically tops itself up from your bank account. But they can also be frustrating, for example when the price of your flights increases on a website for no apparent reason.
“The data subject shall have the right not to be subject to a decision based solely on automated processing, including profiling, which produces legal effects concerning him or her or similarly affects him or her”
Article 22(1)
What does the UK GDPR say about automated decision-making and profiling?
Restrictions under UK GDPR
Automated decisions can have significant effects on people, legal or otherwise, so services that rely on them are subject to specific restrictions. The UK GDPR defines an individual's rights over how organisations collect and process their data, and requires that people are informed of the purposes their data is used for. Ultimately, this means people can move their data to another service provider (e.g. their bank transactions) and request a copy of all of their data, or its deletion.
Furthermore, this specific provision around automated decision-making protects how personal data is used to make life-changing decisions about an individual, for example using machine learning to identify children at risk and remove them from harm.
So why does it matter, and what should we consider when designing services that rely on automated decision-making or are informed by some form of automated decision-making?
Many processes can benefit from the efficiency of automation. Still, it’s precisely the outcome, the decision being made, that concerns the UK GDPR. When a decision significantly affects a person, the outputs of any automated processing need to be reviewed by a human before they affect the individual. This process of human deliberation can be considered across three stages of intervention:
- Recipe — the moment of determining how decisions are made and the reasoning behind them.
What data is used to make the decision? Who’s involved in deciding what data is used and how? How are inferences agreed on and tested, and how confident are we about them? When is the recipe for automated decision-making finalised and reviewed?
For a machine to reach a decision, it needs to understand the underlying rules, policies or social norms at play: the context in which the decision is being made, what counts as a right or wrong decision, and which parameters to take into consideration. For example, should the machine make a decision based on a history of decisions made previously by humans? Are we confident those are the norms we want to project into our future? And if not, how do we decide what is?
- Interaction — the moment when a machine advises a human on a decision.
What aspects of the decision are visible to the human? How is this decision communicated to the human, and how can the human query/inspect the decision? Who is/are these humans?
Even if the decision is not automatically acted on, its existence can have a detrimental effect on the individual. So the way the system displays this information and allows a human to inspect it is as important as the decision itself, as is deciding what can and should be done with that information. It reminds me of an example Kate Crawford shared in her talk about the Politics of AI at The Royal Society. In the 1930s, the German government purchased Hollerith machines from IBM and used them “to do racial registries of the population, marking out everybody who was Jewish or Roma and other ethnic groups.” René Carmille, responsible for the Hollerith machines in occupied France, reprogrammed them so that they would never record Jewish identity, saving countless lives as a result.
- Resolution — the moment a human disagrees with a decision made by a machine.
What is the nature of the process of disagreeing? Who is involved in this process and has the ultimate decision power? How does this process take place? When and what do machines and humans learn from this?
We tend to trust machines, intelligent machines, more than humans. So, how do we avoid making this process feel like a dispute between humans and machines? For example, is the human free to take or leave the machine’s advice, or, if they disagree, must they go through a formal process of disputing and amending the machine’s decision?
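To make the three stages concrete, here is a minimal, hypothetical sketch in Python of how they might shape a service’s architecture. All the names here (`Decision`, `Impact`, `route_decision`) are illustrative assumptions, not drawn from the UK GDPR or any real framework: the recipe is recorded as a rationale on each decision, decisions with significant effects are routed through human interaction before execution, and any resolution where the human overrides the machine is logged so the recipe can be revisited.

```python
from dataclasses import dataclass, field
from enum import Enum

class Impact(Enum):
    LOW = "low"                   # e.g. auto top-up of a travel card
    SIGNIFICANT = "significant"   # legal or similarly significant effects

@dataclass
class Decision:
    subject_id: str
    outcome: str
    impact: Impact
    rationale: str   # Recipe: record why this outcome was reached

@dataclass
class ReviewLog:
    overrides: list = field(default_factory=list)

def route_decision(decision, human_review, log):
    """Interaction: significant decisions go to a human before execution."""
    if decision.impact is Impact.LOW:
        return decision.outcome   # safe to execute automatically
    reviewed = human_review(decision)   # human sees outcome and rationale
    if reviewed != decision.outcome:
        # Resolution: record the disagreement so the recipe can be re-examined
        log.overrides.append((decision.subject_id, decision.outcome, reviewed))
    return reviewed

log = ReviewLog()
top_up = Decision("u1", "approve", Impact.LOW, "balance below threshold")
route_decision(top_up, lambda d: "reject", log)   # executes automatically

loan = Decision("u2", "reject", Impact.SIGNIFICANT, "low score")
final = route_decision(loan, lambda d: "approve", log)   # human overrides
```

The design choice worth noting is that the override log is a first-class output: it is what lets the humans who own the recipe learn where the machine’s norms and theirs diverge.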
I find that considering these three stages helps me be more thoughtful about the automated decisions in the services I design.