When FAT fails: Why we need better process infrastructure in public services.

I’ve been thinking about FAT, and the explainability of decision making.

There may be few decisions about people made at scale in the public sector today in which computer-stored data aren’t used. For some, computers are used to make, or to help make, the decisions.

How we understand those decisions is a vital part of the obligation of fairness in data processing. How I know that *you* have data about me, and are processing it, in order to make a decision that affects me. An awful lot of good comes out of that. The staff member does their job with better understanding. The person affected has an opportunity to question and, if necessary, correct the inputs to the decision. And, one hopes, the computer support can make many decisions faster, and with more information used in useful ways, than the human staff member alone.

But why, then, does it seem so hard to get this understood, and to put processes in place that make the decision-making understandable?

And more importantly, why does there seem to be no consistency in how such decision-making is documented and communicated?

From school progress measures, to PIP and Universal Credit applications, to predictive ‘risk scores’ for identifying gang membership and child abuse. In a world where you need to be computer literate but there may be no computer to help you make an application, the computers behind the scenes are making millions of life-changing decisions.

We cannot see them happen, and often don’t see the data that goes into them. From start to finish, it is a hidden process.

The current focus on FAT — the fairness, accountability, and transparency of algorithmic systems — often makes accountability for the computer part of public sector decision-making appear to be something that has become too hard to solve, and that needs complex thinking to work around.

I want conversations to go back to something simpler: humans taking responsibility for their actions. And to do that, we need better infrastructure for whole-process delivery in public services, wherever it involves decision-making.

Academics, boards, and conferences are all spending time on how to make the impact of algorithms fair, accountable, and transparent. But in the search for ways to explain legal and ethical models of fairness, and to explain the mathematics and logic behind algorithmic systems and machine learning, we’ve lost sight of why anyone needs to know. Who cares, and why?

People need to get redress when things go wrong or appear to be wrong. If things work, the public at large generally need not know why.  Take TOEIC. The way the Home Office has treated these students makes a mockery of the British justice system. And the impact has been devastating. Yet there is no mechanism for redress and no one in government has taken responsibility for its failures.

That’s a policy decision taken by people.

Routes for redress on decisions today are often about failed policy and processes. They are costly and inaccessible, such as fighting Local Authorities’ decisions not to provide services required by law.

That’s a policy decision taken by people.

In rather the same way that the concept of ethics has been captured and distorted by companies to suit their own agendas, the focus on FAT has, if anything, undermined the concept of whole-process audit, and of responsibility for human choices, decisions, and actions.

The effect of a machine-made decision on those included in the system’s response — and, more rarely, on those who may be left out of it, or on the wider community — has been singled out for a great deal of funding and attention as what matters to understand and audit in the use of data for making safe and just decisions.

It’s right to do so, but not as a stand-alone cog in the machine.

The computer and its data processing have been unjustifiably deified. Rather than being supported by them, public sector staff are disempowered in the process as a whole. It is assumed the computer knows best, and it can be used to justify a poor decision — “well, what could I do, the data told me to do it?” is rather like “it was not my job to pick up the fax from the fax machine.” But that’s not a position we should encourage.

We have become far too accommodating of this automated helplessness.

If society feels a need to take back control, as a country and of our own lives, we also need to see decision makers take back responsibility.

The focus on FAT emphasises the legal and ethical obligations on companies and organisations to be accountable for what the computer says, and for the narrow algorithmic decision(s) within it. But in most things in real life, it is rare for an outcome to be the result of a single decision.

So does FAT fit these systems at all?

Do I qualify for PIP? Can your child meet the criteria for additional help at school? Does the system tag your child as part of a ‘Troubled Family’? These outcomes are life-affecting in the public sector. It should therefore be possible to audit *if* and *how* the public sector offers to change lives, as a holistic process.

That means looking again at if and how we audit that whole end-to-end process: from policy idea, to legislation, through design, to delivery.

There are no simple, clean, machine readable results in that.

Yet here again, the current system-process solution encourages the public sector to use *data* to assess and incentivise the process, to measure the process, and to award success and failure, packaged into surveys and payment-by-results.

Data-driven measurement assesses data-driven processes, compounding the problems of this infinite human-out-of-the-loop.

This clean, laser-like focus misses the messy complexity of our human lives. And the complexity of public service provision makes it very hard to understand the process of delivery. As long as the end-to-end system remains weighted towards self-preservation (to minimise financial risk to the institution, for example, or to find a targeted number of interventions), people will be treated unfairly.

Through a hyper-focus on algorithms and computer-led decision accountability, the tech sector, academics, and everyone else involved are complicit in a debate that should be about human failure. We already have algorithms in every decision process: human and machine-led algorithms. Before we decide whether we need a new process of fairness, accountability, and transparency, we should know who is responsible now for the outcomes and failures in any given activity, and ask, ‘Does it really need to change?’

To redress some of the power imbalance on decisions made about us by authorities today, we urgently need public bodies to compile, publish, and maintain, at the very minimum, some of the basic underpinning and auditable infrastructure — the ‘plumbing’ — inside these processes (a minimal machine-readable sketch follows the list below):

  1. a register of data analytics systems used by Local and Central Government, including but not only those where algorithmic decision-making affects individuals.
  2. a register of data sources used in those analytics systems.
  3. a consistently identifiable and searchable taxonomy of the companies and third-parties delivering those analytics systems.
  4. a diagrammatic mapping of core public service delivery activities, to understand the tasks, roles, and responsibilities within the process. It would benefit government at all levels to be able to see for themselves where decision points sit, to understand flows of data and cash, and to see which law supports each task and where accountability sits.
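To make items 1 to 3 concrete, here is a minimal sketch of what one machine-readable entry in such a register might look like. Every field name, and the example body and supplier, is an assumption for illustration only, not a proposed standard.

```python
from dataclasses import dataclass, field
from typing import List, Optional

# Illustrative sketch only: field names and example values are assumptions,
# not a proposed standard for any real register.

@dataclass
class DataSource:
    name: str           # the dataset or collection drawn on
    controller: str     # which body holds and is accountable for it
    legal_basis: str    # the law or power relied on for processing

@dataclass
class AnalyticsSystemEntry:
    system_name: str            # what the system is called
    public_body: str            # the Local or Central Government body using it
    supplier: Optional[str]     # third party delivering the system, if any
    purpose: str                # the decision or task it supports
    affects_individuals: bool   # does it feed decisions about identifiable people?
    data_sources: List[DataSource] = field(default_factory=list)
    accountable_owner: str = "" # the named role responsible for outcomes
    last_reviewed: str = ""     # date of last audit or review

# An entirely hypothetical example entry:
example = AnalyticsSystemEntry(
    system_name="Attendance risk score",
    public_body="Example Borough Council",
    supplier="Example Analytics Ltd",
    purpose="Flag pupils for early-help intervention",
    affects_individuals=True,
    data_sources=[DataSource("School census extract",
                             "Example Borough Council",
                             "Education Act duties")],
    accountable_owner="Head of Children's Services",
    last_reviewed="2019-01-01",
)
```

The point is not the format: it is that each entry ties a system to a named accountable owner, to its data sources, and to the law that underpins them, which is exactly the plumbing the list above asks for.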

Why? Because without knowing what is being used at scale, how, and by whom, we are poorly informed and stay helpless. It allows enormous and often unseen risks to build without adequate checks and balances: named records holding the sexual orientation data of almost 3.2 million people, and religious belief data on 3.7 million, sitting in multiple distributed databases, with massive potential for state-wide abuse by any current or future government. And the responsibility for each part of a process remains unclear.

If people don’t know what you’re doing, they don’t know what you’re doing wrong, after all. But it also means the system is weighted unfairly against people. Especially those who least fit the model.

We need to make increasingly lean systems more fat, and stuff them with people power again. Yes, we need fairness, accountability, and transparency. But we need those human qualities to reach across our thinking, beyond computer code. We need to restore humanity to automated systems, and it has to be reinstated across whole processes.

FAT focussed only on computer decisions is a distraction from auditing the failure to deliver systems that work for people. It’s a failure to manage change, a failure of governance, and a failure to be accountable when things go wrong.

What happens when FAT fails? Who cares and what do they do?