In 2018, ethics became the new fashion in UK data circles.
The launch of the Women Leading in AI principles of responsible AI has prompted me to try to finish and post these thoughts, which have been on my mind for some time. If two roughly 1,000-word parts are tl;dr for you, then in summary, we need more action on:
- Ethics as a route to regulatory avoidance.
- Framing AI and data debates as a cost to the Economy.
- Reframing the debate around imbalance of risk.
- Challenging the unaccountable and the ‘inevitable’.
And in the next post on:
- Corporate Capture.
- Corporate Accountability, and
- Creating Authentic Accountability.
Ethics as a route to regulatory avoidance
In 2019, the calls to push aside old wisdoms for new, and to focus instead on the value-laden words of ‘innovation’ and ‘ethics’, appear an ever louder attempt to reframe regulation and law as barriers to business, to be cast aside.
On Wednesday evening, at the launch of the Women Leading in AI principles of responsible AI, the chair of the CDEI said in closing that he was keen to hear from companies where “they were attempting to use AI effectively and encountering difficulties due to regulatory structures.”
In IBM’s own words to government recently,
“A rush to further regulation can have the effect of chilling innovation and missing out on the societal and economic benefits that AI can bring.”
The vague threat is very clear: if you regulate, you’ll lose. But the societal and economic benefits are just as vague.
So far, many of those talking about ethics are trying to find a route to regulatory avoidance. ‘We’ll do better,’ they promise.
In Ben Wagner’s recent paper, “Ethics as an Escape from Regulation: From ethics-washing to ethics-shopping,” he asks how to ensure this does not become the default engagement with ethical frameworks or rights-based design. He sums up: “In this world, ‘ethics’ is the new ‘industry self-regulation’.”
Perhaps it’s ingenious PR to make sure that what is in effect self-regulation, right across the business model, looks as if it were imposed from outside, by the very bodies set up to fix it.
But as I consider in part 2, is this healthy for UK public policy, and for the future not of one industry sector but of a whole technology, when it comes to AI?
Framing AI and data debates as a cost to the Economy
Companies, organisations and individuals arguing against regulation frame the debate as if regulation would come at a great cost to society and the economy. But we rarely hear what effect they expect on their own company, or what cost/benefit they expect for themselves. It’s disingenuous to have only part of that conversation; the AI debate would be richer were it included. If companies think their innovation or profits are at risk from non-use, or regulated use, and there is risk to the national good associated with these products, we should be talking about all of that.
We should also talk about use and non-use in society. Too often, the whole debate is intangible. Show me real costs, real benefits, real risk assessments, and real explanations that speak human. Industry should show society what’s in it for them.
You don’t want it to ‘turn out like GM crops’? Then learn those lessons on transparency and trustworthiness, and avoid the hype. And understand that sometimes there is simply tech that people do not want.
Reframing the debate around imbalance of risk
And while we often hear about the imbalance of power associated with using AI, we also need to talk about the imbalance of risk.
While a small false positive rate for a company product may be a great success for the company, or for a Local Authority buying the service, it might at the same time mean lives forever changed, children removed from families, and individual reputations ruined.
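To make that concrete, here is a minimal back-of-the-envelope sketch. The numbers are purely illustrative assumptions, not figures from any real system: a rate a vendor might call small, applied at population scale, still translates into thousands of wrongly flagged families.

```python
# Illustrative only: hypothetical numbers, not drawn from any real system.
false_positive_rate = 0.02    # a "small" 2% error rate a vendor might call a success
families_screened = 100_000   # an assumed population a Local Authority might screen

# The same statistic, read from the other side of the relationship:
wrongly_flagged = false_positive_rate * families_screened
print(f"Families wrongly flagged: {wrongly_flagged:,.0f}")  # prints 2,000
```

For the vendor that is a 98% success rate; for each of those 2,000 families, the harm is total.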
And where company owners may see no risk from a product they assure us is safe, there are intangible risks that need to be factored in: for example in education, where a child’s learning pathway is determined by patterns of behaviour, and where tools shape individualised learning as well as the model of education itself.
Companies may change business model, ownership, and move on to other sectors after failure. But with the levels of unfairness already felt in the relationship between the citizen and State — in programmes like Troubled Families, Universal Credit, Policing, and Prevent — where use of algorithms and ever larger datasets is increasing, long term harm from unaccountable failure will grow.
Society needs a rebalance of the system urgently to promote transparent fairness in interactions, including but not only those with new applications of technology.
We must find ways to reframe how this imbalance of risk is assessed, and is distributed between companies and the individual, or between companies and state and society, and enable access to meaningful redress when risks turn into harm.
If we are to do that, we need first to separate truth from hype, public good from self-interest and have a real discussion of risk across the full range from individual, to state, to society at large.
That’s not easy against a non-neutral backdrop, with scant sources of unbiased evidence, and amid corporate capture.
Challenging the unaccountable and the ‘inevitable’
In 2017 the Care Quality Commission reported on online services in the NHS and found serious concerns about unsafe and ineffective care. It has a cross-regulatory working group.
By contrast, no one appears to oversee the equivalent risk in the embedded use of automated tools for decision-making or decision support in children’s services or education: areas where AI, cognitive behavioural science, and neuroscience are already in use, without ethical approval, without parental knowledge, and without any transparency.
Meanwhile, as all this goes on, many academics are busy debating how to fix algorithmic bias, accountability, and transparency.
Few are challenging the narrative of the ‘inevitability’ of AI.
Julia Powles and Helen Nissenbaum recently wrote that many of these current debates are an academic distraction, removed from reality. It is underappreciated how deeply these tools are already embedded in UK public policy. “Trying to ‘fix’ A.I. distracts from the more urgent questions about the technology. It also denies us the possibility of asking: Should we be building these systems at all?”
Challenging the unaccountable and the ‘inevitable’ is the title of the conclusion of the Women Leading in AI report on principles, and it makes me hopeful.
“There is nothing inevitable about how we choose to use this disruptive technology. […] And there is no excuse for failing to set clear rules so that it remains accountable, fosters our civic values and allows humanity to be stronger and better.”
Next: Part 2, Policy shapers, product makers, and profit takers, on:
- Corporate Capture.
- Corporate Accountability, and
- Creating Authentic Accountability.