
When the gold standard no longer exists: data protection and trust

Last week the DCMS announced that a consultation on changes to Data Protection laws is coming soon.

  • UK announces intention for new multi-billion pound global data partnerships with the US, Australia and Republic of Korea
  • International privacy expert John Edwards named as preferred new Information Commissioner to oversee shake-up
  • Consultation to be launched shortly to look at ways to increase trade and innovation through data regime.

The Telegraph reported that Mr Dowden argues that, combined, they will enable Britain to set the “gold standard” in data regulation, “but do so in a way that is as light touch as possible”.

It’s an interesting mixture of metaphors. What is a gold standard? What is light touch? These phrases rely on the reader to supply meaning, but convey no actual content. To know whether there will be substantive changes or not, we must wait for the full announcement this month.

Oliver Dowden’s recent briefing to the Telegraph (August 25) was not the first trailer for changes that are yet to be announced. He wrote in the FT in February this year that “the UK has an opportunity to be at the forefront of global, data-driven growth,” and it looks like he has tried to co-opt the framing of rights as his own: “…the beginning of a new era in the UK — one where we start asking ourselves not just whether we have the right to use data, but whether, given its potential for good, we have the right not to.”

There was nothing more on that in this week’s announcement; the focus was on international trade. The Government says it is prioritising six international agreements with “the US, Australia, Colombia, Singapore, South Korea and Dubai, but in the future it also intends to target the world’s fastest growing economies, among them, India, Brazil, Kenya and Indonesia.” (my bold)

Notably absent from that ‘fastest growing’ list is China. What the countries included in the list have in common is that they are not especially renowned for protecting human rights.

Human rights like privacy. The GDPR, and in turn the UK-GDPR, recognised that rights matter. In other regimes, data protection is designed not to prioritise the protection of rights but the harmonisation of data in trade, and that may be where we are headed. If so, it would be out of step with how the digital environment has changed since those older laws were seen as satisfactory, but weren’t; which is the reason the EU countries moved towards both better harmonisation *and* rights protection.

At the same time, while data protection laws increasingly align towards a highly interoperable, global standard, data sovereignty and protectionism are growing too, where transfers to the US remain unprotected from government surveillance.

Some countries are establishing stricter rules on the cross-border transfer of personal information in the name of digital sovereignty, security or business growth, such as Hessen’s decision on Microsoft and the “bring the data home” moves to German-based data centres.

In the big post-Brexit focus on a data-for-trade fire sale, the DCMS appears to be ignoring these risks of data distribution, despite having had a good domestic case study on its doorstep in 2020. The Department for Education has been giving away sensitive pupil data since 2012. Millions of people, including my own children, have no idea where it has gone. The lack of respect for current law makes me wonder how I can trust that our own government, and those others we trade with, will respect our rights and risks in future trade deals.

Dowden complains in the Telegraph about the ICO that “you don’t know if you have done something wrong until after you’ve done it”. Isn’t that the way that enforcement usually works? Should the 2019-20 ICO audit have turned a blind eye to the Department for Education’s failure to prioritise the rights of over 21 million pupils whose named records it holds? Don’t forget that even gambling companies had access to learners’ records, of which the Department for Education claimed to be unaware. To be ignorant of the law that applies to you is a choice.

Dowden claims the changes will enable Britain to set the “gold standard” in data regulation. It’s an ironic analogy to use, since the gold standard, while once a measure of global trust between countries, isn’t used by any country today. Our government sold off our physical gold over 20 years ago, after being the centre of the global gold market for over 300 years. The gold standard is a meaningless thing of the past that sounds good. A true international gold standard existed for fewer than 50 years (1871 to 1914). Why did we even need it? Because we needed a consistent, trusted measure of monetary value, backed by trust in a commodity. “We have gold because we cannot trust governments,” President Herbert Hoover famously said in 1933 in his statement to Franklin D. Roosevelt. The gold standard was all about trust.

At defenddigitalme we’ve very recently been talking with young people about politicians’ use of language in debating national data policy.  Specifically, data metaphors. They object to being used as the new “oil” to “power 21st century Britain” as Dowden described it.

A sustainable national data strategy must respect human rights to be in step with what young people want. It must not go back to old-fashioned data laws shaped only by trade and not also by human rights; laws that are not fit for purpose even in the current digital environment. Any national strategy must be forward-thinking; otherwise it wastes time in what should be an urgent debate.

In fact, such a strategy is the wrong end of the telescope from which to look at personal data at all. Government should be focussing on the delivery of quality public services to support people’s interactions with the State, and on managing the administrative data that comes out of digital services as a by-product and externality. Accuracy. Interoperability. Registers. Audit. Rights management infrastructure. Admin data quality is quietly ignored while we package it up hoping no one will notice it’s really. not. good.

Perhaps Dowden is doing nothing innovative at all. If these deals are to be about admin data given away in international trade agreements, he is simply continuing a long tradition of selling off the family silver. The government may have got to the point where there is little left to sell. The question now would be: whose family does it come from?

To use another bad metaphor, Dowden is playing with fire here if the government doesn’t fix the issue of the future of trust. Oil and fire don’t mix well. Increased data transfers, without meaningful safeguards including minimised data collection to start with, will increase risk, and transfer that risk to you and me.

Risks of a lifetime of identity fraud are not just minor personal externalities in short term trade. They affect nation state security. Digital statecraft. Knowledge of your public services is business intelligence. Loss of trust in data collection creates lasting collective harm to data quality, with additional risk and harm as a result passed on to public health programmes and public interest research.

I’ll wait and see what the details of the plans are when announced. We might find it does little more than package up recommendations on Codes of Practice, Binding Corporate Rules and other guidance that the EDPB has issued in the last 12 months. But whatever it looks like, so far we have yet to see any intention to put in place the necessary infrastructure of rights management that admin data requires. While we need data registers, those we had have been axed, and few new ones under the Digital Economy Act replaced them. Transparency and controls for people to exercise their rights are needed if the government wants our personal data to be part of new deals.

 

Image: René Magritte, The False Mirror, Paris, 1929

=========

Join me at the upcoming lunchtime online event, on September 17th from 13:00 to talk about the effect of policy makers’ language in the context of the National Data Strategy: ODI Fridays: Data is not an avocado – why it matters to Gen Z https://theodi.org/event/odi-fridays-data-is-not-an-avocado-why-it-matters-to-gen-z/

Data Protection law is being set up as a patsy.

After Dominic Cummings’ marathon session at the Select Committee, the Times published an article on “The heroes and villains of the pandemic, according to Dominic Cummings”.

One of Dom’s villains left out of that list was data protection law. He claimed, “if someone somewhere in the system didn’t say, ‘ignore GDPR’ thousands of people were going to die,” and that “no one even knew if that itself was legal—it almost definitely wasn’t.”

Thousands of people have died since that event he recalled from March 2020, but as a result of Ministers’ decisions, not data laws.

Data protection laws are *not* barriers, but permissive laws designed to *enable* the use of personal data within a set of standards and safeguards that protect people. The opposite of what their detractors would have us believe.

The starting point is fundamental human rights. Common law confidentiality. But the GDPR, and its related provisions on public health, are in fact specifically designed to enable data processing that overrides those principles for pandemic response purposes. In recognition of emergency needs, and for a limited time period, data protection laws permit interference with our fundamental rights and freedoms, including overriding privacy.

We need that protection of our privacy sometimes from government itself. And sometimes from those who see themselves as “the good guys” and above the law.

The Department of Health appears to have no plan to tell people about care.data 2, the latest attempt at an NHS data grab, despite the fact that data protection laws require that it does. From September 1st (delayed to enable it to be done right, thanks to campaign efforts from medConfidential and supporters) all our GP medical records will be copied into a new national database for re-use, unless we actively opt out.

It’s groundhog day for the Department of Health. It is baffling why the government cannot understand or accept the need to do the right thing, and instead is repeating the same mistake of recent memory, all over again. Why the rush without due process and steamrollering any respect for the rule of law?

Were it not so serious, it might amuse me that some academic researchers appear to fail to acknowledge that this matters, and are getting irate on Twitter that *privacy* or ‘campaigners’ will prevent them getting hold of the data they appear to feel entitled to. Blame the people who designed a policy that will breach human rights and the law, not the people who want your rights upheld. And to blame the right itself is just, frankly, bizarre.

Such rants prompt me to recall the time when, early on in my lay role on the Administrative Data Research Network approvals panel, a Director attending the meeting *as a guest* became so apoplectic with rage that his face was nearly purple. He screamed, literally, at the panel of over ten well-respected academics and experts in research and/or data, because he believed the questions being asked about privacy and ethics principles in designing governance documents were unnecessary.

Or I might recall the request at my final meeting two years later, in 2017, by another then-Director, for access to highly sensitive, linked children’s health and education data to do (what I believed was valuable) public interest research involving the personal data of children with Down Syndrome. But the request came through the process with no ethical review, a necessary step before it should even have reached the panel for discussion.

I was left feeling from those two experiences, that both considered themselves and their work to be in effect “above the law” and expected special treatment, and a free pass without challenge. And that it had not improved over the two years.

If anyone in the research community cannot support due process, law, and human rights when it comes to admin data access, in research using highly sensitive data about people’s lives with potential for significant community and personal impacts, then they are part of the problem. There was extensive public outreach in 2012-13 across the UK about the use of personal, if de-identified, data in safe settings. And in 2014 the same concerns and red lines were raised by hundreds of people in person, almost universally with the same reactions, at a range of care.data public engagement events. Feedback which institutions say matters, but continue to ignore.

It seems nothing has changed since I wrote,

“The commercial intermediaries still need to be told, don’t pee in the pool. It spoils it, for everyone else.”

We could also look back to when Michael Gove, as Secretary of State for Education, changed the law in 2012 to permit pupil-level, identifying and sensitive personal data to be given away to third parties. Journalists. Charities. Commercial companies, even including, pre-pandemic, an online tutoring business, and an agency making heat maps of school catchment areas from identifying pupil data for estate agents, notably without any SEND pupils’ data. (Cummings was coincidentally a Gove SpAd at the Department for Education.) As a direct result of that 2012 decision to give away pupils’ personal data (in effect ‘re-engineering’ how the education sector was structured and the roles of the local authority and non-state providers, and creating a market for pupil data), an ICO audit of the DfE in February 2020 found unlawful practice and made 139 recommendations for change. We’re still waiting to see if and how it will be fixed. At the moment it’s business as usual. Literally. The ICO doesn’t appear even to have stopped further data distribution until it is made lawful.

In April 2021, in answer to a written Parliamentary Question, Nick Gibb, Schools Minister, made a commitment to “publish an update to the audit in June 2021 and further details regarding the release mechanism of the full audit report will be contained in this update.” Will they promote openness, transparency and accountability, or continue to shy away from publishing the whole truth?

Children have lost control of their digital footprint in state education by their fifth birthday. The majority of parents polled in 2018 do not know the National Pupil Database even exists. 69% of the 1,004 parents asked replied that they had not been informed that the Department for Education may give away children’s data to third parties at all.

Thousands of companies continue to exploit children’s school records, without opt-in or opt-out, including special educational needs, ethnicity, and other sensitive data at pupil level.

Data protection law alone is in fact so enabling of data flow that it is inadequate to protect children’s rights and freedoms across the state education sector in England, whether from public interest, charity or commercial research interventions, without opt-in or opt-out, and without parental knowledge. We shouldn’t need to understand our rights, or to be proactive, in order to have them protected by default, but data protection law, and the ICO in particular, have been captured by the siren call of data as a source of ‘innovation’ and economic growth.

Throughout 2018, amid questions over Vote Leave’s data uses, Cummings claimed to know the GDPR well. It was everyone else who didn’t. On his blog that July he suggested, “MPs haven’t even bothered to understand GDPR, which they mis-explain badly,” and in April he wrote, “The GDPR legislation is horrific. One of the many advantages of Brexit is we will soon be able to bin such idiotic laws.” He lambasted the Charter of Fundamental Rights, the protections of which the government went on to take away from us under the European Union (Withdrawal) Act.

But suddenly, come 2020/21, he is suggesting he didn’t know the law that well after all: “no one even knew if that itself was legal—it almost definitely wasn’t.”

Data Protection law is being set up as a patsy, while our confidentiality is commodified. The problem is not the law. The problem is those in power who fail to respect it, those who believe themselves to be above it, and who feel an entitlement to exploit that for their own aims.


Added 21/06/2021: Today I again came across a statement that I thought worth mentioning, from the Explanatory Notes for the Data Protection Bill from 2017:

“Accordingly, Parliament passed the Data Protection Act 1984 and ratified the Convention in 1985, partly to ensure the free movement of data. The Data Protection Act 1984 contained principles which were taken almost directly from Convention 108 – including that personal data shall be obtained and processed fairly and lawfully and held only for specified purposes.”

“The Data Protection Directive (95/46/EC) (“the 1995 Directive”) provides the current basis for the UK’s data protection regime. The 1995 Directive stemmed from the European Commission’s concern that a number of Member States had not introduced national law related to Convention 108 which led to concern that barriers may be erected to data flows. In addition, there was a considerable divergence in the data protection laws between Member States. The focus of the 1995 Directive was to protect the right to privacy with respect to the processing of personal data and to ensure the free flow of personal data between Member States.”

When FAT fails: Why we need better process infrastructure in public services.

I’ve been thinking about FAT, and the explainability of decision making.

There may be few decisions about people at scale in the public sector today in which computer-stored data isn’t used. For some, computers are used to make, or help make, the decisions.

How we understand those decisions is a vital part of the obligation of fairness in data processing. How I know that *you* have data about me, and are processing it, in order to make a decision that affects me. There are an awful lot of good things that come out of that. The staff member does their job with better understanding. The person affected has an opportunity to question, and correct if necessary, the inputs to the decision. And one hopes that the computer support can make many decisions faster, and with more information in useful ways, than the human staff member alone.

But why, then, does it seem so hard to get this understood, and to get processes in place that make the decision-making understandable?

And more importantly, why does there seem to be no consistency in how such decision-making is documented, and communicated?

From school progress measures, to PIP and Universal Credit applications, to predictive  ‘risk scores’ for identifying gang membership and child abuse. In a world where you need to be computer literate but there may be no computer to help you make an application, the computers behind the scenes are making millions of life changing decisions.

We cannot see them happen, and often don’t see the data that goes into them. From start to finish, it is a hidden process.

The current focus on FAT — fairness, accountability, and transparency of algorithmic systems — often makes accountability for the computer’s part of decision-making in the public sector appear to be something that has become too hard to solve and needs complex thinking around.

I want conversations to go back to something more simple. Humans taking responsibility for their actions. And to do so, we need better infrastructure for whole process delivery, where it involves decision making, in public services.

Academics, boards, conferences, are all spending time on how to make the impact of the algorithms fair, accountable, and transparent. But in the search for ways to explain legal and ethical models of fairness, and to explain the mathematics and logic behind algorithmic systems and machine learning, we’ve lost sight of why anyone needs to know. Who cares and why?

People need to get redress when things go wrong or appear to be wrong. If things work, the public at large generally need not know why.  Take TOEIC. The way the Home Office has treated these students makes a mockery of the British justice system. And the impact has been devastating. Yet there is no mechanism for redress and no one in government has taken responsibility for its failures.

That’s a policy decision taken by people.

Routes for redress on decisions today are often about failed policy and processes. They are costly and inaccessible, such as fighting Local Authority decisions not to provide services required by law.

That’s a policy decision taken by people.

Rather in the same way that the concept of ethics has been captured and distorted by companies to suit their own agenda, the focus on FAT has, if anything, undermined the concept of whole-process audit and responsibility for human choices, decisions, and actions.

The effect of a machine-made decision on those who are included in the system response — and, more rarely, those who may be left out of it, or its community effects — has been singled out for a lot of people’s funding and attention as what matters to understand and audit in the use of data for making safe and just decisions.

It’s right to do so, but not as a stand alone cog in the machine.

The computer and its data processing have been unjustifiably deified. Rather than being supported by it, public sector staff are disempowered in the process as a whole. It is assumed the computer knows best, and it can be used to justify a poor decision — “well, what could I do, the data told me to do it?” is rather like, “it was not my job to pick up the fax from the fax machine.” But that’s not a position we should encourage.

We have become far too accommodating of this automated helplessness.

If society feels a need to take back control, as a country and of our own lives, we also need to see decision makers take back responsibility.

The focus on FAT emphasises the legal and ethical obligations on companies and organisations, to be accountable for what the computer says, and the narrow algorithmic decision(s) in it.  But it is rare that an outcome in most things in real life, is the result of a singular decision.

So does FAT fit these systems at all?

Do I qualify for PIP? Can your child meet the criteria needed for additional help at school? Does the system tag your child as part of a ‘Troubled Family’? These outcomes are life-affecting in the public sector. It should therefore be made possible to audit *if* and *how* the public sector offers to change lives, as a holistic process.

That means looking again at if and how we audit that whole end-to-end process: from policy idea, to legislation, through design, to delivery.

There are no simple, clean, machine readable results in that.

Yet here again, the current system-process-solution encourages the public sector to use *data* to assess and incentivise the process, to measure the process, and to award success and failure, packaged into surveys and payment-by-results.

The data-driven measurement assesses data-driven processes, compounding the problems of this infinite human-out-of-the-loop.

This clean laser-like focus misses out on the messy complexity of our human lives.  And the complexity of public service provision makes it very hard to understand the process of delivery. As long as the end-to-end system remains weighted to self preservation, to minimise financial risk to the institution for example, or to find a targeted number of interventions, people will be treated unfairly.

Through a hyper-focus on algorithms and computer-led decision accountability, the tech sector, academics and everyone involved are complicit in a debate that should be about human failure. We already have algorithms in every decision process. Human and machine-led algorithms. Before we decide if we need a new process of fairness, accountability and transparency, we should know who’s responsible now for the outcomes and failures in any given activity, and ask, ‘Does it really need to change?’

To restore some of the power imbalance to the public on decisions about us made by authorities today, we urgently need public bodies to compile, publish and maintain, at a very minimum, some of the basic underpinning and auditable infrastructure — the ‘plumbing’ — inside these processes (a minimal, illustrative sketch of what one register entry might contain follows the list):

  1. a register of data analytics systems used by Local and Central Government, including but not only those where algorithmic decision-making affects individuals.
  2. a register of data sources used in those analytics systems.
  3. a consistently identifiable and searchable taxonomy of the companies and third-parties delivering those analytics systems.
  4. a diagrammatic mapping of core public service delivery activities, to understand the tasks, roles, and responsibilities within the process. It would benefit government at all levels to be able to see for themselves where decision points sit, understand flows of data and cash, and see which law supports the task and where accountability sits.
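
As a purely illustrative sketch of what a single entry in such a register of analytics systems might contain, here is a minimal example in Python. All field names are my own assumptions, not any existing government schema; a real register would need an agreed, published format and a duty on the responsible body to keep it current.

from dataclasses import dataclass, field
from typing import List

@dataclass
class AnalyticsSystemEntry:
    """One illustrative row in a public register of analytics systems.
    Every field name here is hypothetical, not an existing schema."""
    system_name: str          # e.g. a risk-scoring or matching tool
    responsible_body: str     # the public body accountable for its use
    supplier: str             # third party delivering the system, if any
    legal_basis: str          # the law relied on for the processing
    data_sources: List[str] = field(default_factory=list)  # links to the register of data sources (item 2)
    affects_individuals: bool = True   # whether outputs feed decisions about people
    fully_automated: bool = False      # automated decision-making vs. decision support
    review_route: str = ""             # how an affected person can challenge an outcome

# A hypothetical example entry, for illustration only.
example = AnalyticsSystemEntry(
    system_name="Example risk-scoring tool",
    responsible_body="Example Local Authority",
    supplier="Example Analytics Ltd",
    legal_basis="(to be published by the responsible body)",
    data_sources=["housing records", "school attendance data"],
    review_route="published complaints and appeal process",
)
print(example)

Even a schema this small would make it possible to answer the basic questions in the paragraph that follows: what is being used at scale, how, and by whom.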

Why? Because without knowing what is being used at scale, how, and by whom, we are poorly informed and stay helpless. It allows enormous and often unseen risks to go without adequate checks and balances: named records with the sexual orientation data of almost 3.2 million people, and religious belief data on 3.7 million, sitting in multiple distributed databases, with massive potential for state-wide abuse by any current or future government. And the responsibility for each part of a process remains unclear.

If people don’t know what you’re doing, they don’t know what you’re doing wrong, after all. But it also means the system is weighted unfairly against people. Especially those who least fit the model.

We need to make increasingly lean systems more fat, and stuff them with people power again. Yes, we need fairness, accountability and transparency. But we need those human qualities to reach across thinking beyond computer code. We need to restore humanity to automated systems, and it has to be re-instated across whole processes.

FAT focussed only on computer decisions is a distraction from auditing the failure to deliver systems that work for people. It is a failure to manage change, a failure of governance, and a failure to be accountable when things go wrong.

What happens when FAT fails? Who cares and what do they do?

Policy shapers, product makers, and profit takers (1)

In 2018, ethics became the new fashion in UK data circles.

The launch of the Women Leading in AI principles of responsible AI has prompted me to try to finish and post these thoughts, which have been on my mind for some time. If two parts of 1K each is tl;dr for you, then in summary, we need more action on:

  • Ethics as a route to regulatory avoidance.
  • Framing AI and data debates as a cost to the Economy.
  • Reframing the debate around imbalance of risk.
  • Challenging the unaccountable and the ‘inevitable’.

And in the next post on:

  • Corporate Capture.
  • Corporate Accountability, and
  • Creating Authentic Accountability.

Ethics as a route to regulatory avoidance

In 2019, the calls to push aside old wisdoms for new, and for everyone to focus on the value-laden words of ‘innovation’ and ‘ethics’, appear an ever louder attempt to reframe regulation and law as barriers to business, and to ask that they be cast aside.

On Wednesday evening, at the launch of the Women Leading in AI principles of responsible AI, the chair of the CDEI said in closing, he was keen to hear from companies where, “they were attempting to use AI effectively and encountering difficulties due to regulatory structures.”

In IBM’s own words to government recently,

“A rush to further regulation can have the effect of chilling innovation and missing out on the societal and economic benefits that AI can bring.”

The vague threat is very clear: if you regulate, you’ll lose. But the societal and economic benefits are just as vague.

So far, many talking about ethics are trying to find a route to regulatory avoidance. ‘We’ll do better,’ they promise.

In Ben Wagner’s recent paper, Ethics as an Escape from Regulation: From ethics-washing to ethics-shopping, he asks how to ensure this does not become the default engagement with ethical frameworks or rights-based design. He sums up, “In this world, ‘ethics’ is the new ‘industry self-regulation’.”

Perhaps it’s ingenious PR to make sure that what is in effect self-regulation, right across the business model, looks like it comes imposed from others, from the very bodies set up to fix it.

But as I think about in part 2, is this healthy for UK public policy and the future not of an industry sector, but a whole technology, when it comes to AI?

Framing AI and data debates as a cost to the Economy

Companies, organisations and individuals arguing against regulation are framing the debate as if it would come at a great cost to society and the economy. But we rarely hear what effect they expect on their own company, or what cost/benefit they expect for themselves. It’s disingenuous to have only part of that conversation. In fact the AI debate would be richer were it included. If companies think their innovation or profits are at risk from non-use, or regulated use, and there is risk to the national good associated with these products, we should be talking about all of that.

And in addition, we can talk about use and non-use in society. Too often, the whole debate is intangible. Show me real costs, real benefits. Real risk assessments. Real explanations that speak human. Industry should show society what’s in it for them.

You don’t want it to ‘turn out like GM crops’? Then learn their lessons on transparency and trustworthiness, and avoid the hype. And understand that sometimes there is simply tech people do not want.

Reframing the debate around imbalance of risk

And while we often hear about the imbalance of power associated with using AI, we also need to talk about the imbalance of risk.

While a small false positive rate for a company product may be a great success for them, or for a Local Authority buying the service, it might at the same time, mean lives forever changed, children removed from families, and individual reputations ruined.

And where company owners may see no risk from the product they assure us is safe, there are intangible risks that need to be factored in, for example in education, where a child’s learning pathway is determined by patterns of behaviour, and where tools shape individualised learning, as well as the model of education.

Companies may change business model, ownership, and move on to other sectors after failure. But with the levels of unfairness already felt in the relationship between the citizen and State — in programmes like Troubled Families, Universal Credit, Policing, and Prevent — where use of algorithms and ever larger datasets is increasing, long term harm from unaccountable failure will grow.

Society needs a rebalance of the system urgently to promote transparent fairness in interactions, including but not only those with new applications of technology.

We must find ways to reframe how this imbalance of risk is assessed, and is distributed between companies and the individual, or between companies and state and society, and enable access to meaningful redress when risks turn into harm.

If we are to do that, we need first to separate truth from hype, public good from self-interest and have a real discussion of risk across the full range from individual, to state, to society at large.

That’s not easy against a non-neutral backdrop and scant sources of unbiased evidence and corporate capture.

Challenging the unaccountable and the ‘inevitable’.

In 2017 the Care Quality Commission reported on online services in the NHS, and found serious concerns of unsafe and ineffective care. They have a cross-regulatory working group.

By contrast, no one appears to oversee that risk and the embedded use of automated tools involved in decision-making or decision support, in children’s services, or education. Areas where AI and cognitive behavioural science and neuroscience are already in use, without ethical approval, without parental knowledge or any transparency.

Meanwhile, as all this goes on, many academics are busy debating how to fix algorithmic bias, accountability and transparency.

Few are challenging the narrative of the ‘inevitability’ of AI.

Julia Powles and Helen Nissenbaum recently wrote that many of these current debates are an academic distraction, removed from reality. It is underappreciated how deeply these tools are already embedded in UK public policy. “Trying to “fix” A.I. distracts from the more urgent questions about the technology. It also denies us the possibility of asking: Should we be building these systems at all?”

Challenging the unaccountable and the ‘inevitable’ is the title of the conclusion of the Women Leading in AI report on principles, and makes me hopeful.

“There is nothing inevitable about how we choose to use this disruptive technology. […] And there is no excuse for failing to set clear rules so that it remains accountable, fosters our civic values and allows humanity to be stronger and better.”

[1] Powles, Nissenbaum, 2018, The Seductive Diversion of ‘Solving’ Bias in Artificial Intelligence, Medium

Next: Part 2 – Policy shapers, product makers, and profit takers, on:

  • Corporate Capture.
  • Corporate Accountability, and
  • Creating Authentic Accountability.

Policy shapers, product makers, and profit takers (2)

Corporate capture

Companies are increasingly in controlling positions of the tech narrative in the press. They are funding neutral third-sector orgs’ and think tanks’ research. Supporting organisations advising on online education. Closely involved in politics. And they sit, increasingly, within the organisations set up to lead the technology vision, advising government on policy and UK data analytics, or on social media, AI and ethics.

It is all subject to corporate capture.

But is this healthy for UK public policy and the future not of an industry sector, but a whole technology, when it comes to AI?

If a company’s vital business interests seem unfazed by the risk and harm they cause to individuals — from people who no longer trust the confidentiality of the system to measurable harms — why should those companies sit on public policy boards set up to shape the ethics they claim we need, to solve the problems and restore loss of trust that these very same companies are causing?

We laud people in these companies as co-founders and forward thinkers on new data ethics institutes. They are invited to sit on our national boards, or create new ones.

What does that say about the entire board’s respect for the law which the company breached? It is hard not to see it signal acceptance of the company’s excuses or lack of accountability.

Corporate accountability

The same companies whose work has breached data protection law, in multiple ways, seemingly ‘by accident’ on national data extractions, are those companies that cross the t’s and dot the i’s on even the simplest conference call, and demand everything is said in strictest confidence. Meanwhile their everyday business practices ignore millions of people’s lawful rights to confidentiality.

The extent of commercial companies’ influence on these boards is opaque. To allow this ethics bandwagon to be driven by the corporate giants surely undermines genuine rights-based values, and the long-term integrity of the body they appear to serve.

I am told that these global orgs must be in the room and at the table, to use the opportunity to make the world a better place.

These companies already have *all* the opportunity. Not only monopoly positions on their own technology, but the datasets at scale which underpin it, excluding new entrants to the market. Their pick of new hires from universities. The sponsorship of events. The political lobbying. Access to the media. The lawyers. Bottomless pockets to pay for it all. And seats at board tables set up to shape UK policy responses.

It’s a struggle for power, and a stake in our collective future. The status quo is not good enough for many parts of society, and to enable Big Tech or big government to maintain that simply through the latest tools, is a missed chance to reshape for good.

You can see it in many tech boards’ make up, and pervasive white male bias. We hear it echoed in London think tank conferences, even independent tech design agencies, or set out in some Big Tech reports. All seemingly unconnected, but often funded by the same driving sources.

These companies are often those that made it worse to start with, and the very ethics issues the boards have been set up to deal with, are at the core of their business models and of their making.

The deliberate infiltration of influence on online safety policy for children, or global privacy efforts is very real, explicitly set out in the #FacebookEmails, for example.

We will not resolve these fundamental questions as long as the companies whose business depends on them steer national policy. The odds will be ever in their favour.

At the same time, some of these individuals are brilliant. In all senses.

So what’s the answer? If they are around the table, what should the UK public expect of their involvement, and how do we ensure it serves the public’s best interests? How do we achieve authentic accountability?

Whether it be social media, data analytics, or AI in public policy, can companies be safely permitted to be policy shapers if they wear all the hats; product maker, profit taker, *and* process or product auditor?

Creating Authentic Accountability

At minimum we must demand responsibility for their own actions from board members who represent or are funded by companies.

  1. They must deliver on their own product problems first before being allowed to suggest solutions to societal problems.
  2. There should be credible separation between informing policy makers, and shaping policy.
  3. There must be total transparency of funding sources across any public sector boards, of members, and those lobbying them.
  4. Board members must be meaningfully held accountable for continued company transgressions on rights and freedoms, not only harms.
  5. Oversight of board decision making must be decentralised, transparent and available to scrutiny and meaningful challenge.

While these new bodies may propose solutions that include public engagement strategies, transparency, and standards, few propose meaningful oversight. The real test is not what companies say in their ethical frameworks, but in what they continue to do.

If they fail to meet legal or regulatory frameworks, minimum accountability should mean no more access to public data sets and losing positions of policy influence.

Their behaviour needs to go above and beyond meeting the letter of the law, scraping by or working around rights based protections. They need to put people ahead of profit and self interests. That’s what ethics should mean, not be a PR route to avoid regulation.

As long as companies think the consequences of their platforms and actions are tolerable and a minimal disruption to their business model, society will be expected to live with their transgressions, and our most vulnerable will continue to pay the cost.


This is part 2 of thoughts on Policy shapers, product makers, and profit takers — data and AI. Part 1 is here.

The power of imagination in public policy

“A new, a vast, and a powerful language is developed for the future use of analysis, in which to wield its truths so that these may become of more speedy and accurate practical application for the purposes of mankind than the means hitherto in our possession have rendered possible.” [on Ada Lovelace, The First tech Visionary, New Yorker, 2013]

What would Ada Lovelace have argued for in today’s AI debates? I think she may have used her voice not only to call for the good use of data analysis, but for her second strength: the power of her imagination.

James Ball recently wrote in The European [1]:

“It is becoming increasingly clear that the modern political war isn’t one against poverty, or against crime, or drugs, or even the tech giants – our modern political era is dominated by a war against reality.”

My overriding take away from three days spent at the Conservative Party Conference this week, was similar. It reaffirmed the title of a school debate I lost at age 15, ‘We only believe what we want to believe.’

James writes that it is, “easy to deny something that’s a few years in the future“, and that Conservatives, “especially pro-Brexit Conservatives – are sticking to that tried-and-tested formula: denying the facts, telling a story of the world as you’d like it to be, and waiting for the votes and applause to roll in.”

These positions are not confined to one party’s politics, or speeches of future hopes, but define perception of current reality.

I spent a lot of time listening to MPs. To Ministers, to Councillors, and to party members. At fringe events, in coffee queues, on the exhibition floor. I had conversations pressed against corridor walls as small press-illuminated swarms of people passed by with Queen Johnson or Rees-Mogg at their centre.

In one panel I heard a primary school teacher deny that child poverty really exists, or affects learning in the classroom.

In another, in passing, a digital Minister suggested that Pupil Referral Units (PRU) are where most of society’s ills start, but as a Birmingham head wrote this week, “They’ll blame the housing crisis on PRUs soon!” and “for the record, there aren’t gang recruiters outside our gates.”

This is no tirade on failings of public policymakers however. While it is easy to suspect malicious intent when you are at, or feel, the sharp end of policies which do harm, success is subjective.

It is clear that an overwhelming sense of self-belief exists in those responsible, in the intent of any given policy to do good.

Where policies include technology, this is underpinned by a self re-affirming belief in its power. Power waiting to be harnessed by government and the public sector. Even more appealing where it is sold as a cost-saving tool in cash strapped councils. Many that have cut away human staff are now trying to use machine power to make decisions. Some of the unintended consequences of taking humans out of the process, are catastrophic for human rights.

Sweeping human assumptions behind such thinking on social issues and their causes are becoming hard-coded into algorithmic solutions that involve identifying young people who are in danger of becoming involved in crime using “risk factors” such as truancy, school exclusion, domestic violence and gang membership.
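
To make that concrete, here is a deliberately crude, entirely hypothetical sketch, not drawn from any real system, of how such ‘risk factors’ and their weights are nothing more than human assumptions written into code, while the output looks like an objective number:

# Every factor chosen, and every weight, is a human policy assumption;
# the resulting score then hides those assumptions behind a number.
RISK_WEIGHTS = {
    "truancy": 2.0,
    "school_exclusion": 3.0,
    "domestic_violence_at_home": 2.5,
    "alleged_gang_association": 4.0,
}

def risk_score(child_record: dict) -> float:
    """Sum the weights of whichever flags are set on a child's record."""
    return sum(weight for factor, weight in RISK_WEIGHTS.items()
               if child_record.get(factor))

# Hypothetical record; the flags themselves may come from poor-quality admin data.
print(risk_score({"truancy": True, "alleged_gang_association": True}))  # 6.0

Nothing in a score like this measures whether the intervention it triggers helps or harms; that question sits outside the code entirely.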

The disconnect between perception of risk, the reality of risk, and real harm, whether perceived or felt from these applied policies in real-life, is not so much, ‘easy to deny something that’s a few years in the future‘ as Ball writes, but a denial of the reality now.

Concerningly, there is a lack of imagination of what real harms look like. There is no discussion of the fact that sometimes these predictive policies have no positive effect, or even a negative one, and make things worse.

I’m deeply concerned that there is an unwillingness to recognise any failures in current data processing in the public sector, particularly at scale, and where it regards the well-known poor quality of administrative data. Or to be accountable for its failures.

Harms, existing harms to individuals, are perceived as outliers. Any broad sweep of harms across a policy like Universal Credit seems to be perceived as political criticism, which makes the measurable failures less meaningful, less real, and less necessary to change.

There is a worrying growing trend of finger-pointing exclusively at others’ tech failures instead. In particular, social media companies.

Imagination and mistaken ideas are reinforced where the idea is plausible, and shared. An oft-heard and self-affirming belief was repeated in many fora between policymakers, media and NGOs regarding children’s online safety: “There is no regulation online”. In fact, much that applies offline applies online. The Crown Prosecution Service Social Media Guidelines are a good place to start. [2] But no one discusses where children’s lives may be put at risk, or made less safe, through the use of state information about them.

Policymakers want data to give us certainty. But many uses of big data and new tools appear to do little more than quantify moral fears, and yet still guide real-life interventions in real lives.

Child abuse prediction, and school exclusion interventions should not be test-beds for technology the public cannot scrutinise or understand.

In one trial attempting to predict exclusion, a recent UK research project (2013-16) linked the school records of 800 children in 40 London schools with Metropolitan Police arrest records of all the participants. It found the interventions created no benefit, and may have caused harm. [3]

“Anecdotal evidence from the EiE-L core workers indicated that in some instances schools informed students that they were enrolled on the intervention because they were the “worst kids”.”

“Keeping students in education, by providing them with an inclusive school environment, which would facilitate school bonds in the context of supportive student–teacher relationships, should be seen as a key goal for educators and policy makers in this area,” researchers suggested.

But policy makers seem intent on using systems that tick boxes, and create triggers to single people out, with quantifiable impact.

Some of these systems are known to be poor, or harmful.

When it comes to predicting and preventing child abuse, there is concern about the harms in US programmes ahead of us, such as in Pittsburgh, and in Chicago, which has scrapped its programme.

The Illinois Department of Children and Family Services ended a high-profile program that used computer data mining to identify children at risk for serious injury or death after the agency’s top official called the technology unreliable, and children still died.

“We are not doing the predictive analytics because it didn’t seem to be predicting much,” DCFS Director Beverly “B.J.” Walker told the Tribune.

Many professionals in the UK share these concerns. How long will they be ignored and children be guinea pigs without transparent error rates, or recognition of the potential harmful effects?

Helen Margetts, Director of the Oxford Internet Institute and Programme Director for Public Policy at the Alan Turing Institute, suggested at the IGF event this week that stopping the use of this AI in the public sector is impossible. We could not decide, she said, that “we’re not doing this until we’ve decided how it’s going to be. It can’t work like that.” [45:30]

Why on earth not? At least for these high risk projects.

How long should children be the test subjects of machine learning tools at scale, without transparent error rates, audit, or scrutiny of their systems and understanding of unintended consequences?

Is harm to any child a price you’re willing to pay to keep using these systems to perhaps identify others, while we don’t know?

Is there an acceptable positive versus negative outcome rate?

The evidence so far of AI in child abuse prediction is not clearly showing that more children are helped than harmed.

Surely it’s time to stop just thinking, and to demand action on this.

It doesn’t take much imagination to see the harms. Safe technology, and the safe use of data, does not prevent the imagination or innovation employed for good.

If we continue to ignore views from Patrick Brown, Ruth Gilbert, Rachel Pearson and Gene Feder, Charmaine Fletcher, Mike Stein, Tina Shaw and John Simmonds, I want to know why.

Where you are willing to sacrifice certainty of human safety for the machine decision, I want someone to be accountable for why.

 


References

[1] James Ball, The European, Those waging war against reality are doomed to failure, October 4, 2018.

[2] Thanks to Graham Smith for the link. “Social Media – Guidelines on prosecuting cases involving communications sent via social media.” The Crown Prosecution Service (CPS), August 2018.

[3] Obsuth, I., Sutherland, A., Cope, A. et al., London Education and Inclusion Project (LEIP): Results from a Cluster-Randomized Controlled Trial of an Intervention to Reduce School Exclusion and Antisocial Behavior, J Youth Adolescence (2017) 46: 538. https://doi.org/10.1007/s10964-016-0468-4

Notes on Not the fake news

Notes and thoughts from Full Fact’s event at Newspeak House in London on 27/3 to discuss fake news, the misinformation ecosystem, and how best to respond. The recording is here. The contributions and questions part of the evening begins at 55:55.


What is fake news? Are there solutions?

1. Clickbait: celebrity pull to draw visitors to a site and drive traffic to an advertising model – kill the business model
2. Mischief makers: Deceptive with hostile intent – bots, trolls, with an agenda
3. Incorrectly held views: ‘vaccinations cause autism’ despite the evidence to the contrary. How can facts reach people who only believe what they want to believe?

Why does it matter? The scrutiny of people in power matters – to politicians, charities, think tanks – as well as the public.

It is fundamental to remember that we do in general believe that the public has a sense of discernment; however, there is also a disconnect between an objective truth and some people’s perception of reality. Can this conflict be resolved? Is it necessary to do so? If yes, when is it necessary to do so and who decides that?

There is a role for independent tracing of unreliable information, its sources and its distribution patterns and identifying who continues to circulate fake news even when asked to desist.

Transparency about these processes is in the public interest.

Overall, there is too little public understanding of how technology and online tools affect behaviours and decision-making.

The Role of Media in Society

How do you define the media?
How can average news consumers distinguish between self-made and distributed content compared with established news sources?
What is the role of media in a democracy?
What is the mainstream media?
Does the media really represent what I want to understand? > Does the media play a role in failure of democracy if news is not representative of all views? > see Brexit, see Trump
What are news values and do we have common press ethics?

New problems in the current press model:

Failure of the traditional media organisations in fact checking; part of the problem is that the credible media is under incredible pressure to compete to gain advertising money share.

Journalism is under-resourced. Verification skills are lacking and tools can be time-consuming. Techniques like reverse image search, and verification generally, take effort.

Press releases with numbers can be less easily scrutinised so how do we ensure there is not misinformation through poor journalism?

What about confirmation bias and reinforcement?

What about friends’ behaviours? Can and should we try to break these links if we are not getting a fair picture? The Facebook representative was keen to push responsibility for the bubble entirely to users’ choices. Is this fair given the opacity of the model?
Have we cracked the bubble of self-reinforcing stories being the only stories that mutual friends see?
Can we crack the echo chamber?
How do we start to change behaviours? Can we? Should we?

The risk is that if people start to feel nothing is trustworthy, we trust nothing. This harms relations between citizens and state, organisations and consumers, professionals and public and between us all. Community is built on relationships. Relationships are built on trust. Trust is fundamental to a functioning society and economy.

Is it game over?

Will Moy assured the audience that there is no need to descend into blind panic and there is still discernment among the public.

Then, it was asked, is perhaps part of the problem that the Internet in its current construct is incapable of keeping this problem at bay? Is part of the solution re-architecting and re-engineering the web?

What about algorithms? Search engines start with word frequency and neutral decisions but are now much more nuanced and complex. We really must see how systems decide what is published. Search engines provide but also restrict our access to facts and ‘no one gets past page 2 of search results’. Lack of algorithmic transparency is an issue, but will not be solved due to commercial sensitivities.

Fake news creation can be lucrative. Management models that rely on user moderation or comments to give balance can be gamed.

Are there appropriate responses to the grey area between trolling and deliberate deception through fake news that is damaging? In what context and background? Are all communities treated equally?

The question came from the audience whether the panel thought regulation would come from the select committee inquiry. The general response was that it was unlikely.

What are the solutions?

The questions I came away thinking about went unanswered, because I am not sure there are solutions as long as the current news model exists and is funded in the current way by current players.

I believe one of the things that permits fake news is the growing imbalance of money between the big global news distributors and independent and public interest news sources.

This loss of balance, reduces our ability to decide for ourselves what we believe and what matters to us.

The monetisation of news through its packaging in between advertising has surely contaminated the news content itself.

Think of a Facebook promoted post – you can personalise your audience to a set of very narrow and selective characteristics. The bubble that receives that news is already likely to be connected by similar interest pages and friends and the story becomes self reinforcing, showing up in  friends’ timelines.

A modern online newsroom moves content around the webpage according to what is getting the most views, and trending-topic lists encourage viewers to see what other people are reading; again, these are self-reinforcing.

There is also a lack of transparency of power. Where we see a range of choices from which we may choose to digest a range of news, we often fail to see one conglomerate funder which manages them all.

The discussion didn’t address at all the fundamental shift in “what is news” which has taken place over the last twenty years. In part, I believe responsibility for the credibility fake news enjoys with viewers lies with 24/7 news channels. They have shifted the balance of content from factual bulletins to discussion and opinion. Now, while the news channel is seen as a source of ‘news’ much of the time, the content is not factual but opinion, and often that means the promotion and discussion of the opinions of their paymaster.

Most simply, how should I answer the question that my ten-year-old asks: how do I know if something on the Internet is true or not?

Can we really say it is up to the public to each take on this role and where do we fit the needs of the vulnerable or children into that?

Is the term fake news the wrong approach, and something to move away from? Can we move solutions away from the target-fixation of ‘stop fake news’, which is impossible online, and towards the problems that fake news causes?

Interference in democracy. Interference in purchasing power. Interference in decision making. Interference in our emotions.

These interferences with our autonomy are not something that the web is responsible for, but the people behind the platforms must be accountable for how their technology works.

In the mean time, what can we do?

“if we ever want the spread of fake news to stop we have to take responsibility for calling out those who share fake news (real fake news, not just things that feel wrong), and start doing a bit of basic fact-checking ourselves.” [IB Times, Eliot Higgins is the founder of Bellingcat]

Not everyone has the time or capacity to do that. As long as today’s imbalance of money and power exists, truly independent organisations like Bellingcat and Full Fact have an untold value.


The billed Google and Twitter speakers were absent because they were invited to a meeting with the Home Secretary on 28/3. Speakers were Will Moy, Director of Full Fact; Jenni Sargent, Managing Director; and Richard Allan, Facebook EMEA Policy Director. The event was chaired by Bill Thompson.

Destination smart-cities: design, desire and democracy (Part two)

Smart cities: private reach in public space and personal lives

Smart-cities are growing in the UK through private investment and encroachment on public space. They are being built by design at home, and supported by UK money abroad, with enormous expansion plans in India, for example, across almost 100 cities.

With this rapid expansion of “smart” technology not only within our living rooms but across our living space, and indeed across all areas of life, how do we ensure equitable service delivery (what citizens generally want, as demonstrated by the strength of feeling on the NHS) continues in public ownership, when the boundary in current policy between public and private corporate ownership is ever more blurred?

How can we know, and plan by design, that the values we hope for are good values, and that they will be embedded in systems, in policies and planning? Values that most people really care about. How do we ensure “smart” does not ultimately mean less good? That “smart” does not in the end mean less human?

Economic benefits seem to be the key driver in current government thinking around technology – more efficient = costs less.

While using technology to progress towards replacing repetitive work may be positive, how will we accommodate those whose skills will no longer be needed? In particular its gendered aspect, and the more vulnerable in the workforce, since it is women and other minorities who work disproportionately in our part-time, low-skill jobs. Even jobs we think of as intrinsically human, such as caring, and which are mainly held by women, are being trialled for outsourcing to, or assistance by, technology. These robots monitor people in their own homes, and reduce staffing levels and care home occupancy. We’ll no doubt hear how good it is that we need fewer carers because, after all, we have a shortage of care staff. We’ll find out whether it is positive for those cared for, or whether they find it less ‘human’[e]. How will we measure those costs?

The ideal future in which we all have more leisure time sounds fab, but if we can’t afford it, we won’t be spending more of our time employed in leisure. Some think we’ll simply be unemployed. And more people live in the slums of Calcutta than in Soho.

One of the greatest benefits of technology is how much more connected the world can be, but will it also be more equitable?

There are benefits in remote sensors monitoring changes in the atmosphere that dictate when cars should be taken off the roads on smog-days, or indicators when asthma risk-factors are high.

Crowdsourcing information about things which are broken, like fix-my-street, or lifts that are out of order, is invaluable in cities for wheelchair users.

Innovative thinking, and building things through technology, can solve simple problems and add value for the person using the tool.

But what of the people that cannot afford data, cannot be included in the skilled workforce, or will not navigate apps on a phone?

When the technology dis-incentivises the person using it, the effect is not only disappointment with the tool, but poorer service delivery, and potentially wider still, societal exclusion or stigma. These were the findings of the e-red book in Glasgow, explained at the digital health event held at the King’s Fund in summer 2015.

Further along the scale of systems and potential for negative user experience, how do we expect citizens to react to punishments handed out by unseen monitoring systems, to finding out their behaviour was ‘nudged’, or to decisions taken about us, without us?

And what oversight and system of redress is there for people using these systems, or for those whose inaccurate data are used in a system and cause injustice?

And wider still, while we encourage big money spent on big data in our part of the world, how is it contributing to solving problems for the millions for whom it will never matter? Digital and social media make our one connected world increasingly transparent, with even less excuse for closing our eyes.

Approximately 15 million girls worldwide are married each year – that’s one girl, aged under 18, married off against her will every two seconds. [Huff Post, 2015]

Tinder-type apps are luxury optional extras for many in the world.

Without embedding values and oversight into some of what we do through digital tools implemented by private corporations for profit, ‘smart’ could mean less fair, less inclusive, less kind. Less global.

If digital becomes a destination, and the extent of its implementation is seen as a measure of success, then measuring how “smart” we become risks losing sight of technology as solutions and steps towards solving real problems for real people.

We need to be both clever and sensible, in our ‘smart’.

Are public oversight and regulation built in to make ‘smart’ also safe?

If there were public consultation on how “smart” society will look would we all agree if and how we want it?

Thinking globally, we need to ask if we are prioritising the wrong problems. Are we creating more tech for problems we have already solved, in places where governments are willing to spend on it? And in those places, will it make society more connected across class and improve it for all, or enhance the lives of the ‘haves’ while the ‘have-nots’ are excluded?

Does it matter how smart your TV gets, or carer, or car, if you cannot afford any of these convenient add-ons to Life v1.1?

As we are ever more connected, we are a global society, and being ‘smart’ in one area may be reckless if at the expense or ignorance of another.

People need to understand what “smart” means

“Consistent with the wider global discourse on ‘smart’ cities, in India urban problems are constructed in specific ways to facilitate the adoption of “smart hi-tech solutions”. ‘Smart’ is thus likely to mean technocratic and centralized, undergirded by alliances between the Indian government and hi-technology corporations.”  [Saurabh Arora, Senior Lecturer in Technology and Innovation for Development at SPRU]

Those investing in both countries are often the same large corporations. Very often, venture capitalists.

Systems designed and owned by private companies provide the information technology infrastructure that is:

‘the basis for providing essential services to residents. There are many technological platforms involved, including but not limited to automated sensor networks and data centres.’

What happens when the commercial and public interest conflict and who decides that they do?

Decision making, Mining and Value

Massive amounts of data generated are being mined for making predictions, decisions and influencing public policy: in effect using Big Data for research purposes.

Using population-wide datasets for social and economic research today is done in safe settings, using de-identified data, in the public interest, with independent analysis of the risks and benefits of projects as part of the data access process.

Each project goes before an ethics committee to assess its privacy considerations, and not only whether the project can be done, but whether it should be done, before it comes for central review.

Similarly, our smart-cities need ethics committee review, assessing the privacy impact and potential of projects before smart technology is commissioned or approved. Not only assessing whether they are feasible, and that we ‘can’ do it, but whether we ‘should’. Not only assessing the use of the data generated by the projects, but the ethical and privacy implications of the technology implementation itself.

The Committee recommendations on Big Data recently proposed that a ‘Council of Data Ethics’ should be created to address these consent and trust issues head on. But how?

Unseen smart-technology continues to grow unchecked often taking root in the cracks between public-private partnerships.

We keep hearing about Big Data improving public services, but that “public” data is often held by private companies. In fact, our personal data for public administration has been widely outsourced to private companies over which we have little oversight.

We’re told we paid the price in terms of skills and are catching up.

But if we simply roll forward in first gear into the connected city that sees all, we may find we arrive at a destination that was neither designed nor desired by the majority.

We may find that the “revolution, not evolution” hoped for in digital services will be of the unwanted kind, if companies keep pushing for more and more data without the individual’s consent and without our collective public buy-in to decisions made about data use.

Having written all this, I’ve now read the Royal Statistical Society’s publication which eloquently summarises their recent work and thinking. But I wonder how we tie all this into practical application?

How we do governance and regulation is tied tightly to the practicalities of public-private relationships, but also to deciding what society should look like. That is ultimately what our collective and policy decisions about what smart-cities should be, and may do, are defining.

I don’t think we are yet addressing in depth the complexity of regulation and governance that will be needed to make Big Data and public spaces safe, because companies say too much regulation risks choking off innovation and creativity.

But that risk need not be realised if regulation is managed well.

Rather, we must see action to manage the application of smart technology thoughtfully and quickly, because if we do not, very soon we’ll have lost any say in how our service providers deliver.

*******

I began my thoughts about this in Part one, on smart technology and data from the Sprint16 session and after this (Part two), continue to look at the design and development of smart technology making “The Best Use of Data” with a UK company case study (Part three) and “The Best Use of Data” used in predictions and the Future (Part four).

Breaking up is hard to do. Restructuring education in England.

This Valentine’s I was thinking about the restructuring of education in England and its wide ranging effects. It’s all about the break up.

The US EdTech market is very keen to break into the UK, and our front door is open.

We have adopted the model of Teach First, partnered with Teach For America, while some worry we do not ask “What is education for?”

Now we hear the next chair of Ofsted is to be sought from the US, someone renowned as “the scourge of the unions.”

Should we wonder how long until the management of schools themselves is US-sourced?

The education system in England has been broken up in recent years into manageable parcels – for private organisations, schools within schools, charity arms of commercial companies, and multi-school chains to take over. In effect, recent governments have made reforms that have dismantled state education as I knew it.

Just as the future vision of education outlined in Direct Democracy (2005), co-authored by Michael Gove, said: “The first thing to do is to make existing state schools genuinely independent of the state.”

Free schools touted as giving parents the ultimate in choice, are in effect another way to nod approval to the outsourcing of the state, into private hands, and into big chains. Despite seeing the model fail spectacularly abroad, the government seems set on the same here.

Academies, the route that finagles private corporations into running public education, is the preferred model, says Mr Cameron. While there are no plans to force schools to become academies, the legislation currently in ping-pong under the theme of coasting schools enables just that: the Secretary of State can impose academisation, albeit only on schools labelled ‘failing’ by Ofsted.

What ‘fails’ sometimes appears to be a school that staff and parents see as nothing less than good, but small. While small may be what parents want, small pupil-teacher ratios mean higher per-pupil costs. But the direction of growth is towards ‘big is better’.

“There are now 87 primary schools with more than 800 pupils, up from 77 in 2014 and 58 in 2013. The number of infants in classes above the limit of 30 pupils has increased again – with 100,800 pupils in these over-sized classes, an increase of 8% compared with 2014.” [BBC]

All this restructuring creates costs about which the Department wants to be less than transparent, and of which it has lost track.

If only we could see that these new structures raised standards. But, “while some chains have clearly raised attainment, others achieve worse outcomes creating huge disparities within the academy sector.”

If not delivering better results for children, then what is the goal?

A Valentine’s view of Public Service Delivery: the Big Break up

Breaking up the State system, once perhaps unthinkable, is possible through the creation of ‘acceptable’ public-private partnerships (as opposed to outright privatisation per se). Schools become academies through a range of providers and different pathways, at least to start with, and as some fail, the most successful become the market leaders in an oligopoly. Ultimately perhaps, this could become a near monopoly. Delivering ‘better’. Perhaps a new model, a new beginning, a new provider offering salvation from the flood of ‘failing’ schools, coming to the State’s rescue.

In order to achieve this entry to the market by outsiders, you must first remove conditions seen as restrictive, giving more ‘freedom’ to providers: to cut corners and make efficiency savings on things like food standards, the required curriculum, and the number of staff, or their pay.

And what if, as a result, staff leave, or are hard to recruit?

Convincing people that “tech” and “digital” will deliver cash savings and teach required skills through educational machine learning is key if staff costs are to be reduced; in times of austerity, when all else has been cut, staffing is the only budget left to slash.

Providers of self-teaching systems are convincing in their arguments that tech is the solution.

Sadly I remember when a similar thing was tried on paper. My first year of GCSE maths, aged 13-14, was ‘taught’ at our secondary comp by working through booklets in a series that we self-selected from the workbench in the classroom. Then we picked up the master marking-copy once done. Many of the boys didn’t need long to work out that the first step was an unnecessary waste of time. The teacher had no role in the classroom. We were bored to bits. In the final week at the end of the year they sellotaped the teacher to his chair.

I kid you not.

Teachers are so much more than knowledge transfer tools, and yet some today seem to consider them replaceable by technology.

The US is ahead of us in this model, which has grown hand-in-hand with commercialism in schools. Many parents are unhappy.

So is the DfE setting us up for future heartbreak if it wants us to go down the US route of more MOOCs, more tech, less funding and fewer staff? Where’s the cost-benefit and risk analysis, and the transparency?

We risk losing the best of what is human from the classroom if we remove the values teachers model and inspire. Unions, teachers and educationalists are, I am sure, more than aware of all these cumulative changes. However, the wider public seems little engaged.

For anyone ‘in education’ these changes will all be self-evident, and their balance of risks and benefits a matter of experience and political persuasion. As a parent, I’ve only come to understand these changes through researching how our pupils’ personal and school data have been commercialised and given away from the National Pupil Database without our consent since legislation changed in 2013, and how Higher Education student and staff data have been sold.

Will more legislative change be needed to keep our private data accessible in public services operating in an increasingly privately-run delivery model? And who will oversee that?

The Education Market is sometimes referred to as ‘The Wild West’. Is it getting a sheriff?

The news that the next chair of Ofsted is to be sought from the US did set alarm bells ringing for some in the press, who fear US standards and US-led organisations in British schools.

“The scourge of the unions” means not supportive of staff-based power, and in health our junior doctors have clocked exactly what breaking their ‘union’ bargaining power is all about. So who is driving all this change in education today?

Some education providers might be seen as individuals profiting from the break-up of the State. Some were accused of ‘questionable practices’. Others said oversight has been lacking. Margaret Hodge in 2014 was reported to have said: “It is just wrong to hand money to a company in which you have a financial interest if you are a trustee.”

I wonder if she has an opinion on a lead non-executive board member at the Department for Education also being the director of one of the biggest school chains? Or on the ex-Minister now employed by the same chain? Or that his campaign was funded by the same director? Why this register of interests is not transparent is a wonder.

It could appear to an outsider that the private-public revolving door is well oiled with sweetheart deals.

Are the reforms begun by Mr Gove simply to be executed to their end goal, whatever that may be, through Nicky Morgan, or is she driving her own new policies?

If Ofsted were to become US-experience led, will the Wild West be tamed, or will US providers be invited to join the action, reshaping a new frontier? What is the end game?

Breaking up is not hard to do, but in whose best interest is it?

We need only look to health to see a similar pattern.

The structures are freed up, and boundaries opened up (if you meet the other criteria), in the name of ‘choice’. The organisational barriers to break-up are removed in the name of ‘direct accountability’. And as for enabling plans through more ‘business intelligence’ gathered from data sharing, well, those plans abound.

Done well, new efficient systems and structures might bring public benefits, and the right technology can certainly bring great things. But have we first understood what made the old less efficient, if indeed it was, and where are the baselines to look back on?

Where is the transparency of the end goal and what’s the price the Department is prepared to pay in order to reach it?

Is reform in education transparent in its ideology, and in how its success is being measured, if not by improved attainment?

The results of change can also be damaging. In health we see failing systems and staff shortages, and their knock-on effects on patient care. In schools, these failures damage children’s start in life; it’s not just a ‘system’.

Can we assess if and how these reforms are changing the right things for the right reasons? Where is the transparency of what problems we are trying to solve, to assess what solutions work?

How is change impact for good and bad being measured, with what values embedded, with what oversight, and with whose best interests at its heart?

2005’s Direct Democracy could be read as a blueprint for co-author Mr Gove’s education reforms less than a decade later.

Debate over the restructuring of education and its marketisation seems to have bypassed most of us in the public, in a way health has not.

Underperformance as measured by new and often hard to discern criteria, means takeover at unprecedented pace.

And what does this mean for our most vulnerable children? SEN children are not required to be offered places by academies. The 2005 plans co-authored by Mr Gove also included: “killing the government’s inclusion policy stone dead,” without an alternative.

Is this the direction of travel our teachers and society supports?

What happens when breakups happen and relationship goals fail?

Who picks up the pieces? I fear the state is paying heavily for the break up deals, investing heavily in new relationships, and yet will pay again for failure. And so will our teaching staff, and children.

While Mr Hunt is taking all the heat right now for his part in writing Direct Democracy and its proposals to privatise health, set against the current health reforms and the restructuring of junior doctors’ contracts, we should perhaps also look to his co-author Mr Gove, and ask to better understand the current impact of his recent education reforms, compare them with what he proposed in 2005, and prepare for the expected outcomes of change before it happens (see p74).

One outcome was that failure was to be encouraged in this new system, and Sweden held up as an exemplary model:

“Liberating state schools would also allow the all-important freedom to fail.”

As Anita Kettunen, principal of JB Akersberga in Sweden reportedly said when the free schools chain funded by a private equity firm failed:

“if you’re going to have a system where you have a market, you have to be ready for this.”

Breaking up can be hard to do. Failure hurts. Are we ready for this?
******

 

Abbreviated on Feb 18th.

 

Blue Sky Thinking – Civil Aviation Authority plans to cut medical services – public consultation appears to be tick-box

Updated March 2016: the world class services at the centre have been closed. Class 1 and 3 medical certificates are no longer provided via the aeromedical centre at Gatwick.

The most recent CAA update of January 2016 confirmed that the plans would go ahead despite almost universal objection to many principles and the way it would be done. Unsurprisingly, there were only 15 responses to the 3rd consultation.

CAP 1338, which received only 15 responses, was the third of three documents published by the Civil Aviation Authority in 2014-15. The first two were CAP 1214 (www.caa.co.uk/cap1214) and CAP 1276 (www.caa.co.uk/cap1276), which includes the 40 original responses to consultation, including major airlines, BALPA, the Honourable Company of Air Pilots (a guild founded in 1929), aeromedical doctors and other professionals. All of them objected.

*****

Blog published October 28, 2015:

If government divests the state of our expertise along with our infrastructure, how will we ensure services continue to deliver universal public good?

The NHS is struggling to monitor the safety and efficacy of its services outsourced to private providers, according to a report published in the Independent in April. Now consider an outsourced medical service where safety and efficacy are reduced, for our commercial airline pilots. #Whatcouldpossiblygowrong?

If you had looked very hard at the Civil Aviation Authority’s website over the last year, you would be forgiven for missing the links to the consultation on outsourcing or divesting from its medical services. [1] This is the service organisation of 30 or so staff who ensure, in a part state-owned set-up, that newly qualifying pilots for commercial airlines and air traffic controllers are fit for the job. And not only British pilots; others come from outside the country, so great is its reputation. It is the last state-owned of four such centres, and is based at Gatwick.

Pilots have unique needs and unique fitness-to-fly checks to pass, as documented by Aida Edemariam in last weekend’s Guardian: “The medicals especially, Bor says, mean facing “the risk of losing one’s job… as often as every six months”.”

The initial consultation, now a year ago, suggested outsourcing the service to the private sector. Today it seems the preferred path is complete divestment from the delivery of its services. The second part of the seemingly tick-box exercise closed today. [5]

Tick-box, because the plans are going ahead despite almost every response to the consultation voicing concerns or serious questions, including from major airlines. BALPA at the time had not been able to respond adequately in the original Oct-Dec 2014 timeframe. Many other suggestions and ideas were raised, but from the CAA’s consultation response to criticisms it seemed that blue-sky thinking, creative alternative solutions differing from the CAA’s plans, was not welcomed.

It seems that in a bid to become lean, akin to having less to pay for on the balance sheet, the government is selling off not only concrete assets but also losing British state-led skills in services at which we excel. It is asking commercial companies to fill the gap, and many question whether there is sufficient expertise in the commercial market to deliver.

There are five key concerns here. The first is that without the state to hold accountable for the service, airlines and pilots must foot a bill they can no longer control, in a near-monopoly market. Elsewhere in health, spending on outsourcing these services has reportedly rocketed.

The second is quality control. How will quality of delivery be maintained for services which operate entirely for the benefit of the public good, but would now be required to turn a profit?

And the third is continuity of service. How will the universality of these services be maintained, offered fairly and to whom?

The fourth is whether the UK should sacrifice its unique leadership position of respected medical expertise in European and global flight safety.

And finally and most importantly, pilots, airlines, and healthcare professionals questioned in the last quarter of 2014 whether safety may be put at risk if the cost cutting move at the Civil Aviation Authority goes ahead.

This cut to regulatory oversight is part of the bonfire of red tape.

Responding to plans outlined by the Civil Aviation Authority in a public consultation [1] last autumn, professionals overwhelmingly suggested service improvements could be made without outsourcing what one airline called “the priceless nerve centre of expertise in the CAA”.

Based at the CAA’s Gatwick headquarters, the aeromedical centre offers the initial medical examinations required for commercial airline pilots and air traffic controllers, and periodic checks thereafter. It also undertakes assessments of the fitness of pilots to return to flying after illness.

“All pilots who hold a commercial licence undergo an annual Class 1 medical assessment with an Aeromedical Examiner, increasing to every six months from the age of 60, or 40 if they are undertaking single pilot operations.” [source: whatdotheyknow.com]

The CAA expects to reduce overall costs by outsourcing all of its non-mandatory aeromedical functions, outlined in consultation plans that were discussed with potential providers at meetings in mid April. [2] But unions suggested the CAA is putting commercial pressures before the public interest, and denounced the plans.

Steve Jary, Prospect national officer for aviation, said:

“The CAA executive board needs to listen and put safety, not commercial interests at the heart of its decision making.”

In its follow up consultation response in early 2015, the CAA said it does not believe in putting a price on public safety and it realised that cost and value are sensitive issues.

The national value of excellent medical services to pilots, however it looks in any business model on paper, may be impossible to put a price on in practice. It was especially sensitive earlier this year, as the plans for change coincided with the climate of raised passenger awareness following the Germanwings flight 9525 crash on March 24.

Long before this, in response to the October 2014 consultation, the Honourable Company of  Air Pilots, a professional guild, wrote:

“As long as human pilots are part of the aviation safety chain, it is essential that their fitness to operate is monitored and supported by an expert community without fear of or bias from commercial pressures.”

Lacking detailed financial analysis, it is hard to see from the consultation how alternative solutions measure up against private provision. Specifically, there was no estimate in the document of the cost of the CAA meeting its statutory obligations. [3]

One of the three airlines that responded to the consultation suggested the CAA could be seen to be outsourcing the commercially viable part of the service:

“The Aeromedical centre only seems to account for £500k out of the £3 million …and could potentially be seen as the most profitable element.”

By contracting out to a commercial provider and introducing the need to make a profit, some respondents are concerned the move would further increase costs to industry or individuals, and the CAA acknowledged this. Fees could potentially rise in what will effectively be a monopoly market: as of April there were only three other approved providers for this service across the whole of the UK. Two are already operated by the same public-private partnership and, although part-owned by government, are essentially commercially run.

Free from Treasury control, the Civil Aviation Authority is self-funding but sits under the wing of the Department for Transport, accountable to the Secretary of State for Transport. [4] But Government support was questioned by a pilot in the consultation, who wrote:

“If the CAA and the Department for Transport cannot resolve this without destroying the CAA medical service then we might as well pack it all in.”

Pressure has reportedly come from EASA, the European Aviation Safety Agency, to follow regulatory best practice and separate the duties of the authority from the delivery of services.

However, a group representing 15 CAA (UK) approved medical examiners, with a mean of 22 years’ experience, suggested this regulatory issue could be resolved in other ways, and said:

“Outsourcing any part of the medical department would remove essential functions, weakening the ability to respond to or promote future regulatory changes.

“Fragmentation will introduce inefficiency as work which should be integrated will be on at least two sites – never helpful.”

The medical service provided by the CAA is recognised as a market leader across Europe. It influences European and worldwide aeromedical policy and as one airline wrote in the consultation, has “rebuffed some of the more non evidence based demands of the European Aviation Safety Agency.”

Maintaining that globally respected expertise, say the CAA plans, is a third reason for redesigning a medical department fit for the future, but many respondents believe that outsourcing will achieve the opposite.

The Honourable Company of  Air Pilots suggested the plans would have:

“an adverse impact on flight safety and diminish unacceptably the UK’s aviation medicine competency, research capability and global reputation for excellence and leadership.”

The current headcount of over 30 full-time equivalent staff could be reduced to eight if the outsourcing plans go ahead and the service operates at its minimum regulatory duties.

Last year’s preparations for the outsourcing included an event in May, open to providers through the NHS Partners Network of the NHS Confederation, at which only one attendee was an NHS provider; all the others were private sector contracting organisations.

The Public and Commercial Services Union believes there is no provider which could fill the gap if the CAA stops providing its services in the current form. They said:

“We are requesting that this consultation be halted and consultation commences with the recognised trade unions on options within this paper to retain all existing services in-house.”

In April, the CAA said: “We are continuing to explore options for the future provision of medical services. Safety remains our number one priority and we will ensure that any changes that are made will be designed to enhance the UK’s excellent safety record. All medical requirements relating to pilots are set at international level and regulated nationally and will remain in force and unchanged regardless of any decisions relating to the provision of medical services in the future.”

Mr Haines explained at the April Board Meeting that “there would be a further discussion at the Board on the outcome of the CAA’s medical review consultation.” What that discussion concluded is yet to be published. Transparency has not been the Board’s strongest point in 2015.

Consultations are about allowing the public a chance to participate in democratic processes in order to play their part in determining the outcome. This consultation appears to have changed little of the plans.

There should be public debate about what we need our service institutions for, and what value we place on a universal public good where cost and benefit cannot be personalised, and where change requires meaningful public consultation. These changes are too important to be reserved for niche interested parties, or to become a tick-box exercise in which the planned outcome goes ahead regardless of the majority feedback. Public consultation in its present form appears to offer little in the way of checks and balances in today’s democracy. Some consultations are described as farcical.

Changes made in the public interest should be transparent, accountable, and robust to stand up to meaningful challenge.

As the Treasury seems set on its course, I wonder if they are using blue sky thinking to divest from our wealth of knowledge, staff and skills wisely, or plucking justification for ideology out of thin air?

###

References:

[1] Responses to consultation on the future structure of the CAA’s Medical Department: http://www.caa.co.uk/docs/33/CAP%201276%20Future%20structure%20of%20CAA%20Medical%20Department.pdf

[2] The prior information notice: http://ted.europa.eu/udl?uri=TED:NOTICE:99734-2015:TEXT:EN:HTML  (not yet a full tender notice)

[3] Financial detail limited: https://www.caa.co.uk/default.aspx?catid=1350&pagetype=90&pageid=16369

[4] CAA independent but accountable to the Department for Transport: http://www.parliament.uk/business/publications/written-questions-answers-statements/written-question/Commons/2014-09-10/209031

[5] Autumn 2015 consultation part two