
The devil craves DARPA

‘People, ideas, machines — in that order.’ This quote in the latest blog by Dominic Cummings is spot on, but the blind spots, or the deliberate scoping, that the blog reveals are just as interesting.

If you want to “figure out what characters around Putin might do”, move over Miranda. If your soul is for sale, then this might be the job for you. This isn’t anthropomorphism of Cummings, but an excuse to get in the parallels to Meryl Streep’s portrayal of Priestly.

“It will be exhausting but interesting and if you cut it you will be involved in things at the age of 21 that most people never see.”

Comments like these make people who are not of that mould feel of less worth. Commitment comes in many forms. People with kids and caring responsibilities may be some of your most loyal staff. You may not want them as your new PA, but you will almost certainly not want to lose them across the board.

After his latest post, some words of follow-up to existing staff, the thousands of public servants we have today, would be wise.

1. The blog is aimed at a certain kind of men. Speak to women too.

The framing of this call for staff is problematic, less for its suggested work ethic than for the structural inequalities it appears to purposely perpetuate, despite the poke at public school bluffers. Do you want the best people around you, able to play well with others, or not?

I am disappointed that the ask for “the sort of people we need to find” is designed, intentionally or not, to appeal to a certain kind of men. Even if he says it should be diverse, and includes people “like that girl hired by Bigend as a brand ‘diviner’.”

If Cummings is intentional about hiring the best people, then he needs to do better by women. We already have a PM that many women would consider toxic to work around, and won’t as a result.

Some of the most brilliant, cognitively diverse young people I know who fit these categories well, across the political spectrum, are themselves diverse by nature and expect their surroundings to be. They (unlike our generation) do not “babble about ‘gender identity diversity blah blah’.” Woke is not an adjective that needs explaining, but a way of life. Put such people off by appearing to devalue their norms, and you’ll miss out on some potentially brilliant applicants from a pool which will already be self-selecting, excluding many who simply won’t work for you, or Boris, or Brexit blah blah. People prepared to burn out as you want them to won’t be at their best for long. And it takes a long time to recover.

‘That girl’ was the main character, and her name was Cayce Pollard. Women know why you should say her name. Fewer women will have worked at CERN, perhaps for related reasons, compared with “the ideal candidate” described in this call.

“If you want an example of the sort of people we need to find in Britain, look at this,” he writes of C.C. Myers, with a link to ‘On the Cover: The World’s Fastest Man’.

Charlie Munger, Warren Buffett, Alexander Grothendieck, Bret Victor, von Neumann, Cialdini. Groves, Mueller, Jain, Pearl, Kay, Gibson, Grove, Makridakis, Yudkowsky, Graham and Thiel.

The *men illustrated* list goes on and on.

What does it matter how many lovers you have if none of them gives you the universe?

Not something I care to discuss over dinner either.

But women of all ages do care that our PM appears to be a cad. It matters therefore that your people be seen to work to a better standard. You want people loyal to your cause, and the public to approve, even if they don’t approve of your leader. Leadership goes far beyond electoral numbers and a mandate.

Women, including those that tick the skill boxes, need yet again to look beyond the numbers, and have to put up with a lot. This advertorial appeals to Peter Parker, when the future needs more of Miles Morales. Fewer people with the privilege and opportunity to work at the Large Hadron Collider, and more of those who stop Kingpin’s misuse of it and shut it down.

A different kind of the same kind of thing isn’t real change. This call for something new is far less radical than it is being portrayed as.

2. Change. Don’t forget to manage it by design.

In fact, the speculation that this is all change, hiring new people for new stuff [some of which he has genuinely interesting ideas on elsewhere, like “decentralisation and distributed control to minimise the inevitable failures of even the best people”] doesn’t really feature here; rather it is something of a precursor. He is starting less with building the new, and more with ‘drain the swamp’ of bureaucracy: the Washington style of 1980s Reagan, including ‘let’s put in some more of our kind of people’.

His personal brand of longer-term change may not be what some of his cheerleaders think it will be, but if the outcome is the same and seen to be ‘showing these Swamp creatures the zero mercy they deserve’ [sic], does intent matter? It does, and he needs to describe his future plans better if he wants a civil service that works well.

The biggest content gap (leaving actual policy content aside) is any appreciation of the current state, and of the need for change management.

Training gets a mention; but new process success depends on communicating change effectively, and delivering training about it to all, not only those from whom you expect the highest performance. People, not projects, remember?

Change management and capability transfer delivered by costly consultants are not needed, but making change understandable, not elitist, is. That means:

  • genuinely presenting an understanding of the as-is (I get you and your organisation; change *with* you, not change forced upon you),
  • communicating what the future model will move towards (why you want to change, and what good looks like), and
  • a roadmap of how you expect the organisation to get there (how and when), which need not be constricted by artificial comms grids.

Because people, and having their trust, are what make change work.

On top of the organisational model, *every* member of staff must know where their own path fits in and, if their role is under threat, whether training will be offered to adapt, or whether they will be made redundant. Uncertainty around this over time is also toxic. You might not care if you lose people along the way. You might consider these the most expendable people. But if people are fearful and unhappy in your organisation, or about their own future, it will hold them back from delivering at their best, and hold back the organisation as a result. And your best will leave, as much as those who are not.

“How to build great teams and so on”, is not a bolt-on extra here, it is fundamental.  You can’t forget the kitchens. But changing the infrastructure alone, cannot deliver real change you want to see.

3. Communications. Neither propaganda and persuasion nor PR.

There is not such a vast difference between the business of communications as a campaign tool and as a tool for control: persuasion and propaganda. But where there may be a blind spot in the promotion of Cialdini-six style comms, is that the behavioural scientists who excel at these will not use the kind of communication tools that either the civil service or the country needs for the serious communications of change, beyond the immediate short term.

Five thoughts:

  1. Your comms strategy should simply be “Show the thing. Be clear. Be brief.”
  2. Communicating that failure is acceptable, is only so if it means learning from it.
  3. If policy comms plans depend on work led by people like you, who like each other and like you, you’ll be told what you want to hear.
  4. Ditto, think tanks that think the same are not as helpful as others.
  5. And you need grit in the oyster for real change.

As an aside, for anyone having kittens about using an unofficial email to get around FOI requests and think it a conspiracy to hide internal communications, it really doesn’t work that way. Don’t panic, we know where our towel is.

4. The Devil craves DARPA. Build it with safe infrastructures.

Cummings’s long-established fetishising of technology and fascination with Moscow will be familiar to those close to him, or to his blog readers. Both are also currently fashionable, again. The solution is therefore no surprise, and has been prepped in various blogs for ages. The language is familiar. But single-mindedness over this length of time can make for short-sightedness.

In the US, DARPA was set up in 1958 after the Soviet Union launched the world’s first satellite, with a remit to “prevent technological surprise” and pump money into “high risk, high reward” projects. (Sunday Times, Dec 28, 2019)

In March, Cummings wrote in praise of Project Maven:

“The limiting factor for the Pentagon in deploying advanced technology to conflict in a useful time period was not new technical ideas — overcoming its own bureaucracy was harder than overcoming enemy action.”

Almost a year after that project collapsed, its most interesting feature was surely not the role of bureaucracy in tech failure. Maven was a failure not of tech, nor of bureaucracy, but of a company failing to align its values with the decency of its workforce. Whether the recalibration of its compass as a company is even possible remains to be seen.

If firing staff who hold you to account against a mantra of ‘don’t be evil’ is championed, this drive for big tech values underpinning your staff’s thinking and action will be less about supporting technology moonshots than a shift to the Dark Side of capitalist surveillance.

The incessant narrative focus on man and the machine (machine learning, the machinery of government, quantitative models and the frontiers of the science of prediction) is an obsession with power. The downplay of the human in that world is displayed in many ways, but the most obvious is the press and political narrative of a need to devalue human rights. And yet to succeed, tech and innovation need an equal and equivalent counterweight, in accountability under human rights and the law, so that when systems fail people, they do not cause catastrophic harm at scale.

“Practically nobody is ever held accountable regardless of the scale of failure,” you say? How do you measure your own failure? Or the failure of policy? Transparency over that, and a return to Ministerial accountability, are changes I would like to see. Or how about demanding accountability for algorithms that send children to social care, for which the CEO has said his failure is measured only by a Local Authority not saving money as a result of using their system?

We must stop state systems failing children, if they are not to create a failed society.

A UK DARPA-esque, devolved hothousing for technology will fail if you don’t shore up public trust, both in the state and commercial sectors. An electoral mandate won’t last, nor reach beyond its scope for long. You need a social licence to have legitimacy for tech that uses public data, and that licence is missing today. It is bone-headed and idiotic that we can’t get this right as a country, despite knowing how to. If government keeps avoiding doing it safely, it will come at a cost.

The Pentagon certainly cares about the implications for national security when the personal data of millions of people could be open to exploitation, blackmail or abuse.

You might, of course, not care. But commercial companies will when they go under. The electorate will. Your masters might, if their legacy will suffer and the debate about the national good, and the UK as a Life Sciences centre, all comes to naught.

There was little in this blog, of the reality of what these hires should deliver beyond more tech and systems’ change. But the point is to make systems that work for people, not see more systems at work.

We could have it all, but not if you spaff our data laws up the wall.

“But the ship can’t sink.”

“She is made of iron, sir. I assure you, she can. And she will. It is a mathematical certainty.”

[Attributed to Thomas Andrews, Chief Designer of the RMS Titanic.]

5. The ‘circle of competence’ needs values, not only to value skills.

It is important, and consistent behaviour, that Cummings says he recognises his own weaknesses, that some decisions are beyond his ‘circle of competence’, and that he should in effect become redundant, having brought in “the sort of expertise supporting the PM and ministers that is needed.” Founder’s syndrome is common to organisations, and politics is not exempt. But neither is the Peter principle a phenomenon particular only to the civil service.

“One of the problems with the civil service is the way in which people are shuffled such that they either do not acquire expertise or they are moved out of areas they really know to do something else.”

But so what? What’s worse is that politics has not only the Peter principle but the Dilbert principle when it comes to senior leadership. You can’t put people in positions expected to command respect when they tell others to shut up and go away, or fire without due process. If you want organisations to function together at scale, especially beyond the current problems with silos, they need people on the ground who can work together, who have a common goal, who respect those above them, and who feel it is all worthwhile. Their politics don’t matter. But integrity, respect and trust do, even if they don’t matter to you personally.

I agree wholeheartedly that circles of competence matter [as I see the need to build some in education on data and edTech]. Without the appropriate infrastructure change, radical change of policy is nearly impossible. But skill is not the only competency that counts when it comes to people.

If the change you want is misaligned with people’s values, people won’t support it, no matter who you get to see it through. Something on the integrity that underpins this endeavour will matter to the applicants too. Most people do care how managers treat their own.

The blog was pretty clear that Cummings won’t value staff unless their work ethic, skills and acceptance belong to him alone to judge sufficient or not, to be “binned within weeks if you don’t fit.”

This government already knows it has treated parts of the public like that for too long. Policy has knowingly left some people behind on society’s scrap heap, often those scored by automated systems as inadequate. Families in work, moved onto Universal Credit, feed their children from food banks for #5WeeksTooLong. The rape clause. Troubled families. Children with special educational needs battling for EHC plan recognition, without which schools won’t take them, and the DfE knowingly underfunding suitable Alternative Provision in education by a colossal several hundred per cent per place, by design.

The ‘circle of competence’ needs to recognise what happens as a result of policy, not only to place value on the skills of its delivery, or to see outcomes for people as inevitable or based on merit. Charlie Munger may have said, “At the end of the day – if you live long enough – most people get what they deserve.”

An awful lot of people deserve a better standard of living and human dignity than the UK affords them today. And we can’t afford not to fix it. A question for new hires: How will you contribute to doing this?

6. Remember that our civil servants are, after all, public servants.

The real test of competence, and whether the civil service delivers for the people whom they serve, is inextricably bound with government policy. If its values, if its ethics are misguided, building a new path with or without new people, will be impossible.

The best civil servants I have worked with have one thing in common: a genuine desire to make the world better. [We can disagree on what that looks like and for whom; on fraud detection, on immigration, on education, on exploitation of data mining and human rights, or the implications of the law. Their policy may bring harm, but their motivation is not malicious.] Your goal may be a ‘better’ civil service. They may be more focussed on better outcomes for people, not systems. Lose sight of that, and you put the service underpinning government at risk; not bringing change for good, but destroying the very point of it. Keep the point of a better service focussed on improvement for the public.

Civil servants civilly serve. In the words of those who have asked before, so should we all ask Cummings to outline his thoughts on:

  • “What makes the decisions which civil servants implement legitimate?
  • Where are the boundaries of that legitimacy and how can they be detected?
  • What should civil servants do if those boundaries are reached and crossed?”

Self-destruction for its own sake, is not a compelling narrative for change, whether you say you want to control that narrative, or not.

Two hands are a lot, but many more already work in the civil service. If Cummings only works against them, he’ll succeed not in building change, but resistance.

Shifting power and sovereignty. Please don’t spaff our data laws up the wall.

Duncan Green’s book, How Change Happens, reflects on how power and systems shape change, and its key theme is most timely after the General Election.

Critical junctures shake the status quo and throw all the power structures in the air.

The Sunday Times ran several post-election stories this weekend. Their common thread is about repositioning power; realigning the relationships across Whitehall departments, and with the EU.

It appears that meeting the political want to be seen by the public to re-establish sovereignty for Britain is going to come at a price.

The Sunday Times article suggests our privacy and data rights are likely to be high up on the list, in any post-Brexit fire sale:

“if they think we are going to be signing up to stick to their data laws and their procurement rules, that’s not going to happen”.

Whether it was simply a politically calculated statement or not, our data rights are clearly on the table in current wheeling and dealing.

Since there’s nothing in EU data protection law that is a barrier to trade doing what is safe, fair and transparent with personal data, it may simply be politically opportunistic to be seen to be doing something that was readily associated with the EU. “Let’s take back control of our cookies”, no less.

But the reality is that, either way, the UK GDPR is already weaker for UK residents than what is now being labelled here as the EU GDPR.

If anything, GDPR is already too lenient to organisations and does little, especially for children, to shift the power balance required to build the data infrastructures we need to use data well. The social contract for research and other things, appropriate to ever-expanding technological capacity, is still absent in UK practice.

But instead of strengthening it, what lies ahead is expected divergence between the UK GDPR and the EU GDPR in future, via the powers in the European Union (Withdrawal) Act 2018.

A post-Brexit majority government might pass all the law it likes to remove the ability to exercise our human rights or data rights under UK data protection law. Henry VIII powers adopted in the last year allow space for top-down authoritarian rule-making across many sectors. The UK government stood alone among other countries when it created its own exemption for immigration purposes in the Data Protection Act 2018, removing from all of us the ability to exercise rights under GDPR. It might choose to further reduce our freedom of speech, and access to the courts.

But would the harmful economic side effects be worth it?

If Britain is to become a ‘buzz of tech firms in the regions’, and since much of tech today relies on personal data processing, then a ‘break things and move fast’ approach (yes, that way round) won’t protect SMEs from reputational risk, or from losing public trust. Divergence may in fact break many businesses. Self-imposed UK double standards will cause confusion and chaos, increasing workload for many.

Weakened UK data laws for citizens will limit and weaken UK business, both in its positioning to trade with others and in its ability to manage trusted customer relations. Weakened UK data laws will weaken the position of UK research.

Having an accountable data protection officer can be seen as a challenge. But how much worse might challenges in court be, when you cock up handling millions of patients’ pharmaceutical records [1], or school children’s biometric data? To say nothing of the potential implications for national security [2], or for politicians, when lists of millions of people could be open to blackmail or abuse for a generation.

The level playing field that every company can participate in is improved, not harmed, by good data protection law. Small businesses that moan about it might simply never have been good at doing data well. Few changes of substance have been made to Britain’s data protection laws over the last twenty years.

Data laws are neither made-up, bonkers banana-shaped standards,  nor a meaningful symbol of sovereignty.

GDPR is also far from the only law the UK must follow when it comes to data.  Privacy and other rights may be infringed unlawfully, even where data protection law is no barrier to processing. And that’s aside from ethical questions too.

There isn’t so much a reality of “their data laws”, but rather *our* data laws, good for our own protection, for firms, *and* the public good.

Policy makers who might want such changes to weaken rights, may not care, looking out for fast headlines, not slow-to-realise harms.

But if they want a legacy of having built a better infrastructure that positions the UK for tech firms, for UK research, for citizens and for the long game, then they must not spaff our data laws up the wall.


Duncan Green’s book, How Change Happens is available via Open Access.


Updated December 26, 2019 to add links to later news:

[1]   20/12/2019 The Information Commissioner’s Office (ICO) has fined a London-based pharmacy £275,000 for failing to ensure the security of special category data. https://ico.org.uk/action-weve-taken/enforcement/doorstep-dispensaree-ltd-mpn/

[2] 23/12/2019 Pentagon warns military members DNA kits pose ‘personal and operational risks’ https://www.yahoo.com/news/pentagon-warns-military-members-dna-kits-pose-personal-and-operational-risks-173304318.html

The consent model fails school children. Let’s fix it.

The Joint Committee on Human Rights report, The Right to Privacy (Article 8) and the Digital Revolution, calls for robust regulation to govern how personal data is used, and stringent enforcement of the rules.

“The consent model is broken” was among its key conclusions.

Similarly, this summer, the Swedish DPA found, in accordance with GDPR, that consent was not a valid legal basis for a school pilot using facial recognition to keep track of students’ attendance, given the clear imbalance between the data subject and the controller.

This power imbalance is at the heart of the failure of consent as a lawful basis under Art. 6, for data processing from schools.

Schools, children and their families across England and Wales currently have no mechanisms to understand which companies and third parties will process their personal data in the course of a child’s compulsory education.

Children have rights to privacy and to data protection that are currently disregarded.

  1. Fair processing is a joke.
  2. Unclear boundaries between the processing in-school and by third parties are the norm.
  3. Companies and third parties reach far beyond the boundaries of processor, necessity and proportionality when they determine the nature of the processing: extensive data analytics, product enhancements and development beyond what is necessary for the existing relationship, or product trials.
  4. Data retention rules are as little respected as the boundaries of lawful processing, and ‘we make the data pseudonymous / anonymous and then archive / process / keep forever’ is common.
  5. Rights are as yet almost completely unheard of for schools to explain, offer and respect, except for Subject Access. Portability, for example, a requirement where consent is the lawful basis, simply does not exist.

In paragraph 8 of its general comment No. 1, on the aims of education, the UN Committee on the Rights of the Child stated in 2001:

“Children do not lose their human rights by virtue of passing through the school gates. Thus, for example, education must be provided in a way that respects the inherent dignity of the child and enables the child to express his or her views freely in accordance with article 12, para (1), and to participate in school life.”

Those rights currently compete unfairly with commercial interests. And the power imbalance in education is as enormous as the data mining in the sector. The then CEO of Knewton, Jose Ferreira, said in 2012:

“the human race is about to enter a totally data mined existence…education happens to be today, the world’s most data mineable industry – by far.”

At the moment, these competing interests, and the enormous power imbalance between companies and schools, and between schools and families, mean children’s rights are last on the list and often ignored.

In addition, there are serious implications for the State, schools and families due to the routine dependence on key systems at scale:

  • Infrastructure dependence, e.g. Google for Education
  • Hidden risks [tangible and intangible] of freeware
  • Data distribution at scale and dependence on third party intermediaries
  • and not least, the implications for families’ mental health and stress, thanks to the shift of the burden of school back-office admin from schools to the family.

It’s not a contract between children and companies either

Contract, GDPR Article 6(1)(b), does not work either as a basis of processing between the company and the data subject, because again it is the school that determines the need for, and nature of, the processing in education; it doesn’t work for children.

The European Data Protection Board published Guidelines 2/2019 on the processing of personal data under Article 6(1)(b) GDPR in the context of the provision of online services to data subjects, on October 16, 2019.

Controllers must, inter alia, take into account the impact on data subjects’ rights when identifying the appropriate lawful basis in order to respect the principle of fairness.

They also concluded, on the capacity of children to enter into contracts (footnote 10, page 6):

“A contractual term that has not been individually negotiated is unfair under the Unfair Contract Terms Directive “if, contrary to the requirement of good faith, it causes a significant imbalance in the parties’ rights and obligations arising under the contract, to the detriment of the consumer”.

Like the transparency obligation in the GDPR, the Unfair Contract Terms Directive mandates the use of plain, intelligible language.

Processing of personal data that is based on what is deemed to be an unfair term under the Unfair Contract Terms Directive will generally not be consistent with the requirement under Article 5(1)(a) GDPR that processing is lawful and fair.

In relation to the processing of special categories of personal data, in the guidelines on consent, WP29 has also observed that Article 9(2) does not recognize ‘necessary for the performance of a contract’ as an exception to the general prohibition to process special categories of data.

They also found:

“it is completely inappropriate to use consent when processing children’s data: children aged 13 and older are, under the current legal framework, considered old enough to consent to their data being used, even though many adults struggle to understand what they are consenting to.”

Can we fix it?

Consent models fail school children. Contracts can’t be between children and companies. So what do we do instead?

Schools’ statutory tasks rely on having a legal basis under data protection law: the public task lawful basis, Article 6(1)(e) GDPR, which implies accompanying lawful obligations and responsibilities of schools towards children. They cannot rely on 6(1)(f), legitimate interests. This 6(1)(e) basis does not extend directly to third parties.

Third parties should operate on the basis of contract with the school, as processors, but nothing more. That means third parties do not become data controllers. Schools stay the data controller.

Where that would differ from current practice is that most processors today stray beyond necessary tasks and become de facto controllers: sometimes through the everyday processing and having too much of a determining role in the definition of purposes, or not allowing changes to terms and conditions; by using data to develop their own or new products, for extensive data analytics, in the location of processing and data transfers; and very often through excessive retention.

Although the freedom of the mish-mash of procurement models across UK schools (individual schools, learning grids, MATs, Local Authorities), with no one-size-fits-all model, may often be a good thing, the lack of consistency today means your child’s privacy and data protection are a postcode lottery. Instead we need:

  • a radical rethink of the use of consent models, and of home-school agreements used to obtain manufactured ‘I agree’ consent,
  • to radically articulate and regulate what good looks like, for interactions between children and companies facilitated by schools, and
  • radically redesign a contract model which enables only that processing which is within the limitations of a processor’s remit, and therefore does not need to rely on consent.

It would mean radical changes in retention as well. Processors can process for only as long as the legal basis extends from the school. That should generally be only the time for which a child is in school, and using that product in the course of their education. And certainly data must not stay with an indefinite number of companies and their partners once the child has left that class or year, or has left school and stopped using the tool. Schools will need to be able to bring the data they outsource to third parties for learning back into the educational record, *if* they need it as evidence or as part of the learning record.

Where schools close (or where the legal entity shuts down and no one thinks of the school records [yes, it happens], or changes name and reopens in the same walls, as under academisation), there must be a designated controller communicated before the change occurs.

The school fence then becomes something that protects the purposes of the child’s data for education, for life, and is the go-to for questions. The child has a visible and manageable digital footprint. Industry can be confident that it does indeed have a lawful basis for processing.

Schools need to be within a circle of competence

This would need an independent infrastructure we do not have today, but need to draw on:

  • Due diligence,
  • communication to families and children of agreed processors on an annual basis,
  • an opt out mechanism that works,
  • alternative lesson content on offer, at a similar level of provision, for those who do opt out,
  • and end-of-school-life data usage reports.

The due diligence in procurement, the data protection impact assessment, and accountability need to be done up front, removed from the responsibility of the classroom teacher, who is in an impossible position, having had no basic teacher training in privacy law or data protection rights; and the documents need to be published, in consultation with governors and parents, before processing begins.

However, it would need to have a baseline of good standards that simply does not exist today.

That would also offer a public safeguard for processing at scale, where a company is not notifying the DPA because of the small numbers of children at each school, but where its overall processing of special category (sensitive) data could cover millions of children.

Where some procurement structures might exist today, in leftover learning grids, their independence is compromised by corporate partnerships and excessive freedoms.

While pre-approval of apps and platforms can fail where the onus is on the controller to accept a product at a point in time, the power shift would occur where products would not be permitted to continue processing without notifying schools of significant changes in agreed activities, ownership, storage of data abroad, and so on.

We shift the power balance back to schools, where they can trust a procurement approval route, and children and families can trust schools to only be working with suppliers that are not overstepping the boundaries of lawful processing.

What might school standards look like?

The first principles of necessity, proportionality, data minimisation would need to be demonstrable — just as required under data protection law for many years, and is more explicit under GDPR’s accountability principle. The scope of the school’s authority must be limited to data processing for defined educational purposes under law and only these purposes can be carried over to the processor. It would need legislation and a Code of Practice, and ongoing independent oversight. Violations could mean losing the permission to be a provider in the UK school system. Data processing failures would be referred to the ICO.

  1. Purposes: A duty that processing be necessary for strictly defined educational purposes.
  2. Service Improvement: Processing personal information collected from children to improve the product would be very narrow and constrained to the existing product and relationship with data subjects — i.e. security, not secondary product development.
  3. Deletion: Families and children must still be able to request deletion of personal information collected by vendors which do not form part of the permanent educational record. And a ‘clean slate’ approach for anything beyond the necessary educational record, which would in any event, be school controlled.
  4. Fairness: Whilst the child is at school, the school has responsibility for communicating to the child and family how their personal data are processed.
  5. Post-school accountability for the data resides with the school: On leaving school, the default for most companies should be deletion of all personal data, whether provided by the data subject or the school, or inferred from processing. For remaining data, the school should become the data controller and the data transferred to the school. For any remaining company processing, the company must be accountable as controller on demand to both the school and the individual, and at minimum communicate data usage on an annual basis to the school.
  6. Ongoing relationships: Loss of communication channels should be assumed to be a withdrawal of relationship and data transferred to the school, if not deleted.
  7. Data reuse and repurposing for marketing must be explicitly forbidden. Vendors must be prohibited from using information for secondary [onward or indirect] reuse, for example in product or external marketing to pupils or parents.
  8. Families must still be able to object to processing, on an ad hoc basis, but at no detriment to the child, and an alternative method of achieving the same aims must be offered.
  9. Data usage reports would become the norm to close the loop on an annual basis.  “Here’s what we said we’d do at the start of the year. Here’s where your data actually went, and why.”
  10. In addition, minimum acceptable ethical standards could be framed around, for example, accessibility, and restrictions on in-product advertising.
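Taken together, such standards amount to a checklist a procurement approval body could apply mechanically. A minimal sketch in Python; the field names are entirely hypothetical, mapping one-to-one onto the ten points above:

```python
from dataclasses import dataclass

@dataclass
class SupplierAssessment:
    # Hypothetical fields, one per proposed standard.
    purposes_strictly_educational: bool      # 1. Purposes
    improvement_limited_to_security: bool    # 2. Service improvement
    deletion_on_request_supported: bool      # 3. Deletion
    processing_communicated_to_families: bool  # 4. Fairness
    post_school_default_deletion: bool       # 5. Post-school accountability
    lost_contact_treated_as_withdrawal: bool # 6. Ongoing relationships
    no_marketing_reuse: bool                 # 7. No repurposing for marketing
    objection_without_detriment: bool        # 8. Right to object
    annual_data_usage_report: bool           # 9. Data usage reports
    meets_minimum_ethical_standards: bool    # 10. Ethical baseline

    def approved(self) -> bool:
        # A supplier stays on the approved list only if every standard holds;
        # a single violation could mean losing permission to be a provider.
        return all(vars(self).values())
```

The point of the sketch is the all-or-nothing rule: unlike today's patchwork, a single failed standard would be disqualifying rather than negotiable.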

There must be no alternative back route to just enough processing

What we should not do, is introduce workarounds by the back door.

Schools are not to carry on as they do today, manufacturing ‘consent’ which is in fact unlawful. It’s why Google, despite the objection when I set this out some time ago, is processing unlawfully. They rely on consent that simply cannot and does not exist.

The U.S. schools model wording would similarly fail GDPR tests, in that schools cannot ‘consent’ on behalf of children or families. I believe that in practice the US has weakened what should be strong protections for school children, through the too-expansive “school official exception” found in the Family Educational Rights and Privacy Act (“FERPA”), as described in Protecting Student Privacy While Using Online Educational Services: Requirements and Best Practices.

Companies can also work around their procurement pathways.

In parallel timing, the US Federal Trade Commission has a consultation open until December 9th on the implementation of the Children’s Online Privacy Protection Rule: the COPPA consultation.

The COPPA Rule “does not preclude schools from acting as intermediaries between operators and schools in the notice and consent process, or from serving as the parents’ agent in the process.”

‘There has been a significant expansion of education technology used in classrooms’, the FTC mused before asking whether the Commission should consider a specific exception to parental consent for the use of education technology used in the schools.

In a backwards approach to agency and the development of a rights respecting digital environment for the child, the consultation in effect suggests that we mould our rights mechanisms to fit the needs of business.

That must change. The ecosystem needs a massive shift to acknowledge that if it is to be GDPR compliant, which is a rights respecting regulation, then practice must become rights respecting.

That means meeting children’s and families’ reasonable expectations. If I send my daughter to school, and we are required to use a product that processes our personal data, it must be strictly for the *necessary* purposes of the task that the school asks of the company, and that the child and family expect, and not a jot more.

Borrowing on Ben Green’s smart enough city concept, or Rachel Coldicutt’s just enough Internet, UK school edTech suppliers should be doing just enough processing.

How it is done in the U.S., governed by FERPA, is imperfect and still results in too many privacy invasions, but it offers a regional model of expertise for schools to rely on, and strong contractual agreements of what is permitted.

That, we could build on. It could be just enough, to get it right.

Swedish Data Protection Authority decision published on facial recognition (English version)

In August 2019, the Swedish DPA fined Skellefteå Municipality, Secondary Education Board 200 000 SEK (approximately 20 000 euros) pursuant to the General Data Protection Regulation (EU) 2016/679 for using facial recognition technology to monitor the attendance of school children.

The Authority has now made a 14-page translation of the decision available in English on its site, where it can be downloaded.

This facial recognition technology trial compared images from camera surveillance with pre-registered images of the face of each child, and processed first and last names.

In the preamble, the decision recognised that the General Data Protection Regulation does not contain any derogations for pilot or trial activities.

In summary, the Authority concluded that by using facial recognition via camera to monitor school children’s attendance, the Secondary Education Board (Gymnasienämnden) in the municipality of Skellefteå (Skellefteå kommun) processed personal data that was unnecessary, excessively invasive, and unlawful; with regard to

  • Article 5 of the General Data Protection Regulation by processing personal data in a manner that is more intrusive than necessary and encompasses more personal data than is necessary for the specified purpose (monitoring of attendance)
  • Article 9 processing special category personal data (biometric data) without having a valid derogation from the prohibition on the processing of special categories of personal data,

and

  • Articles 35 and 36 by failing to fulfil the requirements for an impact assessment and failing to carry out prior consultation with the Swedish Data Protection Authority.

Consent

Perhaps the most significant part of the decision is the first officially documented recognition in education data processing under GDPR, that consent fails, even though explicit guardians’ consent was requested and it was possible to opt out.  It recognised that this was about processing the personal data of children in a disempowered relationship and environment.

It makes the assessment that consent was not freely given. It is widely recognised that consent cannot be a tick box exercise,  and that any choice must be informed. However, little attention has yet been given in GDPR circles, to the power imbalance of relationships, especially for children.

The decision recognised that the relationship that exists between the data subject and the controller, namely the balance of power, is significant in assessing whether a genuine choice exists, and whether or not it can be freely given without detriment. The scope for voluntary consent within the public sphere is limited:

“As regards the school sector, it is clear that the students are in a position of dependence with respect to the school …”

The Education Board had said that consent was the basis for the processing of the facial recognition in attendance monitoring.

With the Data Protection Authority’s assessment that the consent was invalid, the lawful basis for processing fell away.

The importance of necessity

The basis for processing was consent 6(1)(a), not 6(1)(e) ‘necessary for the performance of a task carried out in the public interest or in the exercise of official authority vested in the controller’ so as to process special category [sensitive] personal data.

However the same test of necessity, was also important in this case. Recital 39 of GDPR requires that personal data should be processed only if the purpose of the processing could not reasonably be fulfilled by other means.

The Swedish Data Protection Authority recognised and noted that, while there is a legal basis for administering student attendance at school, there is no explicit legal basis for performing the task through the processing of special categories of personal data or in any other manner which entails a greater invasion of privacy — put simply, taking the register via facial recognition did not meet the data protection test of being necessary and proportionate. There are less privacy invasive alternatives available, and on balance, the rights of the individual outweigh those of the data processor.

While some additional considerations were made for local Swedish data protection law (the Data Protection Act, prop. 2017/18:105 Ny dataskyddslag), even those exceptional provisions were not intended to be applied routinely to everyday tasks.

Considering rights by design

The decision refers to  the document provided by the school board, Skellefteå kommun – Framtidens klassrum (Skelleftå municipality – The classroom of the future). In the appendix (p. 5), “it noted one advantage of facial recognition is that it is easy to register a large group such as a class in bulk. The disadvantages mentioned include that it is a technically advanced solution which requires a relatively large number of images of each individual, that the camera must have a free line of sight to all students who are present, and that any headdress/shawls may cause the identification process to fail.”

The Board did not submit a prior consultation for data protection impact assessment to the Authority under Article 36. The Authority considered that a number of factors indicated that the processing operations posed a high risk to the rights and freedoms of the individuals concerned but that these were inadequately addressed, and failed to assess the proportionality of the processing in relation to its purposes.

For example, the processing operations involved
a) the use of new technology,
b) special categories of personal data,
c) children,
d) and a power imbalance between the parties.

As the risk assessment submitted by the Board did not demonstrate an assessment of relevant risks to the rights and freedoms of the data subjects [and its mitigations], the decision noted that the high risks pursuant to Article 36 had not been reduced.
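The factors listed above map onto a simple decision the controller should have made before processing began. A sketch of the Article 35/36 logic as the decision describes it; the function and parameter names are my own, purely illustrative:

```python
def prior_consultation_required(uses_new_technology: bool,
                                special_category_data: bool,
                                data_subjects_are_children: bool,
                                power_imbalance: bool,
                                residual_high_risk_mitigated: bool) -> bool:
    """Illustrative sketch: where factors indicating high risk to the
    rights and freedoms of data subjects are present, and the impact
    assessment does not demonstrate mitigation, the controller must
    consult the supervisory authority before processing (Art. 36)."""
    high_risk_indicated = any([uses_new_technology, special_category_data,
                               data_subjects_are_children, power_imbalance])
    return high_risk_indicated and not residual_high_risk_mitigated
```

In Skellefteå all four risk factors were present and mitigation was not demonstrated, so prior consultation was required and was not carried out.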

What’s next for the UK

The Swedish Data Protection Authority identifies some important points in perhaps the first significant GDPR ruling in the education sector so far, and much of it will apply to school data processing in the UK.

What may surprise some, is that this decision was not about the distribution of the data; since the data was stored on a local computer without any internet connection.  It was not about security, since the computer was kept in a locked cupboard. It was about the fundamentals of basic data protection and rights to privacy for children in the school environment, under the law.

Processing must meet the tests of necessity. Necessary is not defined by a lay test of convenience.

Processing must be lawful. Consent is rarely going to offer a lawful basis for routine processing in schools, and especially when it comes to the risks to the rights and freedoms of the child when processing biometric data, consent fails to offer satisfactory and adequate lawful grounds for processing, due to the power imbalance.

Data should be accurate, limited to the minimum necessary and proportionate, and must respect the fundamental rights of the child.

The Swedish DPA fined Skellefteå Municipality, Secondary Education Board 200 000 SEK (approximately 20 000 euros). According to Article 83 (1) of the General Data Protection Regulation, supervisory authorities must ensure that the imposition of administrative fines is effective, proportionate and dissuasive, and in this case, is designed to end the processing infringements.

The GDPR, like preceding data protection law, offers a route for data controllers and processors to understand what is lawful, and demands accountability: they must be able to demonstrate that they are compliant.

Whether children in the UK will find that it affords them their due protections now depends on enforcement like this case.

When FAT fails: Why we need better process infrastructure in public services.

I’ve been thinking about FAT, and the explainability of decision making.

There may be few decisions about people at scale today, in the public sector, in which computer-stored data aren’t used. For some, computers are used to make or help make decisions.

How we understand those decisions is a vital part of the obligation of fairness in data processing: how I know that *you* have data about me, and are processing it, in order to make a decision that affects me. So there’s an awful lot of good that comes out of that. The staff member does their job with better understanding. The person affected has an opportunity to question, and correct if necessary, the inputs to the decision. And one hopes that the computer support can make many decisions faster, and with more information in useful ways, than the human staff member alone.

But why, then, does it seem so hard to get this understood, and to put processes in place that make the decision-making understandable?

And more importantly, why does there seem to be no consistency in how such decision-making is documented, and communicated?

From school progress measures, to PIP and Universal Credit applications, to predictive ‘risk scores’ for identifying gang membership and child abuse. In a world where you need to be computer literate but there may be no computer to help you make an application, the computers behind the scenes are making millions of life-changing decisions.

We cannot see them happen, and often don’t see the data that goes into them. From start to finish, it is a hidden process.

The current focus on FAT — fairness, accountability, and transparency of algorithmic systems — often makes accountability for the computer's part in public sector decision-making appear to be something too hard to solve, needing complex thinking around it.

I want conversations to go back to something more simple. Humans taking responsibility for their actions. And to do so, we need better infrastructure for whole process delivery, where it involves decision making, in public services.

Academics, boards, conferences, are all spending time on how to make the impact of the algorithms fair, accountable, and transparent. But in the search for ways to explain legal and ethical models of fairness, and to explain the mathematics and logic behind algorithmic systems and machine learning, we’ve lost sight of why anyone needs to know. Who cares and why?

People need to get redress when things go wrong or appear to be wrong. If things work, the public at large generally need not know why.  Take TOEIC. The way the Home Office has treated these students makes a mockery of the British justice system. And the impact has been devastating. Yet there is no mechanism for redress and no one in government has taken responsibility for its failures.

That’s a policy decision taken by people.

Routes for redress on decisions today are often about failed policy and processes. They are costly and inaccessible, such as fighting Local Authorities decisions not to provide services required by law.

That’s a policy decision taken by people.

Rather in the same way that the concept of ethics has become captured and distorted by companies to suit their own agenda, so if anything, the focus on FAT has undermined the concept of whole process audit and responsibility for human choices, decisions, and actions.

The effect of a machine-made decision on those who are included in the system response, — and more rarely those who may be left out of it, or its community effects, — has been singled out for a lot of people’s funding and attention as what matters to understand and audit in the use of data for making safe and just decisions.

It’s right to do so, but not as a stand alone cog in the machine.

The computer and its data processing have been unjustifiably deified. Rather than supporting public sector staff, this disempowers them in the process as a whole. It is assumed the computer knows best, and can be used to justify a poor decision — “well, what could I do, the data told me to do it?” is rather like, “it was not my job to pick up the fax from the fax machine.” But that’s not a position we should encourage.

We have become far too accommodating of this automated helplessness.

If society feels a need to take back control, as a country and of our own lives, we also need to see decision makers take back responsibility.

The focus on FAT emphasises the legal and ethical obligations on companies and organisations, to be accountable for what the computer says, and the narrow algorithmic decision(s) in it.  But it is rare that an outcome in most things in real life, is the result of a singular decision.

So does FAT fit these systems at all?

Do I qualify for PIP? Can your child meet the criteria needed for additional help at school?  Does the system tag your child as part of a ‘Troubled Family’? These outcomes are life affecting in the public sector. It should therefore be made possible to audit *if* and *how* the public sector should offer to change lives as a holistic process.

That means re-examining whether and how we audit that whole end-to-end process: from policy idea, to legislation, through design, to delivery.

There are no simple, clean, machine readable results in that.

Yet here again, the current system-process-solution encourages the public sector to use *data* to assess and incentivise the process, to measure the process, and to award success and failure, packaged into surveys and payment-by-results.

The data driven measurement, assesses data driven processes, that compound the problems of this infinite human-out-of-the-loop.

This clean laser-like focus misses out on the messy complexity of our human lives.  And the complexity of public service provision makes it very hard to understand the process of delivery. As long as the end-to-end system remains weighted to self preservation, to minimise financial risk to the institution for example, or to find a targeted number of interventions, people will be treated unfairly.

Through a hyper focus on algorithms and computer-led decision accountability, the tech sector, academics and everyone involved, is complicit in a debate that should be about human failure. We already have algorithms in every decision process. Human and machine-led algorithms. Before we decide if we need a new process of fairness, accountability and transparency, we should know who’s responsible now for the outcomes and failure in any given activity, and ask, ‘Does it really need to change?’

To restore some of the power imbalance to the public on decisions about us made by authorities today, we urgently need public bodies to compile, publish and maintain at very minimum, some of the basic underpinning and auditable infrastructure — the ‘plumbing’ — inside these processes:

  1. a register of data analytics systems used by Local and Central Government, including but not only those where algorithmic decision-making affects individuals.
  2. a register of data sources used in those analytics systems.
  3. a consistently identifiable and searchable taxonomy of the companies and third-parties delivering those analytics systems.
  4. a diagrammatic mapping of core public service delivery activities, to understand the tasks, roles, and responsibilities within the process. It would benefit government at all levels to be able to see themselves where decision points sit, understand flows of data and cash, and see where which law supports the task, and accountability sits.

Why? Because without knowing what is being used at scale, how, and by whom, we are poorly informed and stay helpless. It allows enormous and often unseen risks without adequate checks and balances: named records with the sexual orientation data of almost 3.2 million people, and religious belief data on 3.7 million, sitting in multiple distributed databases, with massive potential for state-wide abuse by any current or future government. And the responsibility for each part of a process remains unclear.
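As a thought experiment, the first three registers above could share a common machine-readable entry format. A minimal sketch in Python; every field name, and the example entry itself, is hypothetical rather than a proposed standard:

```python
from dataclasses import dataclass, field

@dataclass
class AnalyticsSystemEntry:
    """One entry in a public register of government data analytics systems."""
    system_name: str
    authority: str                       # the local or central body using it
    supplier: str                        # consistently identifiable company name
    data_sources: list = field(default_factory=list)
    legal_basis: str = ""                # the law supporting the task
    affects_individuals: bool = False    # flags algorithmic decision-making

# A wholly invented example entry, for illustration only.
register = [
    AnalyticsSystemEntry(
        system_name="Example risk-scoring tool",
        authority="Example County Council",
        supplier="Example Analytics Ltd",
        data_sources=["school census", "housing records"],
        legal_basis="(to be cited by the publishing body)",
        affects_individuals=True,
    )
]
```

Even a register this thin would let the public search by supplier, filter for systems affecting individuals, and see which data sources feed which decisions.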

If people don’t know what you’re doing, they don’t know what you’re doing wrong, after all. But it also means the system is weighted unfairly against people. Especially those who least fit the model.

We need to make increasingly lean systems more fat, and stuff them with people power again. Yes, we need fairness, accountability and transparency. But we need those human qualities to reach beyond thinking about computer code alone. We need to restore humanity to automated systems, and it has to be reinstated across whole processes.

FAT focussed only on computer decisions is a distraction from auditing the failure to deliver systems that work for people. It is a failure to manage change, a failure of governance, and a failure to be accountable when things go wrong.

What happens when FAT fails? Who cares and what do they do?

Thoughts on the Online Harms White Paper (I)

“Whatever the social issue we want to grasp – the answer should always begin with family.”

Not my words, but David Cameron’s. Just five years ago, Conservative policy was all about “putting families at the centre of domestic policy-making.”

Debate on the Online Harms White Paper, thanks in part to media framing of its own departmental making, is almost all about children. But I struggle with the debate that leaves out our role as parents almost entirely, other than as bereft or helpless victims ourselves.

I am conscious, wearing my other hat of defenddigitalme, that not all families are the same, and not all children have families. Yet it seems counter to conservative values, for a party that traditionally places the family at the centre of policy, to leave out parents, or absolve them of responsibility, for their children’s actions and care online.

Parental responsibility cannot be outsourced to tech companies, nor can we simply accept that it is too hard to police our children’s phones. If we as parents are concerned about harmful content, it is our responsibility to enable access to that which is not, and to be aware of, and educate ourselves and our children about, what is. We are aware of what they read in books. I cast an eye over what they borrow or buy. I play a supervisory role.

Brutal as it may be, the Internet is not responsible for suicide. It’s just not that simple. We cannot bring children back from the dead. We certainly can, as society and policy makers, try to create conditions in which harms are not normalised and do not become more common, and seek to reduce risk. But few would suggest social media is a single source of children’s mental health issues.

What policy makers are trying to regulate is in essence, not a single source of online harms but 2.1 billion users’ online behaviours.

It follows that to see social media as a single source of attributable fault per se, is equally misplaced. A one-size-fits-all solution is going to be flawed, but everyone seems to have accepted its inevitability.

So how will we make the least bad law?

If we are to have sound law that can be applied around what is lawful,  we must reduce the substance of debate by removing what is already unlawful and has appropriate remedy and enforcement.

Debate must also try to be free from emotive content and language.

I strongly suspect the language around ‘our way of life’ and ‘values’ in the White Paper comes from the Home Office. So while it sounds fair and just, we must remember reality in the background of TOEIC, of Windrush, of children removed from school because their national records are being misused beyond educational purposes. The Home Office is no friend of child rights, and does not foster the societal values that break down discrimination and harm. It instead creates harms of its own making, and division by design.

I’m going to quote Graham Smith, for I cannot word it better.

“Harms to society, feature heavily in the White Paper, for example: content or activity that:

“threatens our way of life in the UK, either by undermining national security, or by reducing trust and undermining our shared rights, responsibilities and opportunities to foster integration.”

Similarly:

“undermine our democratic values and debate”;

“encouraging us to make decisions that could damage our health, undermining our respect and tolerance for each other and confusing our understanding of what is happening in the wider world.”

This kind of prose may befit the soapbox or an election manifesto, but has no place in or near legislation.”

[Cyberleagle, April 18, 2019, Users Behaving Badly – the Online Harms White Paper]

My key concern in this area is that from a feeling that ‘it is all awful’ stems a sense that ‘any regulation will be better than now’, which comes with a real risk of entrenching current practices that would not be better than now, and in fact need fixing.

More monitoring

The first, is today’s general monitoring of school children’s Internet content for risk and harms, which creates unintended consequences and very real harms of its own — at the moment, without oversight.

In yesterday’s House of Lords debate, Lord Haskel, said,

“This is the practicality of monitoring the internet. When the duty of care required by the White Paper becomes law, companies and regulators will have to do a lot more of it.” [April 30, HOL]

The Brennan Center yesterday published its research on the spend by US schools on social media monitoring software from 2013-18, and highlighted some of the issues:

“Aside from anecdotes promoted by the companies that sell this software, there is no proof that these surveillance tools work [compared with other practices]. But there are plenty of risks. In any context, social media is ripe for misinterpretation and misuse.” [Brennan Center for Justice, April 30, 2019]

That monitoring software focuses on two things —

a) seeing children through the lens of terrorism and extremism, and b) harms caused by them to others, or as victims of harms by others, or self-harm.

It is the near same list of ‘harms’ topics that the White Paper covers. Co-driven by the same department interested in it in schools — the Home Office.

These concerns are set in the context of the direction of travel of law and policy making, its own loosening of accountability and process.

It was preceded by a House of Commons discussion on Social Media and Health, led by the former Minister for Digital, Culture, Media and Sport, who seems to feel more at home in that sphere than in health.

His unilateral award of funds to the Samaritans for work with Google and Facebook on a duty of care, while the very same is still under public consultation, is surprising to say the least.

But it was his response to this question which points to the slippery slope such regulations may lead us down. The freedom of speech champions should be most concerned not even by what may end up in any legislation ahead, but by the direction of travel and the debate around it.

“Will he look at whether tech giants such as Amazon can be brought into the remit of the Online Harms White Paper?”

He replied that “Amazon sells physical goods for the most part and surely has a duty of care to those who buy them, in the same way that a shop has a responsibility for what it sells. My hon. Friend makes an important point, which I will follow up.”

Mixed messages

The Center for Democracy and Technology recommended in its 2017 report, Mixed Messages? The Limits of Automated Social Media Content Analysis, that the use of automated content analysis tools to detect or remove illegal content should never be mandated in law.

Debate so far has demonstrated broad gaps between what is wanted, what is known, and what is possible. If behaviours are to be stopped because they are undesirable rather than unlawful, we open up a whole can of worms if this is not done with the greatest attention to detail.

Lord Stevenson and Lord McNally both suggested that pre-legislative scrutiny of the Bill, and more discussion would be positive. Let’s hope it happens.

Here’s my personal first reflections on the Online Harms White Paper discussion so far.

Six suggestions:

Suggestion one: 

The Law Commission Review, mentioned in the House of Lords debate, may provide what I had been thinking of crowdsourcing, and now may not need to: a list of the laws that the discussion around the Online Harms White Paper reaches into, so that we can compare what is needed in debate with what is being drawn in. We should aim to curtail emotive discussion of broad risk and threat that people experience online. This would enable the themes which are already covered in law to be set aside, and the focus put on the gaps. It would make for much tighter and more effective legislation. For example, the Crown Prosecution Service offers Guidelines on prosecuting cases involving communications sent via social media, but a wider list of law is needed.

Suggestion two:
After (1) defining what legislation is lacking, definitions must be very clear, narrow, and consistent across other legislation. They are not for the regulator to determine ad hoc and alone.

Suggestion three:
If children’s rights are to be so central in discussion of this paper, then their wider rights, including privacy and participation, access to information, and freedom of speech, must be included in debate. This should include academic, research-based evidence of children’s experience online when making the regulations.

Suggestion four:
Internet surveillance software in schools should be publicly scrutinised. A review should establish the efficacy, boundaries, and oversight of policy and practice regarding Internet monitoring for harms, and such monitoring should not be embedded further without it. Boundaries should be put into legislation for clarity and consistency.

Suggestion five:
Terrorist activity or child sexual exploitation and abuse (CSEA) online are already unlawful and should not need additional Home Office powers. Great caution must be exercised here.

Suggestion six: 
Legislation could and should encapsulate accountability and oversight for micro-targeting and algorithmic abuse.


More detail behind my thinking follows below, after the break. [Structure rearranged on May 14, 2019]



Women Leading in AI — Challenging the unaccountable and the inevitable

Notes [and my thoughts] from the Women Leading in AI launch event of the Ten Principles of Responsible AI report and recommendations, February 6, 2019.

Speakers included Ivana Bartoletti (GemServ), Jo Stevens MP, Professor Joanna J Bryson, Lord Tim Clement-Jones, Roger Taylor (Centre for Data Ethics and Innovation, Chair), Sue Daley (techUK), Reema Patel, Nuffield Foundation and Ada Lovelace Institute.

Challenging the unaccountable and the ‘inevitable’ is the title of the conclusion of the Women Leading in AI report Ten Principles of Responsible AI, launched this week, and this makes me hopeful.

“There is nothing inevitable about how we choose to use this disruptive technology. […] And there is no excuse for failing to set clear rules so that it remains accountable, fosters our civic values and allows humanity to be stronger and better.”

Ivana Bartoletti, co-founder of Women Leading in AI, began the event, hosted at the House of Commons by Jo Stevens, MP for Cardiff Central, and spoke brilliantly of why it matters right now.

Everyone’s talking about ethics, she said, but it has limitations. I agree with that. This was by contrast very much a call to action.

It was nearly impossible not to cheer, as she set out without any of the usual bullshit, the reasons why we need to stop “churning out algorithms which discriminate against women and minorities.”

Professor Joanna J Bryson took up multiple issues, such as why

  • innovation ‘flashes in the pan’ are not sustainable, and not what we’re looking for in things that work for us [society].
  • The power dynamics of data: noting that Facebook, Google et al are global assets, and also global problems, she flagged the UK consultation on taxation, open now.
  • And that it is critical that we do not have another nation with access to all of our data.

She challenged the audience to think about the fact that inequality is higher now than it has been since World War I. That the rich are getting richer and that imbalance of not only wealth, but of the control individuals have in their own lives, is failing us all.

This big picture thinking, while zooming in on detailed social, cultural, political and tech issues, fascinated me most that evening. It apparently frustrated the man next to me, who said to me at the end, ‘but they haven’t addressed anything on the technology.’

[I wondered if that summed up neatly, some of why fixing AI cannot be a male dominated debate. Because many of these issues for AI, are not of the technology, but of people and power.] 

Jo Stevens, MP for Cardiff Central, hosted the event and was candid about politicians’ level of knowledge and the need to catch up on some of what matters in the tech sector.

We grapple with the speed of tech, she said. We’re slow at doing things and tech moves quickly. It means that we have to learn quickly.

While discussing how regulation is not something AI tech companies should fear, she suggested that a constructive framework, one which protects society against some of the problems we see, is necessary and just, because self-regulation has failed.

She talked about their enquiry, which began with “fake news” and disinformation, but has grown to include:

  • wider behavioural economics,
  • how it affects democracy,
  • understanding the power of data,
  • disappointment with social media companies, who understand the power they have, and fail to be accountable.

She wants to see something that changes the way big business works, in the way that employment regulation challenged exploitation of the workforce and unsafe practices in the past.

The bias (conscious or unconscious) and power imbalance have some similarity with the effects on marginalised communities — women, BAME, disabilities — and she was looking forward to seeing the proposed solutions, and welcomed the principles.

Lord Clement-Jones, as Chair of the Select Committee on Artificial Intelligence, picked up the values highlighted in its March 2018 report, AI in the UK: ready, willing and able?

Right now there are so many different bodies and groups, in parliament and beyond, looking at this [AI / the Internet / the digital world], he said, so it was good that the topic is timely, front and centre, with a focus on women, diversity and bias.

He highlighted the importance of maintaining public trust. How do you understand bias? How do you know how algorithms are trained, and understand the issues? He fessed up to being a big fan of DotEveryone and their drive for better ‘digital understanding’.

[Though sometimes this point is over-complicated by suggesting individuals must understand how the AI works, the consensus of the evening was common sense, aligned with the Article 29 Working Party guidance: data controllers must ensure they explain clearly and simply to individuals how the profiling or automated decision-making process works, and what its effect is for them.]

The way forward he said includes:

  • Designing ethics into algorithms up front.
  • Data audits need to be diverse in order to embody fairness and diversity in the AI.
  • Questions of the job market and re-skilling.
  • The enforcement of ethical frameworks.

He also asked how far bodies will act, in different debates. Deciding who decides on that is still a debate to be had.

For example, aware of the social credit agenda and scoring in China, we should avoid the same issues. He also agreed with Joanna, that international cooperation is vital, and said it is important that we are not disadvantaged in this global technology. He expected that we [the Government Office for AI] will soon promote a common set of AI ethics, at the G20.

Facial recognition and AI are examples of areas that require regulation for safe use of the tech and to weed out those using it for the wrong purposes, he suggested.

However, on regulation he held back. We need to be careful about too many regulators he said. We’ve got the ICO, FCA, CMA, OFCOM, you name it, we’ve already got it, and they risk tripping over one another. [What I thought as CDEI was created para 31.]

We [the Lords Committee] didn’t suggest yet another regulator for AI, he said and instead the CDEI should grapple with those issues and encourage ethical design in micro-targeting for example.

Roger Taylor (Chair of the CDEI), after saying it felt as if the WLinAI report was like someone had left their homework on his desk, supported the WLinAI principles as important, and agreed it was time for practical things, and to work out what needs done.

Can our existing regulators do their job, and cover AI? he asked, suggesting new regulators will not be necessary. Bias, he rightly recognised, already exists in our laws and bodies with public obligations, and in how AI is already operating:

  • CV sorting. [problematic IMO > see Amazon, US teachers]
  • Policing.
  • Creditworthiness.

What evidence is needed, what process is required, what is needed to assure that we know how it is actually operating? Who gets to decide whether this is fair or not? While these are complex decisions, they are ultimately not for technicians, but for society, he said.

[So far so good.]

Then he made some statements which were rather more ambiguous. The standards expected of the police will not be the same as those for marketeers micro targeting adverts at you, for example.

[I wondered how and why.]

Start-up industries pay more to Google and Facebook than they do in taxes, he said.

[I wondered how and why.]

When we think about a knowledge economy, the output of our most valuable companies is increasingly ‘what is our collective truth? Do you have this diagnosis or not? Are you a good credit risk or not? Even who you think you are — your identity will be controlled by machines.’

What can we do as one country [to influence these questions on AI], in what is a global industry? He believes, a huge amount. We are active in the financial sector, the health service, education, and social care — and while we are at the mercy of large corporations, even large corporations obey the law, he said.

[Hmm, I thought, considering the Google DeepMind-Royal Free agreement that didn’t, and venture capitalists not renowned for their ethics, and yet advise on some of the current data / tech / AI boards. I am sceptical of corporate capture in UK policy making.]

The power to use systems to nudge our decisions, he suggested, is one that needs careful thought. The desire to use the tech to help make decisions is inbuilt into what is actually wrong with the technology that enables us to do so. [With this I strongly agree, and there is too little protection from nudge in data protection law.]

The real question here is, “What is OK to be owned in that kind of economy?” he asked.

This was arguably the neatest and most important question of the evening, and I vigorously agreed with him asking it, but then I worry about his conclusion in passing, that he was, “very keen to hear from anyone attempting to use AI effectively, and encountering difficulties because of regulatory structures.”

[And unpopular or contradictory a view as it may be, I find it deeply ethically problematic that the Chair of the CDEI is held by someone who had a joint venture that commercially exploited confidential data from the NHS without public knowledge, and its sale to the Department of Health was described by the Public Accounts Committee as a “hole and corner deal”. That was the route towards care.data, which his co-founder later led for NHS England. The company was then bought by Telstra, where Mr Kelsey went next on leaving NHS England. The whole commodification of the confidentiality of public data, without regard for public trust, is still a barrier to sustainable UK data policy.]

Sue Daley (Tech UK) agreed this year needs to be the year we see action, and the report is a call to action on issues that warrant further discussion.

  • Business wants to do the right thing, and we need to promote it.
  • We need two things — confidence and vigilance.
  • We’re not starting from scratch; she talked about GDPR as the floor, not the ceiling. A starting point.

[I’m not quite sure what she was after here, but perhaps it was the suggestion that data regulation is fundamental in AI regulation, with which I would agree.]

What is the gap that needs filled, she asked? Gap analysis is what we need next, while avoiding duplication of effort and complexity of work with other bodies. The big, profound questions need to be addressed if we are to position the UK as the place where companies want to come.

Sue was the only speaker who went on to talk about the education system, and the need to frame what skills a generation will need for a future world, ‘to thrive in the world we are building for them.’

[The Silicon Valley driven entrepreneur narrative that the education system is broken, is not an uncontroversial position.]

She finished with the hope that young people watching BBC Icons the night before would see Alan Turing [winner of the title] and say: yes, I want to be part of that.

Listening to Reema Patel, representative of the Ada Lovelace Institute, was the reason I didn’t leave early and missed my evening class. Everything she said resonated, and was some of the best I have heard in the recent UK debate on AI.

  • Civic engagement: the role of the public is as yet unclear, with not one homogeneous public, but many publics.
  • The sense of disempowerment is important, with disconnect between policy and decisions made about people’s lives.
  • Transparency and literacy are key.
  • Accountability is vague but vital.
  • What does the social contract look like on people using data?
  • Data may not only be about an individual and under their own responsibility, but about others and what does that mean for data rights, data stewardship and articulation of how they connect with one another, which is lacking in the debate.
  • Legitimacy: if people don’t believe it is working for them, it won’t work at all.
  • Ensuring tech design is responsive to societal values.

2018 was a terrible year she thought. Let’s make 2019 better. [Yes!]


Comments from the floor and questions included Professor Noel Sharkey, who spoke about the reasons why it is urgent to act, especially where technology is unfair and unsafe and already in use. He pointed to Compass (Durham police), and to predictive policing and facial recognition with 5% accuracy, and said that the Met was not taking these flaws seriously. Liberty produced a strong report on it, out this week.

Caroline, from Women in AI, echoed my own comments on the need for urgent review of these technologies as used with children in education and social care [in particular where used for prediction of child abuse and interventions in family life].

Joanna J Bryson added to the conversation on accountability: people are not following existing software and audit protocols, and someone just needs to go and see if people did the right thing.

The basic question of accountability is to ask whether any flaw is the fault of a corporation, of due diligence, or of the users of the tool. Telling people that this is the same problem as any other software makes it much easier to find solutions to accountability.

Tim Clement-Jones asked how many fronts we can fight on at the same time, if government has appeared to exempt itself from some of these issues, and created a weak framework for itself on handling data in the Data Protection Act. Critically, he also asked: is the ICO adequately enforcing on government and public accountability, at local and national levels?

Sue Daley also reminded us that politicians need not know everything, but need to know what the right questions are to ask: what are the effects this has on my constituents, on employment, on my family? And while she also suggested that not using the technology could be unethical, a participant countered that it’s not the worst thing to have to slow technology down and ensure it is safe before we all go along with it.

My takeaways of the evening included that there is a very large body of women, of whom attendees were only a small part, who are thinking, building and engineering solutions to some of these societal issues embedded in policy, practice and technology. They need heard.

It was genuinely electric and empowering, to be in a room dominated by women, women reflecting diversity of a variety of publics, ages, and backgrounds, and who listened to one another. It was certainly something out of the ordinary.

There was a subtle but tangible tension over whether or not regulation beyond what we have today is needed.

While regulating the human behaviour that becomes encoded in AI, we need to ensure that the ethics of human behaviour, reasonable expectations and fairness are not conflated with the technology itself [ie a question of whether AI is good or bad], but focus on how it is designed, trained, employed and audited, and on whether it should be used at all.

This was the most effective group challenge I have heard to date to counter the usual assumed inevitability of a mythical omnipotence. Perhaps, Julia Powles, this is the beginning of a robust, bold, imaginative response.

Why there are not more women or people from minorities working in the sector was a really interesting, if short, part of the discussion. Why should young women and minorities want to go into an environment that they can see is hostile, in which they may not be heard, and where we still hold *them* responsible for making work work?

And while there were many voices lamenting the skills and education gaps, there were probably fewer who might see the solution more simply, as I do. Schools are foreshortening Key Stage 3 by a year, replacing a breadth of subjects with an earlier, compulsory three-year GCSE curriculum which includes RE and PSHE. It means that at 12, many children must choose between a GCSE course in computer science / coding, a consumer-style iMedia course, or no IT at all, for the rest of their school life. This either-or content is incredibly short-sighted, and surely some blend of non-examined digital skills should be offered to all through to 16, at least in parallel importance with RE or PSHE.

I also still wonder about all that incredibly bright and engaged people are not talking about, not solving, and missing in policy making, while caught up in AI. We need to keep thinking broadly, and keep human rights at the centre of our thinking on machines. Anaïs Nin wrote over 70 years ago about the risk that growth in technology, while expanding our potential for connectivity through machines, would diminish our genuine connectedness as people.

“I don’t think the [American] obsession with politics and economics has improved anything. I am tired of this constant drafting of everyone, to think only of present day events”.

And as I wrote nearly 3 years ago, we still seem to have no vision for sustainable public policy on data, or for establishing the social contract for its use, as Reema said, which should underpin the UK AI debate. Meanwhile, the current changing national public policies in England on identity and technology are becoming catastrophic.

Challenging the unaccountable and the ‘inevitable’ in today’s technology and AI debate, is an urgent call to action.

I look forward to hearing how Women Leading in AI plan to make it happen.


References:

Women Leading in AI website: http://womenleadinginai.org/
WLiAI Report: 10 Principles of Responsible AI
@WLinAI #WLinAI

image credits 
post: creative commons Mark Dodds/Flickr
event photo:  / GemServ

Policy shapers, product makers, and profit takers (1)

In 2018, ethics became the new fashion in UK data circles.

The launch of the Women Leading in AI principles of responsible AI has prompted me to try to finish and post these thoughts, which have been on my mind for some time. If two parts of 1K words is tl;dr for you, then in summary, we need more action on:

  • Ethics as a route to regulatory avoidance.
  • Framing AI and data debates as a cost to the Economy.
  • Reframing the debate around imbalance of risk.
  • Challenging the unaccountable and the ‘inevitable’.

And in the next post on:

  • Corporate Capture.
  • Corporate Accountability, and
  • Creating Authentic Accountability.

Ethics as a route to regulatory avoidance

In 2019, the calls to push aside old wisdoms for new, for everyone to focus on the value-laden words of ‘innovation’ and ‘ethics’, appear an ever louder attempt to reframe regulation and law as barriers to business, and to ask that they be cast aside.

On Wednesday evening, at the launch of the Women Leading in AI principles of responsible AI, the chair of the CDEI said in closing, he was keen to hear from companies where, “they were attempting to use AI effectively and encountering difficulties due to regulatory structures.”

In IBM’s own words to government recently:

“A rush to further regulation can have the effect of chilling innovation and missing out on the societal and economic benefits that AI can bring.”

The vague threat is very clear: if you regulate, you’ll lose. But the societal and economic benefits are just as vague.

So far, many talking about ethics are trying to find a route to regulatory avoidance. ‘We’ll do better,’ they promise.

In Ben Wagner’s recent paper, Ethics as an Escape from Regulation: From ethics-washing to ethics-shopping, he asks how to ensure this does not become the default engagement with ethical frameworks or rights-based design. He sums up: “In this world, ‘ethics’ is the new ‘industry self-regulation’.”

Perhaps it’s ingenious PR to make sure that what is in effect self-regulation, right across the business model, looks like it comes imposed from others, from the very bodies set up to fix it.

But, as I consider in part 2, is this healthy for UK public policy, and for the future not of an industry sector but of a whole technology, when it comes to AI?

Framing AI and data debates as a cost to the Economy

Companies, organisations and individuals arguing against regulation frame the debate as if it would come at a great cost to society and the economy. But we rarely hear what effect they expect on their own company, or what cost/benefit they expect for themselves. It’s disingenuous to have only part of that conversation. In fact, the AI debate would be richer were it included. If companies think their innovation or profits are at risk from non-use, or regulated use, and there is risk to the national good associated with these products, we should be talking about all of that.

And in addition, we can talk about use and non-use in society. Too often, the whole debate is intangible. Show me real costs, real benefits. Real risk assessments. Real explanations that speak human. Industry should show society what’s in it for them.

You don’t want it to ‘turn out like GM crops’? Then learn their lessons on transparency, trustworthiness, and avoid the hype. And understand sometimes there is simply tech, people do not want.

Reframing the debate around imbalance of risk

And while we often hear about the imbalance of power associated with using AI, we also need to talk about the imbalance of risk.

While a small false positive rate for a company product may be a great success for them, or for a Local Authority buying the service, it might at the same time mean lives forever changed, children removed from families, and individual reputations ruined.
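To make that imbalance tangible, here is a minimal sketch of the base-rate arithmetic. Every number in it is my own illustrative assumption, not a figure from any real screening system:

```python
# Base-rate arithmetic: a "small" false positive rate, applied at population
# scale to a rare event, flags mostly the wrong people.
# All figures below are hypothetical, chosen only to illustrate scale.
population = 100_000        # families screened (hypothetical)
actual_cases = 100          # 0.1% genuinely at risk (hypothetical)
true_positive_rate = 0.90   # claimed sensitivity (hypothetical)
false_positive_rate = 0.05  # a "small" 5% false positive rate (hypothetical)

true_positives = round(actual_cases * true_positive_rate)
false_positives = round((population - actual_cases) * false_positive_rate)

# Of everyone flagged, what fraction was genuinely at risk?
precision = true_positives / (true_positives + false_positives)

print(f"{false_positives} families wrongly flagged for every {true_positives} found")
print(f"precision: {precision:.1%}")
```

On these assumptions, the wrongly flagged outnumber the correctly flagged by more than fifty to one: a false positive rate that looks like vendor success translates into thousands of families investigated in error.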

And where company owners may see no risk from the product they assure is safe, there are intangible risks that need factored in, for example in education where a child’s learning pathway is determined by patterns of behaviour, and how tools shape individualised learning, as well as the model of education.

Companies may change business model, ownership, and move on to other sectors after failure. But with the levels of unfairness already felt in the relationship between the citizen and State — in programmes like Troubled Families, Universal Credit, Policing, and Prevent — where use of algorithms and ever larger datasets is increasing, long term harm from unaccountable failure will grow.

Society needs a rebalance of the system urgently to promote transparent fairness in interactions, including but not only those with new applications of technology.

We must find ways to reframe how this imbalance of risk is assessed, and is distributed between companies and the individual, or between companies and state and society, and enable access to meaningful redress when risks turn into harm.

If we are to do that, we need first to separate truth from hype, public good from self-interest and have a real discussion of risk across the full range from individual, to state, to society at large.

That’s not easy against a non-neutral backdrop and scant sources of unbiased evidence and corporate capture.

Challenging the unaccountable and the ‘inevitable’.

In 2017 the Care Quality Commission reported on online services in the NHS, and found serious concerns about unsafe and ineffective care. They have a cross-regulatory working group.

By contrast, no one appears to oversee that risk and the embedded use of automated tools involved in decision-making or decision support, in children’s services, or education. Areas where AI and cognitive behavioural science and neuroscience are already in use, without ethical approval, without parental knowledge or any transparency.

Meanwhile, as all this goes on, many academics are busy debating how to fix algorithmic bias, accountability and transparency.

Few are challenging the narrative of the ‘inevitability’ of AI.

Julia Powles and Helen Nissenbaum recently wrote that many of these current debates are an academic distraction, removed from reality. It is under appreciated how deeply these tools are already embedded in UK public policy. “Trying to “fix” A.I. distracts from the more urgent questions about the technology. It also denies us the possibility of asking: Should we be building these systems at all?”

Challenging the unaccountable and the ‘inevitable’ is the title of the conclusion of the Women Leading in AI report on principles, and makes me hopeful.

“There is nothing inevitable about how we choose to use this disruptive technology. […] And there is no excuse for failing to set clear rules so that it remains accountable, fosters our civic values and allows humanity to be stronger and better.”

[1] Powles, Nissenbaum, 2018, The Seductive Diversion of ‘Solving’ Bias in Artificial Intelligence, Medium

Next: Part 2 of Policy shapers, product makers, and profit takers, on:

  • Corporate Capture.
  • Corporate Accountability, and
  • Creating Authentic Accountability.

Policy shapers, product makers, and profit takers (2)

Corporate capture

Companies are increasingly in controlling positions over the tech narrative in the press. They are funding neutral third-sector orgs’ and think tanks’ research. Supporting organisations advising on online education. Closely involved in politics. And they sit, increasingly, within the organisations set up to lead the technology vision, advising government on policy and UK data analytics, or on social media, AI and ethics.

It is all subject to corporate capture.

But is this healthy for UK public policy and the future not of an industry sector, but a whole technology, when it comes to AI?

If a company’s vital business interests seem unfazed by the risk and harm they cause to individuals — from people who no longer trust the confidentiality of the system to measurable harms — why should those companies sit on public policy boards set up to shape the ethics they claim we need, to solve the problems and restore loss of trust that these very same companies are causing?

We laud people in these companies as co-founders and forward thinkers on new data ethics institutes. They are invited to sit on our national boards, or create new ones.

What does that say about the entire board’s respect for the law which the company breached? It is hard not to see it signal acceptance of the company’s excuses or lack of accountability.

Corporate accountability

The same companies whose work has breached data protection law, in multiple ways, seemingly ‘by accident’, on national data extractions, are those that cross the t’s and dot the i’s on even the simplest conference call, and demand everything is said in strictest confidence. Meanwhile their everyday business practices ignore millions of people’s lawful rights to confidentiality.

The extent of commercial companies’ influence on these boards is opaque. To allow this ethics bandwagon to be driven by the corporate giants surely eschews genuine rights-based values, and the long-term integrity of the bodies they appear to serve.

I am told that these global orgs must be in the room and at the table, to use the opportunity to make the world a better place.

These companies already have *all* the opportunity. Not only monopoly positions on their own technology, but the datasets at scale which underpin it, excluding new entrants to the market. Their pick of new hires from universities. The sponsorship of events. The political lobbying. Access to the media. The lawyers. Bottomless pockets to pay for it all. And seats at board tables set up to shape UK policy responses.

It’s a struggle for power, and a stake in our collective future. The status quo is not good enough for many parts of society, and to enable Big Tech or big government to maintain that simply through the latest tools, is a missed chance to reshape for good.

You can see it in the make-up of many tech boards, and their pervasive white male bias. We hear it echoed in London think tank conferences, even independent tech design agencies, or set out in some Big Tech reports. All seemingly unconnected, but often funded by the same driving sources.

These companies are often those that made it worse to start with, and the very ethics issues the boards have been set up to deal with, are at the core of their business models and of their making.

The deliberate infiltration of influence on online safety policy for children, or global privacy efforts is very real, explicitly set out in the #FacebookEmails, for example.

We will not resolve these fundamental questions, as long as the companies whose business depend on them, steer national policy. The odds will be ever in their favour.

At the same time, some of these individuals are brilliant. In all senses.

So what’s the answer? If they are around the table, what should the UK public expect of their involvement, and how do we ensure we know in whose best interests they act? How do we achieve authentic accountability?

Whether it be social media, data analytics, or AI in public policy, can companies be safely permitted to be policy shapers if they wear all the hats; product maker, profit taker, *and* process or product auditor?

Creating Authentic Accountability

At minimum we must demand responsibility for their own actions from board members who represent or are funded by companies.

  1. They must deliver on their own product problems first before being allowed to suggest solutions to societal problems.
  2. There should be credible separation between informing policy makers, and shaping policy.
  3. There must be total transparency of funding sources across any public sector boards, of members, and those lobbying them.
  4. Board members must be meaningfully held accountable for continued company transgressions on rights and freedoms, not only harms.
  5. Oversight of board decision making must be decentralised, transparent and available to scrutiny and meaningful challenge.

While these new bodies may propose solutions that include public engagement strategies, transparency, and standards, few propose meaningful oversight. The real test is not what companies say in their ethical frameworks, but in what they continue to do.

If they fail to meet legal or regulatory frameworks, minimum accountability should mean no more access to public data sets and losing positions of policy influence.

Their behaviour needs to go above and beyond meeting the letter of the law, scraping by or working around rights based protections. They need to put people ahead of profit and self interests. That’s what ethics should mean, not be a PR route to avoid regulation.

As long as companies think the consequences of their platforms and actions are tolerable and a minimal disruption to their business model, society will be expected to live with their transgressions, and our most vulnerable will continue to pay the cost.


This is part 2 of thoughts on Policy shapers, product makers, and profit takers — data and AI. Part 1 is here.

The power of imagination in public policy

“A new, a vast, and a powerful language is developed for the future use of analysis, in which to wield its truths so that these may become of more speedy and accurate practical application for the purposes of mankind than the means hitherto in our possession have rendered possible.” [on Ada Lovelace, The First Tech Visionary, New Yorker, 2013]

What would Ada Lovelace have argued for in today’s AI debates? I think she may have used her voice not only to call for the good use of data analysis, but for her second strength: the power of her imagination.

James Ball recently wrote in The European [1]:

“It is becoming increasingly clear that the modern political war isn’t one against poverty, or against crime, or drugs, or even the tech giants – our modern political era is dominated by a war against reality.”

My overriding takeaway from three days spent at the Conservative Party Conference this week was similar. It reaffirmed the title of a school debate I lost at age 15: ‘We only believe what we want to believe.’

James writes that it is “easy to deny something that’s a few years in the future”, and that Conservatives, “especially pro-Brexit Conservatives – are sticking to that tried-and-tested formula: denying the facts, telling a story of the world as you’d like it to be, and waiting for the votes and applause to roll in.”

These positions are not confined to one party’s politics, or speeches of future hopes, but define perception of current reality.

I spent a lot of time listening to MPs, to Ministers, to Councillors, and to party members. At fringe events, in coffee queues, on the exhibition floor. I had conversations pressed against corridor walls as small press-illuminated swarms of people passed by with Johnson or Rees-Mogg at their centre.

In one panel I heard a primary school teacher deny that child poverty really exists, or affects learning in the classroom.

In another, in passing, a digital Minister suggested that Pupil Referral Units (PRUs) are where most of society’s ills start; but as a Birmingham head wrote this week, “They’ll blame the housing crisis on PRUs soon!” and, “for the record, there aren’t gang recruiters outside our gates.”

This is no tirade on the failings of public policymakers, however. While it is easy to suspect malicious intent when you are at, or feel, the sharp end of policies which do harm, success is subjective.

It is clear that an overwhelming sense of self-belief exists in those responsible, in the intent of any given policy to do good.

Where policies include technology, this is underpinned by a self-reaffirming belief in its power. Power waiting to be harnessed by government and the public sector. It is even more appealing where it is sold as a cost-saving tool in cash-strapped councils. Many that have cut away human staff are now trying to use machine power to make decisions. Some of the unintended consequences of taking humans out of the process are catastrophic for human rights.

Sweeping human assumptions behind such thinking on social issues and their causes are becoming hard-coded into algorithmic solutions that identify young people in danger of becoming involved in crime, using “risk factors” such as truancy, school exclusion, domestic violence and gang membership.
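To see how crude this hard-coding can be, here is a minimal sketch of what such a risk-factor system might look like under the hood. Every name, weight and threshold below is invented for illustration; none is taken from a real deployed system. The point is that a handful of human assumptions, once written down as numbers, silently become the decision rule.

```python
# Hypothetical illustration only: the factor names, weights, and threshold
# are invented, not drawn from any real public sector system.

RISK_WEIGHTS = {
    "truancy": 2,
    "school_exclusion": 3,
    "domestic_violence_at_home": 3,
    "suspected_gang_membership": 4,
}

def risk_score(child_record: dict) -> int:
    """Sum the weights of whichever flags are set in the record."""
    return sum(
        weight
        for factor, weight in RISK_WEIGHTS.items()
        if child_record.get(factor)
    )

def flag_for_intervention(child_record: dict, threshold: int = 5) -> bool:
    # A single hard-coded threshold decides who is singled out.
    return risk_score(child_record) >= threshold
```

Notice that the weights encode a policy judgement (is exclusion “worse” than truancy?) that is invisible to the child being scored, and that a one-point change to the threshold redraws the line around who gets an intervention.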

The disconnect between perception of risk, the reality of risk, and real harm, whether perceived or felt from these applied policies in real life, is not so much ‘easy to deny something that’s a few years in the future’, as Ball writes, but a denial of the reality now.

Concerningly, there is a lack of imagination of what real harms look like. There is no discussion of where these predictive policies have no positive effect, or even a negative one, and make things worse.

I’m deeply concerned that there is an unwillingness to recognise any failures in current data processing in the public sector, particularly at scale, and given the well-known poor quality of administrative data; or to be accountable for those failures.

Harms, existing harms to individuals, are perceived as outliers. Any broad sweep of harms across a policy like Universal Credit seems to be perceived as political criticism, which makes the measurable failures less meaningful, less real, and less necessary to change.

There is a worrying, growing trend of finger-pointing exclusively at others’ tech failures instead, in particular at social media companies.

Imagination and mistaken ideas are reinforced where the idea is plausible, and shared. An oft-heard and self-affirming belief was repeated in many fora between policymakers, media, and NGOs regarding children’s online safety: “There is no regulation online.” In fact, much that applies offline applies online. The Crown Prosecution Service Social Media Guidelines are a good place to start. [2] But no one discusses where children’s lives may be put at risk, or made less safe, through the use of state information about them.

Policymakers want data to give us certainty. But many uses of big data and new tools appear to do little more than quantify moral fears, and yet still guide real-life interventions in real lives.

Child abuse prediction, and school exclusion interventions should not be test-beds for technology the public cannot scrutinise or understand.

In one trial attempting to predict exclusion, a UK research project run in 2013-16 linked the school records of 800 children in 40 London schools with Metropolitan Police arrest records for all participants. It found the interventions created no benefit, and may have caused harm. [3]

“Anecdotal evidence from the EiE-L core workers indicated that in some instances schools informed students that they were enrolled on the intervention because they were the “worst kids”.”

“Keeping students in education, by providing them with an inclusive school environment, which would facilitate school bonds in the context of supportive student–teacher relationships, should be seen as a key goal for educators and policy makers in this area,” researchers suggested.

But policymakers seem intent on using systems that tick boxes and create triggers to single people out, with quantifiable impact.

Some of these systems are known to be poor, or harmful.

When it comes to predicting and preventing child abuse, there is concern about the harms seen in US programmes ahead of us, in both Pittsburgh and Chicago, which has scrapped its programme.

The Illinois Department of Children and Family Services ended a high-profile program that used computer data mining to identify children at risk for serious injury or death after the agency’s top official called the technology unreliable, and children still died.

“We are not doing the predictive analytics because it didn’t seem to be predicting much,” DCFS Director Beverly “B.J.” Walker told the Tribune.

Many professionals in the UK share these concerns. How long will they be ignored and children be guinea pigs without transparent error rates, or recognition of the potential harmful effects?

Helen Margetts, Director of the Oxford Internet Institute and Programme Director for Public Policy at the Alan Turing Institute, suggested at the IGF event this week that stopping the use of these AI tools in the public sector is impossible. We could not decide that “we’re not doing this until we’ve decided how it’s going to be. It can’t work like that.” [45:30]

Why on earth not? At least for these high-risk projects.

How long should children be the test subjects of machine learning tools at scale, without transparent error rates, audit, or scrutiny of their systems and understanding of unintended consequences?

Is harm to any child a price you’re willing to pay to keep using these systems to perhaps identify others, while we don’t know?

Is there an acceptable positive versus negative outcome rate?

The evidence so far of AI in child abuse prediction is not clearly showing that more children are helped than harmed.
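Why transparent error rates matter so much here comes down to base-rate arithmetic. The numbers below are invented purely for illustration, but the mechanism is general: when the outcome being predicted is rare, even a seemingly accurate classifier flags far more children wrongly than rightly.

```python
# Illustrative base-rate arithmetic; all figures are invented.
# Suppose a tool has 90% sensitivity (catches 90% of true cases) and
# 90% specificity (clears 90% of non-cases), applied to a population
# where the predicted outcome affects 1 child in 100.

def flag_counts(population, prevalence, sensitivity, specificity):
    affected = population * prevalence
    unaffected = population - affected
    true_positives = affected * sensitivity          # correctly flagged
    false_positives = unaffected * (1 - specificity)  # wrongly flagged
    return true_positives, false_positives

tp, fp = flag_counts(100_000, 0.01, 0.90, 0.90)
# ~900 children correctly flagged against ~9,900 wrongly flagged:
# roughly 11 of every 12 flagged children are false alarms.
```

Without published error rates, nobody outside the system can even run this arithmetic, let alone weigh the cost borne by the wrongly flagged children.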

Surely it’s time to stop thinking, and demand action on this.

It doesn’t take much imagination to see the harms. Safe technology, and safe use of data, do not prevent imagination or innovation employed for good.

If we continue to ignore the views of Patrick Brown, Ruth Gilbert, Rachel Pearson and Gene Feder, Charmaine Fletcher, Mike Stein, Tina Shaw and John Simmonds, I want to know why.

Where you are willing to sacrifice certainty of human safety for the machine decision, I want someone to be accountable for why.



References

[1] James Ball, The European, Those waging war against reality are doomed to failure, October 4, 2018.

[2] Thanks to Graham Smith for the link. “Social Media – Guidelines on prosecuting cases involving communications sent via social media.” The Crown Prosecution Service (CPS), August 2018.

[3] Obsuth, I., Sutherland, A., Cope, A. et al. “London Education and Inclusion Project (LEIP): Results from a Cluster-Randomized Controlled Trial of an Intervention to Reduce School Exclusion and Antisocial Behavior” (March 2016). J Youth Adolescence (2017) 46: 538. https://doi.org/10.1007/s10964-016-0468-4