
The devil craves DARPA

‘People, ideas, machines — in that order.’ This quote in the latest blog by Dominic Cummings is spot on, but the blind spots, or the deliberate scoping, that the blog reveals are just as interesting.

If you want to “figure out what characters around Putin might do”, move over Miranda. If your soul is for sale, then this might be the job for you. This isn’t to anthropomorphise Cummings, but an excuse to draw the parallels with Meryl Streep’s portrayal of Miranda Priestly.

“It will be exhausting but interesting and if you cut it you will be involved in things at the age of 21 that most people never see.”

Comments like these make people who are not of that mould feel worth less. Commitment comes in many forms. People with kids and caring responsibilities may be some of your most loyal staff. You may not want them as your new PA, but you will almost certainly not want to lose them across the board.

After his latest post, some words of follow-up to existing staff, the thousands of public servants we have today, would be wise.

1. The blog is aimed at a certain kind of man. Speak to women too.

The framing of this call for staff is problematic, less for its suggested work ethic than for the structural inequalities it appears purposely to perpetuate, despite the poke at public school bluffers. Do you want the best people around you, able to play well with others, or not?

I am disappointed that asking for “the sort of people we need to find” is framed, intentionally or not, to appeal to a certain kind of man. Even if he says it should be diverse and include people “like that girl hired by Bigend as a brand ‘diviner.'”

If Cummings is intentional about hiring the best people, then he needs to do better by women. We already have a PM that many women would consider toxic to work around, and won’t as a result.

Some of the most brilliant, cognitively diverse young people I know who fit these categories well, and across the political spectrum, are themselves diverse by nature and expect their surroundings to be. They (unlike our generation) do not “babble about ‘gender identity diversity blah blah’.” Woke is not an adjective that needs explaining, but a way of life. Put such people off by appearing to devalue their norms, and you’ll miss out on some potentially brilliant applicants from a pool which will already be self-selecting, excluding many who simply won’t work for you, or Boris, or Brexit blah blah. People prepared to burn out as you want them to aren’t going to be at their best for long. And it takes a long time to recover.

‘That girl’ was the main character, and her name was Cayce Pollard.  Women know why you should say her name. Fewer women will have worked at CERN, perhaps for related reasons, compared with “the ideal candidate” described in this call.

“If you want an example of the sort of people we need to find in Britain, look at this,” he writes of C.C. Myers, with a link to ‘On the Cover: The World’s Fastest Man’.

Charlie Munger, Warren Buffett, Alexander Grothendieck, Bret Victor, von Neumann, Cialdini. Groves, Mueller, Jain, Pearl, Kay, Gibson, Grove, Makridakis, Yudkowsky, Graham and Thiel.

The *men illustrated* list goes on and on.

What does it matter how many lovers you have if none of them gives you the universe?

Not something I care to discuss over dinner either.

But women of all ages do care that our PM appears to be a cad. It matters therefore that your people be seen to work to a better standard. You want people loyal to your cause, and the public to approve, even if they don’t approve of your leader. Leadership goes far beyond electoral numbers and a mandate.

Women, including those who tick the skill boxes, need yet again to look beyond the numbers, and have to put up with a lot. This advertorial appeals to Peter Parker, when the future needs more of Miles Morales. Fewer people with the privilege and opportunity to work at the Large Hadron Collider, and more of those who stop Kingpin’s misuse and shut it down.

A different kind of the same kind of thing isn’t real change. This call for something new is far less radical than it is being portrayed.

2. Change. Don’t forget to manage it by design.

In fact, the speculation that this is all about change, hiring new people for new stuff [on some of which he has genuinely interesting ideas elsewhere, like “decentralisation and distributed control to minimise the inevitable failures of even the best people”], doesn’t really feature here; it is something of a precursor. He starts less with building the new than with ‘draining the swamp’ of bureaucracy, Washington-style, 1980s Reagan, including ‘let’s put in some more of our kind of people’.

His personal brand of longer-term change may not be what some of his cheerleaders think it will be, but if the outcome is the same and seen to be ‘showing these Swamp creatures the zero mercy they deserve‘ [sic], does intent matter? It does, and he needs to describe his future plans better if he wants a civil service that works well.

The biggest content gap (leaving actual policy content aside) is any appreciation of the current state, and of the need for change management.

Training gets a mention, but the success of any new process depends on communicating the change effectively, and delivering training about it to everyone, not only those from whom you expect the highest performance. People not projects, remember?

Change management and capability transfer delivered by costly consultants are not needed, but making change understandable, not elitist, is. That means you need to:

  • genuinely present an understanding of the as-is (I get you and your org; change *with* you, not forced upon you),
  • communicate what future model the organisation is going to move towards (why you want to change and what good looks like), and
  • set out a roadmap of how you expect the organisation to get there (how and when), which need not be constricted by artificial comms grids.

Because people, and having their trust, are what make change work.

On top of the organisational model, *every* member of staff must know where their own path fits in and, if their role is under threat, whether training will be offered to adapt, or whether they will be made redundant. Prolonged uncertainty around this is also toxic. You might not care if you lose people along the way. You might consider these the most expendable people. But if people are fearful and unhappy in your organisation, or about their own future, it will hold them back from delivering at their best, and the organisation as a result. And your best will leave, as much as those who are not.

“How to build great teams and so on” is not a bolt-on extra here; it is fundamental. You can’t forget the kitchens. But changing the infrastructure alone cannot deliver the real change you want to see.

3. Communications. Neither propaganda and persuasion, nor PR.

There is not such a vast difference between the business of communications as a campaign tool and as a tool for control: persuasion and propaganda. But a blind spot in the promotion of Cialdini-six style comms is that the behavioural scientists who excel at these will not use the kind of communication tools that either the civil service or the country needs for the serious communication of change, beyond the immediate short term.

Five thoughts:

  1. Your comms strategy should simply be “Show the thing. Be clear. Be brief.”
  2. Communicating that failure is acceptable, is only so if it means learning from it.
  3. If policy comms plans depend on work led by people like you, who like each other and like you, you’ll be told what you want to hear.
  4. Ditto, think tanks that think the same are not as helpful as others.
  5. And you need grit in the oyster for real change.

As an aside, for anyone having kittens about using an unofficial email to get around FOI requests and thinking it a conspiracy to hide internal communications, it really doesn’t work that way. Don’t panic, we know where our towel is.

4. The Devil craves DARPA. Build it with safe infrastructures.

Cummings’s long-established fetishising of technology and fascination with Moscow will be familiar to those close to him, or to blog readers. They are also currently fashionable, again. The solution is therefore no surprise, and has been prepped in various blogs for ages. The language is familiar. But single-mindedness over this length of time can make for short-sightedness.

In the US, DARPA was set up in 1958 after the Soviet Union launched the world’s first satellite, with a remit to “prevent technological surprise” and pump money into “high risk, high reward” projects. (Sunday Times, Dec 28, 2019)

In March, Cummings wrote in praise of Project Maven:

“The limiting factor for the Pentagon in deploying advanced technology to conflict in a useful time period was not new technical ideas — overcoming its own bureaucracy was harder than overcoming enemy action.”

Almost a year after that project collapsed, its most interesting feature was surely not the role of bureaucracy in tech failure. Maven was a failure not of tech, nor of bureaucracy, but of the company failing to align its values with the decency of its workforce. Whether the recalibration of its compass as a company is even possible remains to be seen.

If firing staff who hold you to account against a mantra of ‘don’t be evil’ is championed, this drive for big tech values underpinning your staff’s thinking and action will be less about supporting technology moonshots than about a shift to the Dark Side of capitalist surveillance.

The incessant narrative focus on man and the machine (machine learning, the machinery of government, quantitative models and the frontiers of the science of prediction) is an obsession with power. The downplaying of the human in that world is displayed in so many ways, but the most obvious is the press and political narrative of a need to devalue human rights. And yet to succeed, tech and innovation need an equal and equivalent counterweight in accountability under human rights and the law, so that when systems fail people, they do not cause catastrophic harm at scale.

“Practically nobody is ever held accountable regardless of the scale of failure”, you say? How do you measure your own failure? Or the failure of policy? Transparency over that, and a return to Ministerial accountability, are changes I would like to see. Or how about demanding accountability for algorithms that send children to social care, whose CEO has said his failure is only measured by a Local Authority not saving money as a result of using their system?

We must stop state systems failing children, if they are not to create a failed society.

A UK DARPA-esque, devolved hothousing of technology will fail if you don’t shore up public trust, both in the state and in commercial sectors. An electoral mandate won’t last, nor reach beyond its scope for long. You need a social licence for tech that uses public data to have legitimacy, and that licence is missing today. It is bone-headed and idiotic that we can’t get this right as a country. We know how to; if government keeps avoiding doing it safely, it will come at a cost.

The Pentagon certainly cares about the implications for national security when the personal data of millions of people could be open to exploitation, blackmail or abuse.

You might, of course, not care. But commercial companies will, when they go under. The electorate will. Your masters might, if their legacy suffers and debate about the national good and the UK as a Life Sciences centre all comes to naught.

There was little in this blog about the reality of what these hires should deliver beyond more tech and systems change. But the point is to make systems that work for people, not to see more systems at work.

We could have it all, but not if you spaff our data laws up the wall.

“But the ship can’t sink.”

“She is made of iron, sir. I assure you, she can. And she will. It is a mathematical certainty.”

[Attributed to Thomas Andrews, Chief Designer of the RMS Titanic.]

5. The ‘circle of competence’ needs values, not only to value skills.

It is important, and consistent behaviour, that Cummings says he recognises his own weaknesses, that some decisions are beyond his ‘circle of competence’, and that he should in effect become redundant, having brought in “the sort of expertise supporting the PM and ministers that is needed.” Founder’s syndrome is common in organisations, and politics is not exempt. But neither is the Peter principle a phenomenon particular to the civil service.

“One of the problems with the civil service is the way in which people are shuffled such that they either do not acquire expertise or they are moved out of areas they really know to do something else.”

But so what? What’s worse is that politics has not only the Peter principle but the Dilbert principle when it comes to senior leadership. You can’t put people in positions expected to command respect when they tell others to shut up and go away, or fire without due process. If you want organisations to function together at scale, especially beyond the current problems with silos, they need people on the ground who can work together, who have a common goal, who respect those above them, and who feel it is all worthwhile. Their politics don’t matter. But integrity, respect and trust do, even if they don’t matter to you personally.

I agree wholeheartedly that circles of competence matter [as I see the need to build some in education on data and edTech]. Without the appropriate infrastructure change, radical change of policy is nearly impossible. But skill is not the only competency that counts when it comes to people.

If the change you want is misaligned with people’s values, people won’t support it, no matter who you get to see it through. Something on the integrity that underpins this endeavour will matter to the applicants too. Most people do care how managers treat their own.

The blog was pretty clear that Cummings won’t value staff unless their work ethic, skills and acceptance belong to him alone to judge sufficient or not, to be “binned within weeks if you don’t fit.”

This government already knows it has treated parts of the public like that for too long. Policy has knowingly left some people behind on society’s scrap heap, often those scored by automated systems as inadequate. Families in work moved onto Universal Credit feed their children from food banks for #5WeeksTooLong. The rape clause. Troubled families. Children with special educational needs battling for EHC plan recognition, without which schools won’t take them, and the DfE knowingly underfunding suitable Alternative Provision in education, by a colossal several hundred per cent per place, by design.

The ‘circle of competence’ needs to recognise what happens as a result of policy, not only to place value on the skills in its delivery or see outcomes on people as inevitable or based on merit. Charlie Munger may have said, “At the end of the day – if you live long enough – most people get what they deserve.”

An awful lot of people deserve a better standard of living and human dignity than the UK affords them today. And we can’t afford not to fix it. A question for new hires: How will you contribute to doing this?

6. Remember that our civil servants are, after all, public servants.

The real test of competence, and whether the civil service delivers for the people whom it serves, is inextricably bound with government policy. If its values, if its ethics, are misguided, building a new path with or without new people will be impossible.

The best civil servants I have worked with have one thing in common. They have a genuine desire to make the world better. [We can disagree on what that looks like and for whom, on fraud detection, on immigration, on education, on exploitation of data mining and human rights, or the implications of the law. Their policy may bring harm, but their motivation is not malicious.] Your goal may be a ‘better’ civil service. They may be more focussed on better outcomes for people, not systems. Lose sight of that, and you put the service underpinning government at risk: not to bring change for good, but to destroy the very point of it. Keep the point of a better service focussed on the improvement for the public.

Civil servants civilly serve. As others have asked before, so should we all ask Cummings to outline his thoughts on:

  • “What makes the decisions which civil servants implement legitimate?
  • Where are the boundaries of that legitimacy and how can they be detected?
  • What should civil servants do if those boundaries are reached and crossed?”

Self-destruction for its own sake, is not a compelling narrative for change, whether you say you want to control that narrative, or not.

Two hands are a lot, but many more already work in the civil service. If Cummings only works against them, he’ll succeed not in building change, but resistance.

The consent model fails school children. Let’s fix it.

The Joint Committee on Human Rights report, The Right to Privacy (Article 8) and the Digital Revolution,  calls for robust regulation to govern how personal data is used and stringent enforcement of the rules.

“The consent model is broken” was among its key conclusions.

Similarly, this summer,  the Swedish DPA found, in accordance with GDPR, that consent was not a valid legal basis for a school pilot using facial recognition to keep track of students’ attendance given the clear imbalance between the data subject and the controller.

This power imbalance is at the heart of the failure of consent as a lawful basis under Art. 6, for data processing from schools.

Schools, children and their families across England and Wales currently have no mechanisms to understand which companies and third parties will process their personal data in the course of a child’s compulsory education.

Children have rights to privacy and to data protection that are currently disregarded.

  1. Fair processing is a joke.
  2. Unclear boundaries between the processing in-school and by third parties are the norm.
  3. Companies and third parties reach far beyond the boundaries of the processor role, of necessity and of proportionality, when they determine the nature of the processing: extensive data analytics, product enhancements and development going beyond what is necessary for the existing relationship, or product trials.
  4. Data retention rules are as unrespected as the boundaries of lawful processing, and ‘we make the data pseudonymous / anonymous and then archive / process / keep forever’ is common.
  5. Rights are as yet almost completely unheard of for schools to explain, offer and respect, except for Subject Access. Portability for example, a requirement for consent, simply does not exist.

In paragraph 8 of its general comment No. 1, on the aims of education, the UN Committee on the Rights of the Child stated in 2001:

“Children do not lose their human rights by virtue of passing through the school gates. Thus, for example, education must be provided in a way that respects the inherent dignity of the child and enables the child to express his or her views freely in accordance with article 12, para (1), and to participate in school life.”

Those rights currently compete unfairly with commercial interests. And that power imbalance in education is as enormous as the data mining in the sector. The then CEO of Knewton, Jose Ferreira, said in 2012:

“the human race is about to enter a totally data mined existence…education happens to be today, the world’s most data mineable industry– by far.”

At the moment, these competing interests and the enormous power imbalance between companies and schools, and schools and families, means children’s rights are last on the list and oft ignored.

In addition, there are serious implications for the State, schools and families due to the routine dependence on key systems at scale:

  • Infrastructure dependence ie Google Education
  • Hidden risks [tangible and intangible] of freeware
  • Data distribution at scale and dependence on third party intermediaries
  • and not least, the implications for families’ mental health and stress thanks to the shift of the burden of school back office admin from schools, to the family.

It’s not a contract between children and companies either

Contract, GDPR Article 6(1)(b), does not work either as a basis of processing between the processing company and the data subject, because again it is the school that determines the need for and the nature of the processing in education, and it doesn’t work for children.

The European Data Protection Board published Guidelines 2/2019 on the processing of personal data under Article 6(1)(b) GDPR in the context of the provision of online services to data subjects, on October 16, 2019.

Controllers must, inter alia, take into account the impact on data subjects’ rights when identifying the appropriate lawful basis in order to respect the principle of fairness.

They also concluded, on the capacity of children to enter into contracts (footnote 10, page 6), that:

“A contractual term that has not been individually negotiated is unfair under the Unfair Contract Terms Directive “if, contrary to the requirement of good faith, it causes a significant imbalance in the parties’ rights and obligations arising under the contract, to the detriment of the consumer”.

Like the transparency obligation in the GDPR, the Unfair Contract Terms Directive mandates the use of plain, intelligible language.

Processing of personal data that is based on what is deemed to be an unfair term under the Unfair Contract Terms Directive, will generally not be consistent with the requirement under Article 5(1)(a) GDPR that processing is lawful and fair.”

In relation to the processing of special categories of personal data, in the guidelines on consent, WP29 has also observed that Article 9(2) does not recognize ‘necessary for the performance of a contract’ as an exception to the general prohibition to process special categories of data.

They also found:

it is completely inappropriate to use consent when processing children’s data: children aged 13 and older are, under the current legal framework, considered old enough to consent to their data being used, even though many adults struggle to understand what they are consenting to.

Can we fix it?

Consent models fail school children. Contracts can’t be between children and companies. So what do we do instead?

Schools’ statutory tasks rely on having a legal basis under data protection law, the public task lawful basis, Article 6(1)(e) GDPR, which implies accompanying lawful obligations and responsibilities of schools towards children. They cannot rely on (f), legitimate interests. This 6(1)(e) basis does not extend directly to third parties.

Third parties should operate on the basis of contract with the school, as processors, but nothing more. That means third parties do not become data controllers. Schools stay the data controller.

Where that would differ from current practice is that most processors today stray beyond necessary tasks and become de facto controllers: sometimes because of the everyday processing and having too much of a determining role in the definition of purposes, or not allowing changes to terms and conditions; sometimes through using data to develop their own or new products, extensive data analytics, the location of processing and data transfers; and very often because of excessive retention.
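The distinction between a processor acting within a narrowly agreed remit and a de facto controller can be made concrete. As a rough illustration only (this is a sketch, not a legal instrument, and every name and field in it is hypothetical), a school or procurement body could record each supplier agreement in a structured form and test any proposed processing activity against it:

```python
from dataclasses import dataclass

# Hypothetical sketch: a structured record of what a school (controller)
# has authorised a supplier (processor) to do, so that anything outside
# the agreed scope is flagged rather than silently absorbed into practice.

@dataclass
class ProcessorAgreement:
    supplier: str
    agreed_purposes: set[str]            # e.g. {"attendance recording"}
    retention_ends_on_exit: bool = True  # data deleted when the pupil leaves
    analytics_beyond_service: bool = False
    product_development_use: bool = False

    def permits(self, activity: str) -> bool:
        """True only if the activity is within the agreed, necessary purposes."""
        return activity in self.agreed_purposes


# Usage sketch: a proposed new use is checked against the agreement,
# not decided unilaterally by the supplier.
agreement = ProcessorAgreement(
    supplier="ExampleEdTech",  # hypothetical name
    agreed_purposes={"attendance recording", "homework submission"},
)

for proposed in ["attendance recording", "training a recommendation model"]:
    verdict = "within scope" if agreement.permits(proposed) else "NOT within scope: needs controller review"
    print(f"{proposed}: {verdict}")
```

The point of such a design is that scope creep (new analytics, product development, longer retention) has to surface as an explicit change to the agreement, rather than something the supplier decides alone.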

Although the freedom of the mish-mash of procurement models across UK schools (individual schools, learning grids, MATs, Local Authorities, with no one-size-fits-all model) may often be a good thing, the lack of consistency today means your child’s privacy and data protection are a postcode lottery. Instead we need:

  • a radical rethink of the use of consent models, and of home-school agreements used to obtain manufactured ‘I agree’ consent,
  • to radically articulate and regulate what good looks like for interactions between children and companies facilitated by schools, and
  • to radically redesign a contract model which enables only that processing which is within the limitations of a processor’s remit and therefore does not need to rely on consent.

It would mean radical changes in retention as well. Processors can only process for as long as the legal basis extends from the school. That should generally be only the time for which a child is in school, and using that product in the course of their education. And certainly data must not stay with an indefinite number of companies and their partners once the child has left that class or year, or has left school and stopped using the tool. Schools will need to be able to bring part of the data they outsource to third parties for learning back into the educational record, *if* they need it as evidence or as part of the learning record.

Where schools close (or the legal entity shuts down and no one thinks of the school records [yes, it happens]), change name, or reopen within the same walls under academisation, there must be a designated controller, communicated before the change occurs.

The school fence is then something that protects the purposes of the child’s data for education, for life, and is the go-to for questions. The child has a visible and manageable digital footprint. Industry can be confident that they do indeed have a lawful basis for processing.

Schools need to be within a circle of competence

This would need an independent infrastructure that we do not have today, but would need to be able to draw on, providing:

  • Due diligence,
  • communication to families and children of agreed processors on an annual basis,
  • an opt out mechanism that works,
  • alternative lesson content on offer, at a similar level, for those who do opt out,
  • and end-of-school-life data usage reports.

The due diligence in procurement, the data protection impact assessment, and accountability need to be done up front, removed from the responsibility of the classroom teacher, who is in an impossible position, having had no basic training in privacy law or data protection rights, and the documents need to be published in consultation with governors and parents, before processing begins.

However, it would need to have a baseline of good standards that simply does not exist today.

That would also offer a public safeguard for processing at scale, where a company is not notifying the DPA due to small numbers of children at each school, but where overall group processing of special category (sensitive) data could be for millions of children.

Where some procurement structures might exist today, in leftover learning grids, their independence is compromised by corporate partnerships and excessive freedoms.

While pre-approval of apps and platforms can fail where the onus is on the controller to accept a product at a point in time, the power shift would occur where products would not be permitted to continue processing without notifying the school of significant change in agreed activities, ownership, storage of data abroad, and so on.

We shift the power balance back to schools, where they can trust a procurement approval route, and children and families can trust schools to only be working with suppliers that are not overstepping the boundaries of lawful processing.

What might school standards look like?

The first principles of necessity, proportionality and data minimisation would need to be demonstrable, just as has been required under data protection law for many years, and as is more explicit under GDPR’s accountability principle. The scope of the school’s authority must be limited to data processing for defined educational purposes under law, and only these purposes can be carried over to the processor. It would need legislation and a Code of Practice, and ongoing independent oversight. Violations could mean losing the permission to be a provider in the UK school system. Data processing failures would be referred to the ICO.

  1. Purposes: A duty for the purposes of processing to be necessary for strictly defined educational purposes.
  2. Service Improvement: Processing personal information collected from children to improve the product would be very narrow and constrained to the existing product and relationship with data subjects — i.e security, not secondary product development.
  3. Deletion: Families and children must still be able to request deletion of personal information collected by vendors which do not form part of the permanent educational record. And a ‘clean slate’ approach for anything beyond the necessary educational record, which would in any event, be school controlled.
  4. Fairness: Whilst a child is at school, the school has responsibility for communicating to the child and family how their personal data are processed.
  5. Post-school accountability, as the data resides with the school: On leaving school the default for most companies should be deletion of all personal data, whether provided by the data subject or the school, or inferred from processing. For remaining data, the school should become the data controller and the data transferred to the school. For any remaining company processing, the company must be accountable as controller on demand to both the school and the individual, and at minimum communicate data usage on an annual basis to the school.
  6. Ongoing relationships: Loss of communication channels should be assumed to be a withdrawal of relationship and data transferred to the school, if not deleted.
  7. Data reuse and repurposing for marketing explicitly forbidden. Vendors must be prohibited from using information for secondary [onward or indirect] reuse, for example in product or external marketing to pupils or parents.
  8. Families must still be able to object to processing, on an ad hoc basis, but at no detriment to the child, and an alternative method of achieving the same aims must be offered.
  9. Data usage reports would become the norm, to close the loop on an annual basis (a minimal sketch follows this list). “Here’s what we said we’d do at the start of the year. Here’s where your data actually went, and why.”
  10. In addition, minimum acceptable ethical standards could be framed around, for example, accessibility, and restrictions on in-product advertising.
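To make the data usage report in point 9 concrete, here is a minimal sketch, with hypothetical field names and values rather than any prescribed format, of the kind of annual report a supplier could return to a school to close the loop between what was agreed and what actually happened:

```python
import json
from dataclasses import dataclass, asdict

# Hypothetical sketch of an annual data usage report (point 9 above):
# what was agreed at the start of the year, where the data actually went,
# and why. Field names and values are illustrative only.

@dataclass
class DataUsageReport:
    supplier: str
    school_year: str
    purposes_agreed: list[str]    # authorised at the start of the year
    purposes_used: list[str]      # processing that actually took place
    recipients: list[str]         # sub-processors or onward recipients
    storage_locations: list[str]  # where the data was held
    records_deleted: int          # pupils whose data was erased on leaving

    def discrepancies(self) -> list[str]:
        """Purposes used that were never agreed: the items a school should query."""
        return [p for p in self.purposes_used if p not in self.purposes_agreed]


report = DataUsageReport(
    supplier="ExampleEdTech",     # hypothetical supplier
    school_year="2019/20",
    purposes_agreed=["attendance recording"],
    purposes_used=["attendance recording", "product analytics"],
    recipients=["EU hosting provider"],
    storage_locations=["EU"],
    records_deleted=120,
)

print(json.dumps(asdict(report), indent=2))
print("To query with the supplier:", report.discrepancies())
```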

There must be no alternative back route to just enough processing

What we should not do, is introduce workarounds by the back door.

Schools are not to carry on as they do today, manufacturing ‘consent’ which is in fact unlawful. It’s why Google, despite the objection when I set this out some time ago, is processing unlawfully. They rely on consent that simply cannot and does not exist.

The U.S. schools model wording would similarly fail GDPR tests, in that schools cannot ‘consent’ on behalf of children or families. I believe that in practice the US has weakened what should be strong protections for school children, by having the too-expansive “school official exception” found in the Family Educational Rights and Privacy Act (“FERPA”), as described in Protecting Student Privacy While Using Online Educational Services: Requirements and Best Practices.

Companies can also work around their procurement pathways.

In parallel timing, the US Federal Trade Commission has a consultation open until December 9th on the Implementation of the Children’s Online Privacy Protection Rule, the COPPA consultation.

The COPPA Rule “does not preclude schools from acting as intermediaries between operators and parents in the notice and consent process, or from serving as the parents’ agent in the process.”

‘There has been a significant expansion of education technology used in classrooms’, the FTC mused, before asking whether the Commission should consider a specific exception to parental consent for the use of education technology in schools.

In a backwards approach to agency and the development of a rights respecting digital environment for the child, the consultation in effect suggests that we mould our rights mechanisms to fit the needs of business.

That must change. The ecosystem needs a massive shift to acknowledge that if it is to be compliant with GDPR, a rights-respecting regulation, then practice must become rights respecting.

That means meeting children’s and families’ reasonable expectations. If I send my daughter to school, and we are required to use a product that processes our personal data, it must be strictly for the *necessary* purposes of the task that the school asks of the company, and that the child and family expect, and not a jot more.

Borrowing on Ben Green’s smart enough city concept, or Rachel Coldicutt’s just enough Internet, UK school edTech suppliers should be doing just enough processing.

How it is done in the U.S., governed by FERPA, is imperfect and still results in too many privacy invasions, but it offers a regional model of expertise for schools to rely on, and strong contractual agreements about what is permitted.

That, we could build on. It could be just enough, to get it right.

Women Leading in AI — Challenging the unaccountable and the inevitable

Notes [and my thoughts] from the Women Leading in AI launch event of the Ten Principles of Responsible AI report and recommendations, February 6, 2019.

Speakers included Ivana Bartoletti (GemServ), Jo Stevens MP, Professor Joanna J Bryson, Lord Tim Clement-Jones, Roger Taylor (Centre for Data Ethics and Innovation, Chair), Sue Daley (techUK), and Reema Patel (Nuffield Foundation and Ada Lovelace Institute).

Challenging the unaccountable and the ‘inevitable’ is the title of the conclusion of the Women Leading in AI report Ten Principles of Responsible AI, launched this week, and this makes me hopeful.

“There is nothing inevitable about how we choose to use this disruptive technology. […] And there is no excuse for failing to set clear rules so that it remains accountable, fosters our civic values and allows humanity to be stronger and better.”

Ivana Bartoletti, co-founder of Women Leading in AI, began the event, hosted at the House of Commons by Jo Stevens, MP for Cardiff Central, and spoke brilliantly of why it matters right now.

Everyone’s talking about ethics, she said, but it has limitations. I agree with that. This was by contrast very much a call to action.

It was nearly impossible not to cheer, as she set out without any of the usual bullshit, the reasons why we need to stop “churning out algorithms which discriminate against women and minorities.”

Professor Joanna J Bryson took up multiple issues, including:

  • why innovation, ‘flashes in the pan’, is not sustainable and not what we’re looking for in things that work for us [society].
  • The power dynamics of data: noting that Facebook, Google et al are global assets, and are also global problems, and flagging the UK consultation on taxation open now.
  • And that it is critical that we do not have another nation with access to all of our data.

She challenged the audience to think about the fact that inequality is higher now than it has been since World War I. That the rich are getting richer and that imbalance of not only wealth, but of the control individuals have in their own lives, is failing us all.

This big picture thinking while zooming in on detailed social, cultural, political and tech issues, fascinated me most that evening. It frustrated the man next to me apparently, who said to me at the end, ‘but they haven’t addressed anything on the technology.’

[I wondered if that summed up neatly, some of why fixing AI cannot be a male dominated debate. Because many of these issues for AI, are not of the technology, but of people and power.] 

Jo Stevens, MP for Cardiff Central, hosted the event and was candid about politicians’ level of knowledge and the need to catch up on some of what matters in the tech sector.

We grapple with the speed of tech, she said. We’re slow at doing things and tech moves quickly. It means that we have to learn quickly.

While discussing how regulation is not something AI tech companies should fear, she suggested that a constructive framework, whilst protecting society against some of the problems we see, is necessary and just, because self-regulation has failed.

She talked about their inquiry, which began with “fake news” and disinformation, but has grown to include:

  • wider behavioural economics,
  • how it affects democracy,
  • understanding the power of data, and
  • disappointment with social media companies, who understand the power they have and fail to be accountable.

She wants to see something that changes the way big business works, in the way that employment regulation challenged exploitation of the workforce and unsafe practices in the past.

The bias (conscious or unconscious) and power imbalance have some similarity with the effects on marginalised communities — women, BAME, disabilities — and she was looking forward to seeing the proposed solutions, and welcomed the principles.

Lord Clement-Jones, as Chair of the Select Committee on Artificial Intelligence, picked up the values they had highlighted in the Committee’s March 2018 report, AI in the UK: ready, willing and able?

Right now there are so many different bodies, groups in parliament and others looking at this [AI / Internet / The Digital World] he said, so it was good that the topic is timely, front and centre with a focus on women, diversity and bias.

He highlighted the importance of maintaining public trust. How do you understand bias? How do you know how algorithms are trained, and understand the issues? He fessed up to being a big fan of DotEveryone and their drive for better ‘digital understanding’.

[Though sometimes this point is over-complicated by suggesting individuals must understand how the AI works, the consensus of the evening was common sense — and aligned with the Working Party 29 guidance — that data controllers must ensure they explain clearly and simply to individuals how the profiling or automated decision-making process works, and what its effect is for them.]

The way forward he said includes:

  • Designing ethics into algorithms up front.
  • Data audits need to be diverse in order to embody fairness and diversity in the AI.
  • Questions of the job market and re-skilling.
  • The enforcement of ethical frameworks.

He also asked how far bodies will act, in different debates. Deciding who decides on that is still a debate to be had.

For example, aware of the social credit agenda and scoring in China, we should avoid the same issues. He also agreed with Joanna, that international cooperation is vital, and said it is important that we are not disadvantaged in this global technology. He expected that we [the Government Office for AI] will soon promote a common set of AI ethics, at the G20.

Facial recognition and AI are examples of areas that require regulation for safe use of the tech and to weed out those using it for the wrong purposes, he suggested.

However, on regulation he held back. We need to be careful about too many regulators, he said. We’ve got the ICO, FCA, CMA, OFCOM, you name it, we’ve already got it, and they risk tripping over one another. [Which is what I thought when the CDEI was created, para 31.]

We [the Lords Committee] didn’t suggest yet another regulator for AI, he said; instead the CDEI should grapple with those issues and encourage ethical design in micro-targeting, for example.

Roger Taylor (Chair of the CDEI), after saying it felt as if someone had left their homework on his desk with the WLinAI report, supported the idea that the WLinAI principles are important, and agreed it was time for practical things, and for what needs to be done.

Can our existing regulators do their job, and cover AI? he asked, suggesting new regulators will not be necessary. Bias, he rightly recognised, already exists in our laws and bodies with public obligations, and in how AI is already operating:

  • CV sorting. [Problematic IMO; see Amazon, US teachers.]
  • Policing.
  • Creditworthiness.

What evidence is needed, what process is required, what is needed to assure that we know how it is actually operating? Who gets to decide to know if this is fair or not? While these are complex decisions, they are ultimately not for technicians, but a decision for society, he said.

[So far so good.]

Then he made some statements which were rather more ambiguous. The standards expected of the police will not be the same as those for marketeers micro targeting adverts at you, for example.

[I wondered how and why.]

Start-up industries pay more to Google and Facebook than they do in taxes, he said.

[I wondered how and why.]

When we think about a knowledge economy, the output of our most valuable companies is increasingly ‘what is our collective truth? Do you have this diagnosis or not? Are you a good credit risk or not? Even who you think you are — your identity will be controlled by machines.’

What can we do as one country [to influence these questions on AI], in what is a global industry? He believes a huge amount. We are active in the financial sector, the health service, education, and social care — and while we are at the mercy of large corporations, even large corporations obey the law, he said.

[Hmm, I thought, considering the Google DeepMind-Royal Free agreement that didn’t, and venture capitalists not renowned for their ethics, who nevertheless advise on some of the current data / tech / AI boards. I am sceptical about corporate capture in UK policy making.]

The power to use systems to nudge our decisions, he suggested, is one that needs careful thought. The desire to use the tech to help make decisions is built into what is actually wrong with the technology that enables us to do so. [With this I strongly agree, and there is too little protection from nudge in data protection law.]

The real question here is, “What is OK to be owned in that kind of economy?” he asked.

This was arguably the neatest and most important question of the evening, and I vigorously agreed with him asking it, but then I worry about his conclusion in passing, that he was “very keen to hear from anyone attempting to use AI effectively, and encountering difficulties because of regulatory structures.”

[And unpopular or contradictory a view as it may be, I find it deeply ethically problematic that the Chair of the CDEI is held by someone who had a joint venture that commercially exploited confidential data from the NHS without public knowledge, and whose sale to the Department of Health was described by the Public Accounts Committee as a “hole and corner deal”. That was the route towards care.data, which his co-founder later led for NHS England. The company was then bought by Telstra, where Mr Kelsey went next on leaving NHS England. The whole commodification of the confidentiality of public data, without regard for public trust, is still a barrier to sustainable UK data policy.]

Sue Daley (Tech UK) agreed this year needs to be the year we see action, and the report is a call to action on issues that warrant further discussion.

  • Business wants to do the right thing, and we need to promote it.
  • We need two things — confidence and vigilance.
  • We’re not starting from scratch; she talked about GDPR as the floor, not the ceiling. A starting point.

[I’m not quite sure what she was after here, but perhaps it was the suggestion that data regulation is fundamental in AI regulation, with which I would agree.]

What is the gap that needs filling, she asked? Gap analysis is what we need next, avoiding duplication of effort, and avoiding complexity and duplication of work with other bodies. The big, profound questions need to be answered if we are to position the UK as the place where companies want to come.

Sue was the only speaker who went on to talk about the education system, which needs to frame what skills a generation will need for a future world, ‘to thrive in the world we are building for them.’

[The Silicon Valley driven entrepreneur narrative that the education system is broken, is not an uncontroversial position.]

She finished with the hope that young people watching BBC Icons the night before would see Alan Turing [winner of the title] and say yes, I want to be part of that.

Listening to Reema Patel, representative of the Ada Lovelace Institute, was the reason I didn’t leave early, and missed my evening class. Everything she said resonated, and was some of the best I have heard in the recent UK debate on AI.

  • Civic engagement: the role of the public is as yet unclear, with not one homogeneous public, but many publics.
  • The sense of disempowerment is important, with disconnect between policy and decisions made about people’s lives.
  • Transparency and literacy are key.
  • Accountability is vague but vital.
  • What does the social contract look like on people using data?
  • Data may not only be about an individual and under their own responsibility, but about others and what does that mean for data rights, data stewardship and articulation of how they connect with one another, which is lacking in the debate.
  • Legitimacy; If people don’t believe it is working for them, it won’t work at all.
  • Ensuring tech design is responsive to societal values.

2018 was a terrible year she thought. Let’s make 2019 better. [Yes!]


Comments from the floor and questions included Professor Noel Sharkey, who spoke about the reasons why it is urgent to act, especially where technology is unfair and unsafe and already in use. He pointed to Compass (Durham police), and predictive policing using AI and facial recognition with 5% accuracy, and said the Met was not taking these flaws seriously. Liberty produced a strong report on it, out this week.

Caroline, from Women in AI, echoed my own comments on the need to get urgent review in place of these technologies used with children in education and social care [in particular where used for prediction of child abuse and interventions in family life].

Joanna J Bryson added to the conversation on accountability, to say people are not following existing software and audit protocols, and someone just needs to go and see if people did the right thing.

The basic question of accountability is to ask if any flaw is the fault of a corporation, of due diligence, or of the users of the tool. Telling people that this is the same problem as any other software makes it much easier to find solutions to accountability.

Tim Clement-Jones asked: on how many fronts can we fight at the same time? If government has appeared to exempt itself from some of these issues, and created a weak framework for itself on handling data in the Data Protection Act — critically, he also asked, is the ICO adequately enforcing on government and public accountability, at local and national levels?

Sue Daley also reminded us that politicians need not know everything, but need to know what the right questions are to ask. What are the effects that this has on my constituents, on employment, on my family? And while she also suggested that not using the technology could be unethical, a participant countered that it’s not the worst thing to have to slow technology down and ensure it is safe before we all go along with it.

My takeaways of the evening included that there is a very large body of women, of whom attendees were only a small part, who are thinking, building and engineering solutions to some of these societal issues embedded in policy, practice and technology. They need to be heard.

It was genuinely electric and empowering, to be in a room dominated by women, women reflecting diversity of a variety of publics, ages, and backgrounds, and who listened to one another. It was certainly something out of the ordinary.

There was a subtle but tangible tension on whether or not  regulation beyond what we have today is needed.

While regulating the human behaviour that becomes encoded in AI, we need to ensure that the ethics of human behaviour, reasonable expectations and fairness are not conflated with the technology itself [i.e. a question of whether AI is good or bad], but addressed in how it is designed, trained, employed and audited, and in assessing whether it should be used at all.

This was the most effective group challenge I have heard to date, countering the usual assumed inevitability of a mythical omnipotence. Perhaps, Julia Powles, this is the beginnings of a robust, bold, imaginative response.

Why there are not more women or people from minorities working in the sector was a really interesting, if short, part of the discussion. Why should young women and minorities want to go into an environment that they can see is hostile, in which they may not be heard, and in which we still hold *them* responsible for making work work?

And while there were many voices lamenting the skills and education gaps, there were probably fewer who might see the solution more simply, as I do. Schools are foreshortening Key Stage 3 by a year, replacing a breadth of subjects with an earlier, compulsory three-year GCSE curriculum which includes RE and PSHE, but means that at 12, many children have to choose between GCSE courses in computer science / coding, a consumer-style iMedia, or no IT at all, for the rest of their school life. This either-or content is incredibly short-sighted, and surely some blend of non-examined digital skills should be offered to all through to 16, at least in parallel importance with RE or PSHE.

I also still wonder about all that incredibly bright and engaged people are not talking about, not solving, and missing in policy making, while caught up in AI. We need to keep thinking broadly, and keep human rights at the centre of our thinking on machines. Anaïs Nin wrote over 70 years ago about the risk that growth in technology would expand our potential for connectivity through machines, but diminish our genuine connectedness as people.

“I don’t think the [American] obsession with politics and economics has improved anything. I am tired of this constant drafting of everyone, to think only of present day events”.

And as I wrote nearly 3 years ago, we still seem to have no vision for sustainable public policy on data, or for establishing a social contract for its use, as Reema said, to underpin the UK AI debate. Meanwhile, the current changing national public policies in England on identity and technology are becoming catastrophic.

Challenging the unaccountable and the ‘inevitable’ in today’s technology and AI debate, is an urgent call to action.

I look forward to hearing how Women Leading in AI plan to make it happen.


References:

Women Leading in AI website: http://womenleadinginai.org/
WLiAI Report: 10 Principles of Responsible AI
@WLinAI #WLinAI

image credits 
post: creative commons Mark Dodds/Flickr
event photo:  / GemServ

Policy shapers, product makers, and profit takers (1)

In 2018, ethics became the new fashion in UK data circles.

The launch of the Women Leading in AI principles of responsible AI has prompted me to try and finish and post these thoughts, which have been on my mind for some time. If two parts of 1K words is tl;dr for you, then in summary, we need more action on:

  • Ethics as a route to regulatory avoidance.
  • Framing AI and data debates as a cost to the Economy.
  • Reframing the debate around imbalance of risk.
  • Challenging the unaccountable and the ‘inevitable’.

And in the next post on:

  • Corporate Capture.
  • Corporate Accountability, and
  • Creating Authentic Accountability.

Ethics as a route to regulatory avoidance

In 2019, the calls to push aside old wisdoms for new, for everyone to focus on the value-laden words of ‘innovation’ and ‘ethics’, appear an ever louder attempt to reframe regulation and law as barriers to business, and to ask that they be cast aside.

On Wednesday evening, at the launch of the Women Leading in AI principles of responsible AI, the chair of the CDEI said in closing, he was keen to hear from companies where, “they were attempting to use AI effectively and encountering difficulties due to regulatory structures.”

In IBM’s own words to government recently,

“A rush to further regulation can have the effect of chilling innovation and missing out on the societal and economic benefits that AI can bring.”

The vague threat is very clear: if you regulate, you’ll lose. But the societal and economic benefits are just as vague.

So far, many talking about ethics are trying to find a route to regulatory avoidance. ‘We’ll do better,’ they promise.

In Ben Wagner’s recent paper, Ethics as an Escape from Regulation: From ethics-washing to ethics-shopping, he asks how to ensure this does not become the default engagement with ethical frameworks or rights-based design. He sums up: “In this world, ‘ethics’ is the new ‘industry self-regulation’.”

Perhaps it’s ingenious PR to make sure that what is in effect self-regulation, right across the business model, looks as if it is imposed by others, by the very bodies set up to fix it.

But, as I think about in part 2, is this healthy for UK public policy and for the future not of an industry sector, but of a whole technology, when it comes to AI?

Framing AI and data debates as a cost to the Economy

Companies, organisations and individuals arguing against regulation are framing the debate as if it would come at a great cost to society and the economy. But we rarely hear what effect they expect on their own company, or what the expected cost/benefit is for them. It’s disingenuous to have only part of that conversation. In fact the AI debate would be richer were it to be included. If companies think their innovation or profits are at risk from non-use, or regulated use, and there is risk to the national good associated with these products, we should be talking about all of that.

And in addition, we can talk about use and non-use in society. Too often, the whole debate is intangible. Show me real costs, real benefits. Real risk assessments. Real explanations that speak human. Industry should show society what’s in it for them.

You don’t want it to ‘turn out like GM crops’? Then learn their lessons on transparency, trustworthiness, and avoid the hype. And understand sometimes there is simply tech, people do not want.

Reframing the debate around imbalance of risk

And while we often hear about the imbalance of power associated with using AI, we also need to talk about the imbalance of risk.

While a small false positive rate for a company product may be a great success for them, or for a Local Authority buying the service, it might at the same time, mean lives forever changed, children removed from families, and individual reputations ruined.

And where company owners may see no risk from the product they assure us is safe, there are intangible risks that need to be factored in, for example in education, where a child’s learning pathway is determined by patterns of behaviour, and where tools shape individualised learning, as well as the model of education.

Companies may change business model, ownership, and move on to other sectors after failure. But with the levels of unfairness already felt in the relationship between the citizen and State — in programmes like Troubled Families, Universal Credit, Policing, and Prevent — where use of algorithms and ever larger datasets is increasing, long term harm from unaccountable failure will grow.

Society needs a rebalance of the system urgently to promote transparent fairness in interactions, including but not only those with new applications of technology.

We must find ways to reframe how this imbalance of risk is assessed, and is distributed between companies and the individual, or between companies and state and society, and enable access to meaningful redress when risks turn into harm.

If we are to do that, we need first to separate truth from hype, public good from self-interest and have a real discussion of risk across the full range from individual, to state, to society at large.

That’s not easy against a non-neutral backdrop, scant sources of unbiased evidence, and corporate capture.

Challenging the unaccountable and the ‘inevitable’.

In 2017 the Care Quality Commission reported on online services in the NHS, and found serious concerns about unsafe and ineffective care. They have a cross-regulatory working group.

By contrast, no one appears to oversee that risk and the embedded use of automated tools involved in decision-making or decision support in children’s services or education: areas where AI, cognitive behavioural science and neuroscience are already in use, without ethical approval, without parental knowledge or any transparency.

Meanwhile, as all this goes on, many academics are busy debating how to fix algorithmic bias, accountability and transparency.

Few are challenging the narrative of the ‘inevitability’ of AI.

Julia Powles and Helen Nissenbaum recently wrote that many of these current debates are an academic distraction, removed from reality. It is underappreciated how deeply these tools are already embedded in UK public policy. "Trying to "fix" A.I. distracts from the more urgent questions about the technology. It also denies us the possibility of asking: Should we be building these systems at all?"

Challenging the unaccountable and the ‘inevitable’ is the title of the conclusion of the Women Leading in AI report on principles, and makes me hopeful.

“There is nothing inevitable about how we choose to use this disruptive technology. […] And there is no excuse for failing to set clear rules so that it remains accountable, fosters our civic values and allows humanity to be stronger and better.”

[1] Powles, J. and Nissenbaum, H., 2018, The Seductive Diversion of 'Solving' Bias in Artificial Intelligence, Medium.

Next: Part 2 – Policy shapers, product makers, and profit takers, on:

  • Corporate Capture.
  • Corporate Accountability, and
  • Creating Authentic Accountability.

Can Data Trusts be trustworthy?

The Lords Select Committee report on AI in the UK in March 2018 suggested that, "the Government plans to adopt the Hall-Pesenti Review recommendation that 'data trusts' be established to facilitate the ethical sharing of data between organisations."

Since data distribution already happens, what difference would a Data Trust model make to ‘ethical sharing‘?

A ‘set of relationships underpinned by a repeatable framework, compliant with parties’ obligations’ seems little better than what we have today, with all its problems including deeply unethical policy and practice.

The ODI set out some of the characteristics Data Trusts might have or share. As importantly, we should define what Data Trusts are not. They should not simply be a new name for pooling content and a new single distribution point. Click and collect.

But is a Data Trust little more than a new description for what goes on already? Either a physical space or legal agreements for data users to pass around the personal data from the unsuspecting, and sometimes unwilling, public. Friends-with-benefits who each bring something to the party to share with the others?

As with any communal risk, it is the standards of the weakest link, the least ethical, the one that pees in the pool, that will increase reputational risk for all who take part, and spoil it for everyone.

Importantly, the Lords AI Committee report recognised that there is an inherent risk in how the public would react to Data Trusts, because there is no social license for this new data sharing.

“Under the current proposals, individuals who have their personal data contained within these trusts would have no means by which they could make their views heard, or shape the decisions of these trusts.

Views those keen on Data Trusts seem keen to ignore.

When the Administrative Data Research Network was set up in 2013, a new infrastructure for "deidentified" data linkage, extensive public dialogue was carried out across the UK. It concluded in a report with very similar findings to those apparent at dozens of care.data engagement events in 2014-15:

There is no public support for:

  • “Creating large databases containing many variables/data from a large number of public sector sources,
  • Establishing greater permanency of datasets,
  • Allowing administrative data to be linked with business data, or
  • Linking of passively collected administrative data, in particular geo-location data”

The other ‘red-line’ for some participants was allowing “researchers for private companies to access data, either to deliver a public service or in order to make profit. Trust in private companies’ motivations were low.”

All of the above could be central to Data Trusts. All of the above highlight that in any new push to exploit personal data, the public must not be the last to know. And until all of the above are resolved, that social-license underpinning the work will always be missing.

Take the National Pupil Database (NPD) as a case study in a Data Trust done wrong.

It is a mega-database of over 20 other datasets. Raw data have been farmed out for years under terms and conditions to third parties, including users who hold an entire copy of the database, such as the somewhat secretive and unaccountable Fischer Family Trust, and others who don't answer to Freedom-of-Information, and whose terms are hidden under commercial confidentiality. Data are bought and benchmarked from schools and sold back to some; the profiling is hidden from parents and pupils, yet FFT predictive risk scoring can shape a child's school experience from age 2. They don't really want to answer how staff can tell whether a child's FFT profile and risk score predictions are accurate, or whether they can spot errors or a wrong data input somewhere.
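
To illustrate why that matters, here is a purely hypothetical scoring rule, invented for this post and not the logic of FFT or any real product: when a prediction is driven by a handful of recorded attributes, a single mis-keyed input silently changes the output, and neither staff nor families have an obvious way to see why.

    # Hypothetical illustration only; not the logic of FFT or any real product.
    def predicted_band(prior_attainment: float, attendance: float, flagged: bool) -> str:
        """Toy rule: combine a few recorded attributes into a predicted outcome band."""
        score = 0.6 * prior_attainment + 0.3 * attendance - (0.5 if flagged else 0.0)
        return "higher" if score >= 0.7 else "lower"

    # The same child, with one field entered in error:
    print(predicted_band(0.8, 0.9, False))  # -> "higher"
    print(predicted_band(0.8, 0.9, True))   # -> "lower", because of a single wrong input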

Even as the NPD moves towards risk reduction, its issues remain. When will children be told how data about them are used?

Is it any wonder that many people in the UK feel a resentment of institutions and orgs who feel entitled to exploit them, or nudge their behaviour, and a need to ‘take back control’?

It is naïve for those working in data policy and research to think that it does not apply to them.

We already have safe infrastructures in the UK for excellent data access. What users are missing, is the social license to do so.

Some of today’s data uses are ethically problematic.

No one should be talking about increasing access to public data, before delivering increased public understanding. Data users must get over their fear of what if the public found out.

If your data use being on the front pages would make you nervous, maybe it’s a clue you should be doing something differently. If you don’t trust the public would support it, then perhaps it doesn’t deserve to be trusted. Respect individuals’ dignity and human rights. Stop doing stupid things that undermine everything.

Build the social license that care.data was missing. Be honest. Respect our right to know, and right to object. Build them into a public UK data strategy to be understood and be proud of.


Part 1. Ethically problematic
Ethics is dissolving into little more than a buzzword. Can we find solutions underpinned by law, and ethics, and put the person first?

Part 2. Can Data Trusts be trustworthy?
As long as data users ignore data subjects rights, Data Trusts have no social license.



The Trouble with Boards at the Ministry of Magic

Peter Riddell, the Commissioner for Public Appointments, has completed his investigation into the recent appointments to the Board of the Office for Students and published his report.

From the "Number 10 Googlers", to the NUS affiliation (an interest in student union representation was seen as undesirable), to "undermining the policy goals" and what the SpAds supported, the whole report is worth a read.

Perception of the process

The concern that the Commissioner raises over the harm done to the public's perception of the public appointments process means more needs done to fix these problems, both before and after appointments.

This process reinforces what people think already. Jobs for the [white Oxford] boys, and yes-men.  And so what, why should I get involved anyway, and what can we hope to change?

Possibilities for improvement

What should the Department for Education (DfE) now offer and what should be required after the appointments process, for the OfS and other bodies, boards and groups et al?

  • Every board at the Department for Education, its name, aim, and members — internal and external — should be published.
  • Every board at the Department for Education should be required to publish its Terms of Appointment, and Terms of Reference.
  • Every board at the Department for Education should be required to publish agendas before meetings and meaningful meeting minutes promptly.

Why? Because there’s all sorts of boards around and their transparency is frankly non-existent. I know because I sit on one. Foolishly I did not make it a requirement to publish minutes before I agreed to join. But in a year it has only met twice, so you’ve not missed much. Who else sits where, on what policy, and why?

On another board I used to sit on, I grew increasingly frustrated that the minutes did not reflect the substance of the discussion. This does the public a disservice twice over. The purpose of the boards looks insipid, and the evidence of the challenge they are intended to offer, their very reason for being, is washed away. Show the public what's hard, that there's debate, that risks are analysed and balanced, and then decisions taken. Be open to scrutiny.

The public has a right to know

When scrutiny really matters, it is wrong, just as the Commissioner's report reads, for any Department or body to try to hide the truth.

The purpose of transparency must be to hold to account and ensure checks-and-balances are upheld in a democratic system.

The DfE withdrew from a legal hearing scheduled at the First Tier Information Rights Tribunal last year a couple of weeks beforehand, and finally accepted an ICO decision notice in my favour. I had gone through a year of the Freedom-of-Information appeal process to get hold of the meeting minutes of the Department for Education Star Chamber Scrutiny Board, from November 2015.

It was the meeting in which I had been told members approved the collection of nationality and country of birth in the school census.

“The Star Chamber Scrutiny Board”.  Not out of Harry Potter and the Ministry of Magic but appointed by the DfE.

It's a board that mentions actively seeking members of certain teaching unions but omits others. It publishes no meeting minutes. Its terms of reference are 38 words long, and it was not told the whole truth before one of the most important and widely criticised decisions it ever made: one affecting the lives of millions of children across England, and causing harm and division in the classroom.

Its annual report doesn’t mention the controversy at all.

After sixteen months, the DfE finally admitted it had kept the Star Chamber Scrutiny Board in the dark on at least one of the purposes of expanding the school census. And on its pre-existing active, related data policy passing pupil data over to the Home Office.

The minutes revealed the Board did not know anything about the data sharing agreement already in place between the DfE and the Home Office, or that "(once collected) nationality data" [para 15.2.6] was intended to be shared with the Border Force Casework Removals Team.

Truth that the DfE was forced to reveal, and only came out two years after the meeting, and a full year after the change in law.

If the truth, transparency, and diversity of political opinion on boards are allowed to die, so does democracy

I spoke to Board members in 2016. They were shocked to find out what the MOU purposes were for the new data,  and that regular data transfers had already begun without their knowledge, when they were asked to sign off the nationality data collection.

The lack of concerns they raised was then given in written evidence to the House of Lords Secondary Legislation Scrutiny Committee, as assurance that the change had been properly reviewed.

How trustworthy is anything that the Star Chamber now “approves” and our law making process to expand school data? How trustworthy is the Statutory Instrument scrutiny process?

“there was no need for DfE to discuss with SCSB the sharing of data with Home Office as: a.) none of the data being considered by the SCSB as part of the proposal supporting this SI has been, or will be, shared with any third-party (including other government departments);

[omits it “was planned to be”]

and b.) even if the data was to be shared externally, those decisions are outside the SCSB terms of reference.”

Outside terms of reference that are 38 words long, and that are supposed to scrutinise, but not too closely, and to reject on the basis of what, exactly?

Not only is the public not being told the full truth about how these boards are created, and what their purpose is, it seems board members are not always told the full truth they deserve either.

Who is invited to the meeting, and who is left out? What reports are generated with what recommendations? What facts or opinion cannot be listened to, scrutinised and countered, that could be so damaging as to not even allow people to bring the truth to the table?

If the meeting minutes would be so controversial and damaging to making public policy by publishing them, then who the heck are these unelected people making such significant decisions and how? Are they qualified, are they independent, and are they accountable?

If, alternatively, what should be 'independent' boards, or panels, or meetings set up to offer scrutiny and challenge, are in fact being manipulated to manoeuvre policy and the ready-made political opinions of the day, it is a disaster for public engagement and democracy.

It should end with this ex-OfS hiring process at the DfE, today.

The appointments process and the ongoing work by boards must have full transparency, if they are ever to be seen as trustworthy.

Statutory Instruments, the #DPBill and the growth of the Database State

First they came for the lists of lecturers. Did you speak out?

Last week Chris Heaton-Harris MP wrote to vice-chancellors to ask for a list of lecturers' names and course content, "With particular reference to Brexit". Academics on social media spoke out in protest. There has been little reaction, however, to a range of new laws that permit the incremental expansion of the database state, on paper and in practice.

The government is building ever more sensitive lists of names and addresses, without oversight. They will have access to information about our bank accounts. They are using our admin data to create distress-by-design in a ‘hostile environment.’ They are writing laws that give away young people’s confidential data, ignoring new EU law that says children’s data merits special protections.

Earlier this year, Part 5 of the new Digital Economy Act reduced the data protection infrastructure between different government departments. This week, in discussion on the Codes of Practice, some local government data users were already asking whether safeguards can be further relaxed to permit increased access to civil registration data and use our identity data for more purposes.

Now in the Data Protection Bill, the government has included clauses in Schedule 2 to reduce our rights to question how our data are used, and that will remove a right to redress where things go wrong. Clause 15 designs in open-ended possibilities of Statutory Instruments for future change.

The House of Lords Select Committee on the Constitution points out in its report on the Bill that the number and breadth of the delegated powers are "an increasingly common feature of legislation which, as we have repeatedly stated, causes considerable concern."

Concern needs to translate into debate, better wording and safeguards to ensure Parliament maintains its role of scrutiny and where necessary constrains executive powers.

Take as case studies, three new Statutory Instruments on personal data  from pupils, students, and staff. They all permit more data to be extracted from individuals and to be sent to national level:

  • SI 807/2017 The Education (Information About Children in Alternative Provision) (England) (Amendment) Regulations 2017
  • SI No. 886 The Education (Student Information) (Wales) Regulations 2017 (W. 214) and
  • SL(5)128 – The Education (Supply of Information about the School Workforce) (Wales) Regulations 2017

The SIs typically state “impact assessment has not been prepared for this Order as no impact on businesses or civil society organisations is foreseen. The impact on the public sector is minimal.” Privacy Impact Assessments are either not done, not published or refused via FOI.

Ever expanding national databases of names

Our data are not always used for the purposes we expect in practice, or what Ministers tell us they will be used for.

Last year the government added nationality to the school census in England, and snuck the change in law through Parliament in the summer holidays.  (SI 808/2016). Although the Department for Education conceded after public pressure, “These data will not be passed to the Home Office,” the intention was very real to hand over “Nationality (once collected)” for immigration purposes. The Department still hands over children’s names and addresses every month.

That SI should have been a warning, not a process model to repeat.

From January, thanks to yet another rushed law without debate (SI 807/2017), teen pregnancy, young offender and mental health labels will be added to children's records for life in England's National Pupil Database. These are on a named basis, and highly sensitive. Data from the National Pupil Database, including special needs data (SEN), are passed on for a broad range of purposes to third parties, and are also used across government in Troubled Families, shared with the National Citizen Service, and stored forever; on a named basis, all without pupils' consent or parents' knowledge. Without a change in policy, the young offender and pregnancy labels will be handed out too.

Our children’s privacy has been outsourced to third parties since 2012. Not anonymised data, but  identifiable and confidential pupil-level data is handed out to commercial companies, charities and press, hundreds of times a year, without consent.

Near-identical wording  that was used in 2012 to change the law in England, reappears in the new SI for student data in Wales.

The Wales government introduced regulations for a new student database of names, date of birth and ethnicity, home address including postcode, plus exam results. The third parties listed who will get given access to the data without asking for students’ consent, include the Student Loans Company and “persons who, for the purpose of promoting the education or well-being of students in Wales, require the information for that purpose”, in SI No. 886, the Education (Student Information) (Wales) Regulations 2017 (W. 214).

The consultation was conflated with destinations data, and while it all sounds for the right reasons, the SI is broad on purposes and prescribed persons. It received 10 responses.

Separately, a 2017 consultation on the staff data collection received 34 responses about building a national database of teachers, including names, date of birth, National Insurance numbers, ethnicity, disability, their level of Welsh language skills, training, salary and more. Unions and the Information Commissioner’s Office both asked basic questions in the consultation that remain unanswered, including who will have access. It’s now law thanks  to SL(5)128 – The Education (Supply of Information about the School Workforce) (Wales) Regulations 2017. The questions are open.

While I have been assured this weekend in writing that these data will not be used for commercial purposes or immigration enforcement, any meaningful safeguards are missing.

More failings on fairness

Where are the communications to staff, students and parents? What oversight will there be? Will a register of uses be published? And why does government get to decide, without debate, that our fundamental right to privacy can be overwritten by a few lines of law? What protections will pupils, students and staff have in future over how these data will be used, and over uses expanded for other things?

Scope creep is an ever present threat. In 2002 MPs were assured on the changes to the “Central Pupil Database”, that the Department for Education had no interest in the identity of individual pupils.

But come 2017 and the Department for Education has become the Department for Deportation.

Children’s names are used to match records in an agreement with the Home Office handing over up to 1,500 school pupils’ details a month. The plan was parliament and public should never know.

This is not what people expect or find reasonable. In 2015 UCAS had 37,000 students respond to an Applicant Data Survey. 62% of applicants think sharing their personal data for research is a good thing, and 64% see personal benefits in data sharing.  But over 90% of applicants say they should be asked first, regardless of whether their data is to be used for research, or other things. This SI takes away their right to control their data and their digital identity.

It’s not in young people’s best interests to be made more digitally disempowered and lose control over their digital identity. The GDPR requires data privacy by design. This approach should be binned.

Meanwhile, the Digital Economy Act codes of practice talk about fair and lawful processing as if it is a real process that actually happens.

That gap between words on paper, and reality, is a care.data-style catastrophe waiting to happen across every sector of public data and government. When will the public be told how data are used?

Better data must be fairer and safer in the future

The new UK Data Protection Bill is in Parliament right now, and its wording will matter. Safe data, transparent use, and independent oversight are not empty slogans to sling into the debate.

They must shape practical safeguards to prevent there being no course of redress if you are slung into a Border Force van at dawn, your bank account is frozen, or you get a 30 days notice-to-leave letter all by mistake.

To ensure our public [personal] data are used well, we need to trust why they’re collected and see how they are used. But instead the government has drafted their own get-out-of-jail-free-card to remove all our data protection rights to know in the name of immigration investigation and enforcement, and other open ended public interest exemptions.

The pursuit of individuals and their rights under an anti-immigration rhetoric without evidence of narrow case need, in addition to all the immigration law we have, is not the public interest, but ideology.

If these exemptions become law, every one of us loses the right to ask where our data came from, or why they were used for that purpose, and loses any course of redress.

The Digital Economy Act removed some of the infrastructure protections between Departments for datasharing. These clauses will remove our rights to know where and why that data has been passed around between them.

These lines are not just words on a page. They will have real effects on real people’s lives. These new databases are lists of names, and addresses, or attach labels to our identity that last a lifetime.

Even the advocates in favour of the Database State know that if we want to have good public services, their data use must be secure and trustworthy, and we have to be able to trust staff with our data.

As the Committee sits this week to review the bill line by line, the Lords must make sure common sense sees off the scattering of substantial public interest and immigration exemptions in the Data Protection Bill. Excessive exemptions need removed, not our rights.

Otherwise we can kiss goodbye to the UK as a world leader in tech that uses our personal data, or research that uses public data. Because if the safeguards are weak, then when commercial players get it wrong in trials of selling patient data, or try to skip around the regulatory landscape asking to be treated better than everyone else and fail to comply with Data Protection law, or when government is driven to chasing children out of education, they don't just damage their own reputation, or the potential of innovation for all; they damage public trust for everyone, and harm all data users.

Clause 15 leaves any future change open ended by Statutory Instrument. We can already see how SIs like these are used to create new national databases that can pop up at any time, without clear evidence of necessity, and without chance for proper scrutiny. We already see how data can be used, beyond reasonable expectations.

If we don’t speak out for our data privacy, the next time they want a list of names, they won’t need to ask. They’ll already know.


"First they came…" is a reference to the poem written by German Lutheran pastor Martin Niemöller (1892–1984).

The Future of Data in Public Life

What it means to be human is going to be different. That was the last word of a panel of four excellent speakers, chaired with sparkling wit and charm by Timandra Harkness, at tonight's Turing Institute event on the future of data, hosted at the British Library.

The first speaker, Bernie Hogan, of the Oxford Internet Institute, spoke of Facebook's emotion experiment, and the challenges of commercial companies' ownership and concentrations of knowledge, as well as their decisions controlling what content you get to see.

He also explained simply what an API is, in human terms: like a plug in a socket, where instead of electricity you get a flow of data, and the data controller decides which data can come out of the socket.
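
A minimal sketch of that analogy in code (the record and field names below are invented for illustration, not any real platform's API): the 'socket' is an endpoint, and the data controller chooses which fields are allowed to flow out of it.

    # Illustrative sketch of the plug-and-socket analogy; all names are invented.
    FULL_RECORD = {
        "user_id": 12345,
        "posts_liked": ["a", "b", "c"],
        "friends_list": ["anna", "bogdan"],
        "emotion_scores": {"positive": 0.7, "negative": 0.3},  # held back by the controller
    }

    PERMITTED_FIELDS = {"user_id", "posts_liked"}  # what the controller lets out of the socket

    def api_response(record: dict, permitted: set) -> dict:
        """Return only the fields the data controller has chosen to expose."""
        return {key: value for key, value in record.items() if key in permitted}

    print(api_response(FULL_RECORD, PERMITTED_FIELDS))
    # {'user_id': 12345, 'posts_liked': ['a', 'b', 'c']}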

And he brilliantly brought in a thought: what would it mean to be able to go back in time to the Nuremberg trials, and regulate not only medical ethics, but also the data ethics of indirect and computational use of information? How would it affect today's thinking on AI and machine learning, and where we are now?

“Available does not mean accessible, transparent does not mean accountable”

Charles from the Bureau of Investigative Journalism, who had also worked for Trinity Mirror using data analytics, introduced some of the issues that large datasets have for the public.

  • People rarely have the means to do any analytics well.
  • Even if open data are available, they are not necessarily accessible, given the volume of data, the constraints of common software such as Excel, and the time available (see the sketch after this list).
  • Without the facts they cannot go see a [parliamentary] representative or community group to try and solve the problem.
  • Local journalists often have targets for the number of stories they need to write, and target number of Internet views/hits to meet.
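
A small sketch of that 'available but not accessible' gap (the file name is hypothetical): a published dataset can exceed what spreadsheet software will even open, roughly 1,048,576 rows in current versions of Excel, while a few lines of code can at least count and summarise it.

    # Hypothetical file name; illustrates "available" versus "accessible".
    import csv

    EXCEL_ROW_LIMIT = 1_048_576  # maximum rows a current Excel worksheet can hold

    with open("published_open_data.csv", newline="") as f:
        row_count = sum(1 for _ in csv.reader(f)) - 1  # subtract the header row

    if row_count > EXCEL_ROW_LIMIT:
        print(f"{row_count:,} rows: too large to open fully in a spreadsheet")
    else:
        print(f"{row_count:,} rows: openable, but analysis still takes skill and time")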

Putting data out there is transparency, but not accountability, if we cannot turn information into knowledge that can benefit the public.

“Trust, is like personal privacy. Once lost, it is very hard to restore.”

Jonathan Bamford, Head of Parliamentary and Government Affairs at the ICO, took us back to why we need to control data at all. Democracy. Fairness. The balance of people’s rights,  like privacy, and Freedom-of-Information, and the power of data holders. The awareness that power of authorities and companies will affect the lives of ordinary citizens. And he said that even early on there was a feeling there was a need to regulate who knows what about us.

The third generation of Data Protection law he said, is now more important than ever to manage the whole new era of technology and use of data that did not exist when previous laws were made.

But, he said, the principles stand true today. Don’t be unfair. Use data for the purposes people expect. Security of data matters. As do rights to see the data people hold about us.  Make sure data are relevant, accurate, necessary and kept for a sensible amount of time.

And even if we think that technology is changing, he argued, the principles will stand, and organisations need to consider these principles before they do things, considering privacy as a fundamental human right by default, and data protection by design.

After all, we should remember the Information Commissioner herself recently said,

“privacy does not have to be the price we pay for innovation. The two can sit side by side. They must sit side by side.

It’s not always an easy partnership and, like most relationships, a lot of energy and effort is needed to make it work. But that’s what the law requires and it’s what the public expects.”

“We must not forget, evil people want to do bad things. AI needs to be audited.”

Joanna J. Bryson was brilliant in her multifaceted talk, summing up how data will affect our lives. She explained how implicit biases work, how we reason and make decisions, and how our ways of thinking show up in Internet searches. She showed, in practical ways, how machine learning is shaping our future in ways we cannot see. And she said that firms' assertions that they do these things fairly and openly, and that regulation no longer fits new tech, are "just hoo-hah".

She talked about the exciting possibilities and good uses of data, but said that, "we must not forget, evil people want to do bad things. AI needs to be audited." She summed up: we will use data to predict ourselves. And she said:

"What it means to be human is going to be different."

That is perhaps the crux of this debate. How do data and machine learning and its mining of massive datasets, and uses for ‘prediction’, affect us as individual human beings, and our humanity?

The last audience question addressed inequality. Solutions like transparency, subject access, accountability, and understanding biases and how we are used, will never be accessible to all. That needs a far greater digital understanding across all levels of society. How can society both benefit from and be involved in the future of data in public life? The conclusion was that we need more faith in public institutions working for people at scale.

But what happens when those institutions let people down, at scale?

And some institutions do let us down. Such as over plans for how our NHS health data will be used. Or when our data are commercialised without consent, breaking data protection law. Why do 23 million people not know how their education data are used? The government itself does not use our data in ways we expect, at scale. School children's data used in immigration enforcement fails to be fair, is not the purpose for which the data were collected, and causes harm and distress when it is used in direct interventions including "to effect removal from the UK" and to "create a hostile environment." There can be a lack of commitment to independent oversight in practice, compared to what is promised by the State. Or no oversight at all after data are released. And ethics among researchers using data are inconsistent.

The debate was less about the Future of Data in Public Life,  and much more about how big data affects our personal lives. Most of the discussion was around how we understand the use of our personal information by companies and institutions, and how will we ensure democracy, fairness and equality in future.

The question went unanswered from an audience member, how do we protect ourselves from the harms we cannot see, or protect the most vulnerable who are least able to protect themselves?

“How can we future proof data protection legislation and make sure it keeps up with innovation?”

That audience question is timely given the new Data Protection Bill. But what legislation means in practice, I am learning rapidly, can be very different from what is written down in law.

One additional tool in data privacy and rights legislation is up for discussion, right now,  in the UK. If it matters to you, take action.

NGOs could be enabled to make complaints on behalf of the public under article 80 of the General Data Protection Regulation (GDPR). However, the government has excluded that right from the draft UK Data Protection Bill launched last week.

"Paragraph 53 omits from Article 80, representation of data subjects, "where provided for by Member State law" from paragraph 1 and paragraph 2" [Data Protection Bill Explanatory Notes, paragraph 681, p.84/112]. Article 80(2) gives Member States the option to provide for NGOs to take action independently on behalf of the many people who may have been affected.

If you want that right, a right others will be getting in other countries in the EU, then take action. Call your MP or write to them. Ask for Article 80, the right to representation, in UK law. We need to ensure that our human rights continue to be enacted and enforceable to the maximum, if "what it means to be human is going to be different."

For the Future of Data, has never been more personal.

The Queen’s Speech, Information Society Services and GDPR

The Queen’s Speech promised new laws to ensure that the United Kingdom retains its world-class regime protecting personal data. And the government proposes a new digital charter to make the United Kingdom the safest place to be online for children.

Improving online safety for children should mean one thing. Children should be able to use online services without being used by them and the people and organisations behind it. It should mean that their rights to be heard are prioritised in decisions about them.

As Sir Tim Berners-Lee is reported as saying, there is a need to work with companies to put “a fair level of data control back in the hands of people“. He rightly points out that today terms and conditions are “all or nothing”.

There is a gap in these discussions that we fail to address when we think of consent to terms and conditions, or "handing over data": the assumption that these are always, and can always be, conscious acts.

For children, the question of whether accepting Ts&Cs gives them control, and whether that control is meaningful, becomes even more moot. What are they agreeing to? Younger children cannot give free and informed consent. After all, most privacy policies standardly include phrases such as, "If we sell all or a portion of our business, we may transfer all of your information, including personal information, to the successor organization," which means that "accepting" a privacy policy today is effectively a blank cheque for anything tomorrow.

The GDPR requires terms and conditions to be laid out in policies that a child can understand.

The current approach to legislation around children and the Internet is heavily weighted towards protection from seen threats. The threats we need to give more attention to, are those unseen.

By 2024 more than 50% of home Internet traffic will be used by appliances and devices, rather than just for communication and entertainment…The IoT raises huge questions on privacy and security, that have to be addressed by government, corporations and consumers. (WEF, 2017)

Our lives as measured in our behaviours and opinions, purchases and likes, are connected by trillions of sensors. My parents may have described using the Internet as going online. Today’s online world no longer means our time is spent ‘on the computer’, but being online, all day every day. Instead of going to a desk and booting up through a long phone cable, we have wireless computers in our pockets and in our homes, with functionality built-in to enable us to do other things; make a phonecall, make toast, and play. In a smart city surrounded by sensors under pavements, in buildings, cameras and tracking everywhere we go, we are living ever more inside an overarching network of cloud computers that store our data. And from all that data decisions are made, which adverts to show us, on which network sites, what we get offered and do not, and our behaviours and our conscious decision-making may be nudged quite invisibly.

Data about us, whether uniquely identifiable or not, are all too often collected passively: IP addresses, linked sign-ins that extract friends lists, and providers deciding whether we can use the thing at all. It's part of the deal. We get the service; they get to trade our identity, like Top Trumps, behind the scenes. But we often don't see it, and under GDPR consent should not be tied to a contractual requirement. That is, 'agree or don't get the service' is not an option.

From May 25, 2018 there will be special “conditions applicable to child’s consent in relation to information society services,” in Data Protection law which are applicable to the collection of data.

As yet, we have not had debate in the UK what that means in concrete terms, and if we do not soon, we risk it becoming an afterthought that harms more than helps protect children’s privacy, and therefore their digital identity.

I think of five things needed by policy shapers to tackle it:

  • In-depth understanding of what 'online' and the Internet mean
  • Consistent understanding of what threat models and risks are connected to personal data, which today are underestimated
  • A grasp of why data privacy training is vital to safeguarding
  • Confronting the idea that user regulation as a stand-alone step will create a better online experience for users, when we know that perceived problems are created by providers or other site users
  • An end to siloed thinking that fails to be forward thinking or to join the dots of tactics across Departments into a cohesive, inclusive strategy

If the government’s new “major new drive on internet safety” involves the world’s largest technology companies in order to make the UK the “safest place in the world for young people to go online,” then we must also ensure that these strategies and papers join things up and above all, a technical knowledge of how the Internet works needs to join the dots of risks and benefits in order to form a strategy that will actually make children safe, skilled and see into their future.

When it comes to children, there is a further question over consent and parental spyware. Various walk-to-school apps, lauded by the former Secretary of State two years running, use spyware and can be used without a child’s consent. Guardian Gallery, which could be used to scan for nudity in photos on anyone’s phone that the ‘parent’ phone holder has access to install it on, can be made invisible on the ‘child’ phone. Imagine this in coercive relationships.

If these technologies and the online environment are not correctly assessed with regard to “online safety” threat models for all parts of our population, then they fail to address the risk for the most vulnerable who need it.

What will the GDPR really mean for online safety improvement? What will it define as online services for remuneration in the IoT? And who will be considered as children, “targeted at” or “offered to”?

An active decision is required in the UK. Will 16 remain the default age needed for consent to access Information Society Services, or will we adopt 13 which needs a legal change?

As banal as these questions sound, they need close attention and clarity between now and May 25, 2018, if the UK is to be GDPR-ready and providers of online services are to know who counts as a child, and how they should treat Internet access, participation and age [parental] verification.

How will the "controller" make "reasonable efforts to verify in such cases that consent is given or authorised by the holder of parental responsibility over the child", "taking into consideration available technology"?

These are fundamental questions of what the Internet is and means to people today. And if the current government approach to security is anything to go by, safety will not mean what we think it will mean.

It will matter how these plans join up. Age verification was not being considered in UK law in relation to how we would derogate from GDPR even as late as October 2016, despite age verification requirements already being in the Digital Economy Bill. It shows a lack of joined-up digital thinking across our government, and needs addressed with urgency to get into the next Parliamentary round.

In recent draft legislation I am yet to see the UK government address Internet rights and safety for young people as anything other than a protection issue, treating the online space in the same way as offline, irl, focused on stranger danger, and sexting.

The UK Digital Strategy commits to the implementation of the General Data Protection Regulation by May 2018, and frames it as a business issue, labelling data as "a global commodity"; as such, its handling is framed solely as a requirement needed to ensure "that our businesses can continue to compete and communicate effectively around the world" and that adoption "will ensure a shared and higher standard of protection for consumers and their data."

The Digital Economy Bill, despite being a perfect vehicle for this, failed to take on children's rights, and in particular the GDPR requirements for consent, at all. It was clear that if we are to do any future digital transactions, we need to level up to GDPR, not drop to the lowest common denominator between it and existing laws.

It was utterly ignored. So were children's rights to have their own views heard in the consultation on the GDPR derogations for children, with little chance for involvement from young people's organisations, and less than a month to respond.

We must now get this right in any new Digital Strategy and bill in the coming parliament.

Crouching Tiger Hidden Dragon: the making of an IoT trust mark

The Internet of Things (IoT) brings with it unique privacy and security concerns associated with smart technology and its use of data.

  • What would it mean for you to trust an Internet connected product or service and why would you not?
  • What has damaged consumer trust in products and services and why do sellers care?
  • What do we want to see different from today, and what is necessary to bring about that change?

These three pairs of questions implicitly underpinned the intense day of  discussion at the London Zoo last Friday.

The questions went unasked, and could have been voiced before we started, although were probably assumed to be self-evident:

  1. Why do you want one at all [define the problem]?
  2. What needs to change and why [define the future model]?
  3. How do you deliver that and for whom [set out the solution]?

If a group does not agree on the need and drivers for change, there will be no consensus on what that should look like, what the gap is to achieve it, and even less on making it happen.

So who do you want the trustmark to be for, why will anyone want it, and what will need to change to deliver the aims? No one wants a trustmark per se. Perhaps you want what values or promises it embodies to  demonstrate what you stand for, promote good practice, and generate consumer trust. To generate trust, you must be seen to be trustworthy. Will the principles deliver on those goals?

The Open IoT Certification Mark Principles, as a rough draft, were the outcome of the day, and are available online.

Here are my reflections, including what was missing on privacy, and the potential for it to be considered in future.

I’ve structured this first, assuming readers attended the event, at ca 1,000 words. Lists and bullet points. The background comes after that, for anyone interested to read a longer piece.

Many thanks upfront, to fellow participants, to the organisers Alexandra D-S and Usman Haque and the colleague who hosted at the London Zoo. And Usman’s Mum.  I hope there will be more constructive work to follow, and that there is space for civil society to play a supporting role and critical friend.


The mark didn't aim to fix the IoT in a day, but to deliver something better for product and service users, by those IoT companies and providers who want to sign up. Here is what I took away.

I learned three things

  1. A sense of privacy is not homogenous, even within people who like and care about privacy in theoretical and applied ways. (I very much look forward to reading suggestions promised by fellow participants, even if enforced personal openness and ‘watching the watchers’ may mean ‘privacy is theft‘.)
  2. Awareness of current data protection regulations needs improved in the field. For example, Subject Access Requests already apply to all data controllers, public and private. Few have read the GDPR, or the e-Privacy directive, despite importance for security measures in personal devices, relevant for IoT.
  3. I truly love working on this stuff, with people who care.

And it reaffirmed things I already knew

  1. Change is hard, no matter in what field.
  2. People working together towards a common goal is brilliant.
  3. Group collaboration can create some brilliantly sharp ideas. Group compromise can blunt them.
  4. Some men are particularly bad at talking over each other, never mind over the women in the conversation. Women notice more. (Note to self: When discussion is passionate, it’s hard to hold back in my own enthusiasm and not do the same myself. To fix.)
  5. The IoT context, and the risks within it, are not homogeneous, but bring new risks and adversaries. The risks for manufacturers, for consumers, and for the rest of the public are different, and cannot easily be solved with a one-size-fits-all solution. But we can try.

Concerns I came away with

  1. If the citizen / customer / individual is to benefit from the IoT trustmark, they must be put first, ahead of companies’ wants.
  2. If the IoT group controls the design, the assessment of adherence, and the definition of success, how objective will it be?
  3. The group was not sufficiently diverse and, as a result, reflects too little on the risks and impact of the lack of diversity in design and effect, and on the implications of dataveillance.
  4. Critical minority thoughts although welcomed, were stripped out from crowdsourced first draft principles in compromise.
  5. More future thinking should be built-in to be robust over time.

IoT adversaries: via Twitter, unknown source

What was missing

There was too little discussion of privacy in perhaps the most important context of IoT: interconnectivity and new adversaries. It's not only about *your* thing, but the things it speaks to and interacts with, those of friends, passersby, the cityscape, and other individual and state actors interested in offence and defence. While we started to discuss it, we did not have the opportunity to discuss it in sufficient depth to get that thinking into applied solutions in the principles.

One of the greatest risks users face is the ubiquitous collection and storage of data about them that reveal detailed, interconnected patterns of behaviour and of our identity, without our seeing how those data are used by companies behind the scenes.

What we also missed discussing is not what we see as necessary today, but what we can foresee as necessary for the short term future, brainstorming and crowdsourcing horizon scanning for market needs and changing stakeholder wants.

Future thinking

Here’s the areas of future thinking that smart thinking on the IoT mark could consider.

  1. We are moving towards ever greater requirements to declare identity to use a product or service, to register and log in to use anything at all. How will that change trust in IoT devices?
  2. Single identity sign-on is becoming ever more imposed, and any attempts for multiple presentation of who I am by choice, and dependent on context, therefore restricted. [not all users want to use the same social media credentials for online shopping, with their child’s school app, and their weekend entertainment]
  3. Is this imposition what the public wants or what companies sell us as what customers want in the name of convenience? What I believe the public would really want is the choice to do neither.
  4. There is increasingly no private space or time, at places of work.
  5. Limitations on private space are encroaching in secret in all public city spaces. How will ‘handoffs’ affect privacy in the IoT?
  6. Public sector (connected) services are likely to need even more exacting standards than single home services.
  7. There is too little understanding of the social effects of this connectedness and knowledge created, embedded in design.
  8. What effects may there be on the perception of the IoT as a whole, if predictive data analysis and complex machine learning and AI hidden in black boxes becomes more commonplace and not every company wants to be or can be open-by-design?
  9. Ubiquitous collection and storage of data about users that reveal detailed, inter-connected patterns of behaviour and our identity needs greater commitments to disclosure. Where the hand-offs are to other devices, and whatever else is in the surrounding ecosystem, who has responsibility for communicating interaction through privacy notices, or defining legitimate interests, where the data joined up may be much more revealing than stand-alone data in each silo?
  10. Define with greater clarity the privacy threat models for different groups of stakeholders and address the principles for each.

What would better look like?

The draft privacy principles are a start, but they're not yet as aspirational as I would have hoped. Of course the principles will only be adopted if possible, practical and by those who choose to. But where is the differentiator from what everyone is required to do, and better than the bare minimum? How will you sell this to consumers as new? How would you like your child to be treated?

The wording in these five bullet points is the first crowdsourced starting point.

  • The supplier of this product or service MUST be General Data Protection Regulation (GDPR) compliant.
  • This product SHALL NOT disclose data to third parties without my knowledge.
  • I SHOULD get full access to all the data collected about me.
  • I MAY operate this device without connecting to the internet.
  • My data SHALL NOT be used for profiling, marketing or advertising without transparent disclosure.

Yes, other points that came under security address some of the crossover between privacy and surveillance risks, but there is as yet little of substance that is aspirational enough to make the IoT mark a real differentiator in terms of privacy. An opportunity remains.

It was that and how young people perceive privacy that I hoped to bring to the table. Because if manufacturers are serious about future success, they cannot ignore today’s children and how they feel. How you treat them today, will shape future purchasers and their purchasing, and there is evidence you are getting it wrong.

The timing is good in that it now also offers the opportunity to promote consistent understanding, and embed the language of GDPR and ePrivacy regulations into consistent and compatible language in policy and practice in the #IoTmark principles.

User rights I would like to see considered

These are some of the points I would think privacy by design would mean. This would better articulate GDPR Article 25 to consumers.

Data sovereignty is a good concept and I believe should be considered for inclusion in explanatory blurb before any agreed privacy principles.

  1. Goods should be 'dumb* by default' until the smart functionality is switched on. [*As our group chair/scribe called it] I would describe this as, "off is the default setting out-of-the-box" (see the sketch after this list).
  2. Privacy by design. Deniability by default, i.e. not only after opt-out: a company should not access the personal or identifying purchase data of anyone who opts out of data collection about their product/service use during the set-up process.
  3. The right to opt out of data collection at a later date while continuing to use services.
  4. A right to object to the sale or transfer of behavioural data, including to third-party ad networks and absolute opt-in on company transfer of ownership.
  5. A requirement that advertising should be targeted to content, [user bought fridge A] not through jigsaw data held on users by the company [how user uses fridge A, B, C and related behaviour].
  6. An absolute rejection of using children’s personal data gathered to target advertising and marketing at children
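
As a sketch of what points 1 to 3 above might mean in practice, here is a hypothetical device settings structure (not any vendor's real firmware): every data-collecting capability ships switched off, turning one on is an explicit act by the owner, and the owner can withdraw that choice later while continuing to use the device.

    # Hypothetical device settings; illustrative of "off is the default setting out-of-the-box".
    from dataclasses import dataclass

    @dataclass
    class DeviceSettings:
        internet_connectivity: bool = False      # smart features stay dumb until switched on
        usage_data_collection: bool = False      # nothing collected during or after set-up by default
        share_with_third_parties: bool = False   # never on without an explicit, separate opt-in

        def opt_out(self, setting: str) -> None:
            """Owner can withdraw a previous opt-in at any time and keep using the device."""
            setattr(self, setting, False)

    settings = DeviceSettings()
    settings.usage_data_collection = True      # explicit opt-in by the owner
    settings.opt_out("usage_data_collection")  # later opt-out; the service keeps working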

Background: Starting points before privacy

After a brief recap on 5 years ago, we heard two talks.

The first was a presentation from Bosch. They used the insights from the IoT open definition of five years ago in their IoT thinking and embedded it in their brand book. The presenter suggested that in five years' time, every fridge Bosch sells will be 'smart'. The second was a fascinating presentation of both EU thinking and the intellectual nudge to think beyond the practical, to what kind of society we want to see using the IoT in future: hints of hardcore ethics and philosophy that made my brain fizz, from a speaker soon to retire from the European Commission.

The principles of open sourcing, manufacturing, and sustainable life cycle were debated in the afternoon with intense arguments and clearly knowledgeable participants, including those who were quiet.  But while the group had assigned security, and started work on it weeks before, there was no one pre-assigned to privacy. For me, that said something. If they are serious about those who earn the trustmark being better for customers than their competition, then there needs to be greater emphasis on thinking like their customers, and by their customers, and what use the mark will be to customers, not companies. Plan early public engagement and testing into the design of this IoT mark, and make that testing open and diverse.

To that end, I believe it needed to be articulated more strongly, that sustainable public trust is the primary goal of the principles.

  • Trust that my device will not become unusable or worthless through updates or lack of them.
  • Trust that my device is manufactured safely and ethically and with thought given to end of life and the environment.
  • Trust that my source components are of high standards.
  • Trust in what data and how that data is gathered and used by the manufacturers.

Fundamental to 'smart' devices is their connection to the Internet, and so the last point, for me, is key to successful public perception and to the mark actually making a difference, beyond its PR value to companies. The value-add must be measured from the consumer's point of view.

All the openness about design functions and practice improvements, without attempting to change privacy infringing practices, may be wasted effort. Why? Because the perceived benefit of the value of the mark, will be proportionate to what risks it is seen to mitigate.

Why?

Because I assume that you know where your source components come from today. I was shocked to find out that not all do, and that 'one degree removed' is going to be an improvement. Holy cow, I thought. What about regulatory requirements for product safety recalls? These differ of course for different product areas, but I was still surprised. Having worked in global Fast Moving Consumer Goods (FMCG) and the food industry, in semiconductors and optoelectronics, and in medical devices, it was self-evident to me that sourcing is rigorous. So the new requirement to know 'one degree removed' was a suggested minimum. But it might shock consumers to know there is not usually more by default.

Customers also believe they have reasonable expectations of not being screwed by a product update, left with something that does not work because of its computing based components. The public can take vocal, reputation-damaging action when they are let down.

In the last year alone, some of the more notable press stories include a manufacturer denying service, telling customers, “Your unit will be denied server connection,” after a critical product review. Customer support at Jawbone came in for criticism after reported failings. And even Apple has had problems in rolling out major updates.

While these are visible, the full extent of the overreach of company market and product surveillance into our whole lives, not just our living rooms, is yet to become understood by the general population. What will happen when it is?

The Internet of Things is exacerbating the power imbalance between consumers and companies, between government and citizens. As Wendy Grossman wrote recently, in one sense this may make privacy advocates’ jobs easier. It was always hard to explain why “privacy” mattered. Power, people understand.

That public discussion is long overdue. If open principles on IoT devices mean that the signed-up companies differentiate themselves by becoming market leaders in transparency, it will be a great thing. Companies need to offer full disclosure of data use in any privacy notices in clear, plain language  under GDPR anyway, but to go beyond that, and offer customers fair presentation of both risks and customer benefits, will not only be a point-of-sales benefit, but potentially improve digital literacy in customers too.

The morning discussion touched quite often on pay-for-privacy models. While product makers may see this as offering a good thing, I strove to bring discussion back to first principles.

Privacy is a human right. There can be no ethical model of discrimination based on any non-consensual invasion of privacy. Privacy is not something I should pay to have. You should not design products that reduce my rights. GDPR requires privacy-by-design and data protection by default. Now is that chance for IoT manufacturers to lead that shift towards higher standards.

We also need new ethical thinking on acceptable fair use. It won’t change overnight, and perfect may be the enemy of better. But it’s not a battle that companies should think consumers have lost. Human rights and information security should not be on the battlefield at all in the war to win customer loyalty. Now is the time to do better, to be better, and to demand better for us and, in particular, for our children.

Privacy will be a genuine market differentiator

If manufacturers do not want to change their approach to exploiting customer data, they are unlikely to be seen to have changed.

The feelings that people in the US and Europe reflect in surveys today are loss of empowerment, helplessness, and being used. That will shift to shock, resentment and, as any change curve would predict, anger.

A 2014 survey for the Royal Statistical Society by Ipsos MORI found that trust in institutions to use data is much lower than trust in them in general.

“The poll of just over two thousand British adults carried out by Ipsos MORI found that the media, internet services such as social media and search engines and telecommunication companies were the least trusted to use personal data appropriately.” [2014, Data trust deficit with lessons for policymakers, Royal Statistical Society]

Among the British student population, one 2015 survey of university applicants in England found that, of the 37,000 who responded, the vast majority of UCAS applicants agree that sharing personal data can benefit them and support public-benefit research into university admissions, but they want to stay firmly in control. 90% of respondents said they wanted to be asked for their consent before their personal data is provided outside of the admissions service.

In 2010, multi-method research with young people aged 14-18, by the Royal Academy of Engineering, found that, “despite their openness to social networking, the Facebook generation have real concerns about the privacy of their medical records.” [2010, Privacy and Prejudice, RAE, Wellcome]

When people set their privacy settings on Facebook to maximum, they believe they get privacy, and understand little of what that means behind the scenes.

Are there tools designed by others, like the licences from Projects by IF, and ways this can be done that you’re not even considering yet?

What if you don’t do it?

“But do you feel like you have privacy today?” I was asked in the afternoon. How do people feel today, and does it matter? Companies exploiting consumer data, and getting caught doing things the public don’t expect with it, have repeatedly damaged consumer trust. Data breaches and lack of information security have damaged consumer trust. Both cause reputational harm. Damage to reputation can harm customer loyalty. Damage to customer loyalty costs sales and profit, and upsets the Board.

Where overreach into our living rooms has raised awareness of invasive data collection, we have yet to see and understand the invasion of privacy into our thinking and nudged behaviour, into our perception of the world on social media, and the effects on decision-making that data analytics enables as data shows companies ‘how we think’, granting companies access to human minds in the abstract, even before Facebook is there in the flesh.

Governments want to see how we think too. Is thought crime really that far away, given database labels of ‘domestic extremists’ for activists and anti-fracking campaigners, or the growing weight of policy makers’ attention given to PredPol, predictive analytics, the [formerly] Cabinet Office Nudge Unit, Google DeepMind et al?

Had the internet remained decentralised, the debate might be different.

I am starting to think of the IoT not as the Internet of Things, but as the Internet of Tracking. If some have their way, it will be the Internet of Thinking.

In our centralised Internet of Things model, our personal data from human interactions has become the network infrastructure, and the data flows are controlled by others. Our brains are the new data servers.

In the Internet of Tracking, people become the end nodes, not things.

And this is where future users will be so important. Do you understand and plan for the factors that will drive push-back and a crash in consumer confidence in your products, and take them seriously?

Companies have a choice: to act as Empires would, multinationals joining up even at low levels, disempowering individuals and sucking knowledge and power to the centre. Or they can act as Nation states, ensuring citizens keep their sovereignty and control over a selected sense of self.

Look at Brexit. Look at the GE2017. Tell me, what do you see is the direction of travel? Companies can fight it, but will not defeat how people feel. No matter how much they hope ‘nudge’ and predictive analytics might give them this power, the people can take back control.

What might this desire to take back control mean for future consumer models? The afternoon discussion, whilst intense, reached fairly simplistic concluding statements on privacy. We could have done with at least another hour.

Some in the group were frustrated “we seem to be going backwards” in current approaches to privacy and with GDPR.

But if the current legislation is reactive because companies have misbehaved, how will that be rectified in future? The challenge in the IoT, both in terms of security and privacy, and in terms of public perception and reputation management, is that you are dependent on the behaviours of the network and those around you. Good and bad. And bad practices by one can endanger others, in all senses.

If you believe that is going back to reclaim a growing sense of citizens’ rights, rather than accepting that companies have the outsourced power to control the rights of others, that may be true.

A first-principles question was asked: is any element on privacy needed at all, if the text simply states that the supplier of this product or service must be General Data Protection Regulation (GDPR) compliant? The GDPR was years in the making, after all. Does it matter more in the IoT, and in what ways? The room tended, understandably, to talk about it from the company perspective: “We can’t”, “won’t”, “that would stop us from XYZ.” Privacy would, however, be better addressed from the personal point of view.

What do people want?

From the company point of view, the language is different and holds clues. Openness, control, user choice and pay-for-privacy are not the same thing as the basic human right to be left alone. The afternoon discussion reminded me of the 2014 Washington Post article discussing Mark Zuckerberg’s theory of privacy and a Palo Alto meeting at Facebook:

“Not one person ever uttered the word “privacy” in their responses to us. Instead, they talked about “user control” or “user options” or promoted the “openness of the platform.” It was as if a memo had been circulated that morning instructing them never to use the word “privacy.””

In the afternoon working group on privacy, there was robust discussion about whether we even had consensus on what privacy means. Words like autonomy, control, and choice came up a lot. But it was only a beginning. There is opportunity for better. An academic voice raised the concept of sovereignty, with which I agreed; but working out how and where to fit it into wording that is at once both minimal and applied, under a scribe who appeared frustrated and wanted a completely different approach from what he heard across the group, meant it was left out.

This group do care about privacy. But I wasn’t convinced that the room cared in the way the public as a whole does, rather than only as consumers and customers do. Yet IoT products will potentially affect everyone, even those who do not buy your stuff. Everyone in that room agreed on one thing: the status quo is not good enough. What we did not agree on was why, and what minimum change is needed to make enough of a difference to matter.

I share the deep concerns of many child rights academics who see the harm in efforts to avoid the restrictions that Article 8 of the GDPR will impose. Such avoidance is likely to be damaging to children’s right to access information, discriminatory according to parents’ prejudices or socio-economic status, and to encourage ‘cheating’, requiring secrecy rather than privacy, in attempts to hide from or work around the stringent system.

In ‘The Class’, the research showed that “teachers and young people have a lot invested in keeping their spheres of interest and identity separate, under their autonomous control, and away from the scrutiny of each other.” [2016, Livingstone and Sefton-Green, p235]

Employers require staff to use devices with single sign-on, including web and activity tracking and monitoring software. Employee personal data and employment data are blended. Who owns that data, what rights will employees have to refuse what they see as excessive, and is it manageable given the power imbalance between employer and employee?

What is this doing in the classroom and boardroom for stress, anxiety, performance and system and social avoidance strategies?

A desire for convenience creates shortcuts, and these are often met using systems that require sign-on through the platform giants: Google, Facebook, Twitter, et al. But we are kept in the dark about how using these platforms gives them, and the companies behind them, access to see how our online and offline activity is all joined up.

Any illusion of privacy we maintain, we discussed, is not choice or control if it is based on ignorance, and backlash against companies’ lack of effort to ensure disclosure and understanding is growing.

“The lack of accountability isn’t just troubling from a philosophical perspective. It’s dangerous in a political climate where people are pushing back at the very idea of globalization. There’s no industry more globalized than tech, and no industry more vulnerable to a potential backlash.”

[Maciej Ceglowski, Notes from an Emergency, talk at re:publica]

Why do users need you to know about them?

If your connected *thing* requires registration, why does it? How about a commitment to not forcing one of these registration methods, or indeed any at all? Social media research by Pew Research Center in 2016 found that 56% of smartphone owners aged 18 to 29 use auto-delete apps, more than four times the share among those aged 30-49 (13%) and six times the share among those 50 or older (9%).

Does that tell us anything about the demographics of data retention preferences?

In 2012, they suggested social media had changed the public discussion about managing “privacy” online. When asked, people say that privacy is important to them; when observed, people’s actions seem to suggest otherwise.

Does that tell us anything about how well companies communicate to consumers how their data is used and what rights they have?

There is also data strongly indicating that women act to protect their privacy more, but when it comes to basic privacy settings, users of all ages are equally likely to choose a private, semi-private or public setting for their profile. There are no significant variations across age groups in the US sample.

Now think about why that matters for the IoT. I wonder who makes the bulk of purchasing decisions about household white goods, for example, and has Bosch factored that into their smart-fridges-only decision?

Do you *need* to know who the user is? Can the smart user choose to stay anonymous at all?
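One way to answer ‘no’ to both questions is to let the device mint its own pseudonymous identifier and treat any personal detail as optional. Below is a minimal sketch; the function and field names are hypothetical, for illustration only.

```python
# A sketch of registration that does not require knowing who the user is.

import secrets
from typing import Optional

def local_device_identity() -> str:
    """Mint a random, unlinkable per-device pseudonym instead of
    demanding an email address or a social-media sign-in."""
    return secrets.token_hex(16)

def register(email: Optional[str] = None) -> dict:
    """Register a device for pairing or sync; personal data is optional, not a gate."""
    return {
        "device_id": local_device_identity(),
        "email": email,  # stays None unless the user volunteers it
    }

print(register())  # works with no personal data at all
```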

The day’s morning challenge was to attend more than one interesting discussion happening at the same time. As invariably happens, the session notes and quotes are always out of context and can’t possibly capture everything, no matter how amazing the volunteer (with thanks!). But here are some of the discussion points from the session on the body and health devices, the home, and privacy. It also included a discussion on racial discrimination, algorithmic bias, and the reasons why care.data failed patients and failed as a programme. We had lengthy discussion on ethics and privacy: smart meters, objections to models of price discrimination, and why pay-for-privacy harms the poor by design.

Smart meter data can track the use of individual appliances inside a person’s home and reveal intimate patterns of behaviour. Information about our consumption of power, what we use and when, every day, reveals personal details about our everyday lives, our interactions with others, and our personal habits.
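To see why, here is a deliberately crude sketch of the kind of inference that is possible: matching step changes in a home’s power readings against rough appliance signatures, a toy version of non-intrusive load monitoring. The wattages and readings are invented for illustration.

```python
# A toy illustration of why smart meter readings are so revealing: matching
# step changes in household power draw against rough appliance signatures.

APPLIANCE_SIGNATURES = {  # approximate step change in watts (invented values)
    "kettle": 2000,
    "washing machine (heating)": 1800,
    "fridge compressor": 120,
    "television": 90,
}

def label_events(readings, tolerance=0.05):
    """Infer which appliance probably switched on from consecutive readings."""
    events = []
    for earlier, later in zip(readings, readings[1:]):
        step = later - earlier
        for name, watts in APPLIANCE_SIGNATURES.items():
            if step > 0 and abs(step - watts) <= tolerance * watts:
                events.append((later, name))
    return events

# Half-hourly (or finer) readings quickly sketch a household's daily routine.
readings = [250, 2250, 260, 380, 2180, 300]
print(label_events(readings))
```

Even this naive matching sketches a household’s routine; real analytics on fine-grained readings can do far more.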

Why should company convenience come above the consumer’s? Why should government powers trump personal rights?

Smart meter data is among the knowledge that government is exploiting, without consent, to discover a whole range of issues, including ensuring that “Troubled Families are identified”. Knowing how dodgy some of the school behaviour data that helps define who is “troubled” might be, there is a real question here: is this sound data science? How are errors identified? What about privacy? It’s not your policy, but if it is your product, what are your responsibilities?

If companies do not respect children’s rights, you’d better shape up to be GDPR compliant

Children and young people are more vulnerable to nudge, and developing their sense of self involves forming and questioning their identity; these influences therefore need oversight, or should be avoided.

In terms of GDPR, providers are going to pay particular attention to Article 8 on ‘information society services’ and parental consent, Article 22 on profiling, the right to restriction of processing (Article 18), the right to erasure (Article 17, and recital 65), and the right to data portability (Article 20). However, they may need simply to reassess their exploitation of children and young people’s personal data and behavioural data. Article 57 requires regulators to pay special attention to activities specifically targeted at children, as ‘vulnerable natural persons’ of recital 75.
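As one concrete illustration of Article 8, here is a minimal sketch of an age-of-digital-consent check. The GDPR sets the default threshold at 16 and lets member states lower it to no less than 13; the per-country figure shown is an assumption for illustration only, since the derogations are still being set.

```python
# A minimal sketch of an Article 8 GDPR check for 'information society
# services' offered directly to a child: below the applicable age, the
# holder of parental responsibility must give or authorise consent.

DEFAULT_AGE_OF_DIGITAL_CONSENT = 16
MEMBER_STATE_DEROGATIONS = {"UK": 13}  # illustrative assumption, not settled law

def needs_parental_consent(age: int, country: str) -> bool:
    threshold = MEMBER_STATE_DEROGATIONS.get(country, DEFAULT_AGE_OF_DIGITAL_CONSENT)
    return age < threshold

def can_process(age: int, country: str, parental_consent_verified: bool) -> bool:
    """Processing that relies on a child's consent is lawful only if Article 8 is met."""
    if needs_parental_consent(age, country):
        return parental_consent_verified
    return True

print(can_process(age=12, country="UK", parental_consent_verified=False))  # False
print(can_process(age=14, country="UK", parental_consent_verified=False))  # True
```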

Human rights regulations and conventions overlap in similar principles that demand respect for the child, and the right to be let alone:

(a) The development of the child’s personality, talents and mental and physical abilities to their fullest potential;

(b) The development of respect for human rights and fundamental freedoms, and for the principles enshrined in the Charter of the United Nations.

A weakness of the GDPR is that it allows derogation on age, and will create inequality and inconsistency for children as a result. By comparison, Article one of the Convention on the Rights of the Child (CRC) defines who is to be considered a “child” for the purposes of the CRC, and states that: “For the purposes of the present Convention, a child means every human being below the age of eighteen years unless, under the law applicable to the child, majority is attained earlier.”

Article two of the CRC says that States Parties shall respect and ensure the rights set forth in the present Convention to each child within their jurisdiction without discrimination of any kind.

CRC Article 16 says that no child shall be subjected to arbitrary or unlawful interference with his or her privacy, nor to unlawful attacks on his or her honour and reputation.

Article 8 CRC requires respect for the right of the child to preserve his or her identity […] without unlawful interference.

Article 12 CRC demands States Parties shall assure to the child who is capable of forming his or her own views the right to express those views freely in all matters affecting the child, the views of the child being given due weight in accordance with the age and maturity of the child.

That stands in potential conflict with GDPR Article 8. There is much in the GDPR on derogations, by country and for children, still to be settled.

What next for our data in the wild

Hosting the event at the zoo offered added animals, and during lunch we got out on a tour, kindly hosted by a fellow participant. We learned how smart technology is embedded in some of the animal enclosures, with temperature sensors used with the penguins, for example. I love tigers, so it was a bonus to see such beautiful and powerful animals up close, even if it was a little sad for their circumstances and, as a general principle, to see big animals caged rather than in the wild.

Freedom is a common desire in all animals. Physical, mental, and freedom from control by others.

I think any manufacturer that underestimates this element of human instinct is ignoring the ‘hidden dragon’ that some think is a myth. Privacy is not dead. It is not extinct, or even, unlike the beautiful tigers, endangered. Privacy in the IoT, at its most basic, is the right to control our purchasing power. The ultimate people power waiting to be sprung. Truly a crouching tiger. People object to being used, and if companies continue to use them without full disclosure, they do so at their peril. Companies seem all-powerful in the battle for privacy, but they are not. Even insurers and data brokers must be fair and lawful, and it is for regulators to ensure that practices meet the law.

When consumers realise that our data, our purchasing power, has the potential to control, not be controlled, that balance will shift.

“Paper tigers” are superficially powerful but are prone to overextension that leads to sudden collapse. If that happens to the superficially powerful companies that choose unethical and bad practice, as a result of better data privacy and data ethics, then bring it on.

I hope that the IoT mark can champion best practices and make a difference to benefit everyone.

While the companies involved in its design may be interested in consumers, I believe it could be better for everyone, done well. The great thing about the efforts into an #IoTmark is that it is a collective effort to improve the whole ecosystem.

I hope more companies will recognise their privacy and ethical responsibilities to all people in the world, including those interested in just being, those who want to be let alone, and not just those buying.

“If a cat is called a tiger it can easily be dismissed as a paper tiger; the question remains however why one was so scared of the cat in the first place.”

The Resistance to Theory (1982), Paul de Man

Further reading: Networks of Control – A Report on Corporate Surveillance, Digital Tracking, Big Data & Privacy by Wolfie Christl and Sarah Spiekermann