Is Hancock’s App Age Appropriate?

What can Matt Hancock learn from his app privacy flaws?

Note: since I started writing this post, the privacy policy has been changed from the version that was live at 4.30pm, and the “last changed date” has been backdated on the version now live at 21.00. It illustrates the challenge I point out in point 5:

It’s hard to trust privacy policy terms and conditions that are not strong and stable. 


The Data Protection Bill about to pass through the House of Commons requires the Information Commissioner to prepare and issue codes of practice — which must be approved by the Secretary of State — before they can become statutory and enforced.

One of those new codes (clause 124) is about age-appropriate data protection design. Any provider of an Information Society Service — as outlined in GDPR Article 8, where a child’s data are collected on the legal basis of consent — must have regard for the code, if they target the service at a child.

What those changes might mean for 13-18 year olds, compared with current practice, can be demonstrated by the Minister for Digital, Culture, Media and Sport’s new app, launched today.

This app is designed to be used by children aged 13+. Even though the terms say the app requires parental approval for 13-18 year olds [more aligned with US COPPA law than with GDPR], it still needs to work for the child.

Apps could and should be used to open up what politics is about to children. Younger users are more likely to use an app than read a paper for example. But it must not cost them their freedoms. As others have written, this app has privacy flaws by design.

Children merit specific protection with regard to their personal data, as they may be less aware of the risks, consequences and safeguards concerned and their rights in relation to the processing of personal data. (GDPR Recital 38).

The flaw in the intent to protect by age, in the app, GDPR and the UK Bill overall, is that the understanding needed for consent does not depend on age, but on capacity. The age-based model to protect the virtual child is fundamentally flawed. It’s shortsighted, if well intentioned, but bad-by-design, and does little to really protect children’s rights.

Future age verification, for example, if it is to help rather than harm, or become a nuisance like a new cookie law, must be “a narrow form of ‘identity assurance’ – where only one attribute (age) need be defined.” It must also respect Recital 57, and not mean a lazy data grab like GiffGaff’s.

On these five things the app fails to be age appropriate:

  1. Age appropriate participation, privacy, and consent design.
  2. Excessive personal data collection and permissions. (Article 25)
  3. The purposes of each item of data collected must be specified and explicit, and the data not further processed for anything incompatible with those purposes. (Principle 2).
  4. The privacy policy terms and conditions must be easily understood by a child, and be accurate. (Recital 58)
  5. It’s hard to trust privacy policy terms and conditions that are not strong and stable. Among the things that can change are terms on a free trial, which should require active and affirmative action to continue rather than rolling the account on forever and potentially incurring future costs. Any future changes should themselves be age-appropriate, as should the way consent is re-managed.

How much profiling does the app enable, and what is it used for? The Article 29 WP recommends, “Because children represent a more vulnerable group of society, organisations should, in general, refrain from profiling them for marketing purposes.” What will this mean for any software that profiles children’s metadata to share with third parties, or for commercial apps with in-app purchases, or “bait and switch” style models, such as this app’s privacy policy refers to?

The Council of Europe 2016-21 Strategy on the Rights of the Child recognises that “provision for children in the digital environment ICT and digital media have added a new dimension to children’s right to education”, exposing them to new risk, “privacy and data protection issues”, and that “parents and teachers struggle to keep up with technological developments.” [6. Growing up in a Digital World, Para 21]

Data protection by design really matters to get right for children and young people.

This is a commercially produced app and will only be used on a consent and optional basis.

This app shows how hard it can be for people buying tech from developers to understand and to trust what’s legal and appropriate.

Developers, facing changing laws and standards, need clarity and support to get it right. Parents and teachers need confidence to buy, and to let children use, safe, quality technology.

Without relevant and trustworthy guidance, it’s nigh on impossible.

For any Minister in charge of the data protection rights of children, the technology they approve and put out for use by children needs to be age-appropriate, and of the highest standards.

This app could and should be changed to meet them.

For children across the UK, using an app more and more often comes with no choice over whether or not to use it. Many are required by schools, which can make similar demands for their data and infringe their privacy rights for life. How much harder it is, then, to protect their data security and rights, and to keep track of their digital footprint and where their data go.

If the Data Protection Bill could include an ICO code of practice for children that goes beyond consent-based data collection, to put clarity, consistency and confidence at the heart of good edTech for children, parents and schools, it would be warmly welcomed.


Here are detailed examples of what the Minister might change to bring his app into line with GDPR, and make it age-appropriate for younger users.

1. Is the app age appropriate by design?

Unless otherwise specified in the App details on the applicable App Store, to use the App you must be 18 or older (or be 13 or older and have your parent or guardian’s consent).

Children over 13 can use the app, but it needs parental consent. That’s different from GDPR: consent over and above the new law as it will apply in the UK from May. The applicable age will vary across the EU. Inconsistent age policies are going to be hard to navigate.
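To make that compliance headache concrete, here is a minimal Kotlin sketch of the kind of per-country check a provider would need to maintain under GDPR Article 8. The country codes and ages in the map are placeholder assumptions for illustration, not an authoritative list of Member State choices; trackers such as Better Internet for Kids map the real values.

```kotlin
// Illustrative sketch only: the country codes and ages below are placeholders,
// not an authoritative list of Member State derogations. GDPR Article 8 sets a
// default of 16 and lets Member States lower it to no less than 13.
val digitalConsentAge: Map<String, Int> = mapOf(
    "GB" to 13,  // assumption for illustration
    "DE" to 16,  // assumption for illustration
    "ES" to 14   // assumption for illustration
)

const val GDPR_DEFAULT_AGE = 16

/** True if the service must obtain parental consent for this user. */
fun needsParentalConsent(countryCode: String, userAge: Int): Boolean {
    val threshold = digitalConsentAge[countryCode] ?: GDPR_DEFAULT_AGE
    return userAge < threshold
}

fun main() {
    // The same 14 year old needs parental consent in one country but not another.
    println(needsParentalConsent("GB", 14)) // false, with the placeholder ages above
    println(needsParentalConsent("DE", 14)) // true, with the placeholder ages above
}
```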

Many of the things that matter to privacy are covered not in the privacy policy (detailed below), but in the terms and conditions.

What else needs changed?

2. Personal data protection by design and default

Excessive personal data collection cannot be justified through a “consent” process, by agreeing to use the app. There must be data protection by design and default using the available technology. That includes data minimisation, and limited retention. (Article 25)

The app’s powers are vast, and it collects far more personal data than it needs, even getting permission to listen to your mic if you use it. That is not data protection by design and default, which must implement data-protection principles such as data minimisation.

If, as has been suggested, in the newest versions of Android each permission is asked for at the point of use rather than on first install, that could be a serious challenge for parents who think they have reviewed and approved permissions pre-install (a failing that goes beyond the scope of this app). An app only requires consent to install, and can change the permissions behind the scenes at any time. That makes privacy and data protection by design even more important.
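As a minimal sketch of that point-of-use model, using the standard AndroidX Activity Result API: the activity, feature and method names below are hypothetical, not taken from this app.

```kotlin
import android.Manifest
import android.content.pm.PackageManager
import androidx.activity.result.contract.ActivityResultContracts
import androidx.appcompat.app.AppCompatActivity
import androidx.core.content.ContextCompat

// Sketch: on Android 6.0+ a "dangerous" permission is granted in a dialog at the
// point of use, so what a parent reviewed before install may not reflect what the
// app can actually do later.
class RecordingActivity : AppCompatActivity() {

    private val micPermission =
        registerForActivityResult(ActivityResultContracts.RequestPermission()) { granted ->
            if (granted) startRecording() else explainWhyFeatureIsOff()
        }

    fun onRecordButtonPressed() {
        val alreadyGranted = ContextCompat.checkSelfPermission(
            this, Manifest.permission.RECORD_AUDIO
        ) == PackageManager.PERMISSION_GRANTED

        if (alreadyGranted) startRecording()
        else micPermission.launch(Manifest.permission.RECORD_AUDIO) // dialog appears here, mid-use
    }

    private fun startRecording() { /* hypothetical feature code */ }
    private fun explainWhyFeatureIsOff() { /* tell the user why the feature is unavailable */ }
}
```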

Here’s a copy of what the app’s page in the Google Play library says it can do, once you click into “permissions” and scroll. This is excessive. “Matt Hancock” is designed to prevent your phone from sleeping, read and modify the contents of storage, and access your microphone. (A short sketch after the list shows how an app’s declared permissions can be listed programmatically.)

Version 2.27 can access:
 
Location
  • approximate location (network-based)
Phone
  • read phone status and identity
Photos / Media / Files
  • read the contents of your USB storage
  • modify or delete the contents of your USB storage
Storage
  • read the contents of your USB storage
  • modify or delete the contents of your USB storage
Camera
  • take pictures and videos
Microphone
  • record audio
Wi-Fi connection information
  • view Wi-Fi connections
Device ID & call information
  • read phone status and identity
Other
  • control vibration
  • manage document storage
  • receive data from Internet
  • view network connections
  • full network access
  • change your audio settings
  • control vibration
  • prevent device from sleeping
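For anyone who would rather check such a list programmatically than trust the store page, here is a small sketch using the standard Android PackageManager API; the package name in the usage note is a guess for illustration only.

```kotlin
import android.content.pm.PackageManager

// Sketch: list the permissions an installed app has declared in its manifest,
// roughly what the Play Store "permissions" screen summarises.
fun listRequestedPermissions(pm: PackageManager, packageName: String): List<String> {
    val info = pm.getPackageInfo(packageName, PackageManager.GET_PERMISSIONS)
    return info.requestedPermissions?.toList() ?: emptyList()
}

// Usage (package name is hypothetical):
// listRequestedPermissions(context.packageManager, "uk.example.matthancock")
//     .forEach { println(it) }  // e.g. android.permission.RECORD_AUDIO, ...
```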

“Matt Hancock” knows where you live

The app makers – and Matt Hancock – have no need to know where your phone is at all times, where it is regularly, or whose other phones you are near, unless you switch location services off. That is excessive.

It’s not the same as saying “I’m a constituent”. It’s 24/7 surveillance.

The Ts&Cs say more.

It places the onus on the user to switch off location services — which you may expect for other apps, such as your Strava run — rather than the developers taking responsibility for your privacy by design. [Full source policy]

[update since writing this post on February 1, the policy has been greatly added to]

It also collects ill-defined “technical information”. How should a 13 year old – or a parent, for that matter – know what these data are? They are the metadata: the address and sender tags, and so on.

By using the App, you consent to us collecting and using technical information about your device and related information for the purpose of helping us to improve the App and provide any services to you.

As NSA General Counsel Stewart Baker has said, “metadata absolutely tells you everything about somebody’s life.” General Michael Hayden, former director of the NSA and the CIA, has famously said, “We kill people based on metadata.”

If you use this app and “approve” the use, do you really know what the location services are tracking and how those data are used? For a young person, it is impossible to know or see where their digital footprint has gone, or how knowledge about them has been used.

3. Specified, explicit, and necessary purposes

As a general principle, personal data must only be collected for specified, explicit and legitimate purposes, and not further processed in a manner incompatible with those purposes. The purposes of this very broad data collection are not clearly defined. They must be more specifically explained, especially given the data are so broad and will include sensitive data. (Principle 2).

While the Minister has told the BBC that you maintain complete editorial control, the terms and conditions are quite different.

The app can use user photos, files, audio and location data, and once content is shared the user grants “a perpetual, irrevocable” permission to use and edit it. This is not age-appropriate design for children, who might accidentally click yes, not appreciate what that may permit, or later wish they could get that photo back. But by then the photo is on social media, potentially worldwide (“Facebook, Twitter, Pinterest, YouTube, Instagram and on the Publisher’s own websites”), and the child’s rights to privacy and consent are lost forever.

That’s not age appropriate, and not in line with GDPR rights to withdraw consent, to object, or to restrict processing. In fact the terms conflict with the app privacy policy, which states those rights [see 4. App User Data Rights]. Just writing “there may be valid reasons why we may be unable to do this” is poor practice and a CYA card.

4. Any privacy policy and app must do what it says

A privacy policy and terms and conditions must be easily understood by a child, [indeed any user] and be accurate.

Journalists testing the app point out that even if the user clicks “don’t allow”, when prompted to permit access to the photo library, the user is allowed to post the photo anyway.


What does consent mean if you don’t know what you are consenting to? It means nothing: you’re not really consenting at all. GDPR requires that privacy policies are written so that their meaning can be understood by a child user (not only their parent). They need to be jargon-free and meaningful, in “clear and plain language that the child can easily understand.” (Recital 58)

This privacy policy is not child-appropriate. It’s not even clear for adults.

5. What would age-appropriate permissions for charging and other future changes look like?

It should be clear to users if there may be up-front or future costs, and there should be no assumption that agreeing once to pay for an app means granting permission forever without affirmative action.

Crouching Bait-and-Switch, Hidden Costs

This is one of the flaws that the Matt Hancock app terms and conditions share with many free education apps used in schools. At first, they’re free. You register, and you don’t even know when your child starts using the app that it’s a free trial. But after a while, as determined by the developer, the app might not be free any more.

That’s not to say this is what the Matt Hancock app will do, in fact it would be very odd if it did. But odd then, that its privacy policy terms and conditions state it could.

The folly of boiler plate policy, or perhaps simply wanting to keep your options open?

Either way, it’s bad design for children – indeed for any user – to agree to something that is in fact meaningless, because it could change at any time. Automatic renewals are convenient, but who has not found they paid for an extra month of a newspaper, or something else they intended to use only for a limited time? And to avoid any charges, you must cancel before the end of the free trial – but if you don’t know it’s a trial, that’s hard to do. More so for children.

From time to time we may offer a free trial period when you first register to use the App before you pay for the subscription.[…] To avoid any charges, you must cancel before the end of the free trial.

(And as for “For more details, please see the product details in the App Store before you download the App” – there aren’t any, in case you’re wondering.)

What would age appropriate future changes be?

It should be clear to parents what they consent to on behalf of a child, or what a child consents to, at the time of install. What that means must empower them to better digital understanding and to stay in control, not allow the company to change the agreement without the user’s clear and affirmative action.

One of the biggest flaws for parents in children using apps is that what they think they have reviewed, thought appropriate, and permitted, can change at any time, at the whim of the developer and as often as they like.

Notification “by updating the Effective Date listed above” is not notification at all. And PS: they changed the policy today and backdated it from February 1, 2018, to July 2017. By seven months. That’s odd.

The statements in this “changes” section contradict one another. It’s a future-dated get-out-of-jail-free card for the developer, and a transparency and oversight nightmare for parents. “Your continued use” is not clear, affirmative, and freely given consent, as demanded by GDPR.

Perhaps the kindest thing to say about this policy, and its poor approach to privacy rights and responsibilities, is that maybe the Minister did not read it. Which highlights the basic flaw in privacy policies in the first place. Data usage reports, showing how your personal data have actually been used versus what was promised, are of much greater value and meaning. That’s what children need in schools.


Statutory Instruments, the #DPBill and the growth of the Database State

First they came for the lists of lecturers. Did you speak out?

Last week Chris Heaton-Harris MP wrote to vice-chancellors to ask for a list of lecturers’ names and course content, “With particular reference to Brexit”. Academics on social media spoke out in protest. There has been little reaction, however, to a range of new laws that permit the incremental expansion of the database state, on paper and in practice.

The government is building ever more sensitive lists of names and addresses, without oversight. They will have access to information about our bank accounts. They are using our admin data to create distress-by-design in a ‘hostile environment.’ They are writing laws that give away young people’s confidential data, ignoring new EU law that says children’s data merits special protections.

Earlier this year, Part 5 of the new Digital Economy Act reduced the data protection infrastructure between different government departments. This week, in discussion on the Codes of Practice, some local government data users were already asking whether safeguards can be further relaxed to permit increased access to civil registration data, and to use our identity data for more purposes.

Now, in the Data Protection Bill, the government has included clauses in Schedule 2 to reduce our rights to question how our data are used, and that will remove a right to redress where things go wrong. Clause 15 designs in open-ended possibilities of Statutory Instruments for future change.

The House of Lords Select Committee on the Constitution points out in its report on the Bill that the number and breadth of the delegated powers are “an increasingly common feature of legislation which, as we have repeatedly stated, causes considerable concern.”

Concern needs to translate into debate, better wording and safeguards to ensure Parliament maintains its role of scrutiny and where necessary constrains executive powers.

Take as case studies three new Statutory Instruments on personal data from pupils, students, and staff. They all permit more data to be extracted from individuals and sent to national level:

  • SI 807/2017 The Education (Information About Children in Alternative Provision) (England) (Amendment) Regulations 2017
  • SI No. 886 The Education (Student Information) (Wales) Regulations 2017 (W. 214) and
  • SL(5)128 – The Education (Supply of Information about the School Workforce) (Wales) Regulations 2017

The SIs typically state that an “impact assessment has not been prepared for this Order as no impact on businesses or civil society organisations is foreseen. The impact on the public sector is minimal.” Privacy Impact Assessments are either not done, not published, or refused via FOI.

Ever expanding national databases of names

Our data are not always used for the purposes we expect in practice, or what Ministers tell us they will be used for.

Last year the government added nationality to the school census in England, and snuck the change in law through Parliament in the summer holidays (SI 808/2016). Although the Department for Education conceded after public pressure that “These data will not be passed to the Home Office,” the intention to hand over “Nationality (once collected)” for immigration purposes was very real. The Department still hands over children’s names and addresses every month.

That SI should have been a warning, not a process model to repeat.

From January, thanks to yet another rushed law passed without debate (SI 807/2017), teen pregnancy, young offender and mental health labels will be added to children’s records for life in England’s National Pupil Database. These are on a named basis, and highly sensitive. Data from the National Pupil Database, including special educational needs (SEN) data, are passed on for a broad range of purposes to third parties, are also used across government in Troubled Families, shared with the National Citizen Service, and stored forever; all on a named basis, without pupils’ consent or parents’ knowledge. Without a change in policy, the young offender and pregnancy labels will be handed out too.

Our children’s privacy has been outsourced to third parties since 2012. Not anonymised data, but identifiable and confidential pupil-level data are handed out to commercial companies, charities and the press, hundreds of times a year, without consent.

Near-identical wording to that used in 2012 to change the law in England reappears in the new SI for student data in Wales.

The Welsh Government introduced regulations for a new student database of names, date of birth and ethnicity, home address including postcode, plus exam results. The third parties listed who will be given access to the data without asking for students’ consent include the Student Loans Company and “persons who, for the purpose of promoting the education or well-being of students in Wales, require the information for that purpose”, in SI No. 886, the Education (Student Information) (Wales) Regulations 2017 (W. 214).

The consultation was conflated with destinations data, and while it all sounds as if it is for the right reasons, the SI is broad on purposes and prescribed persons. It received 10 responses.

Separately, a 2017 consultation on the staff data collection received 34 responses about building a national database of teachers, including names, date of birth, National Insurance numbers, ethnicity, disability, level of Welsh language skills, training, salary and more. Unions and the Information Commissioner’s Office both asked basic questions in the consultation that remain unanswered, including who will have access. It’s now law, thanks to SL(5)128 – The Education (Supply of Information about the School Workforce) (Wales) Regulations 2017. The questions remain open.

While I have been assured this weekend in writing that these data will not be used for commercial purposes or immigration enforcement, any meaningful safeguards are missing.

More failings on fairness

Where are the communications to staff, students and parents? What oversight will there be? Will a register of uses be published? And why does government get to decide without debate that our fundamental right to privacy can be overwritten by a few lines of law? What protections will pupils, students and staff have in future over how these data will be used, and whether uses will be expanded for other things?

Scope creep is an ever present threat. In 2002 MPs were assured on the changes to the “Central Pupil Database”, that the Department for Education had no interest in the identity of individual pupils.

But come 2017 and the Department for Education has become the Department for Deportation.

Children’s names are used to match records in an agreement with the Home Office handing over up to 1,500 school pupils’ details a month. The plan was that parliament and the public should never know.

This is not what people expect or find reasonable. In 2015 UCAS had 37,000 students respond to an Applicant Data Survey. 62% of applicants think sharing their personal data for research is a good thing, and 64% see personal benefits in data sharing. But over 90% of applicants say they should be asked first, regardless of whether their data are to be used for research or for other things. This SI takes away their right to control their data and their digital identity.

It’s not in young people’s best interests to be made more digitally disempowered and lose control over their digital identity. The GDPR requires data privacy by design. This approach should be binned.

Meanwhile, the Digital Economy Act codes of practice talk about fair and lawful processing as if it is a real process that actually happens.

That gap between words on paper and reality is a care.data-style catastrophe waiting to happen across every sector of public data and government. When will the public be told how data are used?

Better data must be fairer and safer in the future

The new UK Data Protection Bill is in Parliament right now, and its wording will matter. Safe data, transparent use, and independent oversight are not empty slogans to sling into the debate.

They must shape practical safeguards to ensure there is a course of redress if you are slung into a Border Force van at dawn, your bank account is frozen, or you get a 30-day notice-to-leave letter, all by mistake.

To ensure our public [personal] data are used well, we need to trust why they’re collected and to see how they are used. But instead the government has drafted its own get-out-of-jail-free card to remove all our data protection rights to know, in the name of immigration investigation and enforcement, and other open-ended public interest exemptions.

The pursuit of individuals and their rights under an anti-immigration rhetoric without evidence of narrow case need, in addition to all the immigration law we have, is not the public interest, but ideology.

If these exemptions become law, every one of us loses the right to ask where our data came from, or why they were used for a given purpose, and any course of redress.

The Digital Economy Act removed some of the infrastructure protections for data sharing between Departments. These clauses will remove our rights to know where and why those data have been passed around between them.

These lines are not just words on a page. They will have real effects on real people’s lives. These new databases are lists of names and addresses, or attach labels to our identity that last a lifetime.

Even the advocates in favour of the Database State know that if we want to have good public services, their data use must be secure and trustworthy, and we have to be able to trust staff with our data.

As the Committee sits this week to review the bill line by line, the Lords must make sure common sense sees off the scattering of substantial public interest and immigration exemptions in the Data Protection Bill. Excessive exemptions need removed, not our rights.

Otherwise we can kiss goodbye to the UK as a world leader in tech that uses our personal data, or in research that uses public data. If the safeguards are weak, then when commercial players get it wrong in trials of selling patient data, or try to skip around the regulatory landscape asking to be treated better than everyone else and fail to comply with data protection law, or when government is driven to chasing children out of education, it doesn’t just damage their reputation or the potential of innovation for all; it damages public trust from everyone, and harms all data users.

Clause 15 leaves any future change open-ended by Statutory Instrument. We can already see how SIs like these are used to create new national databases that can pop up at any time, without clear evidence of necessity, and without the chance for proper scrutiny. We already see how data can be used beyond reasonable expectations.

If we don’t speak out for our data privacy, the next time they want a list of names, they won’t need to ask. They’ll already know.


“First they came …” is a reference to the poem written by German Lutheran pastor Martin Niemöller (1892–1984).

The Future of Data in Public Life

What it means to be human is going to be different. That was the last word from a panel of four excellent speakers, chaired with sparkling wit and charm by Timandra Harkness, at tonight’s Turing Institute event on the future of data, hosted at the British Library.

The first speaker, Bernie Hogan of the Oxford Internet Institute, spoke of Facebook’s emotion experiment, and the challenges of commercial companies’ ownership and concentration of knowledge, as well as their decisions controlling what content you get to see.

He also explained simply what an API is, in human terms: like a plug in a socket, except that instead of electricity you get a flow of data, and the data controller can control which data come out of the socket.
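In code terms, the socket analogy might look something like this minimal sketch, in which the record and field names are invented for illustration: the controller decides which fields of the full record flow out through the API.

```kotlin
// The full record held by the data controller.
data class UserRecord(
    val id: String,
    val displayName: String,
    val email: String,
    val preciseLocation: String,
    val privateMessages: List<String>
)

// What the controller chooses to let out of the "socket".
data class PublicProfile(val id: String, val displayName: String)

// The API endpoint: only the selected fields flow out; email, location and
// messages stay behind the socket, at the controller's discretion.
fun apiResponseFor(record: UserRecord): PublicProfile =
    PublicProfile(id = record.id, displayName = record.displayName)
```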

And he brilliantly brought in a thought experiment: what would it have meant to be able to go back in time to the Nuremberg trials and regulate not only medical ethics, but the data ethics of indirect and computational use of information? How would it affect today’s thinking on AI and machine learning, and where we are now?

“Available does not mean accessible, transparent does not mean accountable”

Charles from the Bureau of Investigative Journalism, who had also worked for Trinity Mirror using data analytics, introduced some of the issues that large datasets pose for the public.

  • People rarely have the means to do any analytics well.
  • Even if open data are available, they are not necessarily accessible, due to the volume of data, the constraints of common software (such as Excel), and time constraints.
  • Without the facts they cannot go see a [parliamentary] representative or community group to try and solve the problem.
  • Local journalists often have targets for the number of stories they need to write, and target number of Internet views/hits to meet.

Putting data out there is only transparency; it is not accountability if we cannot turn information into knowledge that can benefit the public.

“Trust, is like personal privacy. Once lost, it is very hard to restore.”

Jonathan Bamford, Head of Parliamentary and Government Affairs at the ICO, took us back to why we need to control data at all. Democracy. Fairness. The balance of people’s rights, like privacy and freedom of information, against the power of data holders. The awareness that the power of authorities and companies will affect the lives of ordinary citizens. And he said that even early on there was a feeling that there was a need to regulate who knows what about us.

The third generation of Data Protection law he said, is now more important than ever to manage the whole new era of technology and use of data that did not exist when previous laws were made.

But, he said, the principles stand true today. Don’t be unfair. Use data for the purposes people expect. Security of data matters. As do rights to see the data people hold about us.  Make sure data are relevant, accurate, necessary and kept for a sensible amount of time.

And even if we think that technology is changing, he argued, the principles will stand, and organisations need to consider these principles before they do things, considering privacy as a fundamental human right by default, and data protection by design.

After all, we should remember the Information Commissioner herself recently said,

“privacy does not have to be the price we pay for innovation. The two can sit side by side. They must sit side by side.

It’s not always an easy partnership and, like most relationships, a lot of energy and effort is needed to make it work. But that’s what the law requires and it’s what the public expects.”

“We must not forget, evil people want to do bad things. AI needs to be audited.”

Joanna J. Bryson was brilliant in her multifaceted talk, summing up how data will affect our lives. She explained how implicit biases work, how we reason and make decisions, and how some of the ways we think show up in Internet searches. She showed, in practical ways, how machine learning is shaping our future in ways we cannot see. And she said that firms asserting they do these things fairly and openly, and that regulation no longer fits new tech, “is just hoo-hah”.

She talked about the exciting possibilities and good uses of data, but added that “we must not forget, evil people want to do bad things. AI needs to be audited.” She summed up: we will use data to predict ourselves. And she said:

“What it means to be human is going to be different.”

That is perhaps the crux of this debate. How do data and machine learning and its mining of massive datasets, and uses for ‘prediction’, affect us as individual human beings, and our humanity?

The last audience question addressed inequality. Solutions like transparency, subject access, accountability, and understanding biases and how we are used will never be accessible to all. It needs a far greater digital understanding across all levels of society. How can society both benefit from and be involved in the future of data in public life? The conclusion was that we need more faith in public institutions working for people at scale.

But what happens when those institutions let people down, at scale?

And some institutions do let us down. Such as over plans for how our NHS health data will be used. Or when our data are commercialised without consent, breaking data protection law. Why do 23 million people not know how their education data are used? The government itself does not use our data in ways we expect, at scale. The use of school children’s data in immigration enforcement fails to be fair, is not the purpose for which they were collected, and causes harm and distress when used in direct interventions, including “to effect removal from the UK” and to “create a hostile environment.” There can be a lack of commitment to independent oversight in practice, compared to what is promised by the State. Or no oversight at all after data are released. And the ethics of researchers using data are inconsistent.

The debate was less about the Future of Data in Public Life, and much more about how big data affects our personal lives. Most of the discussion was around how we understand the use of our personal information by companies and institutions, and how we will ensure democracy, fairness and equality in future.

One audience member’s question went unanswered: how do we protect ourselves from the harms we cannot see, or protect the most vulnerable, who are least able to protect themselves?

“How can we future proof data protection legislation and make sure it keeps up with innovation?”

That audience question is timely given the new Data Protection Bill. But what legislation means in practice, I am learning rapidly, can be very different from what is written down in law.

One additional tool in data privacy and rights legislation is up for discussion right now in the UK. If it matters to you, take action.

NGOs could be enabled to make complaints on behalf of the public under article 80 of the General Data Protection Regulation (GDPR). However, the government has excluded that right from the draft UK Data Protection Bill launched last week.

“Paragraph 53 omits from Article 80, representation of data subjects, the words “where provided for by Member State law” from paragraph 1 and paragraph 2” [Data Protection Bill Explanatory Notes, paragraph 681, p84/112]. Article 80(2) gives Member States the option to provide for NGOs to take action independently on behalf of many people who may have been affected.

If you want that right, a right others will be getting in other countries in the EU, then take action. Call your MP or write to them. Ask for Article 80, the right to representation, in UK law. We need to ensure that our human rights continue to be enacted and enforceable to the maximum if “what it means to be human is going to be different.”

For the Future of Data has never been more personal.

Data Protection Bill 2017: summary of source links

The Data Protection Bill [Exemptions from GDPR] was introduced to the House of Lords on 13 September 2017
*current status April 6, 2018* Report Stage House of Commons — dates, to be announced
Debates

Dates for all stages of the passage of the Bill, including links to the debates.

EU GDPR Progress Overviews

Updates of GDPR age of consent mapping: Better Internet for Kids

Bird and Bird GDPR Tracker [Shows how and where GDPR has been supplemented locally, highlighting where Member States have taken the opportunities available in the law for national variation.]

ISiCo Tracker (Site in German language) with links.

UK Data Protection Bill Overview
  • Data Protection Bill Explanatory Notes [PDF], 1.2MB, 112 pages
  • Data Protection Bill Overview Factsheet [PDF], 229KB, 4 pages
  • Data Protection Bill Impact Assessment [PDF], 123KB, 5 pages
The General Data Protection Regulation

The General Data Protection Regulation [PDF] 959KB, 88 pages

Related Factsheets
  • General Processing Factsheet, [PDF], 141KB, 3 pages
  • Law Enforcement Data Processing Factsheet [PDF], 226KB, 3 pages
  • National Security Data Processing Factsheet [PDF], 231KB, 4 pages
These parts of the bill concern the function of the Information Commissioner and her powers of enforcement
  • Information Commissioner and Enforcement Factsheet [PDF] 223KB, 4 pages
  • Data sharing code of practice [PDF]
GDPR possible derogations

Source credit Amberhawk: Chris Pounder

Member State law can allow modifications to Articles 4(7), 4(9),  6(2), 6(3)(b), 6(4),  8(1), 8(3), 9(2)(a), 9(2)(b), 9(2)(g), 9(2)(h), 9(2)(i), 9(2)(j), 9(3), 9(4),  10,  14(5)(b), 14(5)(c), 14(5)(d),  17(1)(e), 17(3)(b), 17(3)(d), 22(2)(b),  23(1)(e),  26(1),  28(3), 28(3)(a), 28(3)(g), 28(3)(h), 28(4),  29,  32(4),  35(10), 36(5),  37(4),  38(5),  49(1)(g), 49(4), 49(5),  53(1), 53(3),  54(1), 54(2),  58(1)(f), 58(2), 58(3), 58(4), 58(5),  59,  61(4)(b),  62(3),  80,  83(5)(d), 83(7), 83(8),  85,  86,  87,  88,  89,  and 90 of the GDPR.

Other relevant significant connected legislation
  • The Police and Crime Directive [web link] 
  • EU Charter of Fundamental Rights – European Commission [link]
  • The proposed Regulation on Privacy and Electronic Communications [web link]
  • Draft modernised convention for the protection of individuals with regard to the processing of personal data (convention 108)
Data Protection Bill Statement of Intent
  • DCMS Statement of Intent [PDF] 229KB, 4 pages
  • Letter to Stakeholders [PDF] 184KB, 2 pages 7 Aug 2017
Other links on derogations and data processing
  • On Adequacy: Data transfers between the EU and UK post Brexit? Andrew D. Murray Article [link]
  • Two Birds [web link]
  • ICO legal basis for processing and children [link]
  • Public authorities under the Freedom of Information Act (ICO) Public authorities under FOIA 120160901 Version: 2.2 [link] 
  • ICO information for education [link]

Blogs on key issues [links in date of post]

  • Amberhawk
    • DP Bill’s new immigration exemption can put EU citizens seeking a right to remain at considerable disadvantage [09.10] re: Schedule 2, paragraph 4, new Immigration exemption.
    • On Adequacy:  Draconian powers in EU Withdrawal Bill can negate new Data Protection law [13.09]
    • Queen’s Speech, and the promised “Data Protection (Exemptions from GDPR) Bill [29.06]
  • defenddigitalme
    • Response to the Data Protection Bill debate and Green Paper on Online Strategy [11.10.2017]
  • Jon Baines
    • Serious DCMS error about consent data protection [11.08]
  • Eoin O’Dell
    • The UK’s Data Protection Bill 2017: repeals and compensation – updated: On DCMS legislating for Art 82 GDPR. [14.09]

Data Protection Bill Consultation: General Data Protection Regulation Call for Views on exemptions
  • New Data Protection Bill: Our planned reforms [PDF] 952KB, 30 pages
  • London Economics: Research and analysis to quantify benefits arising from personal data rights under the GDPR [PDF] 3.76MB 189 pages
  • ICO response to DCMS [link]
  • ESRC joint submissions on EU General Data Protection Regulation in the UK – Wellcome led multi org submission plus submission from British Academy / Erdos [link]
  • defenddigitalme response to the DCMS [link]
Minister for Digital Matt Hancock’s keynote address to the UK Internet Governance Forum, 13 September [link].

“…the Data Protection Bill, which will bring our data protection regime into the twenty first century, giving citizens more sovereignty over their data, and greater penalties for those who break the rules.

“With AI and machine learning, data use is moving fast. Good use of data isn’t just about complying with the regulations, it’s about the ethical use of data too.

“So good governance of data isn’t just about legislation – as important as that is – it’s also about establishing ethical norms and boundaries, as a society.  And this is something our Digital Charter will address too.”

Media links

14.09 BBC UK proposes exemptions to Data Protection Bill


Edits:

11.10.2017 to add links to the Second Reading in the House of Lords

The Queen’s Speech, Information Society Services and GDPR

The Queen’s Speech promised new laws to ensure that the United Kingdom retains its world-class regime protecting personal data. And the government proposes a new digital charter to make the United Kingdom the safest place to be online for children.

Improving online safety for children should mean one thing. Children should be able to use online services without being used by them and by the people and organisations behind them. It should mean that their rights to be heard are prioritised in decisions about them.

As Sir Tim Berners-Lee is reported as saying, there is a need to work with companies to put “a fair level of data control back in the hands of people“. He rightly points out that today terms and conditions are “all or nothing”.

There is a gap in discussions that we fail to address when we think of consent to terms and conditions, or of “handing over data”. It is that this assumes these are always, and can always be, conscious acts.

For children, the question of whether accepting Ts&Cs gives them control, and whether that control is meaningful, becomes even more moot. What are they agreeing to? Younger children cannot give free and informed consent. After all, most privacy policies standardly include phrases such as, “If we sell all or a portion of our business, we may transfer all of your information, including personal information, to the successor organization,” which means in effect that “accepting” a privacy policy today is a blank cheque for anything tomorrow.

The GDPR requires terms and conditions to be laid out in policies that a child can understand.

The current approach to legislation around children and the Internet is heavily weighted towards protection from seen threats. The threats we need to give more attention to, are those unseen.

By 2024 more than 50% of home Internet traffic will be used by appliances and devices, rather than just for communication and entertainment…The IoT raises huge questions on privacy and security, that have to be addressed by government, corporations and consumers. (WEF, 2017)

Our lives, as measured in our behaviours and opinions, purchases and likes, are connected by trillions of sensors. My parents may have described using the Internet as going online. Today’s online world no longer means our time is spent ‘on the computer’: being online happens all day, every day. Instead of going to a desk and booting up through a long phone cable, we have wireless computers in our pockets and in our homes, with functionality built in to enable us to do other things: make a phone call, make toast, and play. In a smart city surrounded by sensors under pavements and in buildings, with cameras and tracking everywhere we go, we are living ever more inside an overarching network of cloud computers that store our data. And from all that data, decisions are made: which adverts to show us, and on which network sites; what we get offered and what we do not; and our behaviours and our conscious decision-making may be nudged quite invisibly.

Data about us, whether uniquely identifiable or not, are all too often collected passively: an IP address, linked sign-ins that extract friends lists; and some services decide we either accept that or don’t use the thing at all. It’s part of the deal. We get the service; they get to trade our identity, like Top Trumps, behind the scenes. But we often don’t see it, and under GDPR there should be no such contractual requirement tied to consent. That is, “agree or don’t get the service” is not an acceptable option.

From May 25, 2018 there will be special “conditions applicable to child’s consent in relation to information society services,” in Data Protection law which are applicable to the collection of data.

As yet, we have not had a debate in the UK about what that means in concrete terms, and if we do not have one soon, we risk it becoming an afterthought that harms more than it helps protect children’s privacy, and therefore their digital identity.

I think of five things needed by policy shapers to tackle it:

  • An in-depth understanding of what ‘online’ and the Internet mean
  • A consistent understanding of what threat models and risks are connected to personal data, which today are underestimated
  • A grasp of why data privacy training is vital to safeguarding
  • Confronting the idea that user regulation as a stand-alone step will create a better online experience for users, when we know that perceived problems are created by providers or other site users
  • An end to siloed thinking that fails to be forward thinking or to join the dots of tactics across Departments into a cohesive, inclusive strategy

If the government’s new “major new drive on internet safety” involves the world’s largest technology companies in order to make the UK the “safest place in the world for young people to go online,” then we must also ensure that these strategies and papers join things up. Above all, technical knowledge of how the Internet works needs to join the dots of risks and benefits, in order to form a strategy that will actually make children safe, skilled, and able to see into their future.

When it comes to children, there is a further question over consent and parental spyware. Various walk-to-school apps, lauded by the former Secretary of State two years running, use spyware and can be used without a child’s consent. Guardian Gallery, which could be used to scan for nudity in photos on anyone’s phone that the ‘parent’ phone holder has access to install it on, can be made invisible on the ‘child’ phone. Imagine this in coercive relationships.

If these technologies and the online environment are not correctly assessed with regard to “online safety” threat models for all parts of our population, then they fail to address the risk for the most vulnerable, who most need that protection.

What will the GDPR really mean for online safety improvement? What will it define as online services for remuneration in the IoT? And who will be considered as children, “targeted at” or “offered to”?

An active decision is required in the UK. Will 16 remain the default age needed for consent to access Information Society Services, or will we adopt 13 which needs a legal change?

As banal as these questions sound, they need close attention and clarity between now and May 25, 2018 if the UK is to be GDPR-ready, and if providers of online services are to know whom they should treat as a child, and how they should handle Internet access, participation, and age [parental] verification.

How will the “controller” make “reasonable efforts to verify in such cases that consent is given or authorised by the holder of parental responsibility over the child”, while “taking into consideration available technology”?
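The Regulation does not prescribe a method, so what follows is only a hypothetical sketch of how a service might record those “reasonable efforts”; the verification methods and fields are assumptions, not drawn from the Bill or from any particular app.

```kotlin
import java.time.Instant

// Hypothetical ways a service might verify that a parent, not the child, gave consent.
enum class VerificationMethod { EMAIL_CONFIRMATION, PAYMENT_CARD_CHECK, SIGNED_FORM }

// A record of the "reasonable efforts" made, kept so the effort can be evidenced later.
data class ParentalConsent(
    val childAccountId: String,
    val method: VerificationMethod,
    val verifiedAt: Instant,
    val evidenceReference: String // e.g. a message or transaction id, not the evidence itself
)

// Processing a child's data on the basis of consent needs either a user over the
// applicable age, or a verified parental consent record.
fun mayProcessOnConsent(userAge: Int, consentAge: Int, consent: ParentalConsent?): Boolean =
    userAge >= consentAge || consent != null
```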

These are fundamental questions of what the Internet is and means to people today. And if the current government approach to security is anything to go by, safety will not mean what we think it will mean.

It will matter how these plans join up. Age verification was not being considered in UK law in relation to how we would derogate from GDPR even as late as October 2016, despite age verification requirements already being in the Digital Economy Bill. It shows a lack of joined-up digital thinking across our government, and needs addressed with urgency to get into the next Parliamentary round.

In recent draft legislation I am yet to see the UK government address Internet rights and safety for young people as anything other than a protection issue, treating the online space in the same way as offline, irl, focused on stranger danger, and sexting.

The UK Digital Strategy commits to the implementation of the General Data Protection Regulation by May 2018, and frames it as a business issue, labelling data as “a global commodity”; as such, its handling is framed solely as a requirement needed to ensure “that our businesses can continue to compete and communicate effectively around the world” and that adoption “will ensure a shared and higher standard of protection for consumers and their data.”

The Digital Economy Bill, despite being a perfect vehicle for this, failed to take on children’s rights, and in particular the requirements of GDPR for consent, at all. It was clear that if we were to do any future digital transactions we would need to level up to GDPR, not drop to the lowest common denominator between it and existing laws.

It was utterly ignored. So were children’s rights to have their own views heard in the consultation on the GDPR derogations for children, with little chance for involvement from young people’s organisations, and less than a month to respond.

We must now get this right in any new Digital Strategy and bill in the coming parliament.

Crouching Tiger Hidden Dragon: the making of an IoT trust mark

The Internet of Things (IoT) brings with it unique privacy and security concerns associated with smart technology and its use of data.

  • What would it mean for you to trust an Internet connected product or service and why would you not?
  • What has damaged consumer trust in products and services and why do sellers care?
  • What do we want to see different from today, and what is necessary to bring about that change?

These three pairs of questions implicitly underpinned the intense day of discussion at the London Zoo last Friday.

The following questions went unasked, and could have been voiced before we started, although they were probably assumed to be self-evident:

  1. Why do you want one at all [define the problem]?
  2. What needs to change and why [define the future model]?
  3. How do you deliver that and for whom [set out the solution]?

If a group does not agree on the need and drivers for change, there will be no consensus on what that should look like, what the gap is to achieve it, and even less on making it happen.

So who do you want the trustmark to be for, why will anyone want it, and what will need to change to deliver the aims? No one wants a trustmark per se. Perhaps you want the values or promises it embodies to demonstrate what you stand for, promote good practice, and generate consumer trust. To generate trust, you must be seen to be trustworthy. Will the principles deliver on those goals?

The Open IoT Certification Mark Principles, as a rough draft, were the outcome of the day, and are available online.

Here are my reflections, including what was missing on privacy, and the potential for it to be considered in future.

I’ve structured this first part, at around 1,000 words of lists and bullet points, assuming readers attended the event. The background comes after that, for anyone interested in reading a longer piece.

Many thanks upfront to fellow participants, to the organisers Alexandra D-S and Usman Haque, and to the colleague who hosted at the London Zoo. And Usman’s Mum. I hope there will be more constructive work to follow, and that there is space for civil society to play a supporting role and be a critical friend.


The mark didn’t aim to fix the IoT in a day, but to deliver something better for product and service users, through those IoT companies and providers who want to sign up. Here is what I took away.

I learned three things

  1. A sense of privacy is not homogenous, even within people who like and care about privacy in theoretical and applied ways. (I very much look forward to reading suggestions promised by fellow participants, even if enforced personal openness and ‘watching the watchers’ may mean ‘privacy is theft‘.)
  2. Awareness of current data protection regulations needs improved in the field. For example, Subject Access Requests already apply to all data controllers, public and private. Few have read the GDPR or the e-Privacy directive, despite their importance for security measures in personal devices, which is relevant for the IoT.
  3. I truly love working on this stuff, with people who care.

And it reaffirmed things I already knew

  1. Change is hard, no matter in what field.
  2. People working together towards a common goal is brilliant.
  3. Group collaboration can create some brilliantly sharp ideas. Group compromise can blunt them.
  4. Some men are particularly bad at talking over each other, never mind over the women in the conversation. Women notice more. (Note to self: When discussion is passionate, it’s hard to hold back in my own enthusiasm and not do the same myself. To fix.)
  5. The IoT context, and the risks within it, are not homogenous, but bring new risks and adversaries. The risks for manufacturers, consumers, and the rest of the public are different, and cannot be easily solved with a one-size-fits-all solution. But we can try.

Concerns I came away with

  1. If the citizen / customer / individual is to benefit from the IoT trustmark, they must be put first, ahead of companies’ wants.
  2. If the IoT group controls the design, the assessment of adherence, and the definition of success, how objective will it be?
  3. The group was not sufficiently diverse and, as a result, reflects too little on the risks and impact of the lack of diversity in design and effect, and on the implications of dataveillance.
  4. Critical minority thoughts, although welcomed, were stripped out of the crowdsourced first draft principles in compromise.
  5. More future thinking should be built in, to be robust over time.

IoT adversaries: via Twitter, unknown source

What was missing

There was too little discussion of privacy in perhaps the most important context of the IoT: interconnectivity and new adversaries. It’s not only about *your* thing, but the things it speaks to and interacts with, those of friends, passersby, the cityscape, and other individual and state actors interested in offense and defense. While we started to discuss it, we did not have the opportunity to discuss it in sufficient depth to get any thinking about applying solutions into the principles.

One of the greatest risks users face is the ubiquitous collection and storage of data about them that reveal detailed, interconnected patterns of behaviour and identity, without their seeing how those data are used by companies behind the scenes.

What we also missed discussing is not what we see as necessary today, but what we can foresee as necessary in the short-term future: brainstorming and crowdsourcing horizon scanning for market needs and changing stakeholder wants.

Future thinking

Here are the areas of future thinking that smart work on the IoT mark could consider.

  1. We are moving towards ever greater requirements to declare identity to use a product or service, to register and log in to use anything at all. How will that change trust in IoT devices?
  2. Single identity sign-on is becoming ever more imposed, and any attempt at multiple presentations of who I am, by choice and dependent on context, is therefore restricted. [Not all users want to use the same social media credentials for online shopping, with their child’s school app, and for their weekend entertainment.]
  3. Is this imposition what the public wants or what companies sell us as what customers want in the name of convenience? What I believe the public would really want is the choice to do neither.
  4. There is increasingly no private space or time, at places of work.
  5. Limitations on private space are encroaching in secret in all public city spaces. How will ‘handoffs’ affect privacy in the IoT?
  6. Public sector (connected) services are likely to need even more exacting standards than single home services.
  7. There is too little understanding of the social effects of this connectedness and knowledge created, embedded in design.
  8. What effects may there be on the perception of the IoT as a whole, if predictive data analysis and complex machine learning and AI hidden in black boxes becomes more commonplace and not every company wants to be or can be open-by-design?
  9. Ubiquitous collection and storage of data about users that reveal detailed, inter-connected patterns of behaviour and our identity needs greater commitments to disclosure. Where the hand-offs are to other devices, and whatever else is in the surrounding ecosystem, who has responsibility for communicating interaction through privacy notices, or defining legitimate interests, where the data joined up may be much more revealing than stand-alone data in each silo?
  10. Define with greater clarity the privacy threat models for different groups of stakeholders and address the principles for each.

What would better look like?

The draft privacy principles are a start, but they’re not yet as aspirational as I would have hoped. Of course the principles will only be adopted where possible and practical, and by those who choose to. But where is the differentiator from what everyone is already required to do, and better than the bare minimum? How will you sell this to consumers as new? How would you like your child to be treated?

The wording in these five bullet points is the first crowdsourced starting point.

  • The supplier of this product or service MUST be General Data Protection Regulation (GDPR) compliant.
  • This product SHALL NOT disclose data to third parties without my knowledge.
  • I SHOULD get full access to all the data collected about me.
  • I MAY operate this device without connecting to the internet.
  • My data SHALL NOT be used for profiling, marketing or advertising without transparent disclosure.

Yes, other points that came under security address some of the crossover between privacy and surveillance risks, but there is as yet little of substance that is aspirational enough to make the IoT mark a real differentiator on privacy. An opportunity remains.

It was that, and how young people perceive privacy, that I hoped to bring to the table. Because if manufacturers are serious about future success, they cannot ignore today’s children and how they feel. How you treat them today will shape future purchasers and their purchasing, and there is evidence you are getting it wrong.

The timing is good in that it also offers the opportunity to promote consistent understanding, and to embed the language of the GDPR and ePrivacy regulations into compatible language in the policy and practice of the #IoTmark principles.

User rights I would like to see considered

These are some of the points I would think privacy by design would mean. This would better articulate GDPR Article 25 to consumers.

Data sovereignty is a good concept, and I believe it should be considered for inclusion in the explanatory blurb before any agreed privacy principles.

  1. Goods should be ‘dumb by default’* until the smart functionality is switched on. [*As our group chair/scribe called it.] I would describe this as, “off is the default setting out-of-the-box”. (See the sketch after this list.)
  2. Privacy by design. Deniability by default. That is, not only after opt-out: a company should not access the personal or identifying purchase data of anyone who opts out of data collection about their product or service use during the set-up process.
  3. The right to opt out of data collection at a later date while continuing to use services.
  4. A right to object to the sale or transfer of behavioural data, including to third-party ad networks, and an absolute opt-in on any transfer of company ownership.
  5. A requirement that advertising should be targeted to content [user bought fridge A], not through jigsaw data held on users by the company [how the user uses fridges A, B, C and related behaviour].
  6. An absolute rejection of using children’s personal data to target advertising and marketing at children.
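
As a thought experiment, here is a minimal sketch of what points 1 to 4 could look like encoded as device defaults. It is not any vendor’s API, just an illustration, with made-up field names, of “off is the default setting out-of-the-box”.

```python
# A minimal sketch, not a real product API, of privacy-by-default device
# settings: everything is off out-of-the-box and core use survives opt-out.
from dataclasses import dataclass

@dataclass
class DeviceSettings:
    connectivity_enabled: bool = False      # point 1: dumb by default
    usage_data_collection: bool = False     # point 2: deniability by default
    share_with_third_parties: bool = False  # point 4: no behavioural data sale
    targeted_advertising: bool = False      # point 6: never for children

    def opt_out(self) -> None:
        """Point 3: opting out later must not break core use of the device."""
        self.usage_data_collection = False
        self.share_with_third_parties = False

settings = DeviceSettings()           # out of the box: everything off
settings.connectivity_enabled = True  # smart features only on explicit opt-in
```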

Background: Starting points before privacy

After a brief recap of the work from 5 years ago, we heard two talks.

The first was a presentation from Bosch. They used the insights from the IoT open definition from 5 years ago in their IoT thinking and embedded it in their brand book. The presenter suggested that in five years’ time, every fridge Bosch sells will be ‘smart’. The second was a fascinating presentation of both EU thinking and the intellectual nudge to think beyond the practical, to what kind of society we want to see using the IoT in future: hints of hardcore ethics and philosophy that made my brain fizz, from a speaker soon to retire from the European Commission.

The principles of open sourcing, manufacturing, and sustainable life cycle were debated in the afternoon with intense arguments and clearly knowledgeable participants, including those who were quiet.  But while the group had assigned security, and started work on it weeks before, there was no one pre-assigned to privacy. For me, that said something. If they are serious about those who earn the trustmark being better for customers than their competition, then there needs to be greater emphasis on thinking like their customers, and by their customers, and what use the mark will be to customers, not companies. Plan early public engagement and testing into the design of this IoT mark, and make that testing open and diverse.

To that end, I believe it needed to be articulated more strongly, that sustainable public trust is the primary goal of the principles.

  • Trust that my device will not become unusable or worthless through updates or lack of them.
  • Trust that my device is manufactured safely and ethically and with thought given to end of life and the environment.
  • Trust that my source components are of a high standard.
  • Trust in what data are gathered and how those data are used by the manufacturers.

Fundamental to ‘smart’ devices is their connection to the Internet, so the last point, for me, is key to successful public perception and to the mark actually making a difference beyond its PR value to companies. The value-add must be measured from the consumer’s point of view.

All the openness about design functions and practice improvements, without attempting to change privacy-infringing practices, may be wasted effort. Why? Because the perceived value of the mark will be proportionate to the risks it is seen to mitigate.

Why?

Because I assume that you know where your source components come from today. I was shocked to find out that not all do, and that ‘one degree removed’ is considered an improvement. Holy cow, I thought. What about regulatory requirements for product safety recalls? These differ of course for different product areas, but I was still surprised. Having worked in global Fast Moving Consumer Goods (FMCG) and the food industry, in semiconductors and optoelectronics, and in medical devices, it was self-evident to me that sourcing is rigorous. So that new requirement to know one degree removed was a suggested minimum. But it might shock consumers to know there is not usually more by default.

Customers also have reasonable expectations of not being screwed by a product update, or left with something that does not work because of its computing-based components. The public can take vocal, reputation-damaging action when they are let down.

In the last year alone, some of the more notable press stories include a manufacturer denying service, telling customers, “Your unit will be denied server connection,” after a critical product review. Customer support at Jawbone came in for criticism after reported failings. And even Apple has had problems in rolling out major updates.

While these are visible, the full extent of the overreach of company market and product surveillance into our whole lives, not just our living rooms, is yet to become understood by the general population. What will happen when it is?

The Internet of Things is exacerbating the power imbalance between consumers and companies, between government and citizens. As Wendy Grossman wrote recently, in one sense this may make privacy advocates’ jobs easier. It was always hard to explain why “privacy” mattered. Power, people understand.

That public discussion is long overdue. If open principles on IoT devices mean that the signed-up companies differentiate themselves by becoming market leaders in transparency, it will be a great thing. Companies need to offer full disclosure of data use in any privacy notices in clear, plain language  under GDPR anyway, but to go beyond that, and offer customers fair presentation of both risks and customer benefits, will not only be a point-of-sales benefit, but potentially improve digital literacy in customers too.

The morning discussion touched quite often on pay-for-privacy models. While product makers may see this as offering a good thing, I strove to bring discussion back to first principles.

Privacy is a human right. There can be no ethical model of discrimination based on any non-consensual invasion of privacy. Privacy is not something I should pay to have. You should not design products that reduce my rights. GDPR requires privacy-by-design and data protection by default. Now is that chance for IoT manufacturers to lead that shift towards higher standards.

We also need a new ethics thinking on acceptable fair use. It won’t change overnight, and perfect may be the enemy of better. But it’s not a battle that companies should think consumers have lost. Human rights and information security should not be on the battlefield at all in the war to win customer loyalty.  Now is the time to do better, to be better, demand better for us and in particular, for our children.

Privacy will be a genuine market differentiator

If manufacturers do not want to change their approach to exploiting customer data, they are unlikely to be seen to have changed.

Today, the feelings people in the US and Europe report in surveys are loss of empowerment, helplessness, and feeling used. That will shift to shock, resentment and, as any change curve will predict, anger.

A 2014 survey for the Royal Statistical Society by Ipsos MORI found that trust in institutions to use data is much lower than trust in them in general.

“The poll of just over two thousand British adults carried out by Ipsos MORI found that the media, internet services such as social media and search engines and telecommunication companies were the least trusted to use personal data appropriately.” [2014, Data trust deficit with lessons for policymakers, Royal Statistical Society]

In the British student population, one 2015 survey of university applicants in England found, of the 37,000 who responded, that the vast majority of UCAS applicants agree that sharing personal data can benefit them and support public-benefit research into university admissions, but they want to stay firmly in control. 90% of respondents said they wanted to be asked for their consent before their personal data is provided outside of the admissions service.

In 2010, a multi-method research study with young people aged 14-18, by the Royal Academy of Engineering, found that, “despite their openness to social networking, the Facebook generation have real concerns about the privacy of their medical records.” [2010, Privacy and Prejudice, RAE, Wellcome]

When people set their Facebook privacy settings to maximum, they believe they get privacy, yet understand little of what that means behind the scenes.

Are there tools designed by others, like Projects by IF’s licences, and ways this could be done, that you are not even considering yet?

What if you don’t do it?

“But do you feel like you have privacy today?” I was asked the question in the afternoon. How do people feel today, and does it matter? Companies exploiting consumer data and getting caught doing things the public don’t expect with their data have repeatedly damaged consumer trust. Data breaches and lack of information security have damaged consumer trust. Both cause reputational harm. Damage to reputation can harm customer loyalty. Damage to customer loyalty costs sales and profit, and upsets the Board.

Where overreach into our living rooms has raised awareness of invasive data collection, we are yet to see and understand the invasion of privacy into our thinking and nudged behaviour, into our perception of the world on social media, and the effects on decision making that data analytics is enabling as data shows companies ‘how we think’, granting companies access to human minds in the abstract, even before Facebook is there in the flesh.

Governments want to see how we think too. Is thought crime really that far away, given database labels of ‘domestic extremists’ for activists and anti-fracking campaigners, or the growing weight of policy makers’ attention given to PredPol, predictive analytics, the [formerly] Cabinet Office Nudge Unit, Google DeepMind et al?

Had the internet remained decentralized, the debate might be different.

I am starting to think of the IoT not as the Internet of Things, but as the Internet of Tracking. If some have their way, it will be the Internet of Thinking.

Considering our centralised Internet of Things model, our personal data from human interactions have become the network infrastructure, and the data flows are controlled by others. Our brains are the new data servers.

In the Internet of Tracking, people become the end nodes, not things.

And this is where future users will be so important. Do you understand, and plan for, the factors that will drive push-back and a crash of consumer confidence in your products, and do you take that seriously?

Companies have a choice: to act as Empires would, multinationals joining up even at low levels, disempowering individuals and sucking knowledge and power to the centre; or to act as nation states, ensuring citizens keep their sovereignty and control over a chosen sense of self.

Look at Brexit. Look at the GE2017. Tell me, what do you see is the direction of travel? Companies can fight it, but will not defeat how people feel. No matter how much they hope ‘nudge’ and predictive analytics might give them this power, the people can take back control.

What might this desire to take back control mean for future consumer models? The afternoon discussion, whilst intense, reached fairly simplistic concluding statements on privacy. We could have done with at least another hour.

Some in the group were frustrated “we seem to be going backwards” in current approaches to privacy and with GDPR.

But if the current legislation is reactive because companies have misbehaved, how will that be rectified for the future? The challenge in the IoT, both in terms of security and privacy, and in terms of public perception and reputation management, is that you are dependent on the behaviours of the network and those around you, good and bad. And bad practices by one can endanger others, in all senses.

If you believe that is going back to reclaim a growing sense of citizens’ rights, rather than accepting companies have the outsourced power to control the rights of others, that may be true.

A first-principles question was asked: is any element on privacy needed at all, if the text simply states that the supplier of this product or service must be General Data Protection Regulation (GDPR) compliant? The GDPR was years in the making, after all. Does it matter more in the IoT, and in what ways? The room tended, understandably, to talk about it from the company perspective: “we can’t”, “we won’t”, “that would stop us from XYZ”. Privacy would, however, be better addressed from the personal point of view.

What do people want?

From the company point of view, the language is different and holds clues. Openness, control, user choice and pay-for-privacy are not the same thing as the basic human right to be left alone. The afternoon discussion reminded me of the 2014 Washington Post article discussing Mark Zuckerberg’s theory of privacy and a Palo Alto meeting at Facebook:

“Not one person ever uttered the word “privacy” in their responses to us. Instead, they talked about “user control” or “user options” or promoted the “openness of the platform.” It was as if a memo had been circulated that morning instructing them never to use the word “privacy.””

In the afternoon working group on privacy, there was robust discussion of whether we had consensus on what privacy even means. Words like autonomy, control, and choice came up a lot. But it was only a beginning, and there is opportunity for better. An academic voice raised the concept of sovereignty, with which I agreed; but working out how and where to fit it into wording that is at once minimal and applied, under a scribe who appeared frustrated and wanted a completely different approach from what he heard across the group, meant it was left out.

This group does care about privacy. But I wasn’t convinced that the room cared in the way that the public as a whole does, rather than only as consumers and customers do. IoT products will affect potentially everyone, even those who do not buy your stuff. Everyone in that room agreed on one thing: the status quo is not good enough. What we did not agree on was why, and what the minimum change is that would make enough of a difference to matter.

I share the deep concerns of many child rights academics who see the harm that efforts to avoid the restrictions Article 8 of the GDPR will impose are likely to cause. It is likely to be damaging for children’s right to access information, to be discriminatory according to parents’ prejudices or socio-economic status, and to encourage ‘cheating’: requiring secrecy rather than privacy, in attempts to hide from or work around the stringent system.

In ‘The Class’, the research showed, “teachers and young people have a lot invested in keeping their spheres of interest and identity separate, under their autonomous control, and away from the scrutiny of each other.” [2016, Livingstone and Sefton-Green, p235]

Employers require staff to use devices with single sign-on, including web and activity tracking and monitoring software. Employee personal data and employment data are blended. Who owns that data, what rights will employees have to refuse what they see as excessive, and is it manageable given the power imbalance between employer and employee?

What is this doing in the classroom and boardroom for stress, anxiety, performance and system and social avoidance strategies?

A desire for convenience creates shortcuts, and these are often met using systems that require a sign-on through the platform giants: Google, Facebook, Twitter, et al. But we are kept in the dark about how using these platforms gives them, and the companies behind them, access to see how our online and offline activity is all joined up.

Any illusion of privacy we maintain, we discussed, is not choice or control if it is based on ignorance, and backlash against companies’ lack of effort to ensure disclosure and understanding is growing.

“The lack of accountability isn’t just troubling from a philosophical perspective. It’s dangerous in a political climate where people are pushing back at the very idea of globalization. There’s no industry more globalized than tech, and no industry more vulnerable to a potential backlash.”

[Maciej Ceglowski, Notes from an Emergency, talk at re.publica]

Why do users need you to know about them?

If your connected *thing* requires registration, why does it? How about a commitment to not forcing one of these registration methods, or indeed any at all? Social media research by Pew Research in 2016 found that 56% of smartphone owners ages 18 to 29 use auto-delete apps, more than four times the share among those 30-49 (13%) and six times the share among those 50 or older (9%).

Does that tell us anything about the demographics of data retention preferences?

In 2012, they suggested social media has changed the public discussion about managing “privacy” online. When asked, people say that privacy is important to them; when observed, people’s actions seem to suggest otherwise.

Does that tell us anything about how well companies communicate to consumers how their data is used and what rights they have?

There is also data strongly indicating that women act to protect their privacy more, but when it comes to basic privacy settings, users of all ages are equally likely to choose a private, semi-private or public setting for their profile. There are no significant variations across age groups in the US sample.

Now think about why that matters for the IoT. I wonder who makes the bulk of purchasing decisions about household white goods, for example, and whether Bosch has factored that into its smart-fridges-only decision?

Do you *need* to know who the user is? Can the smart user choose to stay anonymous at all?

The day’s morning challenge was to attend more than one interesting discussion happening at the same time. As invariably happens, the session notes and quotes are always out of context and can’t possibly capture everything, no matter how amazing the volunteer (with thanks!). But here are some of the discussion points from the session on the body and health devices, the home, and privacy. It also included a discussion on racial discrimination, algorithmic bias, and the reasons why care.data failed patients and failed as a programme. We had lengthy discussion on ethics and privacy: smart meters, objections to models of price discrimination, and why pay-for-privacy harms the poor by design.

Smart meter data can track the use of unique appliances inside a person’s home and intimate patterns of behaviour. Information about our consumption of power, what and when, every day, reveals personal details about everyday lives, our interactions with others, and personal habits.
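
To make that concrete, here is a minimal sketch with made-up readings; even a crude threshold over a household load profile picks out when high-power appliances run, and so when someone is home and awake. Real disaggregation is far more sophisticated, but the inference starts this simply.

```python
# A minimal sketch, with hypothetical readings, of why consumption data is
# so revealing: large resistive loads (kettle, oven) stand out clearly.
readings = [  # (hour, watts) -- an invented household load profile
    (6, 150), (7, 2300), (8, 300), (13, 180), (18, 2600), (19, 2500), (23, 90),
]

HIGH_LOAD_THRESHOLD = 2000

active_hours = [hour for hour, watts in readings if watts > HIGH_LOAD_THRESHOLD]
quiet_hours = [hour for hour, watts in readings if watts < 200]

print("High-load appliance use at hours:", active_hours)   # [7, 18, 19]
print("Near-baseline (asleep or out) at hours:", quiet_hours)  # [6, 13, 23]
```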

Why should company convenience come above the consumer’s? Why should government powers, trump personal rights?

Smart meter data is among the knowledge that government is exploiting, without consent, to discover a whole range of issues, including ensuring that “Troubled Families are identified”. Knowing how dodgy some of the school behaviour data that helps define who is “troubled” might be, there is a real question here: is this sound data science? How are errors identified? What about privacy? It’s not your policy, but if it is your product, what are your responsibilities?

If companies do not respect children’s rights, they’d better shape up to be GDPR compliant

Children and young people are more vulnerable to nudge, and developing their sense of self can involve forming and questioning their identity; these influences need oversight, or should be avoided.

In terms of the GDPR, providers are going to pay particular attention to Article 8 on ‘information society services’ and parental consent, Article 22 on profiling, the right to restriction of processing (Article 18), the right to erasure (Article 17 and Recital 65), and the right to portability (Article 20). However, they may need to simply reassess their exploitation of children and young people’s personal data and behavioural data. Article 57 requires special attention to be paid by regulators to activities specifically targeted at children, as ‘vulnerable natural persons’ of Recital 75.

Human rights law, regulations and conventions overlap in similar principles that demand respect for a child, and the child’s right to be let alone:

(a) The development of the child’s personality, talents and mental and physical abilities to their fullest potential;

(b) The development of respect for human rights and fundamental freedoms, and for the principles enshrined in the Charter of the United Nations.

A weakness of the GDPR is that it allows derogation on age, and will create inequality and inconsistency for children as a result. By comparison, Article one of the Convention on the Rights of the Child (CRC) defines who is to be considered a “child” for the purposes of the CRC, and states that: “For the purposes of the present Convention, a child means every human being below the age of eighteen years unless, under the law applicable to the child, majority is attained earlier.”

Article two of the CRC says that States Parties shall respect and ensure the rights set forth in the present Convention to each child within their jurisdiction without discrimination of any kind.

CRC Article 16 says that no child shall be subjected to arbitrary or unlawful interference with his or her privacy, family, home or correspondence, nor to unlawful attacks on his or her honour and reputation.

Article 8 CRC requires respect for the right of the child to preserve his or her identity […] without unlawful interference.

Article 12 CRC demands States Parties shall assure to the child who is capable of forming his or her own views the right to express those views freely in all matters affecting the child, the views of the child being given due weight in accordance with the age and maturity of the child.

That stands in potential conflict with GDPR Article 8. There is much in the GDPR on derogations, by country and for children, still to be set.

What next for our data in the wild

Hosting the event at the zoo offered added animals, and during lunch we got out on a tour, kindly hosted by a fellow participant. We learned how smart technology was embedded in some of the animal enclosures, and about work on temperature sensors with penguins, for example. I love tigers, so it was a bonus that we got to see such beautiful and powerful animals up close, if a little sad for their circumstances and, as a general principle, at seeing big animals caged as opposed to in the wild.

Freedom is a common desire in all animals. Physical, mental, and freedom from control by others.

I think any manufacturer that underestimates this element of human instinct is ignoring the ‘hidden dragon’ that some think is a myth. Privacy is not dead. It is not extinct, or even, unlike the beautiful tigers, endangered. Privacy in the IoT, at its most basic, is the right to control our purchasing power: the ultimate people power waiting to be sprung, truly a crouching tiger. People object to being used, and if companies continue to do so without full disclosure, they do so at their peril. Companies seem all-powerful in the battle for privacy, but they are not. Even insurers and data brokers must be fair and lawful, and it is for regulators to ensure that practices meet the law.

When consumers realise our data, our purchasing power has the potential to control, not be controlled, that balance will shift.

“Paper tigers” are superficially powerful but are prone to overextension that leads to sudden collapse. If that happens to the superficially powerful companies that choose unethical and bad practice, as a result of better data privacy and data ethics, then bring it on.

I hope that the IoT mark can champion best practices and make a difference to benefit everyone.

While the companies involved in its design may be interested in consumers, I believe it could be better for everyone, done well. The great thing about the efforts into an #IoTmark is that it is a collective effort to improve the whole ecosystem.

I hope more companies will realise their responsibility for privacy rights and their ethical responsibility in the world to all people, including those interested in just being, those who want to be let alone, and not just those buying.

“If a cat is called a tiger it can easily be dismissed as a paper tiger; the question remains however why one was so scared of the cat in the first place.”

The Resistance to Theory (1982), Paul de Man

Further reading: Networks of Control – A Report on Corporate Surveillance, Digital Tracking, Big Data & Privacy by Wolfie Christl and Sarah Spiekermann

Are UK teacher and pupil profile data stolen, lost and exposed?

Update received from Edmodo, VP Marketing & Adoption, June 1:


While everyone is focused on #WannaCry ransomware, it appears that a global edTech company has had a potential global data breach that few are yet talking about.

Edmodo is still claiming on its website it is, “The safest and easiest way for teachers to connect and collaborate with students, parents, and each other.” But is it true, and who verifies that safe is safe?

Edmodo data from 78 million users for sale

Matt Burgess wrote in VICE: “Education website Edmodo promises a way for “educators to connect and collaborate with students, parents, and each other”. However, 78 million of its customers have had their user account details stolen. Vice’s Motherboard reports that usernames, email addresses, and hashed passwords were taken from the service and have been put up for sale on the dark web for around $1,000 (£700).

“Data breach notification website LeakBase also has a copy of the data and provided it to Motherboard. According to LeakBase around 40 million of the accounts have email addresses connected to them. The company said it is aware of a “potential security incident” and is investigating.”

The Motherboard article by Joseph Cox, says it happened last month. What has been done since? Why is there no public information or notification about the breach on the company website?

Joseph doesn’t think profile photos are at risk, unless someone can log into an account. He was given usernames, email addresses, and hashed passwords, and as far as he knows, that was all that was stolen.

“The passwords have apparently been hashed with the robust bcrypt algorithm, and a string of random characters known as a salt, meaning hackers will have a much harder time obtaining user’s actual login credentials. Not all of the records include a user email address.”
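
For readers unfamiliar with the terms in that quote, here is a minimal sketch, using the widely used Python bcrypt library, of what salted bcrypt hashing means in practice. It is not Edmodo’s code; it only illustrates why such hashes are costly, though not impossible, to crack.

```python
# A minimal sketch of "hashed with bcrypt and a salt". This is not any
# company's code, just an illustration of why plaintext passwords are not
# directly exposed in such a breach.
import bcrypt

password = b"correct horse battery staple"

# gensalt() embeds a random salt and a work factor in the hash itself, so
# identical passwords produce different hashes and brute force is slow.
hashed = bcrypt.hashpw(password, bcrypt.gensalt())
print(hashed)  # e.g. b'$2b$12$...' -- salt and cost are part of the string

# Verification re-derives the hash from the stored salt; the plaintext
# password is never stored by the service.
assert bcrypt.checkpw(password, hashed)
```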

Going further back, it looks like Edmodo’s weaknesses had already been identified 4 years ago. Did anything change?

So far I’ve been unable to find out from Edmodo directly. There is no telephone technical support. There is no human that can be reached dialling the headquarters telephone number.

Where’s the parental update?

No one has yet responded to say whether UK pupils and teachers’ data was among that reportedly stolen. (Update June 1, the company did respond with confirmation of UK users involved.)

While there is no mention of the other data the site holds being in the breach, details are as yet sketchy, and Edmodo holds children’s data. Where is the company assurance what was and was not stolen?

As it’s a platform log-on, I would want to know when parents will be told exactly what was compromised and how details have been exposed. I would want clarification of whether or not this could be a weakness for further breaches of other integrated systems.

Are edTech and IoT toys fit for UK children?

In 2016, more than 727,000 UK children had their information, including images, compromised following a cyber attack on VTech. These toys are sold as educational, even if targeted at an early age.

In spring 2017, CloudPets, the maker of “smart toy” Internet of Things teddy bears, left more than two million voice recordings from children online without any security protections, exposing children’s personal details.

As yet, UK ministers have declined our civil society recommendations to act and take steps on the public sector security of national pupil data, or on the private security of Internet-connected toys and things, the latter in line with Germany, for example.

It is right that the approach is considered. The UK government must take these risks seriously in an evidence-based and informed way, and act, not with knee-jerk reactions. But it must act.

Two months after Germany banned the Cayla doll, we still had them for sale here.

Parents are often accused of being uninformed, but we must be able to expect that our products pass a minimum standard of tech and data security testing as part of pre-sale consumer safety testing.

Parents have a responsibility to educate themselves to a reasonable level of user knowledge. But the opportunities are limited when there’s no transparency. Much of the use of a child’s personal data, and of the system data’s interaction with our online behaviour, in toys, things, and even plain websites, remains hidden from most of us.

So too, the Edmodo privacy policy contained no mention of profiling or behavioural web tracking, for example. Only when this savvy parent spotted it happening did the company, it appears, respond properly and fix it. Given strict COPPA rules it is perhaps unsurprising, though it should not have happened at all.

How will the uses of these smart toys, and edTech apps be made safe, and is the government going to update regulations to do so?

Are public sector policy, practice and people, fit for managing UK children’s data privacy needs?

While these private edTech companies used directly in schools can expose children to risk, so too does public data collected in schools, being handed out to commercial companies, by government departments. Our UK government does not model good practice.

Two years on, I’m still working on asking for fixes in basic national pupil data handling. For making data policy safe, this is far too slow.

The Department for Education is still cagey about transparency: it does not tell schools that it gives away national pupil data, including to commercial companies, without pupil or parental knowledge, and it hides the Home Office use, now monthly, by not publishing it on a regular basis.

These uses of data are not safe, and expose children to potential greater theft, loss and selling of their personal data. It must change.

Whether the government hands out children’s data to commercial companies at national level and doesn’t tell schools, or staff in schools do it directly through in-class app registrations, it is often done without consent, and without any privacy impact assessment or due diligence up front. Some send data to the US or Australia. Schools still tell parents these are ‘required’, without any choice. But have they ensured that an equal and adequate level of data protection is offered to the personal data they extract from the school information management system (SIMS)?

School staff and teachers manage, collect and administer personal data daily, including signing up children as users of web accounts with technology providers, very often telling parents after the event, and with no choice. How can they do that and not put others at risk, if they are untrained in the basics of good data handling practices?

In our UK schools, just like the health system, the basics are still not being fixed, and good practices are not on offer to staff. Teachers in the UK get no data privacy or data protection training in their initial teacher training. That’s according to what I’ve been told so far by teacher trainers, CPD leaders, union members and teachers themselves.

Would you train fire fighters without ever letting them have hose practice?

Infrastructure is known to be exposed and under-invested, but it’s not all about the tech. Security investment must also be in people.

Systemic failures revealed this week by WannaCry are not limited to the NHS. This from George Danezis could be, with a few tweaks, copy-pasted into education. So the question is not if, but when, the same happens in education, unless it is fixed.

“…from poor security standards in health informatics industries; poor procurement processes in health organizations; lack of liability on any of the software vendors (incl. Microsoft) for providing insecure software or devices; cost-cutting from the government on NHS cyber security with no constructive alternatives to mitigate risks; and finally the UK/US cyber-offense doctrine that inevitably leads to proliferation of cyber-weapons and their use on civilian critical infrastructures.” [Original post]

The power behind today’s AI in public services

Thinking about whether education in England is preparing us for the jobs of the future, means also thinking about how technology will influence it.

Time and again, thinking and discussion about these topics is siloed. At the Turing Institute, the Royal Society, the ADRN and EPSRC, in government departments, and within education practitioner and public circles, we are all having similar discussions about data and ethics, but with little ownership and no goals for future outcomes. If government doesn’t get it, or have time for it, or policy lacks ethics by design, is it in the public interest for private companies, Google et al., to offer a fait accompli?

There is lots of talking about Machine Learning (ML), Artificial Intelligence (AI) and ethics. But what is being done to ensure that real values — respect for rights, human dignity, and autonomy — are built into practice in the public services delivery?

In most recent data policy it is entirely absent. The Digital Economy Act s.33 risks enabling, through the removal of inter- and intra-departmental data protections, an unprecedented expansion of public data transfers, with “untrammelled powers” and without the codes of practice promised over a year ago. That has fallout for the trustworthiness of the legislative process, and for data practices across public services.

Predictive analytics is growing, but it is poorly understood by the public and by the public sector.

There is already dependence on computers in aspects of public sector work, and its interaction with people in sensitive situations demands better knowledge of how systems operate and can be wrong. Debt recovery and social care, to take two known examples.

Risk-averse staff appear to choose not to question the outcome of ‘algorithmic decision making’, or do not have the ability to do so. There is reportedly no analysis training for practitioners to understand the basis or bias of conclusions. The risk is that instead of making us more informed, decision-making by machine makes us humans less clever.

What does it do to professionals, if they feel therefore less empowered? When is that a good thing if it overrides discriminatory human decisions? How can we tell the difference and balance these risks if we don’t understand or feel able to challenge them?

In education, what is it doing to children whose attainment is profiled, predicted, and acted on, to target extra or less focus from school staff who have no ML training, and without the informed consent of pupils or parents?
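
To show the kind of opacity this means in practice, here is a minimal sketch with invented weights standing in for a vendor’s black-box model. It is not any real product, only an illustration of how a proxy feature can quietly shift a pupil’s ‘predicted attainment’ in a way staff cannot interrogate.

```python
# A minimal sketch, with invented coefficients, of an opaque attainment
# score. The gap between the two pupils below is produced entirely by a
# postcode proxy, not by anything they did.
def predicted_score(prior_attainment: float, absence_rate: float,
                    postcode_deprivation_index: float) -> float:
    return (0.7 * prior_attainment
            - 0.2 * absence_rate
            - 0.1 * postcode_deprivation_index)

# Two pupils with identical prior attainment and attendance:
print(predicted_score(70, 5, 1))  # low-deprivation postcode  -> 47.9
print(predicted_score(70, 5, 9))  # high-deprivation postcode -> 47.1
# Staff "targeting extra or less focus" may never see why the scores differ.
```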

If authorities use data in ways the public do not expect, such as to identify homes of multiple occupancy without informed consent, they will fail to deliver future uses for good. The ‘public interest’, ‘user need’ and ethics can come into conflict, according to your point of view. The public, data protection law, and ethics all object to harms from the use of data. This type of application has the potential to be mind-blowingly invasive and to reveal all sorts of other findings.

Widely informed thinking must be made into meaningful public policy for the greatest public good

Our politicians are caught up in the General Election and buried in Brexit.

Meanwhile, the commercial companies claiming first rights over AI to capitalise on existing commercial advantage could strip public assets, use up our personal data and public trust, and leave the public with little public good. We are already used by global data players, and by machine-learning companies, without our knowledge or consent. That knowledge can be used to profit business models that pay little tax into the public purse.

There are valid macro-economic arguments about whether private spend and investment are preferable to a state’s ability to do the same. But these companies make more than enough to do it all. Does not paying just amounts of tax signal a failure of commitment to the wider community, and is it a red flag for a company’s commitment to the public good?

What that public good should look like, depends on who is invited to participate in the room, and not to tick boxes, but to think and to build.

The Royal Society’s Report on AI and Machine Learning published on April 25, showed a working group of 14 participants, including two Google DeepMind representatives, one from Amazon, private equity investors, and academics from cognitive science and genetics backgrounds.

Our #machinelearning working group chair, professor Peter Donnelly FRS, on today’s major #RSMachinelearning report https://t.co/PBYjzlESmB pic.twitter.com/RM9osnvOMX

— The Royal Society (@royalsociety) April 25, 2017

If we are going to form objective policies, the inputs that form the basis for them must be informed, but must also be well balanced, and be seen to be balanced. Not as an add-on, but in the same room.

As Natasha Lomas in TechCrunch noted, “Public opinion is understandably a big preoccupation for the report authors — unsurprisingly so, given that a technology that potentially erodes people’s privacy and impacts their jobs risks being drastically unpopular.”

“The report also calls on researchers to consider the wider impact of their work and to receive training in recognising the ethical implications.”

What are those ethical implications? Who decides which matter most? How do we eliminate recognised discriminatory bias? What should data be used for and AI be working on at all? Who is it going to benefit? What questions are we not asking? Why are young people left out of this debate?

Who decides what the public should or should not know?

AI and ML depend on data. Data is often talked about as a panacea for the problems of better working together. But data alone does not make people better informed, in the same way that people fail if they don’t feel it is their job to pick up the fax. A fundamental building block of our future public and private prosperity is understanding data and how we, and the AI, interact. What is data telling us, how do we interpret it, and how do we know it is accurate?

How and where will we start to educate young people about data and ML, if not about their own data and its use by government and commercial companies?

The whole of Chapter 5 in the report is a very good starting point for policy makers who have not yet engaged in the area. Privacy, while summed up too briefly in the conclusions, is scattered throughout.

Blind spots remain, however.

  • Over-willingness to accommodate existing big private players, as their expertise leads design and development, and a desire to ‘re-write regulation’.
  • Slowness to react to needed regulation in the public sector (caught up in Brexit) while commercial drivers and technology change forge ahead.
  • ‘How do we develop technology that benefits everyone’ must not only think UK but global South, especially regarding the bias in how AI is being taught, and the broad socio-economic barriers to application.
  • Predictive analytics plus professional application equals an unwillingness to question the computer’s result. In children’s social care this is already contributing to a damaging upturn in family court (s31) proceedings.
  • Data and technology knowledge and ethics training must be embedded across the public sector, not only for postgraduate students in machine learning.
  • Harms being done to young people today and potential for intense future exploitation, are being ignored by policy makers and some academics. Safeguarding is often only about blocking in case of liability to the provider, stopping children seeing content, or preventing physical exploitation. It ignores exploitation by online platform firms, and app providers and games creators, of a child’s synthesised online life and use. Laws and government departments’ own practices can be deeply flawed.
  • Young people are left out of discussions which, after all, are about their future. [They might have some of the best ideas, we miss at our peril.]

There is no time to waste

Children and young people have the most to lose while their education, skills, jobs market, economy, culture, care, and society go through a series of gradual but seismic shifts in purpose, culture, and acceptance before finding new norms post-Brexit. They will also gain the most if the foundations are right. One of these must be getting age verification right in the GDPR, and not allowing it to enable a massive data grab of child-parent privacy.

Although the RS Report considers young people in the context of a future workforce who need skills training, they are otherwise left out of this report.

“The next curriculum reform needs to consider the educational needs of young people through the lens of the implications of machine learning and associated technologies for the future of work.”

Yes it does, but it must give young people, and the implications of ML for their future, broader consideration than the classroom or workplace alone.

Facebook has targeted vulnerable young people, it is alleged, to facilitate predatory advertising practices. Some argue that emotive computing or MOOCs belong in the classroom. Who decides?

We are not yet talking about the effects of teaching technology to learn, and its effect on public services and interactions with the public. Questions that Sam Smith asked in Shadow of the smart machine: Will machine learning end?

At the end of this Information Age we are at a point where machine learning, AI and biotechnology are potentially life-enhancing, or could have catastrophic effects, if indeed “AI will cause people more pain than happiness”, as described by Alibaba’s founder Jack Ma.

The conflict between commercial profit and public good, what commercial companies say they will do and actually do, and fears and assurances over predicted outcomes is personified in the debate between Demis Hassabis, co-founder of DeepMind Technologies, (a London-based machine learning AI startup), and Elon Musk, discussing the perils of artificial intelligence.

Vanity Fair reported that Elon Musk began warning about the possibility of A.I. running amok three years ago. “It probably hadn’t eased his mind when one of Hassabis’s partners in DeepMind, Shane Legg, stated flatly, ‘I think human extinction will probably occur, and technology will likely play a part in this.’”

Musk was of the opinion that A.I. was probably humanity’s “biggest existential threat.”

We are not yet joining up multidisciplinary and cross-sector discussions of threats and opportunities

Jobs, shifts in the skill sets education must provide, how we think, interact, value each other, and accept or reject ownership and power models; and later, threats from the technology itself. Conversely, we are not yet talking about the opportunities that these seismic shifts offer in real terms, or how and why to accept, reject or regulate them.

Where private companies are taking over personal data given in trust to public services, it is reckless for the future of public interest research to assume there is no public objection. How can we object, if not asked? How can children make an informed choice? How will public interest be assured to be put ahead of private profit? If it is intended on balance to be all about altruism from these global giants, then they must be open and accountable.

Private companies are shaping how and where we find machine learning and AI gathering data about our behaviours in our homes and public spaces.

SPACE10, an innovation hub for IKEA, is currently running a survey on how the public perceives AI and “wants their AI to look, be, and act”, with an eye on building AI into their products for us to bring, flat-packed, into our houses.

As the surveillance technology built into the Things in our homes attached to the Internet becomes more integral to daily life, authorities are now using it to gather evidence in investigations; from mobile phones, laptops, social media, smart speakers, and games. The IoT so far seems less about the benefits of collaboration, and all about the behavioural data it collects and uses to target us to sell us more things. Our behaviours tell much more than how we act. They show how we think inside the private space of our minds.

Do you want Google to know how you think, and to have control over that? The companies of the world that have access to massive amounts of data are using that data to teach AI how to ‘think’. What is AI learning? And how much should the State see or know about how you think, or try to predict it?

Who cares, wins?

It is not overstated to say that society, and the future public good of public services, depend on getting these co-dependencies right. As I wrote in the time of care.data, the economic value of data, personal rights and the public interest are not opposed to one another, but have synergies and co-dependency. One player getting it wrong can create harm for all. Government must start to care about this, beyond the side effect of saving political embarrassment.

Without joining up all aspects, we cannot limit harms and make the most of benefits. There is nuance, and there are unknowns. There is opaque decision making and secrecy, packaged in the wording of commercial sensitivity, and behind it people who can be brilliant but who are, at the end of the day, also human, with all our strengths and weaknesses.

And we can get this right, if data practices get better, with joined up efforts.

Our future society, like our present one, is based on webs of trust, on our social networks on- and offline, that enable our business, our education, our culture, and our interactions. Children must trust they will not be used by systems. We must build trustworthy systems that enable future digital integrity.

The immediate harm that comes from blind trust in AI companies is not their AI, but the hidden powers that commercial companies have to nudge public and policy maker behaviours and acceptance, towards private gain. Their ability and opportunity to influence regulation and future direction outweighs most others. But lack of transparency about their profit motives is concerning. Carefully staged public engagement is not real engagement but a fig leaf to show ‘the public say yes’.

The unwillingness by Google DeepMind, when asked at their public engagement event, to discuss their past use of NHS patient data, or the profit model plan or their terms of NHS deals with London hospitals, should be a warning that these questions need answers and accountability urgently.

As TechCrunch suggested after the event, this is all “pretty standard playbook for tech firms seeking to workaround business barriers created by regulation.” Calls for more data might mean an ever greater power shift.

Companies that have already extracted and benefited from personal data in the public sector, have already made private profit. They and their machines have learned for their future business product development.

A transparent accountable future for all players, private and public, using public data is a necessary requirement for both the public good and private profit. It is not acceptable for departments to hide their practices, just as it is unacceptable if firms refuse algorithmic transparency.

“Rebooting antitrust for the information age will not be easy. It will entail new risks: more data sharing, for instance, could threaten privacy. But if governments don’t want a data economy dominated by a few giants, they will need to act soon.” [The Economist, May 6]

If the State creates a single data source of truth, or giant private tech firms think they can side-step regulation, and they get it wrong, their practices screw up public trust. That harms public interest research, and with it our future public good.

But will they care?

If we care, then across public and private sectors, we must cherish shared values and better collaboration. Embed ethical human values into development, design and policy. Ensure transparency of where, how, who and why my personal data has gone.

We must ensure that as the future becomes “smarter”, we educate ourselves and our children to stay intelligent about how we use data and AI.

We must start today, knowing how we are used by both machines, and man.


First published on Medium for a change.

Is education preparing us for the jobs of the future?

The Fabian Women’s Glass Ceiling not Glass Slipper event asked last week:

Is Education preparing us for the jobs of the future?

The panel talked about changing social and political realities. We considered the effects on employment. We began discussing how those changes should feed into education policy and practice today. It is a discussion that should be had by the public. So far, almost a year after the Referendum, the UK government is yet to say what post-Brexit Britain might look like. Without a vision, any mandate for the unknown, if voted for on June 8th, will be meaningless.

What was talked about and what should be a public debate:

  • What jobs will be needed in the future?
  • Post Brexit, what skills will we need in the UK?
  • How can the education system adapt and improve to help future generations develop skills in this ever changing landscape?
  • How do we ensure women [and anyone else] are not left behind?

Brexit is the biggest change management project I may never see.

As the State continues making and remaking laws, reforming education, and starts exiting the EU, all in parallel, technology and commercial companies won’t wait to see what the post-Brexit Britain will look like. In our state’s absence of vision, companies are shaping policy and ‘re-writing’ their own version of regulations. What implications could this have for long term public good?

What will be needed in the UK future?

A couple of sentences from Alan Penn have stuck with me all week. Loosely quoted: we’re seeing cultural identity shift across the country, due to the change in our available employment types. Traditional industries once ran in a family, with a strong sense of heritage. New jobs don’t offer that. It leaves a gap we cannot fill with “I’m a call centre worker”. And this change is unevenly felt.

There is no tangible public plan in the Digital Strategy for dealing with that change in the employment market over the coming 10 to 20 years, and what it means tied into education. It matters when many believe, as do these authors in Scientific American, that “around half of today’s jobs will be threatened by algorithms. 40% of today’s top 500 companies will have vanished in a decade.”

So what needs thought?

  • Analysis of what that regional jobs market might look like should be a public part of the Brexit debate and these elections →
    We need to see those goals to ensure policy can be planned for education, and to benchmark its progress towards achieving its aims.
  • Brexit and technology will disproportionately affect different segments of the jobs market, and therefore the population, by age, by region, and by socio-economic factors →
    Education policy must therefore address the skills needed for employment in that new environment, looking to the future, so that we make the most of opportunities and mitigate the harms.
  • Brexit and technology will disproportionately affect communities → What will be done to prevent social collapse in regions hardest hit by change?

Where are we starting from today?

Before we can understand the impact of change, we need to understand what the present looks like. I cannot find a map of what the English education system looks like. No one I ask seems to have one or have a firm grasp across the sector, of how and where all the parts of England’s education system fit together, or their oversight and accountability. Everyone has an idea, but no one can join the dots. If you have, please let me know.

Nothing is as constant in education as change, in laws, in policy and in their effects in practice, so I shall start there.

1. Legislation

In retrospect it was a fatal flaw, missed in post-Referendum battles over who wrote what on the side of a bus, that no one did an assessment of education [and indeed other] ‘legislation in progress’. There should have been recommendations on scrapping inappropriate government bills in their entirety or in parts. New laws are now being enacted, rushed through in wash-up, that are geared to our old status quo, and we risk basing policy only on what we know from the past, because on that, we have data.

In the timeframe that Brexit will become tangible, we will feel the effects of the greatest shake up of Higher Education in 25 years. Parts of the Higher Education and Research Act, and Technical and Further Education Act are unsuited to the new order post-Brexit.

What it will do: The new HE law encourages competition between institutions, and the TFE Act centres in large part on how to manage insolvency.

What it should do: Policy needs to promote open, collaborative networks if within a now reduced research and academic circle, scholarly communities are to thrive.

If nothing changes, we will see harm to these teaching institutions and the people in them. The stance on counting foreign students in total migrant numbers, to take one example, is singularly pointless.

Even the Royal Society report on Machine Learning noted the UK approach to immigration as a potential harm to prosperity.

Local authorities cannot legally build schools under their authority today, even where needed. They must be free schools, a model that has seen high turnover and closures, and is rather unstable.

Legislation has recently meant not only restructuring, but repurposing of what education [authorities] are expected to offer.

A new Statutory Instrument — The School and Early Years Finance (England) Regulations 2017 — makes music, arts and playgrounds items ‘that may be removed from maintained schools’ budget shares’.

How will this withdrawal of provision affect skills starting from the Early Years throughout young people’s education?

2. Policy

Education policy, if it continues along the grammar school path, will divide communities into the ‘passed’ and the ‘unselected’. A side effect of selective schooling — a feature or a bug depending on your point of view — is socio-economic engineering. It builds class walls in the classroom, while others, like the Fabian Women, say we should be breaking through glass ceilings. Current policy in a wider sense is creating an environment that is hostile to human integration. It creates division across the entire education system for children aged 2–19.

The curriculum is narrowing, according to staff I’ve spoken to recently, as a result of measurement focus on Progress 8, and due to funding constraints.

What effect will this have on analysis of knowledge, discernment, how to assess when computers have made a mistake or supplied misinformation, and how to apply wisdom? Skills that today still distinguish human from machine learning.

What narrowing the curriculum does: students have fewer opportunities to discover their skill set, limiting their social and cultural development, and their development as rounded, happy human beings.

What we could do: Promote a long-term love of learning in and outside school and in communities. Reinvest in the arts, music and play, which support mental and physical health and create a culture in which people like to live as well as work. Funding for libraries and community centres must be re-prioritised, ensuring inclusion and provision outside school for all abilities.

Austerity builds barriers to opportunity and skills. Children who cannot afford them are excluded from extra-curricular classes. We already divide our children through private and state education: into those who have better facilities and funding to enjoy and explore a fully rounded education, and those whose funding will not stretch much beyond the bare curriculum. For SEN children, that has already been stripped back further.

All the accepted current evidence says selective schooling limits social mobility and limits choice. Talk of an evidence-based profession is hard to square with a passion for grammars, an against-the-evidence policy.

Existing barriers are likely to become entrenched in twenty years. What does it do to society, if we are divided in our communities by money, or gender, or race, and feel disempowered as individuals? Are we less responsible for our actions if there’s nothing we can do about it? If others have more money, more power than us, others have more control over our lives, and “no matter what we do, we won’t pass the 11 plus”?

Without joined-up scrutiny of these policy effects across the board, we risk embedding these barriers into future planning. Today’s data are used to train “how the system should work”. If current data are what applicants in 5 years will base future expectations on, will their decisions be objective and will in-built bias be transparent?

3. Sociological effects of legislation

It’s not only institutions that will lose autonomy in the Higher Education and Research Act.

At present, the risk to the autonomy of science and research is theoretical — but the implications for academic freedom are troubling. [Nature 538, 5 (06 October 2016)]

The Secretary of State for Education now also has new Powers of Information about individual applicants and students. Combined with the Digital Economy Act, the law can ride roughshod over students’ autonomy and consent choices. Today, for example, they can opt out of UCAS automatically sharing their personal data with the Student Loans Company. Thanks to these new powers, that choice is gone.

The Act further includes the intention to make institutions release more data about course intake and results under the banner of ‘transparency’. Part of the aim is indisputably positive, to expose discrimination and inequality of all kinds. It also aims to make the £ cost-benefit return “clearer” to applicants — by showing what exams you need to get in, what you come out with, and then by joining all that personal data to the longitudinal school record, tax and welfare data, you see what the return is on your student loan. The government can also then see what your education ‘cost or benefit’ the Treasury. It is all of course much more nuanced than that, but that’s the very simplified gist.

This ‘destinations data’ is going to be a dataset we hear ever more about and has the potential to influence education policy from age 2.

Aside from the issue of personal data disclosiveness when published by institutions — we already know of individuals who could spot themselves in a current published dataset — I worry that this direction using data for ‘advice’ is unhelpful. What if we’re looking at the wrong data upon which to base future decisions? The past doesn’t take account of Brexit or enable applicants to do so.

Researchers [and applicants, the year before they apply or start a course] will be looking at what *was*: predicted and achieved qualifying grades, the make-up of the class, course results, first-job earnings. What was, for other people, is at least five years old by the time it is looked at. Five years is a long time out of date.

4. Change

In the last five years, teachers and schools have reached saturation point in their capacity to handle change. Reform has been drastic: in structures, in curriculum, and ongoing in funding. There is no ongoing teacher training, and the lack of CPD take-up is exacerbated by underfunding.

Teachers are fed up with change. They want stability. But contrary to the current “strong and stable” message, the reality is that we will get anything but stability ahead, and must instead manage change if we are to thrive. Politically, we will see a backlash when ‘stable’ proves undeliverable.

But Teaching has not seen ‘stable’ for some time. Teachers are asking for fewer children, and more cash in the classroom. Unions talk of a focus on learning, not testing, to drive school standards. If the planned restructuring of funding happens, how will it affect staff retention?

We know schools are already reducing staff. How will this affect employment, adult and children’s skill development, their ambition, and society and economy?

Where could legislation and policy look ahead?

  • What are the big Brexit targets and barriers and when do we expect them?
  • How is the fallout from underfunding and the reduction of teaching staff expected to affect skills provision?
  • State education policy is increasingly hands-off. What is the incentive for local schools or MATs to look much beyond the short term?
  • How do local decisions ensure education is preparing their community, but also considering society, health and (elderly) social care, Post-Brexit readiness and women’s economic empowerment?
  • How does our ageing population shift in the same time frame?

How can the education system adapt?

We need to talk more about the other changes happening in the system in parallel with Brexit, to join the dots, and about the potential positive and harmful effects of technology.

Gender plays a role here too, as does mitigating discrimination of all kinds and confirmation bias, and, even in the tech itself, the question of whether AI, for example, is going to be better than us at decision-making if we teach AI to be biased.

Dr Lisa Maria Mueller talked about the effects and influence of age, setting and language factors on what skills we will need, and on employment. While there are certain skill sets that computers are and will be better at than people, she argued society also needs to continue to cultivate human skills in cultural sensitivities, empathy, and understanding. We all nodded. But how?

To develop all these human skills is going to take investment. Investment in the humans that teach us. Bennie Kara, Assistant Headteacher in London, spoke about school cuts and how they will affect children’s futures.

The future of England’s education must be geared to a world in which knowledge and facts are ubiquitous, and more readily available online than at any other time. And access to learning must be inclusive. That means including SEN and low-income families, the unskilled, everyone. As we become more internationally remote, we must put safeguards in place if we are to support thriving communities.

Policy and legislation must also preserve and respect human dignity in a changing work environment, and review not only what work is on offer, but *how*; the kinds of contracts and jobs available.

Where might practice need to adapt now?

  • Re-consider curriculum content with its focus on facts. Will success risk being measured based on out of date knowledge, and a measure of recall? Are these skills in growing or dwindling need?
  • Knowledge focus must place value on analysis, discernment, and application of facts that computers will learn and recall better than us. Much of that learning happens outside school.
  • Opportunities have been cut, together with funding. We need communities brought back together, if they are not to collapse. Funding centres of local learning, restoring libraries and community centres will be essential to local skill development.

What is missing?

Although Sarah Waite spoke (in a suitably purdah-appropriate tone) about the importance of basic skills in the future labour market, we didn’t get to talking about education preparing us for the lack of jobs in the future, and what that changed labour market will look like.

What skills will *not* be needed? Who decides? If left to companies’ sponsor-led steer in academies, what effects will we see in society?

Discussions of a future education model and of technology seem to share a common theme: people seem reduced in their capacity to make autonomous choices. But neither offers a positive vision.

  • Technology should empower us, but it seems to empower the State and diminish citizens’ autonomy in many of today’s policies, and in future scenarios especially around the use of personal data and Digital Economy.
  • Technology should enable greater collaboration, but current tech in education policy is focused too little on use on children’s own terms, and too heavily on top-down monitoring: of scoring, screen time, search terms. Further restrictions by Age Verification are coming, and may restrict access to, and reduce participation in, online services if not done well.
  • Infrastructure weakness is letting down skills training: University Technical Colleges (UTCs) are not popular and are failing to fill places. There is a lack of an overarching, area-wide strategic plan for pupils in which UTCs play a part. Local Authorities played an important part in regional planning, which needs to be restored to ensure joined-up local thinking.

How do we ensure women are not left behind?

The final question of the evening asked how women will be affected by Brexit and the changing job market. Part of the overall risk, the panel concluded, relates to [the lack of] equal pay. But where are the assessments of the gendered effects in the UK of:

  • community structural change and intra-family support and effect on demand for social care
  • tech solutions in response to lack of human interaction and staffing shortages including robots in the home and telecare
  • the disproportionate drop out of work, due to unpaid care roles, and difficulty getting back in after a break.
  • the roles and types of work likely to be most affected or replaced by machine learning and robots
  • and how will women be empowered or not socially by technology?

In education, we quickly need to respond to the known data on where women are already being left behind now. The attrition rate in teaching in England after two to three years, for example, is poor, and getting worse. What will government do to keep teachers teaching? Their value as role models is not captured in pupils’ exam results based entirely on knowledge transfer.

Our GCSEs this year go back to purely exam-based testing and remove applied coursework marking; practitioners say this is likely to see lower attainment for girls than boys, and likely to leave girls behind at an earlier age.

“There is compelling evidence to suggest that girls in particular may be affected by the changes — as research suggests that boys perform more confidently when assessed by exams alone.”

Jennifer Tuckett spoke about what fairness might look like for female education in the Creative Industries. From school-leaver to returning mother, and retraining older women, appreciating the effects of gender in education is intrinsic to the future jobs market.

We also need broader public understanding of the feedback loop of technology’s impacts on the process and delivery of teaching itself. As school management becomes increasingly important and remains male-dominated, how will changes in teaching affect women disproportionately? Fact delivery and testing can be done by machine, and support the current policy direction, but can a computer create a love of learning and teach humans how to think?

“There is an opportunity for a holistic synthesis of research into gender, the effect of tech on the workplace, the effect of technology on care roles, risks and opportunities.”

Delivering education that ensures women are not left behind includes not letting women entering education as teenagers now be led down routes without thinking about what they want and need in future, regardless of work.

Education must adapt to changed employment markets, and to the social and community effects of Brexit. If it does not, barriers will become embedded: geographical, economic, language, familial, skills, and social exclusion.

In short

In summary, what is the government’s Brexit vision? We must know what it sees five, ten, and twenty-five years ahead, set against an understanding of the landscape as it is, in order to peg other policy to it.

With this foundation, what we know and what we estimate we don’t know yet can be planned for.

Once we know where we are going in policy, we can do a fit-gap to map how to get people there.

Estimate which skills gaps need to be filled and which do not. Where will change be hardest?

Change is not new. But there is potential today for massive, lasting economic and social damage to our young people. Government is hindered by short-term political thinking, but it has a long-term responsibility to ensure children are not mis-educated because policy and the future environment are not aligned.

We deserve public, transparent, informed debate to plan our lives.

We enter the unknown of the education triangle (Brexit, underfunding, divisive structural policy) at our peril for the next ten years and beyond, without appropriate adjustment of pre-Brexit legislation and policy plans to the new world order.

The combined negative effects on employment at scale and at pace must be assessed with urgency, not by big Tech who will profit, but with an eye on future fairness, and public economic and social good. Academy sponsors, decision makers in curriculum choices, schools with limited funding, have no incentives to look to the wider world.

If we’re going to go it alone, we’d better be robust as a society, and that can’t be just some of us, and can’t only be about skills seen as having a tangible output.

All this discussion is framed by the premise that education’s aim is to prepare a future workforce for work, and that it is sustainable.

Policy is increasingly based on work that is measured by economic output. We must not leave out or behind those who do not or cannot work, or whose work is unmeasured yet contributes to the world.

‘The only future worth building includes everyone,’ said the Pope in a recent TedTalk.

What kind of future do you want to see yourself living in? Will we all work or will there be universal basic income? What will happen on housing, an ageing population, air pollution, prisons, free movement, migration, and health? What will keep communities together as their known world in employment, and family life, and support collapse? How will education enable children to discover their talents and passions?

Human beings are more than what we do. The sense of a country of who we are and what we stand for is about more than our employment or what we earn. And we cannot live on slogans alone.

Who we in the UK think we will be after Brexit needs real and substantial answers. What are we going to *do* and *be* in the world?

Without this vision, any mandate voted for on June 9th will be made in the dark and open to future objection writ large. ‘We’ must be inclusive, based on a consensus, not simply a ‘mandate’.

Only with clear vision for all these facets fitting together in a model of how we will grow in all senses, will we be able to answer the question, is education preparing us [all] for the jobs of the future?

More than this, we must ask if education is preparing people for the lack of jobs, for changing relationships in our communities, with each other, and with machines.

Change is coming, Brexit or not. But Brexit has exacerbated the potential to miss opportunities, embed barriers, and see negative side-effects from changes already underway in employment, in an accelerated timeframe.

If our education policy today is not gearing up to that change, we must.

Failing a generation is not what post-Brexit Britain needs

Basically Britain needs Prof. Brian Cox shaping education policy:

“If it were up to me I would increase pay and conditions and levels of responsibility and respect significantly, because it is an investment that would pay itself back many times over in the decades to come.”

Don’t use children as ‘measurement probes’ to test schools

What effect does using school exam results to reform the school system have on children? And what effect does it have on society?

Last autumn Ofqual published a report and their study on consistency of exam marking and metrics.

The report concluded that half of pupils in English Literature, as an example, are not awarded the “correct” grade on a particular exam paper, due to marking inconsistencies and the design of the tests.

Given the complexity and sensitivity of the data, Ofqual concluded, it is essential that the metrics stand up to scrutiny and that there is a very clear understanding of the meaning and application of any quality-of-marking measure. They wrote that, “there are dangers that information from metrics (particularly when related to grade boundaries) could be used out of context.”

Context and accuracy are fundamental to the value of and trust in these tests. And at the moment, trust is not high in the system behind it. There must also be trust in policy behind the system.

This summer, two sets of UK school tests will come under scrutiny: GCSEs and SATs. The goalposts are moving for children and schools across the country. And it’s bad for children and bad for Britain.

Grades A-G will be swapped for numbers 1-9

15-16 year olds sitting GCSEs will see their exams shift to a numerical system, scored from the highest Grade 9 down to Grade 1, with the three top grades replacing the current A and A*. The alphabetical grading system will be fully phased out by 2019.

The plans intend that roughly the same proportion of students as currently achieve a Grade C will be awarded the new Grade 4, and as Schools Week reported: “There will be two GCSE pass rates in school performance tables.”

One will measure grade 5s or above, and this will be called the ‘strong’ pass rate. And the other will measure grade 4s or above, and this will be the ‘standard’ pass rate.
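To make those two headline measures concrete, here is a minimal illustrative sketch, using made-up grades rather than real data, of how a school’s ‘standard’ and ‘strong’ pass rates would be computed:

```python
# Illustrative only: hypothetical GCSE grades for one small cohort.
grades = [9, 7, 5, 4, 4, 3, 2, 1]

# 'Standard' pass = grade 4 or above; 'strong' pass = grade 5 or above.
standard_pass_rate = sum(g >= 4 for g in grades) / len(grades)
strong_pass_rate = sum(g >= 5 for g in grades) / len(grades)

print(f"standard pass rate: {standard_pass_rate:.1%}")  # 5 of 8 pupils
print(f"strong pass rate: {strong_pass_rate:.1%}")      # 3 of 8 pupils
```

The same cohort of grades produces two different headline figures depending on which threshold is chosen, which is part of why comparability over time matters.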

Laura McInerney summed up, “in some senses, it’s not a bad idea as it will mean it is easier to see if the measures are comparable. We can check if the ‘standard’ rate is better or worse over the next few years. (This is particularly good for the DfE who have been told off by the government watchdog for fiddling about with data so much that no one can tell if anything has worked anymore).”

There’s plenty of confusion among parents about how the numerical grading system will work. The confusion you can gauge in playground conversations is also reflected nationally, in a more measurable way.

Market research in a range of audiences – including businesses, head teachers, universities, colleges, parents and pupils – found that just 31 per cent of secondary school pupils and 30 per cent of parents were clear on the new numerical grading system.

So that’s a change in the GCSE grading structure. But why? If more differentiators are needed, why not add one or two more letters and shift grade boundaries? A policy need for these changes is unclear.

Machine marking is training on ten-year-olds

I wonder if the shift to numerical marking is due in any part to a desire to move GCSEs to machine marking in future.

This year, ten and eleven year olds, children in their last year of primary school, will have their SATs tests computer marked.

That’s everything in maths and English. Not multiple-choice papers or one-word answers, but full written responses. If their f, b or g doesn’t look like the correct letter in the correct place in the sentence, then it gains no marks.

Parents are concerned about children whose handwriting is awful but whose knowledge is not. How well can they hope to be assessed? If exams are increasingly machine marked out of sight, many sent to India, where is our oversight of the marking process and its accuracy?

The concerns I’ve heard among local parents and staff seem reflected in national discussions and at the inspectorate, Ofsted. TES has reported Ofsted’s most senior officials as saying that the inspectorate is just as reluctant to use this year’s writing assessments as it was in 2016. Teachers and parents locally are united in feeling it is not accurate, not fair, and not right.

The content is also to be tougher.

How will we know what is being accurately measured, and how accurate the metrics are, when the content changes at the same time? How will we know if children didn’t make the mark, or if the marks were simply not awarded?

The accountability of the process is less than transparent to pupils and parents. We have little opportunity for Ofqual’s recommended scrutiny of these metrics, or the data behind the system on our kids.

Causation, correlation and why we should care

The real risk is that no one will be able to tell if there is an error, where it stems from, or whether there is a reason why pass rates are markedly different from what was expected.

After the wide range of changes across pupil attainment, exam content, school progress scores, and their interaction and dependencies, can they all fit together and be comparable with the past at all?

If the SATs are making lots of mistakes simply through being bad at reading ten-year-olds’ handwriting, how will we know?

Or if GCSE scores are lower, will we be able to see if it is because they have genuinely differentiated the results in a wider spread, and stretched out the fail, pass and top passes more strictly than before?

What is likely is that this year’s set of children who were expecting As and A*s at GCSE, but who fail to be one of the two children nationally who get the new grade 9, will be disappointed to feel they are not, after all, as great as they thought they were.

And next year, if you can’t be among the one or two to get the top mark, will the best simply stop stretching themselves and rest a bit easier, because, whatever, you won’t get those straight top grades anyway?

Even if children would not change their behaviour were they to know, the target-range scoring sent to schools by third-party data processors discourages teachers from stretching those at the top.

Politicians look for positive progress, but policies are changing that will increase the number of schools deemed to have failed. Why?

Our children’s results are being used to reform the school system.

Coasting and failing schools can be compelled to become academies.

Government policy on this forced academisation was rejected by popular revolt. It appears that the government is determined that schools *will* become academies with the same fervour that they *will* re-introduce grammar schools. Both are unevidenced and unwanted. But there is a workaround.  Create evidence. Make the successful scores harder to achieve, and more will be seen to fail.

A total of 282 secondary schools in England were deemed to be failing by the government this January, as they “have not met a new set of national standards”.

It is expected that even more will attain ‘less’ this summer. Tim Leunig, Chief Analyst and Chief Scientific Adviser at the Department for Education, made a personal guess at two reaching the top mark.

The context of this GCSE ‘failure’ is the changes in how schools are measured. Children’s progress across 8 subjects, or “P8”, is being used as an accountability measure of overall school quality.

But it’s really just: “a school’s average Attainment 8 score adjusted for pupils’ Key Stage 2 attainment.” [Dave Thomson, Education Datalab]
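As a rough sketch of that idea, simplified and not the DfE’s exact published methodology (the real measure works across weighted subject ‘slots’ and national prior-attainment averages), the shape of the calculation is something like this, with all figures hypothetical:

```python
# Simplified illustration of the Progress 8 idea: compare each pupil's
# Attainment 8 score with a national baseline for pupils with similar
# KS2 prior attainment, then average the gaps across the school.
# All figures below are hypothetical.

national_avg_attainment8 = {"low": 30.0, "middle": 48.0, "high": 65.0}

# (KS2 prior-attainment band, pupil's Attainment 8 score)
pupils = [("low", 35.0), ("middle", 44.0), ("high", 70.0)]

def progress8(pupils, baselines):
    """Mean gap between each pupil's score and the baseline for their band."""
    gaps = [score - baselines[band] for band, score in pupils]
    return sum(gaps) / len(gaps)

print(progress8(pupils, national_avg_attainment8))  # 2.0 for this toy school
```

The point of the Datalab work below is that this headline number can shift, sometimes a lot, depending on which contextual factors are or are not built into the baseline.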

Work done by FFT Education Datalab showed that contextualising P8 scores can lead to large changes for some schools.  (Read more here and here). You cannot meaningfully compare schools with different types of intake, but it appears that the government is determined to do so. Starting ever younger if new plans go ahead.

Data is being reshaped to tell stories to fit to policy.

Shaping children’s future

What this reshaping doesn’t factor in at all, is the labelling of a generation or more, with personal failure, from age ten and up.

All this tinkering with the data, isn’t just data.

It’s tinkering badly with our kids’ sense of self, their sense of achievement and aspiration, and with that, the country’s future.

Education reform has become the aim, and it has replaced the aims of education.

Post-Brexit Britain doesn’t need policy that delivers ideology. We don’t need to use children as ‘measurement probes’ to test schools.

Just as we shouldn’t use children’s educational path to test their net worth or cost to the economy. Or predict it in future.

Children’s education and human value cannot be measured in data.

Thinking to some purpose