All posts by jenpersson

Can Data Trusts be trustworthy?

The House of Lords Select Committee report on AI in the UK, published in March 2018, suggested that “the Government plans to adopt the Hall-Pesenti Review recommendation that ‘data trusts’ be established to facilitate the ethical sharing of data between organisations.”

Since data distribution already happens, what difference would a Data Trust model make to ‘ethical sharing’?

A ‘set of relationships underpinned by a repeatable framework, compliant with parties’ obligations’ seems little better than what we have today, with all its problems including deeply unethical policy and practice.

The ODI set out some of the characteristics Data Trusts might have or share. As importantly, we should define what Data Trusts are not. They should not simply be a new name for pooling content and a new single distribution point. Click and collect.

But is a Data Trust little more than a new description for what goes on already? Either a physical space or legal agreements for data users to pass around the personal data from the unsuspecting, and sometimes unwilling, public. Friends-with-benefits who each bring something to the party to share with the others?

As with any communal risk, it is the standards of the weakest link, the least ethical, the one that pees in the pool, that will increase reputational risk for all who take part, and spoil it for everyone.

Importantly, the Lords AI Committee report recognised that there is an inherent risk in how the public would react to Data Trusts, because there is no social license for this new data sharing.

“Under the current proposals, individuals who have their personal data contained within these trusts would have no means by which they could make their views heard, or shape the decisions of these trusts.”

These are views that those keen on Data Trusts seem happy to ignore.

When the Administrative Data Research Network, a new infrastructure for “de-identified” data linkage, was set up in 2013, extensive public dialogue was carried out across the UK. It concluded in a report with findings very similar to those apparent at dozens of care.data engagement events in 2014-15:

There is no public support for:

  • “Creating large databases containing many variables/data from a large number of public sector sources,
  • Establishing greater permanency of datasets,
  • Allowing administrative data to be linked with business data, or
  • Linking of passively collected administrative data, in particular geo-location data”

The other ‘red-line’ for some participants was allowing “researchers for private companies to access data, either to deliver a public service or in order to make profit. Trust in private companies’ motivations were low.”

All of the above could be central to Data Trusts. All of the above highlight that in any new push to exploit personal data, the public must not be the last to know. And until all of the above are resolved, the social license underpinning the work will always be missing.

Take the National Pupil Database (NPD) as a case study in a Data Trust done wrong.

It is a mega-database built from over 20 other datasets. Raw data have been farmed out for years under terms and conditions to third parties, including users who hold an entire copy of the database, such as the somewhat secretive and unaccountable Fischer Family Trust, and others who do not answer to Freedom of Information requests, and whose terms are hidden under commercial confidentiality. Buying and benchmarking data from schools and selling it back to some, the FFT keeps its profiling hidden from parents and pupils, yet its predictive risk scoring can shape a child’s school experience from age 2. They don’t really want to answer how staff can tell whether a child’s FFT profile and risk score predictions are accurate, or if they can spot errors or a wrong data input somewhere.

Even as the NPD moves towards risk reduction, its issues remain. When will children be told how data about them are used?

Is it any wonder that many people in the UK resent institutions and organisations that feel entitled to exploit them or nudge their behaviour, and feel a need to ‘take back control’?

It is naïve for those working in data policy and research to think that this sentiment does not apply to them.

We already have safe infrastructures in the UK for excellent data access. What data users are missing is the social license to use it.

Some of today’s data uses are ethically problematic.

No one should be talking about increasing access to public data before delivering increased public understanding. Data users must get over their fear of ‘what if the public found out?’

If your data use being on the front pages would make you nervous, maybe it’s a clue you should be doing something differently. If you don’t trust that the public would support it, then perhaps it doesn’t deserve to be trusted. Respect individuals’ dignity and human rights. Stop doing stupid things that undermine everything.

Build the social license that care.data was missing. Be honest. Respect our right to know, and right to object. Build them into a public UK data strategy to be understood and be proud of.


Part 1. Ethically problematic
Ethics is dissolving into little more than a buzzword. Can we find solutions underpinned by law, and ethics, and put the person first?

Part 2. Can Data Trusts be trustworthy?
As long as data users ignore data subjects’ rights, Data Trusts have no social license.



Ethically problematic

Five years ago, researchers at the University of Manchester’s School of Social Sciences wrote, “It will no longer be possible to assume that secondary data use is ethically unproblematic.”

Five years on, other people’s use of the language of data ethics puts social science at risk. Event after event, we are witnessing the gradual dissolution of the value and meaning of ‘ethics’, into little more than a buzzword.

Companies and organisations are using the language of ‘ethical’ behaviour blended with ‘corporate responsibility’ modelled after their own values, as a way to present competitive advantage.

Ethics is becoming shorthand for, ‘we’re the good guys’. It is being subverted by personal data users’ self-interest. Not to address concerns over the effects of data processing on individuals or communities, but to justify doing it anyway.

An ethics race

There’s certainly a race on for who gets to define what data ethics will mean. We have at least three new UK institutes competing for a voice in the space. Digital Catapult has formed an AI ethics committee. Data charities abound. Even Google has developed an ethical AI strategy of its own, in the wake of its Project Maven controversy.

Lessons learned in public data policy should be clear by now. There should be no surprises how administrative data about us are used by others. We should expect fairness. Yet these basics still seem hard for some to accept.

The Royal Free NHS hospital was rightly criticised over its 2015 deal, because it tried “to commercialise personal confidentiality without personal consent,” as Wired recently reported.

“The shortcomings we found were avoidable,” wrote Elizabeth Denham in 2017, when the ICO found six ways the Google DeepMind – Royal Free deal did not comply with the Data Protection Act. The price of innovation, she said, didn’t need to be the erosion of fundamental privacy rights underpinned by the law.

If the Centre for Data Ethics and Innovation is put on a statutory footing where does that leave the ICO, when their views differ?

It’s why the idea of DeepMind funding work in Ethics and Society seems incongruous to me. I wait to be proven wrong. In their own words, “technologists must take responsibility for the ethical and social impact of their work”. Breaking the law, however, is conspicuous by its absence, and the Centre must not be used by companies to generate pseudo-lawful or ethical acceptability.

Do we need new digital ethics?

Admittedly, not all laws are good laws. But if recognising and acting under the authority of the rule-of-law is now an optional extra, it will undermine the ICO, sink public trust, and destroy any hope of achieving the research ambitions of UK social science.

I am not convinced there is any such thing as digital ethics. The claimed gap in an ability to get things right in this complex area too often appears only after people get caught doing something wrong. Technologists abdicate accountability saying “we’re just developers,” and sociologists say, “we’re not tech people.”

These shrugs of the shoulders by third-parties, should not be rewarded with more data access, or new contracts. Get it wrong, get out of our data.

This lack of acceptance of responsibility creates a sense of helplessness. We can’t make it work, so let’s make the technology do more. But even the most transparent algorithms will never be accountable. People can be accountable, and it must be possible to hold leaders to account for the outcomes of their decisions.

But it shouldn’t be surprising no one wants to be held to account. The consequences of some of these data uses are catastrophic.

Accountability is the number one problem to be solved right now. It includes openness about data errors, uses, outcomes, and policy. Are commercial companies with public sector contracts checking that data are accurate, and corrected with the people the data are about, before applying them in predictive tools?

Unethical practice

As Tim Harford in the FT once asked about Big Data uses in general: “Who cares about causation or sampling bias, though, when there is money to be made?”

Problem area number two, whether researchers are working towards a profit model or chasing grant funding, is this:

How can data users make unbiased decisions about whether they should use the data? We have the same bodies deciding on data access that oversee its governance. Conflict of interest is built in by default, and the allure of new data territory is tempting.

But perhaps the key UK public data ethics problem is that policy is currently too often about the system goal, not about improving the experience of the people using systems. It is not about using technology as a tool, as if people mattered. Harmful policy can generate harmful data.

Secondary uses of data are intrinsically dependent on the ethics of the data’s operational purpose at collection. Damage-by-design is evident right now across a range of UK commercial and administrative systems. Metrics of policy success, and the data associated with them, may simply be wrong.

Some of the damage is done by collecting data for one purpose and using it operationally for another in secret. Until this modus operandi changes, no one should think that “data ethics will save us”.

Some of the most ethical research aims to reveal these problems. But we also need to recognise that not all research would be welcomed by the people the research is about, and few researchers want to talk about it. Among hundreds of already-approved university research ethics board applications I’ve read, some were desperately lacking. An organisation is no more ethical than the people who make decisions in its name. People disagree on what is morally right. People can game data inputs and outcomes, and fail reproducibility. Markets and monopolies of power bias aims. Trying to support the next cohort of PhDs, and impact for the REF, shapes priorities and values.

“Individuals turn into data, and data become regnant.” Data are often lacking in quality and completeness and given authority they do not deserve.

It is still rare to find informed discussion among the brightest and best of our leading data institutions about the extensive, everyday, real-world secondary data use across public authorities, including where that use may be unlawful and unethical, like buying from data brokers. Research users are pushing those boundaries to get more and more, without public debate. Who says what’s too far?

The only way is ethics? Where next?

The latest academic-commercial mash-ups arguing that we need new data ethics in a new regulatory landscape, where the established is seen as past it, are a dangerous catch-all ‘get out of jail free’ card.

Ethical barriers are out of step with some of today’s data politics. The law is being sidestepped and regulation diminished by lack of enforcement of gratuitous data grabs from the Internet of Things, and social media data are seen as a free-for-all. Data access barriers are unwanted. What is left to prevent harm?

I’m certain that we first need to take a step back if we are to move forward. Ethical values are founded on human rights that existed before data protection law. Fundamental human decency, rights to privacy, and to freedom from interference, common law confidentiality, tort, and professional codes of conduct on conflict of interest, and confidentiality.

Data protection law emphasises data use. But too often its first principles of necessity and proportionality are ignored. Ethical practice would ask more often, should we collect the data at all?

Although GDPR requires necessary safeguards to ensure that technical and organisational measures are met to control and process data, and there is a clearly defined Right to Object, I have yet to see a single event give this any thought.

Let’s not pretend secondary use of data is unproblematic, while uses are decided in secret. Calls for a new infrastructure actually seek workarounds of regulation. And human rights are dismissed.

Building a social license between data subjects and data users is unavoidable if use of data about people hopes to be ethical.

The lasting solutions are underpinned by law, and ethics. Accountability for risk and harm. Put the person first in all things.

We need more than hopes and dreams and talk of ethics.

We need realism if we are to get a future UK data strategy that enables human flourishing, with public support.

Notes of desperation or exasperation are increasingly evident in discourse on data policy, and start to sound little better than ‘we want more data at all costs’. If so, the true costs would be lasting.

Perhaps then it is unsurprising that there are calls for a new infrastructure to make it happen, in the form of Data Trusts. Some thoughts on that follow too.


Part 1. Ethically problematic

Ethics is dissolving into little more than a buzzword. Can we find solutions underpinned by law, and ethics, and put the person first?

Part 2. Can Data Trusts be trustworthy?

As long as data users ignore data subjects’ rights, Data Trusts have no social license.


Data Horizons: New Forms of Data For Social Research,

Elliot, M., Purdam, K., Mackey, E., School of Social Sciences, The University of Manchester, CCSR Report 2013-312, 2013

Leaving Facebook and flaws in Face Recognition

This Facebook ad was the final straw for me this week.

I’m finally leaving.

When I saw Facebook’s disingenuous appropriation of new data law as a good thing, I decided time’s up. While Zuckerberg talks about giving users more control, what Facebook is doing is steering users away from better privacy and putting users outside the reach of the new protections, rather than stepping up to meet its obligations.

After eleven years, I’m done. I’ve used Facebook to run a business.  I’ve used it to keep in touch with real-life family and friends. I’ve had more positive than negative experiences on the site. But I’ve packed in my personal account.

I hadn’t actively used it since 2015. My final post that year was about Acxiom’s data broker agreement with Facebook. It took three hours to download my remaining data, and to review and remove others’ tags, posts and shared content linking me. I had already deactivated 18 apps, and have now used each individual ID that the Facebook-app link provided to make Subject Access Requests (SARs) and object to processing. Some were easy. Some weren’t.

Pinterest and Hootsuite were painful circular loops of online ‘support’ that didn’t offer any easy way to contact them. But to their credit, Hootsuite’s Twitter message support was ultra fast and suggested an email to hootsuite-dpa [at] hootsuite.com. Amazon required a log-in to the Amazon account. Apple’s Aperture sends you to a huge general page where it is impossible to find any easy contact link. Ditto Networked Blogs.

Another app that has no name offered a direct link to a pre-filled form with no contact details and no option for free text; you can send only the message ‘please delete any data you hold about me’, not make a SAR.

Another has a policy but no Data Controller listed. Who is http://a.pgtb.me/privacy ? Ideas welcome.

What about our personal data rights?

The Facebook ad says you will be able to access, download or delete your data at any time. Not according to the definition of personal data, we won’t. And Facebook knows it. As Facebook’s new terms and conditions say, some things that you do on Facebook aren’t stored in your account. For example, a friend may still have messages from you after deletion. They don’t even mention inferred data. This information remains after you delete your account. It’s not ‘your’ data because it belongs to the poster, it seems, according to Facebook. But it is ‘your’ data because the data are about or related to you, according to data protection law.

Rights are not about ownership.

That’s what Facebook appears to want to fail to understand. Or perhaps wants the reader to fail to understand. Subject Access requests should reveal this kind of data, and we all have a right to know what the Facebook user interface limits-by-design. But Facebook still keeps this hidden, while saying we have control.

Meanwhile, what is it doing?  Facebook appears to be running scared and removing  recourse to better rights.

Facebook, GDPR and flaws in Face Recognition

They’ve also started running Face Recognition. With the new feature enabled, you’re notified if you appear in a photo even if not tagged.

How will we be notified if we’re not tagged? Presumably Facebook uses previously stored facial images that were tagged, and is matching them using an image library behind the scenes.
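
As a purely generic sketch of how such matching typically works (this is not Facebook’s actual system, and every name and number below is invented), each face is reduced to a numeric ‘embedding’, and faces detected in new, untagged photos are compared against embeddings stored from previously tagged ones:

# Generic illustration of face matching by embedding comparison.
# This is NOT Facebook's implementation; the vectors here are made up.
import math

def cosine_similarity(a, b):
    """Similarity between two face embeddings (1.0 = identical direction)."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(x * x for x in b))
    return dot / norm

# Embeddings previously derived from photos the user was tagged in.
stored_embeddings = {"user_123": [0.12, 0.85, 0.33, 0.41]}

# Embedding derived from a face found in a newly uploaded, untagged photo.
new_face = [0.10, 0.88, 0.30, 0.40]

THRESHOLD = 0.95  # arbitrary cut-off for this sketch
for user_id, stored in stored_embeddings.items():
    if cosine_similarity(new_face, stored) > THRESHOLD:
        print(f"Notify {user_id}: you may appear in this photo")

The point is that the matching runs on every uploaded face, whether or not anyone ever tags it.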

In the past I have been mildly annoyed when friends who should know me better, have posted photos of my children on Facebook.

Moments like children’s birthday parties can mean a photo posted of ten fun-filled faces in which ten parents are tagged. Until everyone knew I’d rather they didn’t, I was often  tagged in photos of my young children.  Or rather my children were tagged as me.

Depending on your settings, you’ll receive a notification when someone tags a photo with your name. Sure, I can go and untag it to change the audience that can see it, but I cannot have control over it.

Facebook meanwhile pushes this back as if it were a flaw with the user, and in a classic victim-blaming move suggests it’s your fault you don’t like it, not their failure to meet privacy-by-design, saying, “If you don’t like something you’re tagged in, you can remove the tag or ask the person who tagged you to remove the post.”

There is an illusion of control being given to the user, by companies and government at the moment. We must not let that illusion become the accepted norm.

Children whose parents are not on the site cannot get notifications. A parent may have no Facebook account. (A child under 13 should have no Facebook account, although Facebook has tried to grab those too.) The child with no account may never know, but Facebook is certainly processing, and might be building up a shadow profile about, the nameless child with face X anyway.

What happens next?

As GDPR requires a share of accountability for controller and processor responsibilities, what will it mean for posters who share photos without the consent of the people in them? For Facebook, it should mean it cannot process them using biometric profiling, whose significant effects may be hidden or, especially for children, only appear in the future.

Does Facebook process across photos held on other platforms?

Since it was founded, Facebook has taken over several social media companies, the most familiar of which are Instagram in 2012 and WhatsApp in 2014. Facebook has also bought Oculus VR [VR headsets], Ascenta [drones], and ProtoGeo Oy [fitness trackers].

Bloomberg reported at the end of February that a lawsuit alleging Facebook Inc.’s photo scanning technology flouts users’ privacy rights can proceed.

As TechCrunch summarised, when asked to clear a higher bar for privacy, Facebook has instead delved into design tricks to keep from losing our data.

Facebook needs to axe Face Recognition, or make it work in ways that are lawful, to face up to its responsibilities, and fast.

The Cambridge Analytica scandal has also brought personalised content targeting into the spotlight, but we are yet to see really constructive steps to row back to more straightforward advertising, and away from today’s highly invasive models of data collection and content micro-targeting designed to grab your personalised attention.

Meanwhile, policy makers and media are obsessed with screen-time limits as a misplaced, over-simplified solution to complex problems in young people’s use of social media, problems which are more likely to be exacerbating existing conditions and which demonstrate correlation rather than cause.

Children are stuck in the middle.

Their rights to protection, privacy, reputation and participation must not become a political playground.

When is a profile no longer personal data

This is bothering me about current and future data protection.

When is a profile no longer personal?

I’m thinking of a class, or even a year group of school children.

If you strip off enough identifiers or aggregate data so that individuals are no longer recognisable and could not be identified from other data — directly or indirectly — that is in your control or may come into your control, you no longer process personal data.

How far does Article 4(1) go in drawing the boundary of what is identifiable by economic, cultural or social identity?

There is a growing number of research projects using public sector data (including education but often in conjunction and linkage with various others) in which personal data are used to profile and identify sets of characteristics, with a view to intervention.

Let’s take a case study.

Exclusions and absence, poverty, ethnicity, language, SEN and health, attainment indicators, birth year and postcode area are all profiled from individual-level data in a dataset of 100 London schools, all to identify the characteristics of children more likely than others to drop out.

It’s a research project, with a view to shaping an early-education intervention programme for potential NEETs (Not in Education, Employment or Training). No consent is sought for using the education, health, probation, Police National Computer, and HMRC data like this, because it’s research, and enjoys an exemption.

Among the data collected, BAME and non-English-language students in certain home postcodes are more prevalent. Pupils’ names, dates of birth and school addresses have been removed.

In what is in effect a training dataset, to teach the researchers “what does a potential NEET look like?”, pupils with characteristics like Mohammed Jones’s are more prevalent than others.

It does not permit the identification of the data subject as himself, but the data knows exactly what a pupil like MJ looks like.
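
As a minimal, hypothetical sketch of what such a training-and-matching exercise amounts to in practice (none of the field names, areas or thresholds below come from any real project), the ‘profile’ is just a scoring rule derived from de-identified records, which a school can then apply to a named pupil:

# Hypothetical sketch: a "what does a potential NEET look like?" profile,
# built from de-identified pupil records, then applied to a named pupil.
from dataclasses import dataclass

@dataclass
class PupilRecord:
    # Quasi-identifiers and labels left in the "de-identified" dataset.
    postcode_area: str        # e.g. "E1", not a full postcode
    birth_year: int
    first_language_english: bool
    ever_excluded: bool
    persistent_absence: bool
    free_school_meals: bool   # a common poverty proxy

def neet_risk_score(p: PupilRecord) -> float:
    """Toy risk score: the share of 'risk factor' characteristics that match.
    A real project would fit weights from the historical linked data instead."""
    factors = [
        p.ever_excluded,
        p.persistent_absence,
        p.free_school_meals,
        not p.first_language_english,
        p.postcode_area in {"E1", "N17"},  # areas over-represented in training data
    ]
    return sum(factors) / len(factors)

# The research dataset never held MJ's name, but his school record
# can be lined up against the profile anyway.
mj = PupilRecord("E1", 2004, False, True, True, True)
print(f"MJ's risk score: {neet_risk_score(mj):.2f}")  # high score: flagged for intervention

The dataset never contained MJ’s name, yet the moment a school applies the profile, it operates on him as an individual.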

Armed with these profiles of what potential NEETs look like, researchers now work with the 100 London schools, to give the resulting knowledge, to help teachers identify their children at risk of potential drop out, or exclusion, or of becoming a NEET.

In one London school, MJ is a perfect match for the profile. The teacher is relieved of any active judgement about who should join the programme; he’s a perfect match for what to look for. He’s asked to attend a special intervention group, to identify and work on his risk factors.

The data are accurate. His profile does match. But would he have gone on to become NEET?

Is this research, or was it a targeted intervention?

Are the tests for research exemptions met?

Is this profiling and automated decision-making?

If the teacher is asked to “OK” the list, but will not in practice edit it, does that make it exempt from the profiling restriction for children?

The GDPR also sets out the rules (at Article 6(4)) on factors a controller must take into account to assess whether a new processing purpose is compatible with the purpose for which the data were initially collected.

But if the processing is done only after removal of the identifiers that could identify MJ himself, not just someone like him, does it apply?

In a world that talks about ever greater personalisation, we are in fact being treated less and less as individuals. Instead we are constantly assessed by comparison and probability: how we measure up against profiles of other people, built up from historical data.

Then it is used to predict what someone with a similar profile would do, and therefore by inference, what we the individual would do.

What is the difference, in reality, between having given the researchers all the education, health, probation, Police National Computer, and HMRC data as they held it, and then giving them the identifying school datasets with pupils’ named data and saying “match them up”?

I worry that in risk prediction we are at great risk of not using the word ‘research’ to mean what we think it means.

And our children are unprotected from bias and unexpected consequences as a result.

The Trouble with Boards at the Ministry of Magic

Peter Riddell, the Commissioner for Public Appointments, has completed his investigation into the recent appointments to the Board of the Office for Students and published his report.

From the “Number 10 Googlers”, to the NUS affiliation, where an interest in student union representation was seen as undesirable, to “undermining the policy goals” and what the SpAds supported, the whole report is worth a read.

Perception of the process

The concern that the Commissioner raises over the harm done to the public’s perception of the public appointments process means more needs done to fix these problems, before and after appointments.

This process reinforces what people think already. Jobs for the [white Oxford] boys, and yes-men.  And so what, why should I get involved anyway, and what can we hope to change?

Possibilities for improvement

What should the Department for Education (DfE) now offer and what should be required after the appointments process, for the OfS and other bodies, boards and groups et al?

  • Every board at the Department for Education, its name, aim, and members — internal and external — should be published.
  • Every board at the Department for Education should be required to publish its Terms of Appointment, and Terms of Reference.
  • Every board at the Department for Education should be required to publish agendas before meetings and meaningful meeting minutes promptly.

Why? Because there are all sorts of boards around, and their transparency is frankly non-existent. I know because I sit on one. Foolishly, I did not make it a requirement to publish minutes before I agreed to join. But in a year it has only met twice, so you’ve not missed much. Who else sits where, on what policy, and why?

On another board I used to sit on, I got increasingly frustrated that the minutes were not reflective of the substance of discussion. This does the public a disservice twice over. The purpose of the boards looks insipid, and the evidence of the challenge they are intended to offer, their very reason for being, is washed away. Show the public what’s hard, that there’s debate, that risks are analysed and balanced, and then decisions taken. Be open to scrutiny.

The public has a right to know

When scrutiny really matters, it is wrong — just as the Commissioner report reads — for any Department or body to try to hide the truth.

The purpose of transparency must be to hold to account and ensure checks-and-balances are upheld in a democratic system.

The DfE withdrew from a legal hearing scheduled at the First Tier Information Rights Tribunal last year a couple of weeks beforehand, and finally accepted an ICO decision notice in my favour. I had gone through a year of the Freedom-of-Information appeal process to get hold of the meeting minutes of the Department for Education Star Chamber Scrutiny Board, from November 2015.

It was the meeting in which I had been told members approved the collection of nationality and country of birth in the school census.

“The Star Chamber Scrutiny Board”.  Not out of Harry Potter and the Ministry of Magic but appointed by the DfE.

It’s a board that mentions actively seeking members of certain teaching unions but omits others. It publishes no meeting minutes. Its terms of reference are 38 words long, and it was not told the whole truth before one of the most important and widely criticised decisions it ever made, a decision affecting the lives of millions of children across England and causing harm and division in the classroom.

Its annual report doesn’t mention the controversy at all.

After sixteen months, the DfE finally admitted it had kept the Star Chamber Scrutiny Board in the dark on at least one of the purposes of expanding the school census, and on its pre-existing, active, related data policy of passing pupil data over to the Home Office.

The minutes revealed the Board did not know anything about the data sharing agreement already in place between the DfE and the Home Office, or that “(once collected) nationality data” [para 15.2.6] was intended to be shared with the Border Force Casework Removals Team.

Truth that the DfE was forced to reveal, which only came out two years after the meeting, and a full year after the change in law.

If truth, transparency and diversity of political opinion on boards are allowed to die, so does democracy

I spoke to Board members in 2016. They were shocked to find out what the MOU purposes for the new data were, and that regular data transfers had already begun without their knowledge, when they had been asked to sign off the nationality data collection.

The lack of concerns they raised was then offered in written evidence to the House of Lords Secondary Legislation Scrutiny Committee as proof that the change had been properly reviewed.

How trustworthy is anything that the Star Chamber now “approves”, and how trustworthy is our law-making process to expand school data? How trustworthy is the Statutory Instrument scrutiny process?

“there was no need for DfE to discuss with SCSB the sharing of data with Home Office as: a.) none of the data being considered by the SCSB as part of the proposal supporting this SI has been, or will be, shared with any third-party (including other government departments);

[omitting that it “was planned to be”]

and b.) even if the data was to be shared externally, those decisions are outside the SCSB terms of reference.”

Outside terms of reference that are 38 words long, which apparently mean the Board should scrutinise, but not too closely, and reject on the basis of what, exactly?

Not only is the public not being told the full truth about how these boards are created, and what their purpose is, it seems board members are not always told the full truth they deserve either.

Who is invited to the meeting, and who is left out? What reports are generated with what recommendations? What facts or opinion cannot be listened to, scrutinised and countered, that could be so damaging as to not even allow people to bring the truth to the table?

If publishing the meeting minutes would be so controversial and damaging to making public policy, then who the heck are these unelected people making such significant decisions, and how? Are they qualified, are they independent, and are they accountable?

If, alternatively, what should be ‘independent’ boards, or panels, or meetings set up to offer scrutiny and challenge, are in fact being manipulated to manoeuvre policy and the ready-made political opinions of the day, it is a disaster for public engagement and democracy.

It should end with this OfS hiring process at the DfE, today.

The appointments process and the ongoing work by boards must have full transparency, if they are ever to be seen as trustworthy.

Is Hancock’s App Age Appropriate?

What can Matt Hancock learn from his app privacy flaws?

Note: since I started writing this post, the privacy policy has been changed from the version that was live at 4.30, and the “last changed” date has been backdated on the version now live at 21.00. It shows the challenge I point out in point 5:

It’s hard to trust privacy policy terms and conditions that are not strong and stable. 


The Data Protection Bill about to pass through the House of Commons requires the Information Commissioner to prepare and issue codes of practice — which must be approved by the Secretary of State — before they can become statutory and enforced.

One of those new codes (clause 124) is about age-appropriate data protection design. Any provider of an Information Society Service — as outlined in GDPR Article 8, where a child’s data are collected on the legal basis of consent — must have regard to the code, if it targets the service at children.

For 13-18 year olds, what the changes might mean compared with current practices can be demonstrated by the new app from the Minister for Digital, Culture, Media and Sport, launched today.

This app is designed to be used by children aged 13+. Regardless of the fact that the terms require parental approval for 13-18 year olds [more aligned with US COPPA law than with GDPR], the app still needs to work for the child.

Apps could and should be used to open up what politics is about to children. Younger users are more likely to use an app than read a paper for example. But it must not cost them their freedoms. As others have written, this app has privacy flaws by design.

Children merit specific protection with regard to their personal data, as they may be less aware of the risks, consequences and safeguards concerned and their rights in relation to the processing of personal data. (GDPR Recital 38).

The flaw in the intent to protect by age, in the app, in GDPR and in the UK Bill overall, is that the understanding needed for consent depends not on age, but on capacity. The age-based model to protect the virtual child is fundamentally flawed. It’s shortsighted, if well intentioned, bad-by-design, and does little to really protect children’s rights.

Future age verification, for example, if it is to help rather than harm, or become a nuisance like a new cookie law, must be “a narrow form of ‘identity assurance’ – where only one attribute (age) need be defined.” It must also respect Recital 57, and not mean a lazy data grab like GiffGaff’s.

On these five things the app fails to be age appropriate:

  1. Age appropriate participation, privacy, and consent design.
  2. Excessive personal data collection and permissions. (Article 25)
  3. The purposes of each item of data collected must be specified and explicit, and the data must not be further processed for something incompatible with those purposes. (Principle 2)
  4. The privacy policy terms and conditions must be easily understood by a child, and be accurate. (Recital 58)
  5. It’s hard to trust privacy policy terms and conditions that are not strong and stable. Among the things that can change are terms on a free trial, which should require active and affirmative action rather than continuing the account forever, and which may compel future costs. Any future changes should themselves be age-appropriate, including in the way that consent is re-managed.

How much profiling does the app enable, and what is it used for? The Article 29 WP recommends, “Because children represent a more vulnerable group of society, organisations should, in general, refrain from profiling them for marketing purposes.” What will this mean for any software that profiles children’s metadata to share with third parties, or for commercial apps with in-app purchases, or “bait and switch” style models, such as this app’s privacy policy refers to?

The Council of Europe 2016-21 Strategy on the Rights of the Child recognises that, in provision for children in the digital environment, “ICT and digital media have added a new dimension to children’s right to education”, exposing them to new risks, including “privacy and data protection issues”, and that “parents and teachers struggle to keep up with technological developments.” [6. Growing up in a Digital World, para 21]

Data protection by design really matters to get right for children and young people.

This is a commercially produced app and will only be used on a consent and optional basis.

This app shows how hard it can be for people buying tech from developers to understand and to trust what’s legal and appropriate.

Developers, facing changing laws and standards, need clarity and support to get it right. Parents and teachers will need confidence to buy, and to let children use, safe, quality technology.

Without relevant and trustworthy guidance, it’s nigh on impossible.

For any Minister in charge of the data protection rights of children, we need the technology they approve and put out for use by children, to be age-appropriate, and of the highest standards.

This app could and should be changed to meet them.

For children across the UK, using an app more often than not offers them no choice about whether or not to use it. Many are required by schools, which can make similar demands for their data and infringe their privacy rights for life. How much harder, then, to protect their data security and rights, and to keep track of where their digital footprint goes.

If the Data Protection Bill could include an ICO code of practice for children that goes beyond consent-based data collection, to put clarity, consistency and confidence at the heart of good edTech for children, parents and schools, it would be warmly welcomed.


Here are detailed examples of what the Minister might change to bring his app in line with GDPR, and make it age-appropriate for younger users.

1. Is the app age appropriate by design?

Unless otherwise specified in the App details on the applicable App Store, to use the App you must be 18 or older (or be 13 or older and have your parent or guardian’s consent).

Children over 13 can use the app, but the app needs parental consent. That’s different from GDPR – consent over and above the new law as it will apply in the UK from May. The consent age will vary across the EU. Inconsistent age policies are going to be hard to navigate.

Many of the things that matter to privacy, have not been included in the privacy policy (detailed below), but in the terms and conditions.

What else needs changed?

2. Personal data protection by design and default

Excessive personal data collection cannot be justified through a “consent” process, by agreeing to use the app. There must be data protection by design and default using the available technology. That includes data minimisation, and limited retention. (Article 25)

The app’s powers are vast. It collects far more personal data than it needs and, if you use it, even gets permission to listen to your mic. That is not data protection by design and default, which must implement data-protection principles, such as data minimisation.

If, as has been suggested, the newest version of Android asks for each permission at the point of use rather than on first install, that could be a serious challenge for parents who think they have reviewed and approved permissions pre-install (a failing that goes beyond the scope of this app). An app only requires consent to install, and can change the permissions behind the scenes at any time. That makes privacy and data protection by design even more important.

Here’s a copy of what the app’s Google Play listing says it can do, once you click into “permissions” and scroll. This is excessive. “Matt Hancock” is designed to prevent your phone from sleeping, read and modify the contents of your storage, and access your microphone.

Version 2.27 can access:
 
Location
  • approximate location (network-based)
Phone
  • read phone status and identity
Photos / Media / Files
  • read the contents of your USB storage
  • modify or delete the contents of your USB storage
Storage
  • read the contents of your USB storage
  • modify or delete the contents of your USB storage
Camera
  • take pictures and videos
Microphone
  • record audio
Wi-Fi connection information
  • view Wi-Fi connections
Device ID & call information
  • read phone status and identity
Other
  • control vibration
  • manage document storage
  • receive data from Internet
  • view network connections
  • full network access
  • change your audio settings
  • control vibration
  • prevent device from sleeping

“Matt Hancock” knows where you live

The app makers – and Matt Hancock – have no need to know where your phone is at all times, where it is regularly, or whose other phones you are near, unless you switch location services off. That is excessive.

It’s not the same as saying “I’m a constituent”. It’s 24/7 surveillance.

The Ts&Cs say more.

It places the onus on the user to switch off location services — which you may expect for other apps, such as tracking your Strava run — rather than the developers taking responsibility for your privacy by design. [Full source policy]

[Update: since writing this post on February 1, the policy has been greatly added to.]

It also collects ill-defined “technical information”. How should a 13 year old – or a parent, for that matter – know what this information is? These data are the metadata: the address and sender tags, and so on.

By using the App, you consent to us collecting and using technical information about your device and related information for the purpose of helping us to improve the App and provide any services to you.
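
For illustration only (none of these field names or values are quoted from the app or its policy), the kind of “technical information” and metadata an app typically records alongside each action might look something like this:

# Illustrative only: typical "technical information" fields an app might log.
# None of these names or values are taken from the Matt Hancock app itself.
event_metadata = {
    "device_id": "a1b2c3d4",            # persistent identifier for the handset
    "os_version": "Android 8.0",
    "app_version": "2.27",
    "ip_address": "203.0.113.42",       # resolves to a rough location
    "approx_location": (51.50, -0.12),  # network-based, if permission is granted
    "timestamp": "2018-02-01T16:30:00Z",
    "event": "opened_article",          # what was done, not what was said
}

None of it is the ‘content’ of anything posted, yet over time it maps which device was used, roughly where, when, and what was done.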

As NSA General Counsel Stewart Baker has said, “metadata absolutely tells you everything about somebody’s life.” General Michael Hayden, former director of the NSA and the CIA, has famously said, “We kill people based on metadata.”

If you use this app and “approve” the use, do you really know what the location services are tracking, and how those data are used? For a young person, it is impossible to know or see where their digital footprint has gone, or how knowledge about them has been used.

3. Specified, explicit, and necessary purposes

As a general principle, personal data must be collected only for specified, explicit and legitimate purposes and not further processed in a manner that is incompatible with those purposes. The purposes of this very broad data collection are not clearly defined. They must be more specifically explained, especially given that the data are so broad and will include sensitive data. (Principle 2)

While the Minister has told the BBC that you maintain complete editorial control, the terms and conditions are quite different.

The app can use user photos, files, audio and location data, and once content is shared there is “a perpetual, irrevocable” permission to use and edit it. This is not age-appropriate design for children, who might accidentally click yes, or not appreciate what that may permit. Or later wish they could get that photo back. But now that photo is on social media, potentially worldwide – “Facebook, Twitter, Pinterest, YouTube, Instagram and on the Publisher’s own websites” – and the child’s rights to privacy and consent are lost forever.

That’s not age appropriate, and not in line with GDPR rights to withdraw consent, to object, or to restrict processing. In fact the terms conflict with the app privacy policy, which states those rights [see 4. App User Data Rights]. Just writing “there may be valid reasons why we may be unable to do this” is poor practice and a CYA card.

4. Any privacy policy and app must do what it says

A privacy policy and terms and conditions must be easily understood by a child, [indeed any user] and be accurate.

Journalists testing the app point out that even if the user clicks “don’t allow”, when prompted to permit access to the photo library, the user is allowed to post the photo anyway.


What does consent mean if you don’t know what you are consenting to? It isn’t consent at all. GDPR requires that privacy policies are written in a way that their meaning can be understood by a child user (not only their parent). They need to be jargon-free and meaningful, in “clear and plain language that the child can easily understand.” (Recital 58)

This privacy policy is not child-appropriate. It’s not even clear for adults.

5. What would age appropriate permissions for  charging and other future changes look like?

It should be clear to users if there may be up-front or future costs, and there should be no assumption that agreeing once to pay for an app means granting permission forever, without affirmative action.

Couching Bait-and-Switch, Hidden Costs

This is one of the flaws that the Matt Hancock app’s terms and conditions share with many free education apps used in schools. At first, they’re free. You register, and you don’t even know, when your child starts using the app, that it’s a free trial. But after a while, as determined by the developer, the app might not be free any more.

That’s not to say this is what the Matt Hancock app will do; in fact, it would be very odd if it did. But it is odd, then, that its privacy policy and terms and conditions state it could.

The folly of boilerplate policy, or perhaps simply a wish to keep your options open?

Either way, it’s bad design for children – indeed any user – to agree to something that is in fact meaningless, because it could change at any time. Automatic renewals are convenient, but who has not found they paid for an extra month of a newspaper or something else they intended to use only for a limited time? And to avoid any charges, you must cancel before the end of the free trial – but if you don’t know it’s a trial, that’s hard to do. More so for children.

From time to time we may offer a free trial period when you first register to use the App before you pay for the subscription.[…] To avoid any charges, you must cancel before the end of the free trial.

(And as for “For more details, please see the product details in the App Store before you download the App” – there aren’t any, in case you’re wondering.)

What would age appropriate future changes be?

It should be clear to parents what they consent to on behalf of a child, or what a child consents to, at the time of install. What that means must empower them to better digital understanding and to stay in control, not allow the company to change the agreement without the user’s clear and affirmative action.

One of the biggest flaws for parents in children using apps is that what they think they have reviewed, thought appropriate, and permitted, can change at any time, at the whim of the developer and as often as they like.

Notification “by updating the Effective Date listed above” is not notification at all. And PS: they changed the policy today and backdated it from February 1, 2018, to July 2017. By eight months. That’s odd.

The statements in this “changes” section contradict one another. It’s a future-dated get-out-of-jail-free card for the developer, and a transparency and oversight nightmare for parents. “Your continued use” is not clear, affirmative, and freely given consent, as demanded by GDPR.

Perhaps the kindest thing to say about this policy, and its poor approach to privacy rights and responsibilities, is that maybe the Minister did not read it. Which highlights the basic flaw in privacy policies in the first place. Data usage reports, showing how your personal data have actually been used versus what was promised, are of much greater value and meaning. That’s what children need in schools.


Statutory Instruments, the #DPBill and the growth of the Database State

First they came for the lists of lecturers. Did you speak out?

Last week Chris Heaton-Harris MP wrote to vice-chancellors to ask for a list of lecturers’ names and course content, “with particular reference to Brexit”. Academics on social media spoke out in protest. There has been little reaction, however, to a range of new laws that permit the incremental expansion of the database state, on paper and in practice.

The government is building ever more sensitive lists of names and addresses, without oversight. They will have access to information about our bank accounts. They are using our admin data to create distress-by-design in a ‘hostile environment.’ They are writing laws that give away young people’s confidential data, ignoring new EU law that says children’s data merits special protections.

Earlier this year, Part 5 of the new Digital Economy Act reduced the data protection infrastructure between different government departments. This week, in discussion on the Codes of Practice, some local government data users were already asking whether safeguards can be further relaxed to permit increased access to civil registration data and use our identity data for more purposes.

Now, in the Data Protection Bill, the government has included clauses in Schedule 2 to reduce our rights to question how our data are used, and to remove a right to redress where things go wrong. Clause 15 designs in open-ended possibilities of Statutory Instruments for future change.

The House of Lords Select Committee on the Constitution points out in its report on the Bill that the number and breadth of the delegated powers are “an increasingly common feature of legislation which, as we have repeatedly stated, causes considerable concern.”

Concern needs to translate into debate, better wording and safeguards to ensure Parliament maintains its role of scrutiny and where necessary constrains executive powers.

Take as case studies three new Statutory Instruments on personal data from pupils, students, and staff. They all permit more data to be extracted about individuals and sent to national level:

  • SI 807/2017 The Education (Information About Children in Alternative Provision) (England) (Amendment) Regulations 2017
  • SI No. 886 The Education (Student Information) (Wales) Regulations 2017 (W. 214) and
  • SL(5)128 – The Education (Supply of Information about the School Workforce) (Wales) Regulations 2017

The SIs typically state that an “impact assessment has not been prepared for this Order as no impact on businesses or civil society organisations is foreseen. The impact on the public sector is minimal.” Privacy Impact Assessments are either not done, not published, or refused via FOI.

Ever expanding national databases of names

Our data are not always used for the purposes we expect in practice, or what Ministers tell us they will be used for.

Last year the government added nationality to the school census in England, and snuck the change in law through Parliament in the summer holidays (SI 808/2016). Although the Department for Education conceded after public pressure that “these data will not be passed to the Home Office,” the intention to hand over “nationality (once collected)” for immigration purposes was very real. The Department still hands over children’s names and addresses every month.

That SI should have been a warning, not a process model to repeat.

From January, thanks to yet another law rushed through without debate (SI 807/2017), teen pregnancy, young offender and mental health labels will be added to children’s records for life in England’s National Pupil Database. These are on a named basis, and highly sensitive. Data from the National Pupil Database, including special educational needs (SEN) data, are passed on to third parties for a broad range of purposes, used across government in the Troubled Families programme, shared with the National Citizen Service, and stored forever, on a named basis, all without pupils’ consent or parents’ knowledge. Without a change in policy, the young offender and pregnancy labels will be handed out too.

Our children’s privacy has been outsourced to third parties since 2012. Not anonymised data, but identifiable and confidential pupil-level data are handed out to commercial companies, charities and the press, hundreds of times a year, without consent.

Near-identical wording to that used in 2012 to change the law in England reappears in the new SI for student data in Wales.

The Welsh Government introduced regulations for a new student database of names, dates of birth and ethnicity, home address including postcode, plus exam results. The third parties listed who will be given access to the data without asking for students’ consent include the Student Loans Company and “persons who, for the purpose of promoting the education or well-being of students in Wales, require the information for that purpose”, in SI No. 886, the Education (Student Information) (Wales) Regulations 2017 (W. 214).

The consultation was conflated with destinations data, and while it all sounds as if it is for the right reasons, the SI is broad on purposes and prescribed persons. It received 10 responses.

Separately, a 2017 consultation on the staff data collection received 34 responses about building a national database of teachers, including names, dates of birth, National Insurance numbers, ethnicity, disability, level of Welsh language skills, training, salary and more. Unions and the Information Commissioner’s Office both asked basic questions in the consultation that remain unanswered, including who will have access. It’s now law, thanks to SL(5)128 – The Education (Supply of Information about the School Workforce) (Wales) Regulations 2017. Those questions are still open.

While I have been assured this weekend in writing that these data will not be used for commercial purposes or immigration enforcement, any meaningful safeguards are missing.

More failings on fairness

Where are the communications to staff, students and parents? What oversight will there be? Will a register of uses be published? And why does government get to decide, without debate, that our fundamental right to privacy can be overwritten by a few lines of law? What protections will pupils, students and staff have in future over how these data will be used, and over uses expanded for other things?

Scope creep is an ever present threat. In 2002 MPs were assured on the changes to the “Central Pupil Database”, that the Department for Education had no interest in the identity of individual pupils.

But come 2017 and the Department for Education has become the Department for Deportation.

Children’s names are used to match records in an agreement with the Home Office handing over up to 1,500 school pupils’ details a month. The plan was that parliament and the public should never know.

This is not what people expect or find reasonable. In 2015 UCAS had 37,000 students respond to an Applicant Data Survey. 62% of applicants think sharing their personal data for research is a good thing, and 64% see personal benefits in data sharing.  But over 90% of applicants say they should be asked first, regardless of whether their data is to be used for research, or other things. This SI takes away their right to control their data and their digital identity.

It’s not in young people’s best interests to be made more digitally disempowered and lose control over their digital identity. The GDPR requires data privacy by design. This approach should be binned.

Meanwhile, the Digital Economy Act codes of practice talk about fair and lawful processing as if it is a real process that actually happens.

That gap between words on paper and reality is a care.data-style catastrophe waiting to happen across every sector of public data and government. When will the public be told how data are used?

Better data must be fairer and safer in the future

The new UK Data Protection Bill is in Parliament right now, and its wording will matter. Safe data, transparent use, and independent oversight are not empty slogans to sling into the debate.

They must shape practical safeguards to prevent there being no course of redress if you are slung into a Border Force van at dawn, your bank account is frozen, or you get a 30 days notice-to-leave letter all by mistake.

To ensure our public [personal] data are used well, we need to trust why they are collected and see how they are used. But instead the government has drafted its own get-out-of-jail-free card to remove all our data protection rights to know, in the name of immigration investigation and enforcement, and other open-ended public interest exemptions.

The pursuit of individuals and their rights under an anti-immigration rhetoric without evidence of narrow case need, in addition to all the immigration law we have, is not the public interest, but ideology.

If these exemptions become law, every one of us loses the right to ask where our data came from, or why they were used for that purpose, and loses any course of redress.

The Digital Economy Act removed some of the infrastructure protections between Departments for data sharing. These clauses will remove our rights to know where and why our data have been passed around between them.

These lines are not just words on a page. They will have real effects on real people’s lives. These new databases are lists of names and addresses, or attach labels to our identities that last a lifetime.

Even the advocates in favour of the Database State know that if we want to have good public services, their data use must be secure and trustworthy, and we have to be able to trust staff with our data.

As the Committee sits this week to review the bill line by line, the Lords must make sure common sense sees off the scattering of substantial public interest and immigration exemptions in the Data Protection Bill. Excessive exemptions need to be removed, not our rights.

Otherwise we can kiss goodbye to the UK as a world leader in tech that uses our personal data, or in research that uses public data. Because if the safeguards are weak, the commercial players who get it wrong in trials of selling patient data, or who try to skip around the regulatory landscape asking to be treated better than everyone else and fail to comply with Data Protection law, or a government driven to chasing children out of education, do not just damage their own reputation or the potential of innovation for all. They damage public trust and harm all data users.

Clause 15 leaves any future change open-ended by Statutory Instrument. We can already see how SIs like these are used to create new national databases that can pop up at any time, without clear evidence of necessity, and without chance for proper scrutiny. We already see how data can be used beyond reasonable expectations.

If we don’t speak out for our data privacy, the next time they want a list of names, they won’t need to ask. They’ll already know.


“First they came…” refers to the poem written by German Lutheran pastor Martin Niemöller (1892–1984).

The Future of Data in Public Life

What it means to be human is going to be different. That was the last word from a panel of four excellent speakers, and the sparkling wit and charm of chair Timandra Harkness, at tonight’s Turing Institute event on the future of data, hosted at the British Library.

The first speaker, Bernie Hogan, of the Oxford Internet Institute, spoke of Facebook’s emotion experiment, and the challenges of commercial companies’ ownership and concentrations of knowledge, as well as their decisions controlling what content you get to see.

He also explained simply what an API is, in human terms. It is like a plug in a socket: instead of electricity you get a flow of data, but the data controller decides which data can come out of the socket.
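
To make his analogy concrete, here is a minimal, hypothetical sketch in Python of a controller-gated “socket”. The record, the exposed-field policy and the api_response function are all invented for illustration, not any real platform’s API; the point is simply that the same socket can release more or fewer fields, decided entirely by the data controller.

    # A plug-and-socket sketch: the "socket" is an API endpoint, and the data
    # controller's policy decides which fields are allowed to flow out of it.
    # Everything here is hypothetical and for illustration only.

    FULL_RECORD = {
        "user_id": "u123",
        "public_posts": ["hello world"],
        "friends_list": ["u456", "u789"],
        "private_messages": ["..."],              # held by the platform, never exposed
        "ad_profile": {"interests": ["cycling"]},
    }

    # The controller's policy: which fields each API version will release.
    EXPOSED_FIELDS = {
        "v1": {"user_id", "public_posts", "friends_list"},
        "v2": {"user_id", "public_posts"},        # friends_list withdrawn after a policy change
    }

    def api_response(record: dict, version: str) -> dict:
        """Return only the fields the data controller permits for this API version."""
        allowed = EXPOSED_FIELDS.get(version, set())
        return {k: v for k, v in record.items() if k in allowed}

    print(api_response(FULL_RECORD, "v1"))  # a researcher plugged into v1 still sees the friends list
    print(api_response(FULL_RECORD, "v2"))  # the same socket, a narrower flow

The researcher plugged into the socket has no say when the flow narrows from v1 to v2, which is exactly the concentration of control Hogan was describing.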

And he brilliantly brought in a thought experiment: what would it mean to be able to go back in time to the Nuremberg trials, and regulate not only medical ethics, but also the data ethics of indirect and computational use of information? How would it affect today’s thinking on AI and machine learning, and where we are now?

“Available does not mean accessible, transparent does not mean accountable”

Charles from the Bureau of Investigative Journalism, who had also worked for Trinity Mirror using data analytics, introduced some of the issues that large datasets pose for the public.

  • People rarely have the means to do any analytics well.
  • Even if open data are available, they are not necessarily accessible, due to the volume of data, the constraints of common software (such as Excel), and time constraints.
  • Without the facts they cannot go see a [parliamentary] representative or community group to try and solve the problem.
  • Local journalists often have targets for the number of stories they need to write, and target number of Internet views/hits to meet.

Putting data out there is transparency, but it is not accountability if we cannot turn information into knowledge that can benefit the public.
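
As a rough illustration of that gap between availability and accessibility, here is a minimal sketch in Python. The file name and column names are hypothetical, but a few lines like these are what it takes to summarise an open dataset too large for a spreadsheet, and those few lines are exactly the means most people, and most time-pressed local journalists, do not have.

    # Hypothetical example: totalling an open local-spending CSV that is far too
    # large to open comfortably in a spreadsheet. File and column names are invented.

    import csv
    from collections import Counter

    def spend_by_council(path: str) -> Counter:
        """Stream the CSV row by row and total spending per local authority."""
        totals = Counter()
        with open(path, newline="", encoding="utf-8") as f:
            for row in csv.DictReader(f):
                totals[row["local_authority"]] += float(row["amount_gbp"])
        return totals

    if __name__ == "__main__":
        # Excel stops at 1,048,576 rows; streaming row by row has no such limit.
        for council, total in spend_by_council("open_spending_2017.csv").most_common(10):
            print(f"{council}: £{total:,.0f}")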

“Trust is like personal privacy. Once lost, it is very hard to restore.”

Jonathan Bamford, Head of Parliamentary and Government Affairs at the ICO, took us back to why we need to control data at all. Democracy. Fairness. The balance between people’s rights, like privacy and Freedom-of-Information, and the power of data holders. The awareness that the power of authorities and companies will affect the lives of ordinary citizens. And he said that even early on there was a feeling that there was a need to regulate who knows what about us.

The third generation of data protection law, he said, is now more important than ever, to manage a whole new era of technology and uses of data that did not exist when previous laws were made.

But, he said, the principles stand true today. Don’t be unfair. Use data for the purposes people expect. Security of data matters. As do rights to see the data people hold about us.  Make sure data are relevant, accurate, necessary and kept for a sensible amount of time.

And even if we think that technology is changing, he argued, the principles will stand, and organisations need to consider these principles before they do things, considering privacy as a fundamental human right by default, and data protection by design.

After all, we should remember the Information Commissioner herself recently said,

“privacy does not have to be the price we pay for innovation. The two can sit side by side. They must sit side by side.

It’s not always an easy partnership and, like most relationships, a lot of energy and effort is needed to make it work. But that’s what the law requires and it’s what the public expects.”

“We must not forget, evil people want to do bad things. AI needs to be audited.”

Joanna J. Bryson was brilliant in her multifaceted talk, summing up how data will affect our lives. She explained how implicit biases work, how we reason and make decisions, and how some of the ways we think show up in Internet searches. She showed, in practical ways, how machine learning is shaping our future in ways we cannot see. And she said that the claim from firms that they do these things fairly and openly, and that regulation no longer fits new tech, “is just hoo-hah”.

She talked about the exciting possibilities and good uses of data, but that, “we must not forget, evil people want to do bad things. AI needs to be audited.” She summed up: we will use data to predict ourselves. And she said:

“What it means to be human is going to be different.”

That is perhaps the crux of this debate. How do data and machine learning and its mining of massive datasets, and uses for ‘prediction’, affect us as individual human beings, and our humanity?

The last audience question addressed inequality. Solutions like transparency, subject access, accountability, and understanding biases and how we are used, will never be accessible to all. It needs a far greater digital understanding across all levels of society. How can society both benefit from and be involved in the future of data in public life? The conclusion was that we need more faith in public institutions working for people at scale.

But what happens when those institutions let people down, at scale?

And some institutions do let us down. Such as over plans for how our NHS health data will be used. Or when our data are commercialised without consent, breaking data protection law. Why do 23 million people not know how their education data are used? The government itself does not use our data in ways we expect, at scale. School children’s data used in immigration enforcement fails to be fair, is not the purpose for which it was collected, and causes harm and distress when it is used in direct interventions including “to effect removal from the UK” and to “create a hostile environment.” There can be a lack of commitment to independent oversight in practice, compared to what is promised by the State. Or no oversight at all after data are released. And ethics among researchers using data are inconsistent.

The debate was less about the Future of Data in Public Life,  and much more about how big data affects our personal lives. Most of the discussion was around how we understand the use of our personal information by companies and institutions, and how will we ensure democracy, fairness and equality in future.

One question from an audience member went unanswered: how do we protect ourselves from the harms we cannot see, or protect the most vulnerable who are least able to protect themselves?

“How can we future proof data protection legislation and make sure it keeps up with innovation?”

That audience question is timely given the new Data Protection Bill. But what legislation means in practice, I am learning rapidly, can be very different from what is written down in law.

One additional tool in data privacy and rights legislation is up for discussion, right now,  in the UK. If it matters to you, take action.

NGOs could be enabled to make complaints on behalf of the public under article 80 of the General Data Protection Regulation (GDPR). However, the government has excluded that right from the draft UK Data Protection Bill launched last week.

“Paragraph 53 omits from Article 80, representation of data subjects where provided for by Member State law, from paragraph 1 and paragraph 2” [Data Protection Bill Explanatory Notes, paragraph 681, p84/112]. Article 80(2) gives Member States the option to provide for NGOs to take action independently on behalf of the many people who may have been affected.

If you want that right, a right others will be getting in other countries in the EU, then take action. Call your MP or write to them. Ask for Article 80, the right to representation, in UK law. We need to ensure that our human rights continue to be enacted and enforceable to the maximum, if “what it means to be human is going to be different.”

For the Future of Data has never been more personal.

Data Protection Bill 2017: summary of source links

The Data Protection Bill [Exemptions from GDPR] was introduced to the House of Lords on 13 September 2017
*current status April 6, 2018* Report Stage House of Commons — dates, to be announced
Debates

Dates for all stages of the passage of the Bill, including links to the debates.

EU GDPR Progress Overviews

Updates of GDPR age of consent mapping: Better Internet for Kids

Bird and Bird GDPR Tracker [Shows how and where GDPR has been supplemented locally, highlighting where Member States have taken the opportunities available in the law for national variation.]

ISiCo Tracker (Site in German language) with links.

UK Data Protection Bill Overview
  • Data Protection Bill Explanatory Notes [PDF], 1.2MB, 112 pages
  • Data Protection Bill Overview Factsheet [PDF], 229KB, 4 pages
  • Data Protection Bill Impact Assessment [PDF], 123KB, 5 pages
The General Data Protection Regulation

The General Data Protection Regulation [PDF] 959KB, 88 pages

Related Factsheets
  • General Processing Factsheet, [PDF], 141KB, 3 pages
  • Law Enforcement Data Processing Factsheet [PDF], 226KB, 3 pages
  • National Security Data Processing Factsheet [PDF], 231KB, 4 pages
These parts of the bill concern the function of the Information Commissioner and her powers of enforcement
  • Information Commissioner and Enforcement Factsheet [PDF] 223KB, 4 pages
  • Data sharing code of practice [PDF]
GDPR possible derogations

Source credit Amberhawk: Chris Pounder

Member State law can allow modifications to Articles 4(7), 4(9),  6(2), 6(3)(b), 6(4),  8(1), 8(3), 9(2)(a), 9(2)(b), 9(2)(g), 9(2)(h), 9(2)(i), 9(2)(j), 9(3), 9(4),  10,  14(5)(b), 14(5)(c), 14(5)(d),  17(1)(e), 17(3)(b), 17(3)(d), 22(2)(b),  23(1)(e),  26(1),  28(3), 28(3)(a), 28(3)(g), 28(3)(h), 28(4),  29,  32(4),  35(10), 36(5),  37(4),  38(5),  49(1)(g), 49(4), 49(5),  53(1), 53(3),  54(1), 54(2),  58(1)(f), 58(2), 58(3), 58(4), 58(5),  59,  61(4)(b),  62(3),  80,  83(5)(d), 83(7), 83(8),  85,  86,  87,  88,  89,  and 90 of the GDPR.

Other relevant significant connected legislation
  • The Police and Crime Directive [web link] 
  • EU Charter of Fundamental Rights – European Commission [link]
  • The proposed Regulation on Privacy and Electronic Communications [web link]
  • Draft modernised convention for the protection of individuals with regard to the processing of personal data (convention 108)
Data Protection Bill Statement of Intent
  • DCMS Statement of Intent [PDF] 229KB, 4 pages
  • Letter to Stakeholders [PDF] 184KB, 2 pages 7 Aug 2017
Other links on derogations and data processing
  • On Adequacy: Data transfers between the EU and UK post Brexit? Andrew D. Murray Article [link]
  • Two Birds [web link]
  • ICO legal basis for processing and children [link]
  • Public authorities under the Freedom of Information Act (ICO) Public authorities under FOIA 120160901 Version: 2.2 [link] 
  • ICO information for education [link]

Blogs on key issues [links in date of post]

  • Amberhawk
    • DP Bill’s new immigration exemption can put EU citizens seeking a right to remain at considerable disadvantage [09.10] re: Schedule 2, paragraph 4, new Immigration exemption.
    • On Adequacy:  Draconian powers in EU Withdrawal Bill can negate new Data Protection law [13.09]
    • Queen’s Speech, and the promised “Data Protection (Exemptions from GDPR) Bill [29.06]
  • defenddigitalme
    • Response to the Data Protection Bill debate and Green Paper on Online Strategy [11.10.2017]
  • Jon Baines
    • Serious DCMS error about consent data protection [11.08]
  • Eoin O’Dell
    • The UK’s Data Protection Bill 2017: repeals and compensation – updated: On DCMS legislating for Art 82 GDPR. [14.09]

Data Protection Bill Consultation: General Data Protection Regulation Call for Views on exemptions
  • New Data Protection Bill: Our planned reforms [PDF] 952KB, 30 pages
  • London Economics: Research and analysis to quantify benefits arising from personal data rights under the GDPR [PDF] 3.76MB 189 pages
  • ICO response to DCMS [link]
  • ESRC joint submissions on EU General Data Protection Regulation in the UK – Wellcome led multi org submission plus submission from British Academy / Erdos [link]
  • defenddigitalme response to the DCMS [link]
Minister for Digital Matt Hancock’s keynote address to the UK Internet Governance Forum, 13 September [link].

“…the Data Protection Bill, which will bring our data protection regime into the twenty first century, giving citizens more sovereignty over their data, and greater penalties for those who break the rules.

“With AI and machine learning, data use is moving fast. Good use of data isn’t just about complying with the regulations, it’s about the ethical use of data too.

“So good governance of data isn’t just about legislation – as important as that is – it’s also about establishing ethical norms and boundaries, as a society.  And this is something our Digital Charter will address too.”

Media links

14.09 BBC UK proposes exemptions to Data Protection Bill


Edits:

11.10.2017 to add links to the Second Reading in the House of Lords

The Queen’s Speech, Information Society Services and GDPR

The Queen’s Speech promised new laws to ensure that the United Kingdom retains its world-class regime protecting personal data. And the government proposes a new digital charter to make the United Kingdom the safest place to be online for children.

Improving online safety for children should mean one thing. Children should be able to use online services without being used by them and by the people and organisations behind them. It should mean that their rights to be heard are prioritised in decisions about them.

As Sir Tim Berners-Lee is reported as saying, there is a need to work with companies to put “a fair level of data control back in the hands of people“. He rightly points out that today terms and conditions are “all or nothing”.

There is a gap in these discussions that we fail to address when we think of consent to terms and conditions, or of “handing over data”. It is the assumption that these are, and can always be, conscious acts.

For children, the question of whether accepting Ts&Cs gives them control, and whether that control is meaningful, becomes even more moot. What are they agreeing to? Younger children cannot give free and informed consent. After all, most privacy policies standardly include phrases such as, “If we sell all or a portion of our business, we may transfer all of your information, including personal information, to the successor organization,” which means that “accepting” a privacy policy today is effectively a blank cheque for anything tomorrow.

The GDPR requires terms and conditions to be laid out in policies that a child can understand.

The current approach to legislation around children and the Internet is heavily weighted towards protection from seen threats. The threats we need to give more attention to, are those unseen.

By 2024 more than 50% of home Internet traffic will be used by appliances and devices, rather than just for communication and entertainment…The IoT raises huge questions on privacy and security, that have to be addressed by government, corporations and consumers. (WEF, 2017)

Our lives, as measured in our behaviours and opinions, purchases and likes, are connected by trillions of sensors. My parents may have described using the Internet as going online. Today’s online world no longer means our time is spent ‘on the computer’; it means being online, all day every day. Instead of going to a desk and booting up through a long phone cable, we have wireless computers in our pockets and in our homes, with functionality built in to enable us to do other things: make a phone call, make toast, and play. In a smart city, surrounded by sensors under pavements and in buildings, and cameras tracking everywhere we go, we are living ever more inside an overarching network of cloud computers that store our data. And from all that data decisions are made: which adverts to show us, and on which network sites; what we get offered and what we do not; and our behaviours and our conscious decision-making may be nudged quite invisibly.

Data about us, whether uniquely identifiable or not, are all too often collected passively: an IP address here, a linked sign-in there that extracts a friends list, and some providers decide that we either hand it over or do not use the thing at all. It’s part of the deal. We get the service; they get to trade our identity, like Top Trumps, behind the scenes. But we often don’t see it, and under GDPR consent should not be bundled into the contract as a requirement. That is, ‘agree or don’t get the service’ is not an option.

From May 25, 2018 there will be special “conditions applicable to child’s consent in relation to information society services” in Data Protection law, applicable to the collection of their data.

As yet, we have not had a debate in the UK about what that means in concrete terms, and if we do not have one soon, we risk it becoming an afterthought that harms more than it helps protect children’s privacy, and therefore their digital identity.

I think of five things needed by policy shapers to tackle it:

  • In-depth understanding of what ‘online’ and the Internet mean
  • Consistent understanding of what threat models and risks are connected to personal data, which today are underestimated
  • A grasp of why data privacy training is vital to safeguarding
  • Confronting the idea that user regulation as a stand-alone step will create a better online experience for users, when we know that perceived problems are created by providers or other site users
  • An end to siloed thinking that fails to be forward thinking or to join the dots of tactics across Departments into a cohesive, inclusive strategy

If the government’s new “major new drive on internet safety” involves the world’s largest technology companies in order to make the UK the “safest place in the world for young people to go online,” then we must also ensure that these strategies and papers join things up. Above all, a technical knowledge of how the Internet works needs to join the dots of risks and benefits, to form a strategy that will actually make children safe and skilled, and see into their future.

When it comes to children, there is a further question over consent and parental spyware. Various walk-to-school apps, lauded by the former Secretary of State two years running, use spyware and can be used without a child’s consent. Guardian Gallery, which can scan for nudity in photos on any phone the ‘parent’ phone holder has access to install it on, can be made invisible on the ‘child’ phone. Imagine this in coercive relationships.

If these technologies and the online environment are not correctly assessed against “online safety” threat models for all parts of our population, then they fail to address the risk for the most vulnerable, who need that protection most.

What will the GDPR really mean for online safety improvement? What will it define as online services for remuneration in the IoT? And who will be considered as children, “targeted at” or “offered to”?

An active decision is required in the UK. Will 16 remain the default age needed for consent to access Information Society Services, or will we adopt 13, which needs a legal change?

As banal as these questions sound, they need close attention and clarity between now and May 25, 2018, if the UK is to be GDPR-ready and providers of online services are to know whom they must treat as a child, and how they should handle Internet access, participation and age [parental] verification.

How will the “controller” make “reasonable efforts to verify in such cases that consent is given or authorised by the holder of parental responsibility over the child”, “taking into consideration available technology”?
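
To make that question concrete, here is a minimal sketch in Python of the decision an online service faces under GDPR Article 8. The threshold constant, the function and its flag are hypothetical and for illustration only; how a controller would actually verify parental responsibility, “taking into consideration available technology”, is precisely what remains open.

    # Hypothetical sketch only: the Article 8 decision, not a real implementation
    # of age or parental verification.

    DIGITAL_AGE_OF_CONSENT = 13  # the UK choice under discussion: 13, or the GDPR default of 16

    def can_rely_on_consent(user_age: int, parental_authorisation_verified: bool) -> bool:
        """Can the controller rely on consent as the lawful basis for this young person?"""
        if user_age >= DIGITAL_AGE_OF_CONSENT:
            # At or above the threshold, the young person can consent for themselves.
            return True
        # Below the threshold, consent must be given or authorised by a holder of
        # parental responsibility, and the controller must make "reasonable efforts"
        # to verify that. This flag stands in for whatever verification mechanism
        # (email round-trip, payment card check, identity scheme) is chosen.
        return parental_authorisation_verified

    print(can_rely_on_consent(15, parental_authorisation_verified=False))  # True with a threshold of 13
    print(can_rely_on_consent(11, parental_authorisation_verified=False))  # False: verification needed

Changing that single constant from 16 to 13 changes which teenagers fall on either side of the line, which is why the derogation decision matters so much to every provider of an information society service.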

These are fundamental questions of what the Internet is and means to people today. And if the current government approach to security is anything to go by, safety will not mean what we think it will mean.

It will matter how these plans join up. Age verification was not being considered in UK law in relation to how we would derogate from the GDPR, even as late as October 2016, despite age verification requirements already being in the Digital Economy Bill. It shows a lack of joined-up digital thinking across our government and needs addressing with urgency to get into the next Parliamentary round.

In recent draft legislation I have yet to see the UK government address Internet rights and safety for young people as anything other than a protection issue, treating the online space in the same way as offline, irl, focused on stranger danger and sexting.

The UK Digital Strategy commits to the implementation of the General Data Protection Regulation by May 2018, and frames it as a business issue, labelling data as “a global commodity”. As such, its handling is framed solely as a requirement needed to ensure “that our businesses can continue to compete and communicate effectively around the world” and that adoption “will ensure a shared and higher standard of protection for consumers and their data.”

The Digital Economy Bill, despite being a perfect vehicle for this, failed to take on children’s rights, and in particular the GDPR’s requirements on consent, at all. It was clear that if we were to do any future digital transactions we would need to level up to the GDPR, not drop to the lowest common denominator between it and existing laws.

It was utterly ignored. So were children’s rights to have their own views heard in the consultation to comment on the GDPR derogations for children, with little chance for involvement from young people’s organisations, and less than a month to respond.

We must now get this right in any new Digital Strategy and bill in the coming parliament.