
Ensuring people have a say in future data governance

Based on a talk prepared for an event in Parliament on Monday 5th December, 17:00-19:00, hosted by Connected By Data and chaired by Lord Tim Clement-Jones, focusing on the Data Protection and Digital Information Bill: “Ensuring people have a say in future data governance”.

Some reflections on data in schools: (a) general issues; (b) the direction of travel the Government is going in; and (c) what should happen, in the Bill or more widely.

Following Professor Sonia Livingstone, who focussed primarily on the issues connected with edTech, I focussed on the historical and political context of where we are today: on ‘having a say’ in education data and its processing in, across, and out of, the public sector.


What should be different with or without this Bill?

Since I ran out of time yesterday I’m going to put first what I didn’t get around to: the key conclusions that point to what is possible with or without new Data Protection law. We should be better at enabling the realisation of existing data rights in the education sector today. The state and extended services could build tools for schools to help them act as controllers, and for children to realise rights, such as a PEGE (a personalised exam grade explainer to show exam candidates what data was used to calculate their grade and how). Data usage reports should be made available at least annually from schools to help families understand what data about their children has gone where; and methods that enable the child or family to correct errors or exercise a Right to Object should be mandatory in schools’ information management systems. Supplier standards on accuracy and error notifications should be made explicit and statutory, and supplier service level agreements should be affected by repeated failures.
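As a minimal sketch of what such a data usage report could contain (the fields, names and layout here are assumptions for illustration, not an existing DfE or supplier specification), it need be little more than a structured log of disclosures, rendered so a family can read it:

```python
from dataclasses import dataclass
from datetime import date

@dataclass
class Disclosure:
    """One onward share of a pupil's data, as a school or other controller might log it."""
    dataset: str        # e.g. "autumn school census return"
    recipient: str      # who received the data
    purpose: str        # stated purpose of the share
    lawful_basis: str   # e.g. "public task", "consent"
    shared_on: date

def annual_report(pupil_name: str, disclosures: list[Disclosure]) -> str:
    """Render a plain-text data usage report that a family could actually read."""
    lines = [f"Data usage report for {pupil_name}"]
    for d in sorted(disclosures, key=lambda d: d.shared_on):
        lines.append(f"  {d.shared_on}: {d.dataset} -> {d.recipient} "
                     f"({d.purpose}; lawful basis: {d.lawful_basis})")
    return "\n".join(lines)

print(annual_report("Pupil A", [
    Disclosure("autumn school census", "Department for Education",
               "national statistics", "public task", date(2022, 10, 6)),
]))
```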

Where is the change needed to create the social license for today’s practice, even before we look to the future?

“Ensuring people have a say in future data governance”. There has been a lot of asking lots of people for a say in the last decade. When asked, the majority of people generally want the same things, both those who are willing and those less willing to have personal data about them re-used that was collected for administrative purposes in the public sector: to be told what data is collected for and how it is used; to opt in to re-use; to be able to control distribution; and to have protections for redress and against misuse strengthened in legislation.

Read Doteveryone’s public attitudes work. Or the Ipsos MORI polls or work by Wellcome. (see below). Or even the care.data summaries.

The red lines in the “Dialogues on Data” report, from workshops carried out across different devolved regions of the UK for the 2013 ADRN, remain valid today (the workshops concerned the reuse of de-identified, linked public administrative datasets by qualified researchers in safe settings, not even raw identifying data), in particular in relation to:

  • Creating large databases containing many variables/data from a large number of public sector sources
  • Allowing administrative data to be linked with business data
  • Linking of passively collected administrative data, in particular geo-location data

“All of the above were seen as having potential privacy implications or allowing the possibility of reidentification of individuals within datasets. The other ‘red-line’ for some participants was allowing researchers for private companies to access data, either to deliver a public service or in order to make profit. Trust in private companies’ motivations were low.”

Much of this reflects what children and young people say as well. RAENG (2010) carried out engagement work with children on health data, Privacy and Prejudice: young people’s views on the development and use of Electronic Patient Records. They are very clear about wanting to keep their medical details under their own control and away from the ‘wrong hands’, which include potential employers, commercial companies and parents.

Our own small-scale engagement work with a youth group aged 14-25 was published in 2020 in The Words We Use in Data Policy: Putting People Back in the Picture, and reflected what the Office for the Regulation of National Statistics went on to publish in their own 2022 report, Visibility, Vulnerability and Voice (a framework to explore whether current statistics are helping society to understand the experiences of children and young people in all aspects of their lives). Young people worry about misrepresentation, about the data being used, in place of conversations, to take decisions that affect their lives, and about the power imbalance it creates without practical routes for complaint or redress. We all agree children’s voice is left out of the debate on data about them.

Parents are left out too. Defenddigitalme commissioned a parental survey via Survation (2018): under 50% felt they had sufficient control of their child’s digital footprint, and two-thirds had not heard of the National Pupil Database or its commercial reuse.

So why is it that the public voice, loud and clear, is ignored in public policy and ignored in the drafting of the Data Protection and Digital Information Bill?

When it comes to education, debate should start with children’s and family rights in education, and education policy, not about data produced as its by-product.

The Universal Declaration of Human Rights, Article 26, grafts a parent’s right onto the child’s right to education, the right to choose the type of that education, and it defines the purposes of education:

“Education shall be directed to the full development of the human personality and to the strengthening of respect for human rights and fundamental freedoms. It shall promote understanding, tolerance and friendship among all nations, racial or religious groups, and shall further the activities of the United Nations for the maintenance of peace.” Becoming a set of data points for product development or research is not the reason children go to school and hand over their personal details in the admissions process.

The State of the current landscape
To realise change, we must accept the current state of play and current practice. This includes a backdrop of trying to manage data well in the perilous state of public infrastructure, shrinking legal services and legal aid for children, ever-shrinking educational services in and beyond mainstream education, staff shortages and retention issues, and the lack of ongoing training or suitable and sustainable IT infrastructure for staff and learners.

Current institutional guidance and national data policy in the field is poor and takes the perspective of the educational setting but not the person.

Three key problems run from the top down and across systems:

  • Data repurposing: e.g. SATs Key Stage 2 tests, which are supposed to be measures of school performance, not individual attainment, are re-used as risk indicators in Local Authority datasets to identify families for intervention, something the data was never designed for.
  • Vast amounts of data distribution and linkage with other data: policing, economic drivers (LEO), and broad Local Authority data linkage without consent, for purposes that exceed the original collection purposes parents are told about, and used, as in Kent or Camden, “for profiling the needs of the 38,000 families across the borough”, plus further automated decision-making.
  • Accuracy in education data is a big issue, in part because families never get to see the majority of data created about a child, much of which is opinion and not submitted by them. For example, the Welsh Government fulfilled a Subject Access Request from one parent concerned with their own child’s record, and ended up revealing that, thanks to a Capita SIMS coding error, every child in 2010 had been wrongly recorded as having been in care at some point in the past. Procurement processes should build penalties for systemic mistakes, and lessons learned like this, into service level agreements; instead we seem to allow the same issues to repeat over and over again.

What the DfE Does today

Government needs to embrace the fact that it can only get data right if it does the right thing. That includes policy that upholds the law by design, and it needs change in its own purposes and practice.

National pupil data is a bad example from the top down. The ICO’s 2019-20 audit of the Department for Education has not yet been published in full, but its findings included failings such as no Record of Processing Activity (ROPA), an inability to demonstrate compliance, and no fair processing. All of which will be undermined further by the Bill.

The Department for Education has been giving away 15 million people’s personal confidential data since 2012 and never told them. They know this. They choose to ignore it. And on top of that, they didn’t inform people who have been in school since then that Mr Gove changed the law. So now over 21 million people’s pupil records are being given away to companies and other third parties, for use in ways we do not expect, and they are misused too. In 2015, more secret data sharing began, with the Home Office. And another pilot in 2018, with the DWP.

Government wanted to change the law on education admin data in 2012, did so, and got it wrong. Education data alone is a sin bin of bad habits and a complete lack of public and professional engagement, before we even start to address data quality and accuracy, or backwards-looking policy built on bad historic data.

“The Commercial department do not have appropriate controls in place to protect personal data being processed on behalf of the DfE by data processors.” (ICO audit of the DfE, 2020)

Gambling companies ended up misusing access to learner records for over two years, as exposed in 2020 by journalists at the Sunday Times.

The government wanted nationality data collected by the Department for Education for the purposes of another department (the Home Office) and got it very wrong. People boycotted the collection until it was killed off, and the data was later destroyed.

Government changed the law on Higher Education in 2017 and got it wrong. Now third parties pass around named equality monitoring records, like religion, sexual orientation, and disability, and the data is stored forever on named national pupil records. The Department for Education (DfE) now holds sexual orientation data on almost 3.2 million people, and religious belief data on 3.7 million people.

After the summary findings of the ICO’s compulsory audit of the Department for Education were published, the question now is what the Department and government will do to address the 139 recommendations for improvement, over 60% of which were classified as urgent or high priority. Is the government intentional about change? We don’t think so at defenddigitalme, so we are, and we welcome any support for our legal challenge.

Before we write new national law we must recognise and consider UK inconsistency and differences across education

Existing frameworks, law, statutory guidance and recommendations need to be understood in the round: for example devolved education, including the age at which a child has capacity to undertake a contract in Scotland (16); the geographical application of the Protection of Freedoms Act 2012; and the Prevent duty since 2015 and its wider effects of profiling children in counter-terrorism, which reach beyond poor data protection and impacts on privacy (see the UN Special Rapporteur’s 2014 report on children’s rights and freedom of expression). A plethora of Council of Europe work is also applicable to education in the UK as a member state: guidelines on data protection, AI, human rights, the rule of law, and the role of education in the promotion of democratic citizenship and as a protection against authoritarian regimes and extreme nationalism.

The Bill itself
The fundamental principles of the GDPR and Data Protection law are undermined further, from an already weak starting point, since the 2018 Bill adopted exemptions in immigration and law enforcement that were not introduced by other countries.

  • The very definitions of personal and biometric data need close scrutiny.
  • Accountability is weakened (DPOs, DPIAs, prior consultation for high-risk processing, and ROPAs no longer necessary)
  • Purpose limitation is weakened (legitimate interests and additional conditions for LI)
  • Redress is missing (Children and routes for child justice)
  • Henry VIII powers on customer data and business data must go.
  • And of course it only covers the living. What about children’s data misuse that causes distress and harms to human dignity but is not strictly covered by UK Data Protection law, such as the children whose identities were used by undercover police in the SpyCops scandal? Recital 27 of the GDPR permits a possible change here.

Where are the Lessons Learned reflected in the Bill?

This Bill should be able to look at recent ICO enforcement action or judicial reviews to learn where and what is working, and not working, in data protection law. Lessons learned should be plentiful on public communications and fair processing, on the definitions of research, on discrimination, accuracy and bad data policy decisions. But where in the Bill are the lessons learned from health data sharing, from why the care.data programme ran into trouble and similar failures were repeated in the most recent GP patient data grab, or from Google DeepMind and the Royal Free? In policing, from the Met Police Gangs Matrix? In Home Affairs, from the judicial review launched to challenge the lawfulness of an algorithm used by the Home Office to process visa applications? Or in education, from the summer of 2020 exams fiasco?

The major data challenges that result from government policy are not about data at all, but about bad policy decisions, which invariably involve data because of ubiquitous digital-first policy, public administration, and the nature of digital record keeping. In education, examples include:

  • Partisan political agendas: e.g. the narrative on absence numbers makes no attempt to disaggregate the “expected” absence rate from anything on top, and presenting as fact the idea that 100,000 children have not returned to school “as a result of all of this” is badly misleading, to the point of being a lie.
  • Policy that ignores the law. The biggest driver of profiling children in the state education sector, despite the law that profiling children should not be routine, is the Progress 8 measure, about which Leckie and the late Harvey Goldstein (2017) concluded, in their work on the evolution of school league tables in England 1992-2016 (‘Contextual value-added’, ‘expected progress’ and ‘progress 8’), that “all these progress measures and school league tables more generally should be viewed with far more scepticism and interpreted far more cautiously than have often been to date.”

The Direction of Travel
Can any new consultation or debate on the changes promised in data protection reform ensure people have a say in future data governance, the topic for today, and what difference, if any, would it make?

Children’s voice, and the framing of children in the National Data Strategy, is woeful: children are projected either as victims or as potential criminals. That must change.

Data protection law has existed in much the same form as today since 1984. Yet scant attention is paid to it in ways that meet public expectations, fulfil parental and children’s expectations, or respect the basic principles of the law today. We have enabled technologies to enter classrooms in England without any grasp of scale or risks, in a way that even Scotland has not, with its Local Authority oversight and controls over procurement standards. Emerging technologies, tools that claim to be able to identify emotion and mood and use brain scanning, the adoption of e-proctoring, and mental health prediction apps, which are treated very differently from how they would be in the NHS Digital environment with its ethical oversight and quality standards to meet, are all in classrooms interfering with real children’s lives and development now, not in some far-off imagined future.

This goes beyond data protection into procurement, standards, safety, understanding pedagogy, behavioural influence, and policy design and digital strategy. It is, furthermore, naive to think this legislation, if it happens at all, is going to be the piece of law that promotes children’s rights when the others in play from the current government do not: the revision of the Human Rights Act, the recent PCSC Bill clauses on data sharing, and the widespread use of exemptions and excuses around data for immigration enforcement.

Conclusion
If policymakers who want more data usage treat people as producers of a commodity, and continue to ignore the public’s “say in future data governance”, then we’ll keep seeing the boycotts and the opt-outs, creating mistrust in government as well as in data conveners and controllers, and widening the data trust deficit. The culture must change in education and other departments.

Overall, we must reconcile the focus of the UK national data strategy with a rights-based governance framework, to move the conversation forward in ways that work for the economy and research, and with the human flourishing of our future generations at its heart. Education data plays a critical role in social, economic, democratic and even security policy today, and should be recognised as needing urgent and critical attention.


References:

Local Authority algorithms

The Data Justice Lab has researched how public services are increasingly automated and how government institutions at different levels are using data systems and AI. However, its latest report, Automating Public Services: Learning from Cancelled Systems, looks at another current development: the cancellation of automated decision-making systems (ADS) that did not fulfil their goals, led to serious harm, or met significant opposition through community mobilization, investigative reporting, or legal action. The report provides the first comprehensive overview of systems being cancelled across western democracies.

New Research Report: Learning from Cancelled Systems

The Children of Covid: Where are they now? #CPC22

At Conservative Party Conference (“CPC22”) yesterday, the CSJ Think Tank hosted an event called, The Children of Lockdown: Where are they now?

When the speakers were finished, and other questions had been asked, I had the opportunity to raise the following three points.

They matter to me because I am concerned that bad policy-making for children will come from the misleading narrative based on bad data. The data used in the discussion is bad data for a number of reasons, based on our research over the last 4 years at defenddigitalme, and previously as part of the Counting Children coalition with particular regard to the Schools Bill.

The first is a false fact that has often been bandied about over the last year in the media and in Parliamentary debate, and that the Rt Hon Sir Iain Duncan Smith MP repeated in opening the panel discussion: that 100,000 children have not returned to school, “as a result of all of this”.

Full Fact has sought to correct this misrepresentation by individuals and institutions in the public domain several times, including one year ago today, when a Sunday Times article published on 3 October 2021 claimed new figures showed “that between 95,000 and 135,000 children did not return to school in the autumn term”, credited to the Commission on Young Lives, a task force headed by the former Children’s Commissioner for England. Anne Longfield had then told Full Fact that on 16 September 2021, “the rate of absence was around 1.5 percentage points higher than would normally be expected in the autumn term pre-pandemic.”

Full Fact wrote, “This analysis attempts to highlight an estimated level of ‘unexplained absence’, and comes with a number of caveats—for example it is just one day’s data, and it does not record or estimate persistent absence.”

There was no attempt made in the CPC22 discussion to disaggregate the “expected” absence rate from anything on top, and presenting the idea as fact, that 100,000 children have not returned to school, “as a result of all of this”, is misleading.

Suggesting this causation for 100,000 children is wrong for two reasons. The first is that it ignores the number of children within that figure who were out of school before the pandemic, and the reasons for that. The CSJ’s own report, published in 2021, said that, “In the autumn term of 2019, i.e pre-Covid 60,244 pupils were labeled as severely absent.”

Whether or not it is the same children who were out of school before and afterwards also matters if causation is to be claimed. This named, pupil-level absence data is already available for every school child at national level on a termly basis, alongside the other personal details collected termly in the school census, among other collections.

Full Fact went on to say, “The Telegraph reported in April 2021 that more than 20,000 children had “fallen off” school registers when the Autumn 2020 term began. The Association of Directors of Children’s Services projected that, as of October 2020, more than 75,000 children were being educated at home. However, as explained above, this is not the same as being persistently absent.”

The second point I made yesterday was that the definition of persistent absence has changed three times since 2010, so that children are classified as persistently absent more quickly now, at 10% of sessions missed, than when the threshold was 20% or more.

(It’s also worth noting that data are inconsistent over time in another way too. The 2019 Guide to Absence Statistics draws attention to the fact that, “Year on year comparisons of local authority data may be affected by schools converting to academies.”)
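As a rough worked example of that second point (the term length and absence figures here are invented for illustration, not DfE data), the same child's absence record can fall on different sides of the old and new definitions:

```python
# Illustrative only: a 14-week term of 5 school days, 2 sessions per day.
possible_sessions = 14 * 5 * 2            # 140 possible sessions in the term

def persistently_absent(sessions_missed: int, threshold: float) -> bool:
    """True if the share of possible sessions missed meets the given threshold."""
    return sessions_missed / possible_sessions >= threshold

missed = 16                               # 8 school days missed across the term
print(persistently_absent(missed, 0.10))  # True  - counted under the current 10% definition
print(persistently_absent(missed, 0.20))  # False - not counted under the earlier 20% definition
```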

And third and finally, I pointed out where we have found a further problem in counting children correctly: Local Authorities do this in different ways. Some count each actual child once in the year in their data; some count each time a child changes status (e.g. a move from mainstream into Alternative Provision and then to Elective Home Education could see the same child counted three times in total, once in each dataset across the same year); and some count full-time-equivalent funded places (e.g. if five children each have one day a week outside mainstream education, they would be counted as only one single full-time child in total in the reported data).
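A toy sketch of those three counting conventions (children and placements invented for illustration) shows how the same five children can appear in reported totals as five, seven, or one:

```python
# Each record: (child, placements recorded across the year, days per week outside mainstream).
records = [
    ("child_1", ["mainstream", "alternative provision", "elective home education"], 1),
    ("child_2", ["alternative provision"], 1),
    ("child_3", ["alternative provision"], 1),
    ("child_4", ["alternative provision"], 1),
    ("child_5", ["alternative provision"], 1),
]

once_per_child    = len({child for child, _, _ in records})               # 5 actual children
per_status_change = sum(len(placements) for _, placements, _ in records)  # 7: child_1 counted 3 times
fte_funded_places = sum(days for _, _, days in records) / 5               # 1.0: five 1-day places = one FTE place

print(once_per_child, per_status_change, fte_funded_places)
```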

Put together, this all means not only that the counts are wrong, but also that the very idea of “ghost children”, who simply ‘disappear’ from school without anything known about them anywhere at all, is a fictitious and misleading presentation.

All schools (including academies and independent schools) must notify their local authority when they are about to remove a pupil’s name from the school admission register under any of the fifteen grounds listed in Regulation 8(1) a-n of the Education (Pupil Registration) (England) Regulations 2006. On top of that, children are recorded as Children Missing Education (“CME”) where the Local Authority decides a child is not in receipt of suitable education.

For those children, processing of personal data of children not in school by Local Authorities is already required under s.436A of the Education Act 1996, the duty to make arrangements to identify children not receiving education.

Research done as part of the Counting Children coalition with regard to the Schools Bill has found that every Local Authority that has replied to date (a 67% response rate to an FOI sent on July 5, 2022) upholds its statutory duty to record these children, who either leave state education or are found to be otherwise missing education. Every Local Authority has a record of these children, by name, together with much more detailed data.** The GB News journalist on the panel said she had taken her children out of school and the Local Authority had not contacted her. But as a home-educating audience member then pointed out, that does not mean the LA did not know about her decision, since it would already have her children’s details recorded. There is law in place already on what LAs must track. Whether, and how well, the LA is doing its job was beyond this discussion, but the suggestion that more law is needed to make them collect the same data as is already required is superfluous.

This is not only about the detail of context and nuance in the numbers and their debate; it substantially alters the understanding of the facts. It matters to get this correct, so that bad policy does not get made on the basis of bad data and a misunderstanding of conflated causes.

Despite this, in closing Iain Duncan Smith asked the attendees to go out from the meeting and evangelise about these issues. If they do so based on his selection of ‘facts’ they will spread misinformation.

At the event, I did not mention two further parts of this context that matter if policy makers and the public are to find solutions to what is no doubt an important series of problems, and that must not be manipulated to present them as if they are entirely a result of the pandemic; and not only the pandemic, but lockdowns specifically.

Historically, the main driver for absence is illness. In 2020/21, this was 2.1% across the full year. This was a reduction on the rates seen before the pandemic (2.5% in 2018/19).

A pupil on roll is identified as a persistent absentee if they miss 10% or more of their possible sessions (one school day has two sessions, morning and afternoon). 1.1% of pupil enrolments missed 50% or more of their possible sessions in 2020/21. Children with additional educational and health needs or disability have higher rates of absence. During Covid, the absence rate for pupils with an EHC plan was 13.1% across 2020/21.

“Authorised other reasons has risen to 0.9% from 0.3%, reflecting that vulnerable children were prioritised to continue attending school but where parents did not want their child to attend, schools were expected to authorise the absence.” (DfE data, academic year 2020/21)

While there were several references made by the panel to the impact of the pandemic on children’s poor mental health, no one mentioned the 70% cut in youth services’ funding over ten years, which has allowed CAMHS funding and service provision to wither and fail children well before 2020. The pandemic has exacerbated children’s pre-existing needs, which the government has not only failed to meet since, but has actively reduced provision for.

As someone with Swedish relatives, it was further frustrating to hear Sweden’s pandemic approach presented as comparable with the UK’s, and that in effect they managed it ‘better’. It seems absurd to me to compare the UK uncritically with a country with the population density of Sweden. But if we *are* going to make comparisons with other countries, they should come with a fuller understanding of context, all of their data, and caveats, if comparison is to be meaningful.

I was somewhat surprised that Iain Duncan Smith also failed to acknowledge, even once, that thousands of people in the UK have died and continue to die or have lasting effects as a result of and with COVID-19. According to the King’s Fund report, “Overall, the number of people who have died from Covid-19 to end-July 2022 is 180,000, about 1 in 8 of all deaths in England and Wales during the pandemic.” Furthermore, in England and Wales, “The pandemic has resulted in about 139,000 excess deaths”. “Among comparator high-income countries (other than the US), only Spain and Italy had higher rates of excess mortality in the pandemic to mid-2021 than the UK.” I believe that if we’re going to compare ‘lockdown success’ at all, we should look at the wider comparable data before making it. He might also have chosen to mention alongside this, the UK success story of research and discovery, and the NHS vaccination programme.

And no mention at all was made of the further context that, while much was made of the economic harm of the pandemic’s impact on children, “The Children of Lockdown” are also “The Children of Brexit”. It is non-partisan to point out this fact, and, I would suggest, disingenuous to leave it out entirely of any discussion of the reasons for, or impact of, the economic downturn in the UK in the last three years. In fact, the FT recently called it a “deafening silence”.

At defenddigitalme, we raised the problem of this inaccurate “counting” narrative numerous times including with MPs, members of the House of Lords in the Schools Bill debate as part of the Counting Children coalition, and in a letter to The Telegraph in March this year. More detail is here, in a blog from April.


Update May 23, 2023

Today I received the DfE-held figures for the number of children who leave an educational setting for an unknown onward destination, a section of the Common Transfer Files holding space, in effect a digital limbo after leaving an educational setting until the child is ‘claimed’ by the destination. It’s known as the Lost Pupils Database.

Furthermore, the DfE has published exploratory statistics on EHE (Elective Home Education) and ad hoc statistics on CME (Children Missing Education) too.

October 2022. More background:

The panel was chaired by the Rt Hon Sir Iain Duncan Smith MP and other speakers included Fraser Nelson, Editor of The Spectator Magazine; Kieron Boyle, Chief Executive Officer of Guy’s & St Thomas Foundation; the Rt Hon Robert Halfon MP, Education Select Committee Chair; and Mercy Muroki, Journalist at GB News.

We have previously offered to share our original research data and discuss it with the Department for Education, and repeated this offer to the panel to help correct the false facts. I look forward to it, in the hope they will take it up.

** Data collected in the record by Local Authorities when children are deregistered from state education (including to move to private school) may include a wide range of personal details, including as an example in Harrow: Family Name, Forename, Middle name, DOB, Unique Pupil Number (“UPN”), Former UPN, Unique Learner Number, Home Address (multi-field), Chosen surname, Chosen given name, NCY (year group), Gender, Ethnicity, Ethnicity source, Home Language, First Language, EAL (English as an additional language), Religion, Medical flag, Connexions Assent, School name, School start date, School end date, Enrol Status, Ground for Removal, Reason for leaving, Destination school, Exclusion reason, Exclusion start date, Exclusion end date, SEN Stage, SEN Needs, SEN History, Mode of travel, FSM History, Attendance, Student Service Family, Carer details, Carer address details, Carer contract details, Hearing Impairment And Visual Impairment, Education Psychology support, and Looked After status.

Policing thoughts, proactive technology, and the Online Safety Bill

“Former counter-terrorism police chief attacks Rishi Sunak’s Prevent plans”, reads a headline in today’s Guardian. Former counter-terrorism chief Sir Peter Fahy […] said: “The widening of Prevent could damage its credibility and reputation. It makes it more about people’s thoughts and opinions.” Fahy said: “The danger is the perception it creates that teachers and health workers are involved in state surveillance.”

This article leaves out that today’s reality is already far ahead of the proposals, or the perception. School children and staff are already surveilled in these ways. Not only is what people think, type, read or search for monitored online and offline in the digital environment, but copies may be collected and retained by companies, and interventions made.

The products don’t only permit monitoring of trends in aggregated data, in overviews of student activity, but also of the behaviours of individual students. And these can be deeply intrusive and sensitive when you are talking about self-harm, abuse, and terrorism.

(For more on the safety tech sector, often using AI in proactive monitoring, see my previous post (May 2021) The Rise of Safety Tech.)

Intrusion through inference and interventions

From 1 July 2015 all schools have been subject to the Prevent duty under section 26 of the Counter-Terrorism and Security Act 2015: in the exercise of their functions, to have “due regard to the need to prevent people from being drawn into terrorism”. While these products monitor far more than the remit of Prevent, many companies actively market online filtering, blocking and monitoring safety products as a way of meeting that duty in the digital environment. Such as: “Lightspeed Filter™ helps you meet all of the Prevent Duty’s online regulations…”

Despite there being no obligation to date to fulfil this duty through technology, some companies’ way of selling such tools could be interpreted as threatening schools that don’t use them. Like this example:

“Failure to comply with the requirements may result in intervention from the Prevent Oversight Board, prompt an Ofsted inspection or incur loss of funding.”

Such products may create and send real-time alerts to company or school staff when children attempt to reach sites or type “flagged words” related to radicalisation or extremism on any online platform.

Under the auspices of safeguarding-in-schools data sharing and web monitoring in the Prevent programme, children may be labelled with terrorism or extremism labels, data which may be passed on to others or stored outside the UK without their knowledge. The drift in what is considered significant has been from terrorism into the vaguer and broader terms of extremism and radicalisation; away from some assessment of intent and capability of action, into interception and interventions for potentially insignificant vulnerabilities and inferred assumptions of a disposition towards such ideas. This is not something that could potentially police thoughts, as Fahy suggested of Sunak’s plans. It is already doing so. Policing thoughts in the developing child, and holding them accountable in ways that are unforeseeable, is inappropriate and requires thorough investigation into its effects on children, including on mental health.

But it’s important to understand that these libraries of thousands of words, ever-changing and in multiple languages, and what the systems look for and flag, often claiming to do it using Artificial Intelligence, go far beyond Prevent. ‘Legal but harmful’ is their bread and butter: self-harm, harm to or from others.

While companies have no obligation to publish how the monitoring or flagging operates, what the words, phrases or blocked websites are, their error rates (positive and negative), or the effects on children or school staff and their behaviour as a result, these companies have a great deal of influence over what gets inferred from what children do online, and over who decides what to act on.

Why does it matter?

Schools have normalised the premise that the systems they introduce should monitor activity outside the school network and school hours, and that strangers or their companies’ automated systems should be involved in inferring or deciding what children are ‘up to’ before the school staff who know the children in front of them.

In a defenddigitalme report, The State of Data 2020, we included a case study on one company that has since been bought out, and bought again. As of August 2018, eSafe was monitoring approximately one million school children plus staff across the UK. This case study, which they used in their public marketing, raised all sorts of questions about professional confidentiality and school boundaries, personal privacy, ethics, and companies’ role and technical capability, as well as the lack of any safety tech accountability.

“A female student had been writing an emotionally charged letter to her Mum using Microsoft Word, in which she revealed she’d been raped. Despite the device used being offline, eSafe picked this up and alerted John and his care team who were able to quickly intervene.”

Their then CEO had told the House of Lords Communications Committee 2016 enquiry on Children and the Internet how the products are not only monitoring children in school or during school hours:

“Bearing in mind we are doing this throughout the year, the behaviours we detect are not confined to the school bell starting in the morning and ringing in the afternoon, clearly; it is 24/7 and it is every day of the year. Lots of our incidents are escalated through activity on evenings, weekends and school holidays.”

Similar products offer a feature that captures photos of users (pupils, while using the monitored device), described as “common across most solutions in the sector” by this company:

When a critical safeguarding keyword is copied, typed or searched for across the school network, schools can turn on NetSupport DNA’s webcams capture feature (this feature is turned-off by default) to capture an image of the user (not a recording) who has triggered the keyword.

How many webcam photos have been taken of children by school staff or others through those systems, for what purposes, and kept by whom? In the U.S. in 2010, Lower Merion School District, Philadelphia, settled a lawsuit over using laptop webcams to take photos of students. Thousands of photos had been taken, even at home, out of hours, without their knowledge.

Who decides what does and does not trigger interventions across different products? In the month of December 2017 alone, eSafe claims it added 2,254 words to its threat libraries.

Famously, Impero’s system even included the word “biscuit”, which they say is a term used to mean a gun. Their system was used by more than “half a million students and staff in the UK” in 2018. And students had better not talk about “taking a wonderful bath.” Currently there is no understanding or oversight of the accuracy of this kind of software, and black-box decision-making is often trusted without openness to human question or correction.
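To see why crude matching misfires like this, here is a deliberately naive sketch of keyword flagging; the word list and labels are invented for illustration, and real products use far larger, undisclosed libraries, often with AI claimed on top:

```python
# Invented, deliberately naive 'threat library': substring match with no context or intent.
FLAGGED_TERMS = {"biscuit": "gun slang", "bath": "self-harm"}

def flag(text: str) -> list[tuple[str, str]]:
    """Return (term, label) pairs found anywhere in the text."""
    lower = text.lower()
    return [(term, label) for term, label in FLAGGED_TERMS.items() if term in lower]

print(flag("Anyone want a biscuit with their tea?"))   # [('biscuit', 'gun slang')]
print(flag("I'm taking a wonderful bath tonight"))     # [('bath', 'self-harm')]
```

Each of these innocuous sentences would raise an alert, and nothing in the matching step records why, or gives the child a route to challenge the label.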

Aside from how this range of different tools works, there are very basic questions about whether such policies and tools help or harm children at all. The UN Special Rapporteur’s 2014 report on children’s rights and freedom of expression stated:

“The result of vague and broad definitions of harmful information, for example in determining how to set Internet filters, can prevent children from gaining access to information that can support them to make informed choices, including honest, objective and age-appropriate information about issues such as sex education and drug use. This may exacerbate rather than diminish children’s vulnerability to risk.” (2014)

U.S. safety tech creates harms

Today in the U.S., the CDT published a report on school monitoring systems there, many of which are also used over here. The report revealed that 13 percent of students knew someone who had been outed as a result of student-monitoring software. Another conclusion the CDT draws is that monitoring is used for discipline more often than for student safety.

We don’t have that same research for the UK, but we’ve seen IT staff openly admit to using the webcam feature to take photos of young boys who are “mucking about” on the school library computer.

The Online Safety Bill scales up problems like this

The Online Safety Bill seeks to expand how such ‘behaviour identification technology’ can be used outside schools.

“Proactive technology include content moderation technology, user profiling technology or behaviour identification technology which utilises artificial intelligence or machine learning.” (p151 Online Safety Bill, August 3, 2022)

The “proactive technology requirement” is as yet rather open-ended, left to OFCOM in Codes of Practice, but the scope creep of such AI-based tools has become ever more intrusive in education. ‘Legal but harmful’ is decided by companies, the IWF, and any number of opaque third parties whose processes and decision-making we know little about. It’s important not to conflate filtering and blocking, which control which ‘unsuitable’ websites can be accessed in schools, with monitoring and tracking of individual behaviours.

‘Technological developments that have the capacity to interfere with our freedom of thought fall clearly within the scope of “morally unacceptable harm”,’ according to Alegre (2017), and yet this individual interference is at the very core of school safeguarding tech and policy by design.

In 2018, the ‘lawful but harmful’ list of activities in the Online Harms White Paper was nearly identical to the terms used by school Safety Tech companies. The Bill now appears to be trying to create a new legitimate basis for these practices, more about underpinning a developing market than supporting children’s safety or rights.

Chilling speech is itself controlling content

While a lot of debate about the Bill has been about the free speech impacts of content removal, there has been less about what is unwritten: how it will operate to prevent speech and participation in the digital environment for children. The chilling effect of surveillance on access and participation online is well documented. Younger people and women are more likely to be negatively affected (Penney, 2017). The chilling effect on thought and opinion is worsened by these types of tools, which trigger an alert even when what is typed is quickly deleted or remains unsent or unshared. Thoughts are no longer private.

The ability to use end-to-end encryption on private messaging platforms is simply worked around by these kinds of tools, trading security for claims of children’s safety. Anything on screen may be read in the clear by some systems, even capturing passwords and bank details.

Graham Smith has written, “It may seem like overwrought hyperbole to suggest that the [Online Harms] Bill lays waste to several hundred years of fundamental procedural protections for speech. But consider that the presumption against prior restraint appeared in Blackstone’s Commentaries (1769). It endures today in human rights law. That presumption is overturned by legal duties that require proactive monitoring and removal before an independent tribunal has made any determination of illegality.”

More than this, there is no determination of illegality in legal but harmful activity. It’s opinion. The government is prone to argue that “nothing in the Bill says X…”, but you need to understand this context: such proactive behavioural monitoring tools work through threat, and the resultant chilling effect, to impose unwritten control. This Bill does not create a safer digital environment; it creates threat models for users and companies, to control how we think and behave.

What do children and parents think?

Young people’s own views that don’t fit the online harms narrative have been ignored by Westminster scrutiny Committees. A 2019 survey by the Australian eSafety Commissioner found that over half (57%) of child respondents were uncomfortable with background monitoring processes, and 43% were unsure about these tools’ effectiveness in ensuring online safety.

And what of the role of parents? Article 3(2) of the UNCRC says: “States Parties undertake to ensure the child such protection and care as is necessary for his or her wellbeing, taking into account the rights and duties of his or her parents, legal guardians, or other individuals legally responsible for him or her, and, to this end, shall take all appropriate legislative and administrative measures.” (my emphasis)

In 2018, 84% of the 1,004 parents in England whom we polled through Survation agreed that children and guardians should be informed how this monitoring activity works, and wanted to know what the keywords were. (We didn’t ask whether it should happen at all.)

The wide-ranging nature [of general monitoring], rather than targeted and proportionate interference, has previously been judged to be in breach of law and a serious interference with rights. Neither policy makers nor companies should assume parents want safety tech companies to remove autonomy or make inferences about our children’s lives. Parents, if asked, reject the secrecy in which it happens today and demand transparency and accountability. Teachers can feel anxious talking about it at all. There are no clear routes for error correction; in fact it’s not done, because some claim that in building up profiles staff should not delete anything, and should ignore claims of errors, in case a pattern of behaviour is missed. But there are no independent assessments available to evidence that these tools work or are worth the costs. There are no routes for redress, or responsibility taken for tech-made mistakes. None of which makes children safer online.

Before broadening out where such monitoring tools are used, their use and effects on school children need to be understood and openly debated. Policy makers may justify turning a blind eye to harms created by one set of technology providers while claiming that only the other tech providers are the problem, because it suits political agendas or industry aims, but children’s rights and their wellbeing should not be sacrificed in doing so. Opaque, unlawful and unsafe practice must stop. A quid pro quo for getting access to millions of children’s intimate behaviour should be transparent access to product workings, and acceptance of standards on universal, safe, accountable practice. Families need to know what’s recorded, and to have routes for redress when a daughter researching ‘cliff walks’ gets flagged as a suicide risk, or an environmentally interested teenage son searching for information on ‘black rhinos’ is asked about his potential gang membership. The tools sold as solutions to online harms shouldn’t create more harm, as in these reported real-life case studies.

Teachers are ‘involved in state surveillance’, as Fahy put it, through Prevent. Sunak was wrong to point away from the threats of the far right in his comments. But the far broader, unspoken surveillance of children’s personal lives, behaviours and thoughts through general monitoring in schools, and what will be imposed through the Online Safety Bill more broadly, should concern us far more than was said.

When the gold standard no longer exists: data protection and trust

Last week the DCMS announced that consultation on changes to Data Protection laws is coming soon.

  • UK announces intention for new multi-billion pound global data partnerships with the US, Australia and Republic of Korea
  • International privacy expert John Edwards named as preferred new Information Commissioner to oversee shake-up
  • Consultation to be launched shortly to look at ways to increase trade and innovation through data regime.

The Telegraph reported that Mr Dowden argues that, combined, they will enable Britain to set the “gold standard” in data regulation, “but do so in a way that is as light touch as possible”.

It’s an interesting mixture of metaphors. What is a gold standard? What is light touch? These rely on the reader’s assumptions to supply meaning, but don’t convey any actual content. Whether or not there will be substantive changes, we will need to wait for the full announcement this month.

Oliver Dowden’s recent briefing to the Telegraph (August 25) was not the first trailer for changes that are yet to be announced. He wrote in the FT in February this year that “the UK has an opportunity to be at the forefront of global, data-driven growth,” and it looks like he has tried to co-opt the rights framing as his own: …“the beginning of a new era in the UK — one where we start asking ourselves not just whether we have the right to use data, but whether, given its potential for good, we have the right not to.”

There was nothing more on that in this week’s announcement; the focus was on international trade. The Government says it is prioritising six international agreements with “the US, Australia, Colombia, Singapore, South Korea and Dubai”, but in the future it also intends to target “the world’s fastest growing economies, among them, India, Brazil, Kenya and Indonesia” (my bold).

Notably absent from the ‘fastest growing’ list is China. What those included in the list have in common is that they are countries not especially renowned for protecting human rights.

Human rights like privacy. The GDPR, and in turn the UK GDPR, recognised that rights matter. In other regimes, data protection is designed not to prioritise the protection of rights but the harmonisation of data in trade, and that may be where we are headed. If so, it would be out of step with how the digital environment has changed since those older laws were seen as satisfactory, but weren’t; and with the reason why EU countries moved towards both better harmonisation *and* rights protection.

At the same time, while data protection laws increasingly align towards a high, interoperable and global standard, data sovereignty and protectionism are growing too, where transfers to the US remain unprotected from government surveillance.

Some countries are establishing stricter rules on the cross-border transfer of personal information, in the name of digital sovereignty, security or business growth, such as Hessen’s decision on Microsoft and “bring the data home” moves to German-based data centres.

In its big focus on the post-Brexit data-for-trade fire sale, the DCMS appears to be ignoring these risks of data distribution, despite having a good domestic case study on its doorstep in 2020. The Department for Education has been giving away sensitive pupil data since 2012. Millions of people, including my own children, have no idea where it’s gone. The lack of respect for current law makes me wonder how I can trust that our own government, and those others we trade with, will respect our rights and risks in future trade deals.

Dowden complains in the Telegraph about the ICO that “you don’t know if you have done something wrong until after you’ve done it”. Isn’t that the way that enforcement usually works? Should the 2019-20 ICO audit have turned a blind eye to the Department for Education’s lack of prioritisation of the rights attached to the named records of over 21 million pupils? Don’t forget that even gambling companies had access to learners’ records, of which the Department for Education claimed to be unaware. To be ignorant of law that applies to you is a choice.

Dowden claims the changes will enable Britain to set the “gold standard” in data regulation. It’s an ironic analogy to use, since the gold standard, while once a measure of global trust between countries, isn’t used by any country today. Our government sold off our physical gold over 20 years ago, after being the centre of the global gold market for over 300 years. The gold standard is a meaningless thing of the past that sounds good. A true international gold standard existed for fewer than 50 years (1871 to 1914). Why did we even need it? Because we needed a consistent, trusted measure of monetary value, backed by trust in a commodity. “We have gold because we cannot trust governments,” President Herbert Hoover famously said in 1933 in his statement to Franklin D. Roosevelt. The gold standard was all about trust.

At defenddigitalme we’ve very recently been talking with young people about politicians’ use of language in debating national data policy; specifically, data metaphors. They object to being used as the new “oil” to “power 21st century Britain”, as Dowden described it.

A sustainable national data strategy must respect human rights to be in step with what young people want. It must not go back to old-fashioned data laws shaped only by trade and not also by human rights; laws that are not fit for purpose even in the current digital environment. Any national strategy must be forward-thinking; otherwise it wastes time in what should be an urgent debate.

In fact, such a strategy is the wrong end of the telescope from which to look at personal data at all: government should be focussing on the delivery of quality public services to support people’s interactions with the State, and on managing the administrative data that comes out of digital services as a by-product and externality. Accuracy. Interoperability. Registers. Audit. Rights management infrastructure. Admin data quality is quietly ignored while we package it up hoping no one will notice it’s really. not. good.

Perhaps Dowden is doing nothing innovative at all. If these deals are to be about admin data given away in international trade, he is simply continuing a long tradition of selling off the family silver. The government may have got to the point where there is little left to sell. The question now would be: whose family does it come from?

To use another bad metaphor, Dowden is playing with fire here if the government doesn’t fix the issue of the future of trust. Oil and fire don’t mix well. Increased data transfers, without meaningful safeguards including minimised data collection to start with, will increase risk, and transfer that risk to you and me.

Risks of a lifetime of identity fraud are not just minor personal externalities in short-term trade. They affect nation-state security. Digital statecraft. Knowledge of your public services is business intelligence. Loss of trust in data collection creates lasting collective harm to data quality, with additional risk and harm passed on as a result to public health programmes and public interest research.

I’ll wait and see what the details of the plans are when announced. We might find they do little more than package up recommendations on Codes of Practice, Binding Corporate Rules and other guidance that the EDPB has issued in the last 12 months. But whatever it looks like, so far we are yet to see any intention to put in place the necessary infrastructure of rights management that admin data requires. While we need data registers, those we had have been axed, and few new ones under the Digital Economy Act replaced them. Transparency and controls for people to exercise their rights are needed if the government wants our personal data to be part of new deals.

 

img: René Magritte The False Mirror Paris 1929

=========

Join me at the upcoming lunchtime online event, on September 17th from 13:00 to talk about the effect of policy makers’ language in the context of the National Data Strategy: ODI Fridays: Data is not an avocado – why it matters to Gen Z https://theodi.org/event/odi-fridays-data-is-not-an-avocado-why-it-matters-to-gen-z/

Mutant algorithms, roadmaps and reports: getting real with public sector data

The CDEI has published ‘new analysis on the use of data in local government during the COVID-19 crisis’ (the Report), and it features some similarities, in how it discusses data, to the Office for AI roadmap (the Roadmap) published in January on machine learning.

A notable feature is that the CDEI work includes a public poll. Nearly a quarter of 2,000 adults said that the most important thing for them to trust the council’s use of data would be “a guarantee that information is anonymised before being shared, so your data can’t be linked back to you.”

Both the Report and the Roadmap shy away from or avoid that problematic gap in their conclusions: the gap between public expectations and the reality of how data is used at scale in public service provision, especially in identifying vulnerability and in risk prediction.

Both seek to provide vision and aims around the future development of data governance in the UK.

The fact is that everyone must take off their rose-tinted spectacles on data governance to accept this gap, and get the basics fixed in existing practice to address it. In fact, as the academic Michael Veale wrote, often the public sector is looking for the wrong solution entirely: “The focus should be on taking off the ‘tech goggles’ to identify problems, challenges and needs, and to not be afraid to discover that other policy options are superior to a technology investment.”

But as things stand, public sector procurement and use of big data at scale, whether in AI and Machine Learning or other systems, require significant changes in approach.

The CDEI poll asked: “If an organisation is using an algorithmic tool to make decisions, what do you think are the most important safeguards that they should put in place?” 68% rated “that humans have a key role in overseeing the decision-making process, for example reviewing automated decisions and making the final decision” in their top three safeguards.

So what is this post about? Why our arm’s length bodies’ and various organisations’ work on data strategy is hindering the attainment of the goals they claim to promote, and what needs fixing to get back on track. Accountability.

Framing the future governance of data

On Data Infrastructure and Public Trust, the AI Council Roadmap stated an ambition to, “Lead the development of data governance options and its uses. The UK should lead in developing appropriate standards to frame the future governance of data.”

To suggest not only that we should be a world leader, but to imagine that the capability to do so exists, suggests a disconnect with current reality; none of that reality was mentioned in the Roadmap, though it is drawn out a little more in the CDEI Report from local authority workshops.

When it comes to data policy and Artificial Intelligence (AI) or Machine Learning (ML) based on data processing, and therefore dependent on its infrastructure, suggesting we should lead on data governance as if separate from the existing standards and frameworks set out in law would be disastrous for the UK and businesses in it. Exports need to meet standards in the receiving countries. You cannot just ‘choose your own adventure’ here.

The CDEI Report says both that participants in their workshops found a lack of legal clarity “in the collection and use of data” and, “Participants finished the Forum by discussing ways of overcoming the barriers to effective and ethical data use.”

Lack of understanding of the law is a lack of competence and capability that, over the last five years, I have seen and heard time and time again in participants at workshops, events and webinars, some of whom are in charge of deciding what tools are procured and how to implement public policy using administrative data. The law on data processing is accessible and generally straightforward.

If your work involves “overcoming barriers” then either there is not the competence to understand what is lawful and to proceed with confidence using data protections appropriately, or you are trying to avoid doing so. Neither is a good place for public authorities to be in, and it bodes badly for the safe, fair, transparent and lawful use of our personal data by them.

But it is also a lack of data infrastructure that widens the skills gap and increases the need to know what is lawful or not, because if your data is held in an “excessive use of excel spreadsheets” then decisions about ‘sharing’ are made through the distribution of data. Data access can be more easily controlled through role-based access models, which make it clear when someone is working outside their assigned security role, and which create an audit trail of access. You reduce risk by distributing access, not distributing data.
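To make that point concrete, here is a minimal sketch, assuming a hypothetical pupil record store and invented role and field names (none of this is any real system’s schema), of what ‘distributing access, not data’ can look like: the record stays in one system, a role check gates every read, and every attempt is logged.

```python
from datetime import datetime, timezone

# Hypothetical roles and the record fields each may read (illustration only).
ROLE_PERMISSIONS = {
    "attendance_officer": {"attendance"},
    "senco": {"attendance", "send_plan"},
}

audit_log = []  # every access attempt, allowed or refused, is recorded here


def read_field(user, role, pupil_id, field, records):
    """Return one field of one record if the role permits it; log the attempt either way."""
    allowed = field in ROLE_PERMISSIONS.get(role, set())
    audit_log.append({
        "when": datetime.now(timezone.utc).isoformat(),
        "user": user,
        "role": role,
        "pupil": pupil_id,
        "field": field,
        "allowed": allowed,
    })
    if not allowed:
        raise PermissionError(f"role '{role}' may not read '{field}'")
    return records[pupil_id][field]


# Usage: the data never leaves the store; only the answer to a permitted query does.
records = {"p001": {"attendance": 0.93, "send_plan": "on file"}}
print(read_field("j.smith", "attendance_officer", "p001", "attendance", records))
```

The contrast with emailing a spreadsheet is the audit trail: refusals are visible, and access can be withdrawn without trying to recall copies.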

The CDEI Report quotes as a ‘concern’ that data access granted under emergency powers in the pandemic will be taken away. This is a mistaken view that should be challenged. That access was *always* conditional and time limited. It is not something that will be ‘taken away’ but an exceptional use, only granted because it was temporary, for exceptional purposes in exceptional times. Had it not been time limited, you wouldn’t have had the access at all. Emergency powers in law are not ‘taken away’; they can only be granted at all in an emergency. So let’s not get caught up in artificial imaginings of what could change and what-ifs, but change what we know is necessary.

We would do well to get away from the hyperbole of being world-leading, and aim for a minimum high standard of competence and capability in all staff who have any data decision-making roles and invest in the basic data infrastructure they need to do a good job.

Appropriate standards to frame the future governance of data

The AI Council Roadmap suggested that, “The UK should lead in developing appropriate standards to frame the future governance of data.”  Let’s stop and really think for a minute, what did the Roadmap writers think they meant by that?

Because we have law that frames ‘appropriate standards.’ The UK government just seems unable or unwilling to meet it. And not only in these examples, in fact I’d challenge all the business owners on the AI Council to prove their own products meet it.

You could start with the Guidelines on Automated individual decision-making and Profiling for the purposes of Regulation 2016/679 (wp251rev.01). Or consider any of the policy, recommendations, declarations, guidelines and other legal instruments issued by Council of Europe bodies or committees on artificial intelligence. Or, valuable for export standards, ensure respect for the Convention 108 standards to which we are a signed-up State party among its more than 50 countries, and growing. That’s all before the simplicity of the UK Data Protection Act 2018 and the GDPR.

You could start with auditing current practice for lawfulness. The CDEI Report says, “The CDEI is now working in partnership with local authorities, including Bristol City Council, to help them maximise the benefits of data and data-driven technologies.” I might suggest that includes a good legal team, as I think the Council needs one.

The UK is already involved in supporting the development of guidelines (as I was, alongside UK representatives of government and the data regulator, the ICO, among hundreds of participants in drawing up the Convention 108 Guidelines on data processing in education), but to suggest as a nation state that we have the authority to speak on the future governance of data, without acknowledging what we should already be doing and where we get it wrong, is an odd place to start.

The current state of reality in various sectors

Take for example the ICO audit of the Department for Education.

Failures to meet basic principles of data protection law include not knowing what data they’ve got, inadequate controls on distribution, and failure to fairly process (to tell people you process their data). This is no small stuff. And these are only highlights from the eight-page summary.

The DfE doesn’t adequately understand what data it holds, and not having a record of processing is a direct breach of the #GDPR. Did you know the Department is not able to tell you to which third parties your own or your child’s sensitive, identifying personal data (from over 21m records) was sent, among thousands of releases?

The approach on data releases has been to find a way to fit the law to suit data requests, rather than to assess whether data distribution should be approved at all. This ICO assessment covered only 400 applications — there have been closer to 2,000 approved since 2012. One refusal was to the US. Another the MOD.


For too long, the DfE’s ‘internal cultural barriers and attitudes’ have meant it hasn’t cared about your rights and freedoms, or about meeting its lawful obligations. That is a national government Department in charge of over fifty such mega databases, of which the NPD is only one. This is a systemic and structural set of problems, a direct result of Ministerial decisions that changed the law in 2012 to give our personal data away from state education. It was a choice made not to tell the people whom the data were about. This continues to be in breach of the law. And it is the same across many government departments.

Why does it even matter, some still ask? Because there is harm to people today. There is harm in history that must not be possible to repeat. And some of the data held could be used in dangerous ways.

You only need to glance at other applications in government departments and public services to see bad policy, bad data and bad AI or machine learning outcomes. All of those lead to breakdowns in trust and relations between people and the systems meant to support them, which in turn lead to bad data, and bad policy.

Unless government changes its approach, the direction of travel is towards less trust; for public health, for example, the consequences range from people not attending for vaccination, based on mistrust of proven data sharing, to COVID conspiracy theories.

Commercial reuse of public admin data is a huge mistake and the direction of travel is damaging.

“Survey responses collected from more than 3,000 people across the UK and US show that in late 2018, some 95% of people were not willing to share their medical data with commercial industries. This contrasts with a Wellcome study conducted in 2016 which found that half of UK respondents were willing to do so.” (July 2020, Imperial College)

Mutant algorithms

Summer 2020 first saw no human accountability for grades “derailed by a mutant #algorithm”, then the resignation of two Ofqual executives. What aspects of the data governance failures will be addressed this year? Where’s the *fairness*? There is a legal duty to tell people how and what data is used, especially in its automated aspects.

Misplaced data and misplaced policy aims

In June 2020 the DWP argued in a court case that “to change the way the benefit’s online computer calculation system worked in line with the original court ruling would undermine the principle of universal credit”. Not only does it fail its public interest purpose and do harm, it is lax on its own #data governance controls. World leading is far, far, far away.

Entrenched racism

In August 2020, “The Home Office [has] agreed to stop using a computer algorithm to help decide visa applications after allegations that it contained ‘entrenched racism’.” How did it ever get approved for use?

That entrenched racism is found in policing too. The Gangs Matrix’s use of data required an Enforcement Notice from the ICO, and how it continues to operate at all, given its recognised discrimination and harm to young lives, is shocking.

Policy makers seem fixated on quick fixes that for the most part exist only in the marketing speak of the sellers of the products, while ignoring real problems in ethics and law, and denying harm.

“Now is a good time to stop.”

The most obvious case for me, where the Office for AI should step in, and where the CDEI Report from workshops with Local Authorities was most glaringly remiss, is where there is evidence of failure of efficacy and proven risk of danger to life through the procurement of technology in public policy. Don’t forget to ask what doesn’t work.

In January 2020 researchers at The Turing Institute, the Rees Centre and the What Works Centre published a report on ethics in Machine Learning in Children’s Social Care (CSC) and raised the “dangerous blind spots” and “lurking biases” in the application of machine learning in UK children’s social care — totally unsuitable for life and death situations. Their later evidence showed models that do not work or would not reach the threshold they set for defining ‘success’.

Of the thirty-four councils who said they had acute difficulties in recruiting children’s social workers in the December 2020 Local Government survey, 50 per cent said they had both difficulty recruiting generally and difficulty recruiting the required expertise, experience or qualification. Can staff in such challenging circumstances really have capacity to understand the limitations of developing technology on top of their everyday expertise?

And when it comes to focussing on the data, there are problems too. By focusing on the data held, and using only that to make policy decisions rather than on-the-ground expertise, we end up in situations where only “those who get measured, get helped”.

As Michael Sanders wrote, on CSC, “Now is a good time to stop. With the global coronavirus pandemic, everything has been changed, all our data scrambled to the point of uselessness in any case.”

There is no short cut

If the Office for AI Roadmap is to be taken seriously outside its own bubble, the board need to be, and be seen to be, independent of government. It must engage with the reality of applied AI in practice in public services, getting the basics fixed first. Otherwise all its talk of “doubling down” and suggesting the UK government can build public trust and position the UK as a ‘global leader’ on Data Governance is misleading and a waste of everyone’s time and capacity.

I appreciate that it says, “This Roadmap and its recommendations reflects the views of the Council as well as 100+ additional experts.” All of whom I imagine are more expert than me. If so, which of them is working on fixing the basic underlying problems with data governance within public sector data, how and by when? If they are not, why are they not, and who is?

The CDEI report published today identified in local authorities that “public consultation can be a ‘nice to have’, as it often involves significant costs where budgets are already limited.” If the CDEI does not say that position is flawed, it may as well pack up and go home. On page 27 it reports, “When asked about their understanding of how their local council is currently using personal data and presented with a list of possible uses, 39% of respondents reported that they do not know how their personal data is being used.” The CDEI should be flagging this with a great big red pen as an indicator of unlawful practice.

The CDEI Report also draws on the GDS Ethical Framework, but that will be forever flawed as long as its own users, not the used, are its principal focus, underpinning the aims. It starts with “Define and understand public benefit and user need.” There’s very little about ethics, and much more about “justifying our project”.

The Report did not appear to have asked the attendees what impact they think their processes have on everyday lives, and social justice.

Without fixes in these approaches, we will never be world leading, but will lag behind, because we haven’t built the safe infrastructure necessitated by our vast public administrative data troves. We must end bad data practice, which includes getting right the basic principles on retention, data minimisation and security (all of which would be helped if we started by reducing those ‘vast public administrative data troves’, much of which range from poor to abysmal data quality anyway). Start proper governance and oversight procedures. And put in place all the communication channels, tools, policy and training needed to make telling people how data are used, and fair processing, happen. It is not a ‘nice to have’ but is required in all data processing laws around the world.

Any genuine “barriers” to data use in data protection law are designed as protections for people; the people the public sector, its staff and these arm’s-length bodies are supposed to serve.

Blaming algorithms, blaming lack of clarity in the law, blaming “barriers” is avoidance of one thing. Accountability. Accountability for bad policy, bad data and bad applications of tools is a human responsibility. The systems you apply to human lives affect people, sometimes forever and in the most harmful ways.

What would I love to see led from any of these arms length bodies?

  1. An audit of existing public admin data held, by national and local government, and consistent published registers of the databases and algorithms / AI / ML currently in use.
  2. Expose where your data system is nothing more than Excel spreadsheets and demand better infrastructure.
  3. Identify the lawful basis for each set of data processes, their earliest record dates and content.
  4. Publish the resulting ROPA (Record of Processing Activities) and the retention schedule; a minimal sketch of what one register entry might hold follows after this list.
  5. Assign accountable owners to databases, tools and the registers.
  6. Sort out how you will communicate with the people whose data you unlawfully process, so as to meet the law, or stop processing it.
  7. And above all, publish a timeline for data quality processes and show that you understand how the degradation of data accuracy, quality, and storage limitations all affect the rights and responsibilities in law that change over time as a result.
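As a minimal sketch of point 4 above (all names, fields and dates here are hypothetical, not any department’s actual schema), one published register entry could carry the lawful basis, the earliest record date and the retention period together, so that the retention check in point 7 becomes a routine calculation rather than an afterthought:

```python
from dataclasses import dataclass
from datetime import date


@dataclass
class RegisterEntry:
    database: str          # named database or tool in use
    owner: str             # accountable owner (point 5)
    lawful_basis: str      # identified per set of processing (point 3)
    earliest_record: date  # earliest record date held (point 3)
    retention_years: int   # published retention schedule (point 4)

    def overdue_for_review(self, today):
        """True if the oldest records held have outlived the stated retention period."""
        return (today - self.earliest_record).days > self.retention_years * 365


# Hypothetical entry: a decades-old dataset held against a ten-year schedule.
entry = RegisterEntry("pupil_attainment", "Data Owner, Education Directorate",
                      "public task", date(2002, 9, 1), 10)
print(entry.overdue_for_review(date(2021, 1, 1)))  # True: retention period exceeded
```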

There is no short cut to doing a good job, only to doing a bad one.

If organisations and bodies are serious about “good data” use in the UK, they must stop passing the buck and spreading the hype. Let’s get on with what needs fixed.

In the words of Gavin Freeguard, then let’s see how it goes.

“Michal Serzycki” Data Protection Award 2021

It is a privilege to be a joint-recipient in the fourth year of the “Michal Serzycki” Data Protection Award, and I thank the Data Protection Authority in Poland (UODO) for the recognition of work for the benefit of promoting data protection values and the right to privacy.

I appreciate the award in particular as the founder of an NGO, and the indirect acknowledgement of the value of NGOs to be able to contribute to public policy, including openness towards international perspectives, standards, the importance of working together, and our role in holding the actions of state authorities and power to account, under the rule of law.

The award is shared with Mrs Barbara Gradkowska, Director of the Special School and Educational Center in Zamość, whose work in Poland has been central to the initiative, Your Data — Your Concern, an educational Poland-wide programme for schools that is supported and recognized by the UODO. It offers support to teachers in vocational training centres, primary, middle and high schools related to personal data protection and the right to privacy in education.

And it is also shared with Mr Maciej Gawronski, Polish legal advisor and authority in data protection, information technology, cloud computing, cybersecurity, intellectual property and business law.

The UODO has long been a proactive advocate in the schools’ sector in Poland for the protection of children’s data rights, including recent enforcement after finding the processing of children’s biometric data, using fingerprint readers for school canteen access, unlawful, and ensuring the destruction of pupil data obtained unlawfully.

In the rush to remote learning in 2020 in response to school closures in COVID-19, the UODO warmly received our collective international call for action, a letter in which over thirty organisations worldwide called on policy makers, data protection authorities and technology providers, to take action, and encouraged international collaboration to protect children around the world during the rapid adoption of digital educational technologies (“edTech”). The UODO issued statements and a guide on school IT security and data protection.

In September 2020, I worked with their Data Protection Office at a distance, in delivering a seminar for teachers, on remote education.

The award also acknowledges my part in the development of the Guidelines on Children’s Data Protection in an Education Setting adopted in November 2020, working in collaboration with country representatives at the Council of Europe Committee for Convention 108, as well as with observers, and the Committee’s incredible staff.

2020 was a difficult year for people around the world under COVID-19 to uphold human rights and hold the space to push back on encroachment, especially for NGOs, and in community struggles from the Black Lives Matter movement, to environmental action, to UK students on the streets of London protesting algorithmic unfairness. In Poland the direction of travel is to reduce women’s rights in particular. Poland’s ruling Law and Justice (PiS) party has been accused of politicising the constitutional tribunal and using it to push through its own agenda on abortion, and the government appears set on undermining the rule of law, creating a ‘chilling effect’ for judges. The women of Poland are again showing the world what it means, and what it can cost, to lose progress made.

In England at defenddigitalme, we are waiting to hear later this month, what our national Department for Education will do to better protect millions of children’s rights, in the management of national pupil records, after our Data Protection regulator, the ICO’s audit and intervention. Among other sensitive content, the National Pupil Database holds sexual orientation data on almost 3.2 million students’ named records, and religious belief on 3.7 million.

defenddigitalme is a call to action to protect children’s rights to privacy across the education sector in England, and beyond. Data protection has a role to play within the broader rule of law to protect and uphold the right to privacy, to prevent state interference in private and family life, and in the protection of the full range of human rights necessary in a democratic society. Fundamental human rights must be universally protected to foster human flourishing, to protect the personal dignity and freedoms of every individual, and to promote social progress and better standards of life in larger freedoms.


The award was announced at the conference, “Real personal data protection in remote reality,” organized by the Personal Data Protection Office UODO, as part of the celebration of the 15th Data Protection Day on 28th January 2021, with an award ceremony held on its eve in Warsaw.

Is the Online Harms ‘Dream Ticket’ a British Green Dam?

The legal duty in Online Harms government proposals is still vague.

For some it may sound like the “dream ticket”: a framework of censorship to be decided by companies, enabled through the IWF and the government in Online Safety laws. And ‘free’ to all. What companies are already doing today in surveillance of all outgoing and *incoming* communications that is unlawful, made lawful. Literally, the nanny state could decide what content will be blocked, if such software should “absolutely” be pre-installed on all devices for children at point of sale and “…people could run it the other side to measure what people are doing as far as uploading content.”

From Parliamentary discussion it was clear that the government will mandate platforms, “to use automated technology…, including, where proportionate, on private channels,” even when services are encrypted.

No problem, others might say, there’s an app for that. “It doesn’t matter what program the user is typing in, or how it’s encrypted.”

But it was less clear in the consultation outcome updated yesterday, which closed in July 2019 and still says, “we are consulting on definitions of private communications, and what measures should apply to these services.” (4.8)

Might government really be planning to impose or incentivise surveillance on [children’s] mobile phones at the point of sale in the UK? This same ‘dream ticket’ company was the only company mentioned by the Secretary of State for DCMS yesterday. After all, it is feasible. In 2009 Chinese state media reported that the Green Dam Youth Escort service was only installed in 20 million computers in internet cafes and schools.

If government thinks it would have support for such proposals, it may have overlooked the outrage that people feel about companies prying into our everyday lives. Or has already forgotten the summer 2020 student protests over the ‘mutant algorithm’.

There is, conversely, already incidental harm and opaque error rates from the profiling of UK children’s behaviour while monitoring their online and offline computer activity, logged against thousands of words in opaque keyword libraries. School safeguarding services are already routine in England, and are piggy-backed by the Prevent programme. Don’t forget one third of referrals to Prevent come from education, and over 70% are not followed through with action. Your child and mine might already be labelled with ‘extremism’, ‘terrorism’, ‘suicide’ or ‘cyberbullying’, or have had their photos taken by the webcam of their device an unlimited number of times, thanks to some of these ‘safeguarding’ software and services, and the child and parents never know.

Other things that were not clear yesterday, but will matter, are whether the ‘harm’ of the Online Harms proposals will be measured by intent, or measured by the response to it. What is harm or hate or not is contested across different groups online, and weaponised, at scale.

The wording of the Law Commission consultation closing on Friday on communications offences also matters, and asks about intention to harm a likely audience, where harm is defined as any non-trivial emotional, psychological, or physical harm, but should not require proof of actual harm. This, together with any changes on hate crime and on intimate images, in effect proposes changes on ‘what’ can be said, how, and ‘to whom’, and what is considered ‘harmful’ or ‘hateful’ conduct. It will undoubtedly have massive implications for the digital environment once all joined up. It matters when ‘culture wars’ online can catch children in the crossfire.

I’ve been thinking about all this, against the backdrop of the Bell v Tavistock [2020] EWHC 3274 judgement with implications from the consideration of psychological harm, children’s evolving capacity, the right to be heard and their autonomy, a case where a parent involved reportedly has not even told their own child.

We each have a right to respect for our private life, our family life, our home and our correspondence. Children are rights holders in their own right. Yet it appears the government and current changes in lawmaking may soon interfere with that right in a number of ways, while children are used at the heart of everyone’s defence.

In order to find that an interference is “necessary in a democratic society” any interference with rights and freedoms should be necessary and proportionate for each individual, not some sort of ‘collective’ harm that permits a rolling, collective interference.

Will the proposed outcomes prevent children from exercising their views or full range of rights, and restrict online participation? There may be a chilling effect on speech. There is in schools. Sadly these effects may well be welcomed by those who believe not only that some rights are more equal than others, but some children, more than others.

We’ll have to wait for more details. As another MP in debate noted yesterday, “The Secretary of State rightly focused on children, but this is about more than children; it is about the very status of our society ….”

Damage that may last a generation.

Hosted by the Mental Health Foundation, it’s Mental Health Awareness Week until 24th May, 2020. The theme for 2020 is ‘kindness’.

So let’s not comment on the former Education Ministers and MPs, the great-and-the-good and the-recently-resigned, involved in the Mail’s continued hatchet job on teachers. They probably believe that they are standing up for vulnerable children when they talk about the “damage that may last a generation“. Yet the evidence of much of their voting, and policy design to-date, suggests it’s much more about getting people back to work.

Of course there are massive implications for children in families unable to work or living with the stress of financial insecurity on top of limited home schooling. But policy makers should be honest about the return to school as an economic lever, not use children’s vulnerability to pressure professionals to return to full-school early, or make up statistics to up the stakes.

The rush to get back to full-school for the youngest of primary age pupils has been met with understandable resistance, and too few practical facts. Going back to a school in COVID-19 measures for very young children, will take tonnes of adjustment, to the virus, to seeing friends they cannot properly play with, to grief and stress.

When it comes to COVID-19 risk, many countries with similar population density to the UK locked down earlier and tighter, and now have lower rates of community transmission than we do. Or compare a country that didn’t, Sweden, which has a population density of 24 people per square kilometre. The population density for the United Kingdom is 274 people per square kilometre. In Italy, with 201 inhabitants per square kilometre, you needed a permission slip to leave home.

And that’s leaving aside the unknowns on COVID-19 immunity, or identifying it, or the lack of a testing offer to over a million children under 5, the very group expected to be those who return first to full-school.

Children have rights to education, and to life, survival and development. But the blanket target groups and target date, don’t appear to take the Best Interests of The Child, for each child, into account at all. ‘Won’t someone think of the children?’ may never have been more apt.

Parenting while poor is highly political

What’s the messaging in the debate, even leaving media extremes aside?

The sweeping assumption by many commentators that ‘the poorest children will have learned nothing‘ (BBC Newsnight, May 19) is unfair, but this blind acceptance as fact, a politicisation of parenting while poor, conflated with poor parenting, enables the claimed concern for their vulnerability to pass without question.

Many of these most vulnerable children were not receiving full-time education *before* the pandemic, but look at how it is told.

It would be more honest in discussion, or when publishing ‘statistics’ around the growing gap expected if children are out of school, to consider what the ‘excess’ gap will be and why. (Just like measuring excess deaths, not only those people who died and had been tested for COVID-19.) Thousands of vulnerable children were out of school already, due to ‘budget decisions that had left local authorities unable to fulfil their legal obligation to provide education.’

Pupil Referral Units were labelled “a scandal” in 2012, and only last year the constant “gangs at the gates” narrative was highly political.

“The St Giles Trust research provided more soundbites. Pupils involved in ‘county lines’ are in pupil referral units (PRUs), often doing only an hour each day, and rarely returning into mainstream education.” (Steve Howell, Schools Week)

Nearly ten years on, there is still lack of adequate support for children in Alternative Provision and a destructive narrative of “us versus them”.

Source: @sarahkendzior

The value of being in school

Schools have remained open for children of key workers and more than half a million pupils labelled as ‘vulnerable’, which includes those classified as “children in need” as well as 270,000 children with an education, health and care (EHC) plan for special educational needs. Not all of those are ‘at risk’ of domestic violence or abuse or neglect. The reasons why there is low turnout tend to be conflated.

Assumptions abound about the importance of formal education and whether school is the best place at all for very young children in the Early Years (age 2–5), despite conflicting UK evidence that is thin on the ground. Research for the NFER [the same organisation running the upcoming Baseline Test of four-year-olds, still due to begin this year] (Sharp, 2002), found:

“there would appear to be no compelling educational rationale for a statutory school age of five or for the practice of admitting four-year-olds to school reception classes.” And “a late start appears to have no adverse effect on children’s progress.”

Later research from 2008, from the IoE, Research Report No. DCSF-RR061 (Sylva et al, 2008), commissioned before the then ‘new’ UK Government took office in 2010, suggested better outcomes for children who are in excellent Early Years provision, but also pointed out that the most vulnerable are often not those in the best provision.

“quality appears to be especially important for disadvantaged groups.”

What will provision quality be like, under Coronavirus measures? How much stress-free space and time for learning will be left at all?

The questions we should be asking are a) What has been learned for the second wave and b) Assume by May 2021 nothing changes. What would ideal schooling look like, and how do we get there?

Attainment is not the only gap

While it is not compulsory in England to be in any form of education, including home education, until your fifth birthday, most children start school at age four and turn five in the course of the year. It is one of the youngest starts in Europe. Many hundreds of thousands of children in the UK start formal education even younger, from age two or three. Yet is it truly better for children? We are way down the PISA attainment scores, and comparable regional measures. There has been little change in those outcomes in 13 years, except to find that our children are measured as being progressively less happy.

“As Education Datalab points out, the PISA 2018 cohort started school around 2008, so their period at school not only lines up with the age of austerity and government cuts, but with the “significant reforms” to GCSEs introduced by Michael Gove while he was Education Secretary.”  [source: Schools Week, 2019]

There’s no doubt that some of the harmful economic effects of Brexit will be attributed to the effects of the pandemic. Similarly, many of the outcomes of ten years of policy that have increased children’s vulnerability and the attainment gap, pre-COVID-19, will no doubt be conflated with harms from this crisis in the next few years.

The risk of the acceptance of misattributing this gap in outcomes, is a willingness to adopt misguided solutions, and deny accountability.

Children’s vulnerability

Many experts in children’s needs have been in their jobs much longer than most MPs, and have told them for years about the harm their policies are doing to the very children those voices now claim to want to protect. Will the MPs look at that evidence and act on it?

More than a third of babies are living below the poverty line in the UK. The common thread in many [UK] families’ lives, as Helen Barnard, deputy director for policy and partnerships at the Joseph Rowntree Foundation, described in 2019, is “a rising tide of work poverty sweeping across the country.” Now the Coronavirus is hitting those families harder too. The ONS found that in England the death rate in the most deprived areas is 118% higher than in the least deprived.

Charities speaking out this week, said that in the decade since 2010, local authority spending on early intervention services dropped by 46% but has risen on late intervention, from 58% to 78% of spending on children and young people’s services over the same period.

If those advocating for a return to school for a month before the summer really want to reduce children’s vulnerability, they might sort out CAMHS for simultaneous support of the return to school, and address those areas in which government must first do no harm. Fix the things that increase the “damage that may last a generation“.


Case studies in damage that may last

Adoption and Children (Coronavirus) (Amendment) Regulations 2020

Source: Children’s Commissioner (April 2020)

“These regulations make significant temporary changes to the protections given in law to some of the most vulnerable children in the country – those living in care. I would like to see all the regulations revoked, as I do not believe that there is sufficient justification to introduce them. This crisis must not remove protections from extremely vulnerable children, particularly as they are even more vulnerable at this time. As an urgent priority it is essential that the most concerning changes detailed above are reversed.”

CAMHS: Mental health support

Source: Local Government Association CAMHS Facts and Figures

“Specialist services are turning away one in four of the children referred to them by their GPs or teachers for treatment. More than 338,000 children were referred to CAMHS in 2017, but less than a third received treatment within the year. Around 75 per cent of young people experiencing a mental health problem are forced to wait so long their condition gets worse or are unable to access any treatment at all.”

“Only 6.7 per cent of mental health spending goes to children and adolescent mental health services (CAMHS). Government funding for the Early Intervention Grant has been cut by almost £500 million since 2013. It is projected to drop by a further £183 million by 2020.”

“Public health funding, which funds school nurses and public mental health services, has been reduced by £600 million from 2015/16 to 2019/20.”

Child benefit two-child limit

Source: May 5, Child Poverty Action Group
“You could not design a policy better to increase child poverty than this one.” source: HC51 House of Commons Work and Pensions Committee
The two-child limit Third Report of Session 2019 (PDF, 1 MB)

“Around sixty thousand families forced to claim universal credit since mid-March because of COVID-19 will discover that they will not get the support their family needs because of the controversial ‘two-child policy’.”

Housing benefit

Source: the Poverty and Social Exclusion in the United Kingdom research project funded by the Economic and Social Research Council.

“The cuts [introduced from 2010 to the 2012 budget] in housing benefit will adversely affect some of the most disadvantaged groups in society and are likely to lead to an increase in homelessness, warns the homeless charity Crisis.”

Legal Aid for all children

Source: The Children’s Society, Cut Off From Justice, 2017

“The enactment of the Legal Aid, Punishment and Sentencing of Offenders Act 2012 (LASPO) has had widespread consequences for the provision of legal aid in the UK. One key feature of the new scheme, of particular importance to The Children’s Society, were the changes made to the eligibility criteria around legal aid for immigration cases. These changes saw unaccompanied and separated children removed from scope for legal aid unless their claim is for asylum, or if they have been identified as victims of child trafficking.”

“To fulfill its obligations under the UNCRC, the Government should reinstate legal aid for all unaccompanied and separated migrant children in matters of immigration by bringing it back within ‘scope’ under the Legal Aid, Sentencing and Punishment of Offenders Act 2012. Separated and unaccompanied children are super-vulnerable.”

Library services

Source: CIPFA’s annual library survey 2018

“the number of public libraries and paid staff fall every year since 2010, with spending reduced by 12% in Britain in the last four years.” “We can view libraries as a bit of a canary in the coal mine for what is happening across the local government sector…” “There really needs to be some honest conversations about the direction of travel of our councils and what their role is, as the funding gap will continue to exacerbate these issues.”

No recourse to public funds: FSM and more

source: NRPF Network
“No recourse to public funds (NRPF) is a condition imposed on someone due to their immigration status. Section 115 Immigration and Asylum Act 1999 states that a person will have ‘no recourse to public funds’ if they are ‘subject to immigration control’.”

“children only get the opportunity to apply for free school meals if their parents already receive certain benefits. This means that families who cannot access these benefits– because they have what is known as “no recourse to public funds” as a part of their immigration status– are left out from free school meal provision in England.”

Sure Start

Source: Institute for Fiscal Studies (2019)

“the reduction in hospitalisations at ages 5–11 saves the NHS approximately £5 million, about 0.4% of average annual spending on Sure Start. But the types of hospitalisations avoided – especially those for injuries – also have big lifetime costs both for the individual and the public purse”.

Youth Services

Source: Barnardo’s (2019) New research draws link between youth service cuts and rising knife crime.

“Figures obtained by the All-Party Parliamentary Group (APPG) on Knife Crime show the average council has cut real-terms spending on youth services by 40% over the past three years. Some local authorities have reduced their spending – which funds services such as youth clubs and youth workers – by 91%.”

Barnardo’s Chief Executive Javed Khan said:

“These figures are alarming but sadly unsurprising. Taking away youth workers and safe spaces in the community contributes to a ‘poverty of hope’ among young people who see little or no chance of a positive future.”

Thoughts on the Online Harms White Paper (I)

“Whatever the social issue we want to grasp – the answer should always begin with family.”

Not my words, but David Cameron’s. Just five years ago, Conservative policy was all about “putting families at the centre of domestic policy-making.”

Debate on the Online Harms White Paper, thanks in part to media framing of its own departmental making, is almost all about children. But I struggle with the debate that leaves out our role as parents almost entirely, other than as bereft or helpless victims ourselves.

I am conscious, wearing my other hat at defenddigitalme, that not all families are the same, and not all children have families. Yet it seems counter to conservative values, for a party that traditionally places the family at the centre of policy, to leave parents out or to absolve them of responsibility for their children’s actions and care online.

Parental responsibility cannot be outsourced to tech companies, nor can we simply accept that it’s too hard to police our children’s phones. If we as parents are concerned about harms, it is our responsibility to enable access to that which is not harmful, and to be aware of, and educate ourselves and our children about, what is. We are aware of what they read in books. I cast an eye over what they borrow or buy. I play a supervisory role.

Brutal as it may be, the Internet is not responsible for suicide. It’s just not that simple. We cannot bring children back from the dead. We certainly can, as society and policy makers, try to create the conditions in which harms are not normalised, and do not become more common. And seek to reduce risk. But few would suggest social media is a single source of children’s mental health issues.

What policy makers are trying to regulate is in essence, not a single source of online harms but 2.1 billion users’ online behaviours.

It follows that to see social media as a single source of attributable fault per se, is equally misplaced. A one-size-fits-all solution is going to be flawed, but everyone seems to have accepted its inevitability.

So how will we make the least bad law?

If we are to have sound law that can be applied around what is lawful,  we must reduce the substance of debate by removing what is already unlawful and has appropriate remedy and enforcement.

Debate must also try to be free from emotive content and language.

I strongly suspect the language around ‘our way of life’ and ‘values’ in the White Paper comes from the Home Office. So while it sounds fair and just, we must remember reality in the background of TOEIC, of Windrush, of children removed from school because their national records are being misused beyond educational purposes. The Home Office is no friend of child rights, and does not foster the societal values that break down discrimination and harm. It instead creates harms of its own making, and division by design.

I’m going to quote Graham Smith, for I cannot word it better.

“Harms to society feature heavily in the White Paper, for example content or activity that:

“threatens our way of life in the UK, either by undermining national security, or by reducing trust and undermining our shared rights, responsibilities and opportunities to foster integration.”

Similarly:

“undermine our democratic values and debate”;

“encouraging us to make decisions that could damage our health, undermining our respect and tolerance for each other and confusing our understanding of what is happening in the wider world.”

This kind of prose may befit the soapbox or an election manifesto, but has no place in or near legislation.”

[Cyberleagle, April 18, 2019, Users Behaving Badly – the Online Harms White Paper]

My key concern in this area is that from a feeling of ‘it is all awful’ stems the sense that ‘any regulation will be better than now’, and with it a real risk of entrenching current practices that are not better than now, and in fact need fixing.

More monitoring

The first is today’s general monitoring of school children’s Internet content for risk and harms, which creates unintended consequences and very real harms of its own — at the moment, without oversight.

In yesterday’s House of Lords debate, Lord Haskel, said,

“This is the practicality of monitoring the internet. When the duty of care required by the White Paper becomes law, companies and regulators will have to do a lot more of it.” [April 30, HOL]

The Brennan Center yesterday published its research on the spend by US schools purchasing social media monitoring software from 2013–18, and highlighted some of the issues:

“Aside from anecdotes promoted by the companies that sell this software, there is no proof that these surveillance tools work [compared with other practices]. But there are plenty of risks. In any context, social media is ripe for misinterpretation and misuse.” [Brennan Center for Justice, April 30, 2019]

That monitoring software focuses on two things —

a) seeing children through the lens of terrorism and extremism, and b) harms caused by them to others, or as victims of harms by others, or self-harm.

It is the near same list of ‘harms’ topics that the White Paper covers. Co-driven by the same department interested in it in schools — the Home Office.

These concerns are set in the context of the direction of travel of law and policy making, its own loosening of accountability and process.

It was preceded by a House of Commons discussion on Social Media and Health, led by the former Minister for Digital, Culture, Media and Sport, who seems to feel more at home in that sphere than in health.

His unilateral award of funds to the Samaritans for work with Google and Facebook on a duty of care, while the very same is still under public consultation, is surprising to say the least.

But it was his response to this question which points to the slippery slope such regulations may lead down. The Freedom of Speech champions should be most concerned, not even by what is potentially in any legislation ahead, but by the direction of travel and debate around it.

“Will he look at whether tech giants such as Amazon can be brought into the remit of the Online Harms White Paper?”

He replied, that “Amazon sells physical goods for the most part and surely has a duty of care to those who buy them, in the same way that a shop has a responsibility for what it sells. My hon. Friend makes an important point, which I will follow up.”

Mixed messages

The Center for Democracy and Technology recommended in its 2017 report, Mixed Messages? The Limits of Automated Social Media Content Analysis, that the use of automated content analysis tools to detect or remove illegal content should never be mandated in law.

Debate so far has demonstrated broad gaps between what is wanted, what is known, and what is possible. If behaviours are to be stopped because they are undesirable rather than unlawful, we open up a whole can of worms if it is not done with the greatest attention to detail.

Lord Stevenson and Lord McNally both suggested that pre-legislative scrutiny of the Bill, and more discussion would be positive. Let’s hope it happens.

Here’s my personal first reflections on the Online Harms White Paper discussion so far.

Six suggestions:

Suggestion one: 

The Law Commission Review, mentioned in the House of Lords debate, may provide what I have been thinking of crowdsourcing, and now may not need to: a list of the laws that the Online Harms White Paper related discussion reaches into, so that we can compare what is needed in debate versus what is being sucked in. We should aim to curtail emotive discussion of broad risk and threat that people experience online. This would enable the themes which are already covered in law to be avoided, and focus on the gaps. It would make for much tighter and more effective legislation. For example, the Crown Prosecution Service offers Guidelines on prosecuting cases involving communications sent via social media, but a wider list of law is needed.

Suggestion two:
After (1) defining what legislation is lacking, definitions must be very clear, narrow, and consistent across other legislation. Not for the regulator to determine ad-hoc and alone.

Suggestion three:
If children’s rights are to be so central in discussion on this paper, then their wider rights, including privacy and participation, access to information and freedom of speech, must be included in debate. This should include academic, research-based evidence of children’s experience online when making the regulations.

Suggestion four:
Internet surveillance software in schools should be publicly scrutinised. A review should establish the efficacy, boundaries and oversight of policy and practice as regards Internet monitoring for harms, and not embed even more without it. Boundaries should be put into legislation for clarity and consistency.

Suggestion five:
Terrorist activity or child sexual exploitation and abuse (CSEA) online are already unlawful and should not need additional Home Office powers. Great caution must be exercised here.

Suggestion six: 
Legislation could and should encapsulate accountability and oversight for micro-targeting and algorithmic abuse.


More detail behind my thinking follows below, after the break. [Structure rearranged on May 14, 2019]



Women Leading in AI — Challenging the unaccountable and the inevitable

Notes [and my thoughts] from the Women Leading in AI launch event of the Ten Principles of Responsible AI report and recommendations, February 6, 2019.

Speakers included Ivana Bartoletti (GemServ), Jo Stevens MP, Professor Joanna J Bryson, Lord Tim Clement-Jones, Roger Taylor (Centre for Data Ethics and Innovation, Chair), Sue Daley (techUK), Reema Patel, Nuffield Foundation and Ada Lovelace Institute.

Challenging the unaccountable and the ‘inevitable’ is the title of the conclusion of the Women Leading in AI report Ten Principles of Responsible AI, launched this week, and this makes me hopeful.

“There is nothing inevitable about how we choose to use this disruptive technology. […] And there is no excuse for failing to set clear rules so that it remains accountable, fosters our civic values and allows humanity to be stronger and better.”

Ivana Bartoletti, co-founder of Women Leading in AI, began the event, hosted at the House of Commons by Jo Stevens, MP for Cardiff Central, and spoke brilliantly of why it matters right now.

Everyone’s talking about ethics, she said, but it has limitations. I agree with that. This was by contrast very much a call to action.

It was nearly impossible not to cheer, as she set out without any of the usual bullshit, the reasons why we need to stop “churning out algorithms which discriminate against women and minorities.”

Professor Joanna J Bryson took up multiple issues, such as why

  • innovation, ‘flashes in the pan’, is not sustainable and not what we’re looking for; we want things that work for us [society].
  • The power dynamics of data, noting Facebook, Google et al are global assets, and are also global problems, and flagged the UK consultation on taxation open now.
  • And that it is critical that we do not have another nation with access to all of our data.

She challenged the audience to think about the fact that inequality is higher now than it has been since World War I. That the rich are getting richer and that imbalance of not only wealth, but of the control individuals have in their own lives, is failing us all.

This big picture thinking while zooming in on detailed social, cultural, political and tech issues, fascinated me most that evening. It frustrated the man next to me apparently, who said to me at the end, ‘but they haven’t addressed anything on the technology.’

[I wondered if that summed up neatly, some of why fixing AI cannot be a male dominated debate. Because many of these issues for AI, are not of the technology, but of people and power.] 

Jo Stevens, MP for Cardiff Central, hosted the event and was candid about politicians’ level of knowledge and the need to catch up on some of what matters in the tech sector.

We grapple with the speed of tech, she said. We’re slow at doing things and tech moves quickly. It means that we have to learn quickly.

While discussing how regulation is not something AI tech companies should fear, she suggested that a constructive framework whilst protecting society against some of the problems we see is necessary and just, because self-regulation has failed.

She talked about their enquiry which began about “fake news” and disinformation, but has grown to include:

  • wider behavioural economics,
  • how it affects democracy.
  • understanding the power of data.
  • disappointment with social media companies, who understand the power they have, and fail to be accountable.

She wants to see something that changes the way big business works, in the way that employment regulation challenged exploitation of the workforce and unsafe practices in the past.

The bias (conscious or unconscious) and power imbalance has some similarity with the effects on marginalised communities — women, BAME, disabilities — and she was looking forward to see the proposed solutions, and welcomed the principles.

Lord Clement-Jones, as Chair of the Select Committee on Artificial Intelligence, picked up the values they had highlighted in the March 2018 report, Artificial Intelligence, AI in the UK: ready, willing and able?

Right now there are so many different bodies, groups in parliament and others looking at this [AI / Internet / The Digital World] he said, so it was good that the topic is timely, front and centre with a focus on women, diversity and bias.

He highlighted, the importance of maintaining public trust. How do you understand bias? How do you know how algorithms are trained and understand the issues? He fessed up to being a big fan of DotEveryone and their drive for better ‘digital understanding’.

[Though sometimes this point is over complicated by suggesting individuals must understand how the AI works, the consensus of the evening was common sensed — and aligned with the Working Party 29 guidance — that data controllers must ensure they explain clearly and simply to individuals, how the profiling or automated decision-making process works, and what its effect is for them.]

The way forward he said includes:

  • Designing ethics into algorithms up front.
  • Data audits need to be diverse in order to embody fairness and diversity in the AI.
  • Questions of the job market and re-skilling.
  • The enforcement of ethical frameworks.

He also asked how far bodies will act, in different debates. Deciding who decides on that is still a debate to be had.

For example, aware of the social credit agenda and scoring in China, we should avoid the same issues. He also agreed with Joanna, that international cooperation is vital, and said it is important that we are not disadvantaged in this global technology. He expected that we [the Government Office for AI] will soon promote a common set of AI ethics, at the G20.

Facial recognition and AI are examples of areas that require regulation for safe use of the tech and to weed out those using it for the wrong purposes, he suggested.

However, on regulation he held back. We need to be careful about too many regulators he said. We’ve got the ICO, FCA, CMA, OFCOM, you name it, we’ve already got it, and they risk tripping over one another. [What I thought as CDEI was created para 31.]

We [the Lords Committee] didn’t suggest yet another regulator for AI, he said and instead the CDEI should grapple with those issues and encourage ethical design in micro-targeting for example.

Roger Taylor (Chair of the CDEI), after saying it felt as if the WLinAI report was like someone had left their homework on his desk, supported the concept that the WLinAI principles are important, and agreed it was time for practical things, and what needs done.

Can our existing regulators do their job, and cover AI? he asked, suggesting new regulators will not be necessary. Bias, he rightly recognised, already exists in our laws and bodies with public obligations, and in how AI is already operating:

  • CVs sorting. [problematic IMO > See Amazon, US teachers]
  • Policing.
  • Creditworthiness.

What evidence is needed, what process is required, what is needed to assure that we know how it is actually operating? Who gets to decide to know if this is fair or not? While these are complex decisions, they are ultimately not for technicians, but a decision for society, he said.

[So far so good.]

Then he made some statements which were rather more ambiguous. The standards expected of the police will not be the same as those for marketeers micro targeting adverts at you, for example.

[I wondered how and why.]

Start up industries pay more to Google and Facebook than in taxes he said.

[I wondered how and why.]

When we think about a knowledge economy, the output of our most valuable companies is increasingly ‘what is our collective truth? Do you have this diagnosis or not? Are you a good credit risk or not? Even who you think you are — your identity will be controlled by machines.’

What can we do as one country [to influence these questions on AI], in what is a global industry? He believes a huge amount. We are active in the financial sector, the health service, education, and social care; and while we are at the mercy of large corporations, even large corporations obey the law, he said.

[Hmm, I thought, considering the Google DeepMind-Royal Free agreement that didn’t, and venture capitalists not renowned for their ethics who nonetheless advise on some of the current data / tech / AI boards. I am sceptical of corporate capture in UK policy making.]

The power to use systems to nudge our decisions, he suggested, is one that needs careful thought. The desire to use the tech to help make decisions is built into what is actually wrong with the technology that enables us to do so. [With this I strongly agree, and there is too little protection from nudge in data protection law.]

The real question here is, “What is OK to be owned in that kind of economy?” he asked.

This was arguably the neatest and most important question of the evening, and I vigorously agreed with him asking it. But then I worry about his conclusion in passing, that he was “very keen to hear from anyone attempting to use AI effectively, and encountering difficulties because of regulatory structures.”

[Unpopular or contradictory as the view may be, I find it deeply ethically problematic for the Chair of the CDEI to be someone whose joint venture commercially exploited confidential data from the NHS without public knowledge, and whose sale to the Department of Health was described by the Public Accounts Committee as a “hole and corner deal”. That was the route towards care.data, which his co-founder later led for NHS England. The company was then bought by Telstra, where Mr Kelsey went next on leaving NHS England. The commodification of confidential public data, without regard for public trust, is still a barrier to sustainable UK data policy.]

Sue Daley (techUK) agreed that this needs to be the year we see action, and that the report is a call to action on issues that warrant further discussion.

  • Business wants to do the right thing, and we need to promote it.
  • We need two things — confidence and vigilance.
  • We’re not starting from scratch; she talked about GDPR as the floor, not the ceiling. A starting point.

[I’m not quite sure what she was after here, but perhaps it was the suggestion that data regulation is fundamental in AI regulation, with which I would agree.]

What is the gap that needs to be filled, she asked? Gap analysis is what we need next, to avoid duplication of effort, complexity, and overlap with the work of other bodies. The big, profound questions need to be addressed if we are to position the UK as the place where companies want to come.

Sue was the only speaker who went on to talk about the education system, and the need to frame what skills a generation will need for a future world, ‘to thrive in the world we are building for them.’

[The Silicon Valley-driven entrepreneur narrative that the education system is broken is not an uncontroversial position.]

She finished with the hope that young people who watched BBC Icons the night before would see Alan Turing [winner of the title] and say: yes, I want to be part of that.

Listening to Reema Patel, representing the Ada Lovelace Institute, was the reason I didn’t leave early, and why I missed my evening class. Everything she said resonated, and it was some of the best I have heard in the recent UK debate on AI.

  • Civic engagement: the role of the public is as yet unclear, with not one homogeneous public but many publics.
  • The sense of disempowerment is important, with a disconnect between policy and the decisions made about people’s lives.
  • Transparency and literacy are key.
  • Accountability is vague but vital.
  • What does the social contract look like on people using data?
  • Data may not only be about an individual and under their own responsibility, but about others too; what does that mean for data rights, data stewardship and the articulation of how they connect with one another? That is lacking in the debate.
  • Legitimacy: if people don’t believe it is working for them, it won’t work at all.
  • Ensuring tech design is responsive to societal values.

2018 was a terrible year she thought. Let’s make 2019 better. [Yes!]


Comments and questions from the floor included Professor Noel Sharkey, who spoke about why it is urgent to act, especially where technology is unfair and unsafe and already in use. He pointed to Compass (Durham police), and to predictive policing using AI and facial recognition with 5% accuracy, and said that the Met was not taking these flaws seriously. Liberty produced a strong report on it, out this week.

Caroline, from Women in AI, echoed my own comments on the need to get urgent review in place of these technologies used with children in education and social care [in particular where used for the prediction of child abuse and for interventions in family life].

Joanna J Bryson added to the conversation on accountability, saying people are not following existing software and audit protocols; someone just needs to go and check whether people did the right thing.

The basic question of accountability is to ask whether any flaw is the fault of the corporation, of due diligence, or of the users of the tool. Telling people that this is the same problem as any other software makes it much easier to find solutions for accountability.

Tim Clement-Jones asked how many fronts we can fight on at the same time, if government has appeared to exempt itself from some of these issues and created a weak framework for itself on handling data in the Data Protection Act. Critically, he also asked whether the ICO is adequately enforcing on government and public accountability, at local and national levels.

Sue Daley also reminded us that politicians need not know everything, but do need to know the right questions to ask: what effects does this have on my constituents, on employment, on my family? And while she also suggested that not using the technology could be unethical, a participant countered that it’s not the worst thing to slow technology down and ensure it is safe before we all go along with it.

My takeaways from the evening included that there is a very large body of women, of whom the attendees were only a small part, who are thinking about, building and engineering solutions to some of these societal issues embedded in policy, practice and technology. They need to be heard.

It was genuinely electric and empowering to be in a room dominated by women: women reflecting the diversity of a variety of publics, ages and backgrounds, and who listened to one another. It was certainly something out of the ordinary.

There was a subtle but tangible tension over whether regulation beyond what we have today is needed.

While regulating the human behaviour that becomes encoded in AI, we need to ensure that the ethics of human behaviour, reasonable expectations and fairness are not conflated with the technology itself [i.e. a question of whether AI is good or bad], but are addressed through how it is designed, trained, employed and audited, and through assessing whether it should be used at all.

This was the most effective group challenge I have heard to date to counter the usual assumed inevitability of a mythical omnipotence. Perhaps, Julia Powles, this is the beginning of a robust, bold, imaginative response.

Why there are not more women or people from minorities working in the sector was a really interesting, if short, part of the discussion. Why should young women and minorities want to go into an environment that they can see is hostile, in which they may not be heard, and in which we still hold *them* responsible for making work work?

And while there were many voices lamenting the skills and education gaps, there were probably fewer who might see the solution more simply, as I do. Schools are foreshortening Key Stage 3 by a year, replacing a breadth of subjects with an earlier, compulsory three-year GCSE curriculum which includes RE and PSHE, but which means that at 12 many children are having to choose between a GCSE course in computer science / coding, a consumer-style iMedia qualification, or no IT at all for the rest of their school life. This either-or offer is incredibly short-sighted; surely some blend of non-examined digital skills should be offered to all through to 16, at least in parallel importance with RE or PSHE.

I also still wonder about all that incredibly bright and engaged people are not talking about, not solving, and missing in policy making while caught up in AI. We need to keep thinking broadly, and keep human rights at the centre of our thinking on machines. Anaïs Nin wrote over 70 years ago about the risk that growth in technology would expand our potential for connectivity through machines, but diminish our genuine connectedness as people.

“I don’t think the [American] obsession with politics and economics has improved anything. I am tired of this constant drafting of everyone, to think only of present day events”.

And as I wrote nearly three years ago, we still seem to have no vision for sustainable public policy on data, or for establishing a social contract for its use, as Reema said, that should underpin the UK AI debate. Meanwhile, the current changing national public policies in England on identity and technology are becoming catastrophic.

Challenging the unaccountable and the ‘inevitable’ in today’s technology and AI debate is an urgent call to action.

I look forward to hearing how Women Leading in AI plan to make it happen.


References:

Women Leading in AI website: http://womenleadinginai.org/
WLiAI Report: 10 Principles of Responsible AI
@WLinAI #WLinAI

image credits 
post: creative commons Mark Dodds/Flickr
event photo:  / GemServ