Policing thoughts, proactive technology, and the Online Safety Bill

“Former counter-terrorism police chief attacks Rishi Sunak’s Prevent plans”, reads a headline in today’s Guardian. Former counter-terrorism chief Sir Peter Fahy […] said: “The widening of Prevent could damage its credibility and reputation. It makes it more about people’s thoughts and opinions.” He added: “The danger is the perception it creates that teachers and health workers are involved in state surveillance.”

This article leaves out that today’s reality is already far ahead of those proposals or that perception. School children and staff are already surveilled in these ways. Not only is what people think, type, read or search for monitored, online and offline in the digital environment, but copies may be collected and retained by companies, and interventions made.

The products don’t only permit monitoring of trends in aggregated overviews of student activity, but also of the behaviours of individual students. And that can be deeply intrusive and sensitive when you are talking about self harm, abuse, and terrorism.

(For more on the safety tech sector, often using AI in proactive monitoring, see my previous post (May 2021) The Rise of Safety Tech.)

Intrusion through inference and interventions

From 1 July 2015 all schools have been subject to the Prevent duty under section 26 of the Counter-Terrorism and Security Act 2015, in the exercise of their functions, to have “due regard to the need to prevent people from being drawn into terrorism”. While these products monitor far more than the remit of Prevent, many companies actively market online filtering, blocking and monitoring safety products as a way of meeting that duty in the digital environment. Such as, “Lightspeed Filter™ helps you meet all of the Prevent Duty’s online regulations…”

Despite there being no obligation to date to fulfil this duty through technology, some companies’ way of selling such tools could be read as a threat of what may happen if schools don’t use them. Like this example:

“Failure to comply with the requirements may result in intervention from the Prevent Oversight Board, prompt an Ofsted inspection or incur loss of funding.”

Such products may create and send real-time alerts to company or school staff when children attempt to reach sites or type “flagged words” related to radicalisation or extremism on any online platform.

Under the auspices of safeguarding-in-schools data sharing and web monitoring in the Prevent programme, children may be labelled with terrorism or extremism labels, data which may be passed on to others or stored outside the UK without their knowledge. The drift in what is considered significant has been from terrorism into the vaguer, broader terms of extremism and radicalisation; away from an assessment of intent and capability of action, towards interception and interventions for potentially insignificant vulnerabilities and inferred assumptions of disposition towards such ideas. This is not something that might one day police thoughts, as Fahy suggested of Sunak’s plans. It is already doing so. Policing thoughts in the developing child, and holding them accountable in ways that are unforeseeable, is inappropriate and requires thorough investigation into its effects on children, including their mental health.

But it’s important to understand that what these systems look for and flag, using libraries of thousands of ever-changing words in multiple languages, and often claiming to do so using Artificial Intelligence, goes far beyond Prevent. ‘Legal but harmful’ content is their bread and butter: self harm, harm to or from others.

While companies have no obligations to publish how the monitoring or flagging operates, what the words or phrases or blocked websites are, their error rates (false positives and false negatives) or the resulting effects on children or school staff and their behaviour, these companies have a great deal of influence over what gets inferred from what children do online, and over who decides what to act on.

Why does it matter?

Schools have normalised the premise that the systems they introduce should monitor activity outside the school network and outside school hours. And that strangers, or their private companies’ automated systems, should be involved in inferring or deciding what children are ‘up to’ before the school staff who know the children in front of them.

In a defenddigitalme report, The State of Data 2020, we included a case study on one company that has since been bought out. And bought again. As of August 2018, eSafe was monitoring approximately one million school children plus staff across the UK. The case study, which the company used in its own public marketing, raised all sorts of questions on professional confidentiality and school boundaries, personal privacy, ethics, and companies’ role and technical capability, as well as the lack of any safety tech accountability.

“A female student had been writing an emotionally charged letter to her Mum using Microsoft Word, in which she revealed she’d been raped. Despite the device used being offline, eSafe picked this up and alerted John and his care team who were able to quickly intervene.”

Its then CEO told the House of Lords Communications Committee’s 2016 inquiry on Children and the Internet how the products do not only monitor children in school or during school hours:

“Bearing in mind we are doing this throughout the year, the behaviours we detect are not confined to the school bell starting in the morning and ringing in the afternoon, clearly; it is 24/7 and it is every day of the year. Lots of our incidents are escalated through activity on evenings, weekends and school holidays.”

Similar products offer a feature that captures photos of users (pupils using the device being monitored), described as “common across most solutions in the sector” by this company:

When a critical safeguarding keyword is copied, typed or searched for across the school network, schools can turn on NetSupport DNA’s webcam capture feature (this feature is turned off by default) to capture an image of the user (not a recording) who has triggered the keyword.

How many webcam photos have been taken of children by school staff or others through those systems, for what purposes, and kept by whom? In the U.S. in 2010, the Lower Merion School District in Philadelphia settled a lawsuit over using laptop webcams to take photos of students. Thousands of photos had been taken, even at home, out of hours, without students’ knowledge.

Who decides what does and does not trigger interventions across different products? In December 2017 alone, eSafe claims it added 2,254 words to its threat libraries.

Famously, Impero’s system even included the word “biscuit”, which they say is a slang term for a gun. Their system was used by more than “half a million students and staff in the UK” in 2018. And students had better not talk about “taking a wonderful bath.” Currently there is no understanding or oversight of the accuracy of this kind of software, and black-box decision-making is often trusted without openness to human question or correction.
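
To make concrete how this kind of flagging can misfire, here is a minimal illustrative sketch of context-free keyword matching, written in Python. It is not any vendor’s actual implementation: the library entries and categories below are invented for illustration (only “biscuit” as weapons slang is drawn from the reporting above, and the “cliff” and “rhino” entries echo the real-life cases mentioned later in this post).

```python
# Illustrative sketch only: context-free keyword flagging of the kind described
# above. The entries and categories are invented; real vendors' word lists,
# languages and matching logic are not published.
import re
from dataclasses import dataclass

# Hypothetical threat library. Real ones reportedly hold thousands of terms,
# in multiple languages, and change month to month.
THREAT_LIBRARY = {
    "biscuit": "weapons slang",   # reportedly listed by one vendor as slang for a gun
    "cliff": "self-harm",         # hypothetical entry
    "rhino": "gang-related",      # hypothetical entry
}

@dataclass
class Alert:
    term: str
    category: str
    snippet: str

def flag(text: str) -> list[Alert]:
    """Raise an alert for any library term found in the text, with no sense of
    context or intent, which is exactly how innocuous writing gets flagged."""
    alerts = []
    for term, category in THREAT_LIBRARY.items():
        if re.search(rf"\b{re.escape(term)}\b", text, re.IGNORECASE):
            alerts.append(Alert(term, category, text[:80]))
    return alerts

# A harmless sentence still produces a "weapons slang" alert.
print(flag("Can I have a biscuit with my tea after school?"))
```

Whether a product uses a word list like this or a more complex AI classifier, the governance problem described in the rest of this post is the same: nobody outside the company can inspect the list, the matching logic, or the error rates.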

Aside from how this range of very different tools works, there are very basic questions about whether such policies and tools help or harm children at all. The UN Special Rapporteur’s 2014 report on children’s rights and freedom of expression stated:

“The result of vague and broad definitions of harmful information, for example in determining how to set Internet filters, can prevent children from gaining access to information that can support them to make informed choices, including honest, objective and age-appropriate information about issues such as sex education and drug use. This may exacerbate rather than diminish children’s vulnerability to risk.” (2014)

U.S. safety tech creates harms

Today in the U.S., the CDT published a report on school monitoring systems there, many of which are also used over here. The report revealed that 13 percent of students knew someone who had been outed as a result of student-monitoring software. Another conclusion the CDT draws is that monitoring is used for discipline more often than for student safety.

We don’t have that same research for the UK, but we’ve seen IT staff openly admit to using the webcam feature to take photos of young boys who are “mucking about” on the school library computer.

The Online Safety Bill scales up problems like this

The Online Safety Bill seeks to expand how such ‘behavioural identification technology’ can be used outside schools.

“Proactive technology include content moderation technology, user profiling technology or behaviour identification technology which utilises artificial intelligence or machine learning.” (p151 Online Safety Bill, August 3, 2022)

The “proactive technology requirement” is as yet rather open ended, left to Ofcom Codes of Practice, but the scope creep of such AI-based tools has become ever more intrusive in education. ‘Legal but harmful’ is decided by companies, the IWF and any number of opaque third parties whose processes and decision-making we know little about. It’s important not to conflate filtering and blocking lists of ‘unsuitable’ websites that can be accessed in schools, with monitoring and tracking of individual behaviours.

‘Technological developments that have the capacity to interfere with our freedom of thought fall clearly within the scope of “morally unacceptable harm,”‘ according to Alegre (2017), and yet this individual interference is at the very core of school safeguarding tech and policy by design.

In 2018, the ‘lawful but harmful’ list of activities in the Online Harms White Paper was nearly identical to the terms used by school Safety Tech companies. The Bill now appears to be trying to create a new legitimate basis for these practices, more about underpinning a developing market than supporting children’s safety or rights.

Chilling speech is itself controlling content

While a lot of the debate about the Bill has been about the free speech impacts of content removal, there has been less about what is unwritten: how it will operate to prevent speech and participation in the digital environment for children. The chilling effect of surveillance on access and participation online is well documented. Younger people and women are more likely to be negatively affected (Penney, 2017). The chilling effect on thought and opinion is made worse by tools that trigger an alert even when what is typed is quickly deleted, or never sent or shared. Thoughts are no longer private.

The ability to use end-to-end encryption on private messaging platforms is simply worked around by these kinds of tools, trading security for claims of children’s safety. Anything on screen may be read in the clear by some systems, even capturing passwords and bank details.

Graham Smith has written, “It may seem like overwrought hyperbole to suggest that the [Online Harms] Bill lays waste to several hundred years of fundamental procedural protections for speech. But consider that the presumption against prior restraint appeared in Blackstone’s Commentaries (1769). It endures today in human rights law. That presumption is overturned by legal duties that require proactive monitoring and removal before an independent tribunal has made any determination of illegality.”

More than this, there is no determination of illegality in legal but harmful activity. It’s opinion. The government is prone to argue that “nothing in the Bill says X…”, but you need to understand the context: such proactive behavioural monitoring tools work through threat and the resulting chilling effect, to impose unwritten control. This Bill does not create a safer digital environment; it creates threat models for users and companies, to control how we think and behave.

What do children and parents think?

Young people’s own views that don’t fit the online harms narrative have been ignored by Westminster scrutiny Committees. A 2019 survey by the Australian eSafety Commissioner found that over half (57%) of child respondents were uncomfortable with background monitoring processes, and 43% were unsure about these tools’ effectiveness in ensuring online safety.

And what of the role of parents? Article 3(2) of the UNCRC says: “States Parties undertake to ensure the child such protection and care as is necessary for his or her wellbeing, taking into account the rights and duties of his or her parents, legal guardians, or other individuals  legally responsible for him or her, and, to this end, shall take all appropriate legislative and administrative measures.” (my emphasis)

In 2018, 84% of 1,004 parents in England polled for us by Survation agreed that children and guardians should be informed how this monitoring activity works, and wanted to know what the keywords were. (We didn’t ask whether it should happen at all.)

The wide-ranging nature [of general monitoring], rather than targeted and proportionate interference, has previously been judged to be in breach of law and a serious interference with rights. Neither policy makers nor companies should assume parents want safety tech companies to remove autonomy, or to make inferences about our children’s lives. Parents, if asked, reject the secrecy in which it happens today and demand transparency and accountability. Teachers can feel anxious talking about it at all. There are no clear routes for error correction; in fact corrections are often not made, because some claim that in building up profiles staff should not delete anything and should ignore claims of errors, in case a pattern of behaviour is missed. There are no independent assessments available to evidence that these tools work or are worth the costs. There are no routes for redress, and no responsibility taken for tech-made mistakes. None of which makes children safer online.

Before broadening out where such monitoring tools are used, their use and effects on school children need to be understood and openly debated. Policy makers may justify turning a blind eye to harms created by one set of technology providers while claiming that only the other tech providers are the problem, because it suits political agendas or industry aims, but children’s rights and their wellbeing should not be sacrificed in doing so. Opaque, unlawful and unsafe practice must stop. A quid pro quo for getting access to millions of children’s intimate behaviour should be transparent access to the products’ workings, and acceptance of standards on universal, safe, accountable practice. Families need to know what’s recorded, and to have routes for redress when a daughter researching ‘cliff walks’ gets flagged as a suicide risk, or an environmentally interested teenage son searching for information on ‘black rhinos’ is asked about his potential gang membership. The tools sold as solutions to online harms shouldn’t create more harm, as in these reported real-life case studies.

Teachers are ‘involved in state surveillance’, as Fahy put it, through Prevent. Sunak was wrong to point away from the threats of the far right in his comments. But the far broader, unspoken surveillance of children’s personal lives, behaviours and thoughts through general monitoring in schools, and what will be imposed through the Online Safety Bill more broadly, should concern us far more than what was said.

On #IWD2022 gender bias in #edTech

I’m a mother of three girls at secondary school. For international women’s day 2022 I’ve been thinking about the role of school technology in my life.

Could some of it be improved, to stop baking gender discrimination norms into home-school relationships?

Families come in all shapes and sizes and not every family has defined Mum and Dad roles. I wonder if edTech could be better at supporting families if it offered the choice of a multi-parent-per-child relationship by default?

School-home communications rarely come home in school bags anymore; they arrive digitally, and are routinely sent to one parent per child. If something needs actioning, it’s typically going to one parent, not both. The design of digital tools can lock in the responsibility for action to a single nominated person. Schools send the edTech company the ‘pupil parent contact’ email but, at least in my experience, don’t ever ask what that should be after it’s been collected once. (And don’t do a good job of communicating data rights each time before doing so either, but that’s another story.)

Whether it’s learning updates with report cards about the child, weekly newsletters, changes of school clubs, closures, events or other ‘things you should know’, I filter the emails I get daily from a number of different accounts for relevance, and forward them on to Dad.

To administer cashless payments to school for contributions to art, cooking, science and technology lessons, school trips, other extras or to manage my child’s lunch money, there is a single email log-in and password for a parent role allocated to the child’s account.

And it might be just my own unrepresentative circle of friends, but it’s usually Mum who’s on the receiving end of demands at all hours.

In case of illness, work commitments, or otherwise being unable to carry on as usual, it’s not easy for a second designated parent to automatically pick up or share the responsibilities.

One common cashless payment system’s approach does permit more than one parent role, but it’s manual and awkward to set up. “For a second parent to have access it is necessary for the school to send a second letter with a second temporary username and password combo to activate a second account. In short, the only way to do this is to ask your school.”

Some messaging services allow a school-to-multiple-parent email, but the message itself often forms an individual rather than a group thread with the teacher, i.e. designed for a class, not a family.

Some might suggest it is easy enough to set up automatic email forwarding, but again this pushes the onus back onto the parent and doesn’t solve the problem of only one person being able to perform transactions.

I wonder, if one-way communications tools offered a second email address by default, what difference it would make to overall parental engagement?

What if, for financial management, edTech permitted an option to have a ‘temporary re-route’ to another email address, or a default second role with a notification to the other that something had been paid?

Why can’t one parent, once confirmed with secure access to the child-parent account, add a second parent role? This need not be a parent, but could be another relation managing the outgoing money. You can only make outgoing payments to the school, or withdraw money to the same single bank account it came from, so fraud isn’t likely.
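
For illustration only, this is roughly what a multi-guardian design could look like in a payments or messaging product. The class and field names are hypothetical, not any vendor’s actual schema; the point is that notices and receipts fan out to every active contact, and a verified guardian can add another without going back through the school office.

```python
# Hypothetical sketch of a multi-guardian-per-child account.
from dataclasses import dataclass, field

@dataclass
class GuardianContact:
    name: str
    email: str
    can_pay: bool = True           # may make outgoing payments to the school
    receives_notices: bool = True  # gets newsletters, closures, report cards

@dataclass
class ChildAccount:
    child_name: str
    guardians: list[GuardianContact] = field(default_factory=list)

    def add_guardian(self, added_by: GuardianContact, new: GuardianContact) -> None:
        """Only a guardian already verified on the account can add another."""
        if added_by not in self.guardians:
            raise PermissionError("not a verified guardian on this account")
        self.guardians.append(new)

    def notice_recipients(self) -> list[str]:
        return [g.email for g in self.guardians if g.receives_notices]

# Both contacts now receive the newsletter and any payment receipt.
first = GuardianContact("Parent A", "parent.a@example.com")
account = ChildAccount("Child", [first])
account.add_guardian(first, GuardianContact("Parent B", "parent.b@example.com"))
print(account.notice_recipients())
```

Nothing here is technically hard; it is a design choice about defaults, which is the point of this post.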

I wonder what research looking at each of these tools would find, if it assessed whether there is a gender divide built into the default admin?

What could it improve in work-life balance for staff and families, if emails were restricted to send or receive in preferred time windows?

Technology can be amazing and genuinely make life easier for some. But not everyone fits the default, and I believe the defaults are rarely built to best suit users, but rather the institutions that procure them. In many cases edTech tools aren’t working well for the parents who make up their main user base.

If I were designing these, they’d be school-based rather than third-party cloud-based, distributed systems, centred on the child. I think we can do better, not only for women, but for everyone.


PS When my children come home from school today, I’ll be showing them the Gender Pay Gap Bot @PayGapApp thread, with its explanations of mode, mean and median. It’s worth a look.

Man or machine: who shapes my child? #WorldChildrensDay 2021

A reflection for World Children’s Day 2021. In ten years’ time my three children will be in their twenties. What will they and the world around them have become? What will shape them in the years in between?


Today when people talk about AI, we hear fears of consciousness in AI. We see I, Robot. The reality of any AI that will touch their lives in the next ten years is very different. The definition may be contested, but artificial intelligence in schools already involves automated decision-making at speed and scale, without compassion or conscience, but with outcomes that affect children’s lives for a long time.

The guidance of today—in policy documents, well-intentioned toolkits and guidelines, and oh yes, yet another ‘ethics’ framework—is all fairly same-y in terms of the issues identified.

Bias in training data. Discrimination in outcomes. Inequitable access or treatment. Lack of understandability or transparency of decision-making. Lack of routes for redress. More rarely, thoughts on exclusion, disability and accessible design, and the digital divide. In seeking to fill that gap, the call can conclude with a cry to ensure ‘AI for all’.

Most of these issues fail to address the key questions in my mind, with regards to AI in education.

Who gets to shape a child’s life and the environment they grow up in? The special case of children is often used for special pleading in government tech issues. Despite this, in policy discussion and documents, government fails over and over again to address children as human beings.

Children are still developing. Physically, emotionally, their sense of fairness and justice, of humour, of politics, and of who they are.

AI is shaping children in ways that schools and parents cannot see. And the issues go beyond limited agency and autonomy. Beyond UNCRC articles 8 and 18, the role of the parent and lost boundaries between schools and home, and articles 23 and 29. (See the articles in detail at the end.)

Published concerns about accessibility in AI are often about the individual and inclusion, in terms of design that enables participation. But once children can participate, where is the independent measurement and evaluation of the impact on their educational progress, or physical and mental development? What is these products’ effect?

From the overhyped, like Edgenuity, to the oversold, like ClassCharts (which didn’t actually have any AI in it but still won Bett Show Awards), frameworks often mention, but still have no meaningful solutions for, the products that don’t work and fail.

But what about the harms from products that work as intended? These can fail human dignity or create a chilling effect, like exam proctoring tech. The safety tech that infers things and causes staff to intervene even if the child was only chatting about ‘a terraced house.’ Punitive systems that keep profiles of behaviour points long after a teacher would have let it go. What about those shaping the developing child’s emotions and state of mind by design, while claiming to operate within data protection law? Those that measure and track mental health, or make predictions for interventions by school staff?

Brain headbands that transfer neurosignals aren’t processing biometric data in data protection terms, if the data is not used to, or able to, uniquely identify a child.

“Wellbeing” apps are not being regulated as medical devices, and yet they are designed to profile and influence mental health and mood, and schools adopt them at scale.

If AI is being used to deliver a child’s education, but only in the English language, what risk does this tech-colonialism create in evangelising children in non-native English speaking families through AI, not only in access to teaching, but in reshaping culture and identity?

At the institutional level, concerns are only addressed after the fact. But how should they be assessed as part of procurement when many AI products are marketed as never stopping “learning about your child”? Tech needs full life-cycle oversight, but what companies claim their products do is often only assessed to pass accreditation at a single point in time.

But the biggest gap in governance is not going to be fixed by audits or accreditation of algorithmic fairness. It is the failure to recognise the redistribution of not only agency but authority: from individuals to companies (the teacher doesn’t decide what you do next, the computer does); from public interest institutions to companies (company X determines the curriculum content, not the school); and from State to companies (accountability for outcomes falls through the gap in outsourcing activity to the AI company). We are automating authority, and with it the shirking of responsibility and of liability for the machine’s flaws, and accepting that this is the only way, thanks to our automation bias. Accountability must be human, but whose?

Around the world, the rush to regulate AI, or related tech in Online Harms, Digital Services, or biometrics law, is going to embed, not redistribute, power through regulatory capitalism.

We have regulatory capture, including on the government boards and bodies that shape the agenda; unrealistic expectations of competition shaping the market; and we’re ignoring the transnational colonisation of whole schools, or even regions and countries, shaping the delivery of education at scale.

We’re not regulating the questions: who does the AI serve, and how do we deal with conflicts of interest between the child’s rights, the family, school staff, the institution or State, and the company’s wants? Where do we draw the line between public interest and private interests, and who decides what are the best interests of each child?

We’re not managing the implications of the datafied child being mined and analysed in order to train companies’ AI. Is it ethical or desirable to use children’s behaviour as a source of business intelligence, free labour performed in school systems for companies to profit from, without any choice (see UNCRC Art. 32)?

We’re barely aware as parents whether a company will decide how a child is tested in a certain way, asked certain questions about their mental health, or given nudges to ‘improve’ their performance or mood. It’s not a question of ‘is it in the best interests of a child’, but rather: who designs it, and can schools assess its compatibility with a child’s fundamental rights and freedoms to develop free from interference?

It’s not about protection of ‘the data’, although data protection should be about the protection of the person, not only about enabling data flows for business.

It’s about protection from strangers engineering a child’s development in closed systems.

It is about protecting children from an unknown and unlimited number of persons interfering with who they will become.

Today’s laws and debate are too often about regulating someone else’s opinion; how it should be done, not if it should be done at all.

It is rare we read any challenge of the ‘inevitability’ of AI [in education] narrative.

Whom do I ask my top two questions on AI in education:
(a) who gets and grants permission to shape my developing child, and
(b) what happens to the duty of care in loco parentis as schools outsource authority to an algorithm?


UNCRC

Article 8

1. States Parties undertake to respect the right of the child to preserve his or her identity, including nationality, name and family relations as recognised by law without unlawful interference.

Article 18

1. States Parties shall use their best efforts to ensure recognition of the principle that both parents have common responsibilities for the upbringing and development of the child. Parents or, as the case may be, legal guardians, have the primary responsibility for the upbringing and development of the child. The best interests of the child will be their basic concern.

Article 29

1. States Parties agree that the education of the child shall be directed to:

(a) The development of the child’s personality, talents and mental and physical abilities to their fullest potential;

(c) The development of respect for the child’s parents, his or her own cultural identity, language and values, for the national values of the country in which the child is living, the country from which he or she may originate, and for civilizations different from his or her own;

Article 30

In those States in which ethnic, religious or linguistic minorities or persons of indigenous origin exist, a child belonging to such a minority or who is indigenous shall not be denied the right, in community with other members of his or her group, to enjoy his or her own culture

 

Data-Driven Responses to COVID-19: Lessons Learned OMDDAC event

A slightly longer version of a talk I gave at the launch event of the OMDDAC Data-Driven Responses to COVID-19: Lessons Learned report on October 13, 2021. I was asked to respond to the findings presented on Young People, Covid-19 and Data-Driven Decision-Making by Dr Claire Bessant at Northumbria Law School.

[ ] indicates text I omitted for reasons of time, on the day.

Their final report is now available to download from the website.

You can also watch the full event here via YouTube. The part on young people, presented by Claire and which I follow, is at the start.

—————————————————–

I’m really pleased to congratulate Claire and her colleagues at OMDDAC today, and hope that policy makers will recognise the value of this work and that it will influence change.

I will reiterate three things they found or included in their work.

  1. Young people want to be heard.
  2. Young people’s views on data and trust include concerns about conflated data purposes, and
  3. The concept of being “data driven under COVID conditions”.

This OMDDAC work, together with Investing in Children, is very timely as a rapid response, but I think it is also important to set it in context, and to recognise that some of its significance is that it reflects a continuum of similar findings over time, largely unaffected by the pandemic.

Claire’s work comprehensively backs up the consistent findings of over ten years of public engagement, including with young people.

The 2010 study with young people conducted by The Royal Academy of Engineering, supported by three Research Councils and Wellcome, discussed attitudes towards the use of medical records and concluded that these questions and concerns must be addressed by policy makers, regulators, developers and engineers before progressing with the design and implementation of record-keeping systems and the linking of any databases.

In 2014, the House of Commons Science and Technology Committee, in its report Responsible Use of Data, said the Government has a clear responsibility to explain to the public how personal data is being used.

The same Committee’s Big Data Dilemma 2015–16 report (p9) concluded that “data (some collected many years before and no longer with a clear consent trail) […] is unsatisfactory left unaddressed by Government and without a clear public-policy position”.

Or see the 2014 Royal Statistical Society and Ipsos MORI work on the data trust deficit with lessons for policymakers, DotEveryone’s 2019 work on Public Attitudes, or the ICO’s 2020 Annual Track survey results.

There is also a growing body of literature demonstrating the implications of being a ‘data driven’ society for the datafied child, as described by Deborah Lupton and Ben Williamson in their own 2017 research.

[This year our own work with young people, published in our report on data metaphors “the words we use in data policy”, found that young people want institutions to stop treating data about them as a commodity and start respecting data as extracts from the stories of their lives.]

The UK government and policy makers are simply ignoring the inconvenient truth that the legislation and governance frameworks that exist today, such as the UN General Comment No. 25 on children’s rights in relation to the digital environment, demand that people know what is done with data about them, and must be applied to address children’s right to be heard and to enable them to exercise their data rights.

The public perceptions study within this new OMDDAC work, shows that it’s not only the views of children and young people that are being ignored, but adults too.

And perhaps it is worth reflecting here that people often don’t tend to think about all this in terms of data rights and data protection, but rather in terms of human rights, and protections for the human being from uses of data that give other people power over our lives.

This project found young people’s trust in the use of their confidential personal data was affected by their understanding of who would use the data and why, and how people would be protected from prejudice and discrimination.

We could build easy reporting mechanisms at public points of contact with state institutions, in education, in social care, in welfare and policing, to produce reports on demand of the information you hold about me, and to enable corrections. It would benefit institutions by giving them more accurate data, and make them more trustworthy if people can see: here’s what you hold on me, and here’s what you did with it.

Instead, we’re going in the opposite direction. New government proposals suggest making that process harder, by charging for Subject Access Requests.

This research shows that current policy is not what young people want. People want the ability to choose between granular levels of control over the data that is being shared. They value having autonomy and control, knowing who will have access, maintaining records’ accuracy, how people will be kept informed of changes, who will maintain and regulate the database, data security, anonymisation, and having their views listened to.

Young people also fear the power of data to speak for them, that the data about them are taken at face value, listened to by those in authority more than the child in their own voice.

What do these findings mean for public policy? Without respect for what people want, and for the fundamental human rights and freedoms of all, there is no social licence for data policies.

Whether it’s confidential GP records or the school census expansion in 2016, when public trust collapses so does your data collection.

Yet the government stubbornly refuses to learn and seems to believe it’s all a communications issue, a bit like the ‘Yes Minister’ English approach to foreigners when they don’t understand: just shout louder.

No, this research shows data policy failures are not fixed by, “communicate the benefits”.

Nor is it fixed by changing Data Protection law. As a comment in the report says, UK data protection law offers a “how-to” not a “don’t-do”.

Data protection law is designed to be enabling of data flows. But that can mean that when state data processing avoids using the lawful basis of consent in data protection terms, as it rightly often does, the data use is not consensual.

[For the sake of time, I didn’t include the thought in the next two paragraphs in the talk, but I think it is important to mention that in our own work we find this contradiction is not lost on young people. — Against the backdrop of the efforts after the MeToo movement, and all that has been said by Ministers in Education and at the DCMS about the Everyone’s Invited work earlier this year to champion consent in the relationships, sex and health education (RSHE) curriculum, adults in authority keep saying consent matters, but don’t demonstrate it, and when it comes to data, use people’s data in ways they do not want.

The report picks up that young people, and disproportionately those communities that experience harm from authorities, mistrust data sharing with the police. This is now set against the backdrop of not only the recent Wayne Couzens case, but a series of very public misuses of police power, including COVID powers.]

The data powers used “under COVID conditions” are now being used as cover for the attack on data protections in the future. The DCMS consultation on changing UK Data Protection law, open until November 19th, suggests that the similarly reduced protections on data distribution in the emergency should become the norm. While DP law is written expressly to permit things that are out of the ordinary in extraordinary circumstances, such measures are limited in time. The government is proposing that some things that were found convenient to do under COVID now become commonplace.

But it includes things such as removing Article 22 of the UK GDPR, with its protections for people in processes involving automated decision-making.

Young people were those who felt first-hand the risks and harms of those processes in the summer of 2020, and the “mutant algorithm” is something this Observatory report also addressed in its research. Again, it found young people felt left out of the decisions about them, despite being the group that would feel the negative effects.

[Data protection law may be enabling increased lawful data distribution across the public sector, but it is not offering people, including young people, the protections they expect of their human right to privacy. We are on a dangerous trajectory for public interest research and for society, if the “new direction” this government goes in, for data and digital policy and practice, goes against prevailing public attitudes and undermines fundamental human rights and freedoms.]

The risks and benefits of the power obtained from the use of admin data are felt disproportionately across different communities, including children, who are not a one-size-fits-all, homogenous group.

[While views across groups will differ — and we must be careful to understand any popular context at any point in time on a single issue and unconscious bias in and between groups — policy must recognise where there are consistent findings across this research with that which has gone before it. There are red lines about data re-uses, especially on conflated purposes using the same data once collected by different people, like commercial re-use or sharing (health) data with police.]

The golden thread that runs through time and across different sectors’ data use, are the legal frameworks underpinned by democratic mandates, that uphold our human rights.

I hope the powers-that-be in the DCMS consultation, and wider policy makers in data and digital policy, take this work seriously and not only listen, but act on its recommendations.


2024 updates: opening paragraph edited to add current links.
A chapter written by Rachel Allsopp and Claire Bessant discussing OMDDAC’s research with children will also be published on 21st May 2024 in Governance, democracy and ethics in crisis-decision-making: The pandemic and beyond (Manchester University Press) as part of its Pandemic and Beyond series https://manchesteruniversitypress.co.uk/9781526180049/ and an article discussing the research in the open access European Journal of Law and Technology is available here https://www.ejlt.org/index.php/ejlt/article/view/872.

Facebook View and Ray-Ban glasses: here’s looking at your kid

Ray-Ban (EssilorLuxottica) is selling glasses with ‘Facebook View’. Questions have already been asked about whether they can be lawful in Europe, including in the UK, in particular as regards enabling the processing of children’s personal data without consent.

The Italian data authority has asked the company to explain via the Irish regulator:

  • the legal basis on which Facebook processes personal data;
  • the measures in place to protect people recorded by the glasses, children in particular;
  • questions of anonymisation of the data collected; and
  • the voice assistant connected to the microphone in the glasses.

While the first questions in Europe may be bound to data protection law and privacy, there are also questions about why Facebook has gone ahead despite the fate of Google Glass, which was removed from the market in 2013. You can see a pair displayed in a surveillance exhibit at the Victoria and Albert Museum (September 2021).

“We can’t wait to see the world from your perspective”, says Ray-Ban Chief Wearables Officer Rocco Basilico in the promotional video together with Mark Zuckerberg. I bet. But not as much as Facebook.

With cameras and microphones built in, up to around 30 videos or 500 photos can be stored on the glasses and shared with the Facebook companion app. While the teensy light on one corner is supposed to indicate that recording is in progress, the glasses look much like any others and are indistinguishable from the rest of the Ray-Ban range. You can even buy them as prescription glasses, which intrigues me as to how that recording looks on playback, or when shared via the companion apps.

While the Data Policy doesn’t explicitly mention Facebook View in the wording on how it uses data to “personalise and improve our Products,” and the privacy policy is vague on Facebook View, it seems pretty clear that Facebook will use the video capture to enhance its product development in augmented reality.

“We believe this is an important step on the road to developing the ultimate augmented reality glasses”, says Mark Zuckerberg (05:46).

The company needs a lawful basis to be able to process the data it receives for those purposes. It determines those purposes, and is therefore a data controller for that processing.

In the supplemental policy the company says that “Facebook View is intended solely for users who are 13 or older.” Data protection law does not care about the age of the product user as such, but it does regulate the basis under which a child’s data may be processed, and that child may be the user setting up an account. It is also concerned with the data of the children who are recorded. By recognising the legal limitations on who can be an account owner, the company has a bit of a self-own here on what the law says about children’s data.

Personal privacy may have weak protection in data protection laws that offer the wearer exemptions for domestic** or journalistic purposes, but neither the user nor the company can avoid the fact that processing video and audio recordings may be done without (a) adequately informing people whose data is processed, or (b) appropriate purpose limitation for any processing that Facebook the company performs, across all of its front-end apps and platforms or back-end processes.

I’ve asked Facebook how I would, as a parent or child, be able to get a wearer to destroy a child’s images and video or voice recorded in a public space, to which I did not consent. How would I get to see that content once held by Facebook, or request its processing be restricted by the company, or user, or the data destroyed?

Testing the Facebook ‘contact our DPO’ process as if I were a regular user fails. It has sent me round the houses via automated forms.

Facebook is clearly wrong here on privacy grounds, but if you can afford the best privacy lawyers in the world, why would you go ahead anyway? Might they believe, after nearly twenty years of privacy-invasive practice and a booming bottom line, that there is no risk to reputation, no risk to their business model, and no real risk to the company from regulation?

It’s an interesting partnership, since Ray-Ban has no history in understanding privacy, and Facebook has a well-known, controversial one. Reputational risk shared will not be reputational risk halved. And EssilorLuxottica has a share price to consider. I wonder if they carried out any due diligence risk assessment for their investors?

If and when enforcement catches up and the product is withdrawn, regulators must act as the FTC did on a product (in that case algorithms) developed from “ill-gotten data” (In the Matter of Everalbum and Paravision, Commission File No. 1923172).

Destroy the data, destroy the knowledge gained, and remove it from any product development to  date.  All “Affected Work Product.”

Otherwise any penalty Facebook gets from this debacle will be just the cost of doing business, to have bought itself a very nice training dataset for its AR product development.

Ray-Ban, of course, will take all the reputational hit if found enabling strangers to take covert video of our kids. No one expects any better from Facebook. After all, we all know, Facebook takes your privacy, seriously.


Reference:  Rynes: On why your ring video doorbell may make you a controller under GDPR.

https://medium.com/golden-data/rynes-e78f09e34c52 (Golden Data, 2019)

Judgment of the Court (Fourth Chamber), 11 December 2014 František Ryneš v Úřad pro ochranu osobních údajů Case C‑212/13. Case file

exhibits from the Victoria and Albert museum (September 2021)

When the gold standard no longer exists: data protection and trust

Last week the DCMS announced that consultation on changes to Data Protection laws is coming soon.

  • UK announces intention for new multi-billion pound global data partnerships with the US, Australia and Republic of Korea
  • International privacy expert John Edwards named as preferred new Information Commissioner to oversee shake-up
  • Consultation to be launched shortly to look at ways to increase trade and innovation through data regime.

The Telegraph reported, Mr Dowden argues that combined, they will enable Britain to set the “gold standard” in data regulation, “but do so in a way that is as light touch as possible”.

It’s an interesting mixture of metaphors. What is a gold standard? What is light touch? These rely on the reader to supply the meaning, but don’t convey any actual content. Whether there will be substantive changes or not, we need to wait for the full announcement this month.

Oliver Dowden’s recent briefing to the Telegraph (August 25) was not the first trailer for changes that are yet to be announced. He wrote in the FT in February this year that “the UK has an opportunity to be at the forefront of global, data-driven growth,” and it looks like he has tried to co-opt the rights framing as his own: …“the beginning of a new era in the UK — one where we start asking ourselves not just whether we have the right to use data, but whether, given its potential for good, we have the right not to.”

There was nothing more on that in this week’s announcement, but the focus was on international trade. The Government says it is prioritising six international agreements with “the US, Australia, Colombia, Singapore, South Korea and Dubai but in the future it also intends to target the world’s fastest growing economies, among them, India, Brazil, Kenya and Indonesia.” (my bold)

Notably absent from that ‘fastest growing, among them’ list is China. What those included in the list have in common is that they are countries not especially renowned for protecting human rights.

Human rights like privacy. The GDPR, and in turn the UK GDPR, recognised that rights matter. In other regimes, data protection is designed not to prioritise the protection of rights but the harmonisation of data in trade, and that may be where we are headed. If so, it would be out of step with how the digital environment has changed since those older laws were seen as satisfactory (but weren’t), and with the reason why EU countries moved towards both better harmonisation *and* rights protection.

At the same time, while data protection laws increasingly align towards a highly interoperable global standard, data sovereignty and protectionism are growing too, where transfers to the US remain unprotected from government surveillance.

Some countries are establishing stricter rules on the cross-border transfer of personal information in the name of digital sovereignty, security or business growth, such as Hessen’s decision on Microsoft and “bring the data home” moves to German-based data centres.

In the big post-Brexit, data-for-trade fire sale, the DCMS appears to be ignoring these risks of data distribution, despite having a good domestic case study on its doorstep in 2020. The Department for Education has been giving away sensitive pupil data since 2012. Millions of people, including my own children, have no idea where it’s gone. The lack of respect for current law makes me wonder how I will trust that our own government, and those others we trade with, will respect our rights and risks in future trade deals.

Dowden complains in the Telegraph about the ICO that “you don’t know if you have done something wrong until after you’ve done it”. Isn’t that the way enforcement usually works? Should the 2019–20 ICO audit have turned a blind eye to the Department for Education’s lack of prioritisation of the rights in the named records of over 21 million pupils? Don’t forget even gambling companies had access to learners’ records, of which the Department for Education claimed to be unaware. To be ignorant of the law that applies to you is a choice.

Dowden claims the changes will enable Britain to set the “gold standard” in data regulation. It’s an ironic analogy to use, since the gold standard, while once a measure of global trust between countries, isn’t used by any country today. Our government sold off our physical gold over 20 years ago, after being the centre of the global gold market for over 300 years. The gold standard is a meaningless thing of the past that sounds good. A true international gold standard existed for fewer than 50 years (1871 to 1914). Why did we even need it? Because we needed a consistent, trusted measure of monetary value, backed by trust in a commodity. “We have gold because we cannot trust governments,” President Herbert Hoover famously said in 1933 in his statement to Franklin D. Roosevelt. The gold standard was all about trust.

At defenddigitalme we’ve very recently been talking with young people about politicians’ use of language in debating national data policy.  Specifically, data metaphors. They object to being used as the new “oil” to “power 21st century Britain” as Dowden described it.

A sustainable national data strategy must respect human rights to be in step with what young people want. It must not go back to old-fashioned data laws shaped only by trade and not also by human rights; laws that are not fit for purpose even in the current digital environment. Any national strategy must be forward-thinking. Otherwise it wastes time in what should be an urgent debate.

In fact, such a strategy is the wrong end of the telescope from which to look at personal data at all— government should be focussing on the delivery of quality public services to support people’s interactions with the State, and on managing the administrative data that comes out of digital services as a by-product and externality. Accuracy. Interoperability. Registers. Audit. Rights-management infrastructure. Admin data quality is quietly ignored while we package it up hoping no one will notice it’s really. not. good.

Perhaps Dowden is doing nothing innovative at all. If these deals are to be about admin data given away in international trade, he is simply continuing a long tradition of selling off the family silver. The government may have got to the point where there is little left to sell. The question now would be: whose family does it come from?

To use another bad metaphor, Dowden is playing with fire here if the government doesn’t fix the issue of the future of trust. Oil and fire don’t mix well. Increased data transfers—without meaningful safeguards, including minimised data collection to start with—will increase risk, and transfer that risk to you and me.

Risks of a lifetime of identity fraud are not just minor personal externalities in short term trade. They affect nation state security. Digital statecraft. Knowledge of your public services is business intelligence. Loss of trust in data collection creates lasting collective harm to data quality, with additional risk and harm as a result passed on to public health programmes and public interest research.

I’ll wait and see what the details of the plans are when announced. We might find it does little more than package up recommendations on Codes of Practice, Binding Corporate Rules and other guidance that the EDPB has issued in the last 12 months. But whatever it looks like, so far we are yet to see any intention to put in place the necessary infrastructure of rights management that admin data requires. While we need data registers, those we had have been axed, and few new ones under the Digital Economy Act replaced them. Transparency and controls for people to exercise their rights are needed if the government wants our personal data to be part of new deals.

 

img: René Magritte The False Mirror Paris 1929

=========

Join me at the upcoming lunchtime online event, on September 17th from 13:00 to talk about the effect of policy makers’ language in the context of the National Data Strategy: ODI Fridays: Data is not an avocado – why it matters to Gen Z https://theodi.org/event/odi-fridays-data-is-not-an-avocado-why-it-matters-to-gen-z/

Data Protection law is being set up as a patsy.

After Dominic Cummings’ marathon session at the Select Committee, the Times published an article on “The heroes and villains of the pandemic, according to Dominic Cummings”.

One of Dom’s villains, left out of that list, was data protection law. He claimed, “if someone somewhere in the system didn’t say, ‘ignore GDPR’ thousands of people were going to die,” and that “no one even knew if that itself was legal—it almost definitely wasn’t.”

Thousands of people have died since that event he recalled from March 2020, but as a result of Ministers’ decisions, not data laws.

Data protection laws are *not* barriers, but permissive laws that *enable* the use of personal data within a set of standards and safeguards designed to protect people. The opposite of what their detractors would have us believe.

The starting point is fundamental human rights. Common law confidentiality. But the GDPR, and its related parts on public health, are in fact specifically designed to enable data processing that overrules those principles for pandemic response purposes. In recognition of emergency needs for a limited time period, data protection laws permit interference with our fundamental rights and freedoms, including overriding privacy.

We need that protection of our privacy sometimes from government itself. And sometimes from those who see themselves as “the good guys” and above the law.

The Department of Health appears to have no plan to tell people about care.data 2, the latest attempt at an NHS data grab, despite the fact that data protection laws require that they do. From September 1st (delayed to enable it to be done right, thanks to campaign efforts from medConfidential and supporters) all our GP medical records will be copied into a new national database for re-use, unless we actively opt out.

It’s groundhog day for the Department of Health. It is baffling why the government cannot understand or accept the need to do the right thing, and instead is repeating the same mistake of recent memory all over again. Why the rush, without due process, steamrollering any respect for the rule of law?

Were it not so serious, it might amuse me that some academic researchers appear to fail to acknowledge that this matters, and are getting irate on Twitter that *privacy* or ‘campaigners’ will prevent them getting hold of the data they appear to feel entitled to. Blame the people who designed a policy that will breach human rights and the law, not the people who want your rights upheld. And to blame the right itself is just, frankly, bizarre.

Such rants prompt me to recall the time when, early on in my lay role on the Administrative Data Research Network approvals panel, a Director attending the meeting *as a guest* became so apoplectic with rage that his face was nearly purple. He screamed, literally, at a panel of over ten well-respected academics and experts in research and/or data, because he believed the questions being asked over privacy and ethics principles in designing governance documents were unnecessary.

Or I might recall the request at my final meeting two years later, in 2017, by another then Director, for access to highly sensitive and linked children’s health and education data to do (what I believed was valuable) public interest research involving the personal data of children with Down Syndrome. But the request came through the process with no ethical review, a necessary step before it should even have reached the panel for discussion.

I was left feeling from those two experiences that both considered themselves and their work to be, in effect, “above the law” and expected special treatment and a free pass without challenge. And that nothing had improved over the two years.

If anyone in the research community cannot support due process, law and human rights when it comes to admin data access, and to research using highly sensitive data about people’s lives with potential for significant community and personal impacts, then they are part of the problem. There was extensive public outreach in 2012–13 across the UK about the use of personal, if de-identified, data in safe settings. And in 2014 the same concerns and red lines were raised by hundreds of people in person, almost universally with the same reactions, at a range of care.data public engagement events. Feedback which institutions say matters, but continue to ignore.

It seems nothing has changed since I wrote,

“The commercial intermediaries still need to be told, don’t pee in the pool. It spoils it, for everyone else.”

We could also look back to when Michael Gove, as Secretary of State for Education, changed the law in 2012 to permit pupil-level, identifying and sensitive personal data to be given away to third parties. Journalists. Charities. Commercial companies, including, pre-pandemic, an online tutoring business and an agency making heat maps of school catchment areas for estate agents from identifying pupil data (notably, without any SEND pupils' data). (Cummings was coincidentally a Gove SpAd at the Department for Education.) As a direct result of that 2012 decision to give away pupils' personal data, in effect 're-engineering' how the education sector was structured, the roles of the local authority and non-state providers, and creating a market for pupil data, an ICO audit of the DfE in February 2020 found unlawful practice and made 139 recommendations for change. We're still waiting to see if and how it will be fixed. At the moment it's business as usual. Literally. The ICO doesn't appear even to have stopped further data distribution until it is made lawful.

In April 2021, in answer to a written Parliamentary Question, Nick Gibb, Schools Minister, made a commitment to "publish an update to the audit in June 2021 and further details regarding the release mechanism of the full audit report will be contained in this update." Will they promote openness, transparency and accountability, or continue to skulk from publishing the whole truth?

Children have lost control of their digital footprint in state education by their fifth birthday. The majority of parents polled in 2018 did not know the National Pupil Database even exists. 69% of the over 1,004 parents asked replied that they had not been informed that the Department for Education may give away children's data to third parties at all.

Thousands of companies continue to exploit children’s school records, without opt-in or opt-out, including special educational needs, ethnicity, and other sensitive data at pupil level.

Data protection law alone is in fact so enabling of data flow that it is inadequate to protect children's rights and freedoms across the state education sector in England, whether from public interest, charity or commercial research interventions carried out without opt-in or opt-out, and without parental knowledge. We shouldn't need to understand our rights, or to be proactive, in order to have them protected by default; but data protection law, and the ICO in particular, have been captured by the siren call of data as a source of 'innovation' and economic growth.

Throughout 2018, amid questions over Vote Leave's data uses, Cummings claimed to know GDPR well. It was everyone else who didn't. On his blog that July he suggested, "MPs haven't even bothered to understand GDPR, which they mis-explain badly," and in April he wrote, "The GDPR legislation is horrific. One of the many advantages of Brexit is we will soon be able to bin such idiotic laws." He lambasted the Charter of Fundamental Rights, the protections of which the government went on to take away from us under the European Union Withdrawal Act.

But suddenly, come 2020/21, he is suggesting he didn't know the law that well after all: "no one even knew if that itself was legal—it almost definitely wasn't."

Data Protection law is being set up as a patsy, while our confidentiality is commodified. The problem is not the law. The problem is those in power who fail to respect it, those who believe themselves to be above it, and who feel an entitlement to exploit that for their own aims.


Added 21/06/2021: Today I again came across a statement I thought worth mentioning, from the Explanatory Notes to the Data Protection Bill, 2017:

"Accordingly, Parliament passed the Data Protection Act 1984 and ratified the Convention in 1985, partly to ensure the free movement of data. The Data Protection Act 1984 contained principles which were taken almost directly from Convention 108 – including that personal data shall be obtained and processed fairly and lawfully and held only for specified purposes."

"The Data Protection Directive (95/46/EC) ("the 1995 Directive") provides the current basis for the UK's data protection regime. The 1995 Directive stemmed from the European Commission's concern that a number of Member States had not introduced national law related to Convention 108 which led to concern that barriers may be erected to data flows. In addition, there was a considerable divergence in the data protection laws between Member States. The focus of the 1995 Directive was to protect the right to privacy with respect to the processing of personal data and to ensure the free flow of personal data between Member States."

Views on a National AI strategy

Today was the APPG AI Evidence Meeting – The National AI Strategy: How should it look? Here are some of my personal views and takeaways.

"Have the Regulators the skills and competency to hold organisations to account for what they are doing?" asked Roger Taylor, the former Chair of Ofqual, the exams regulator, as he began the panel discussion, chaired by Lord Clement-Jones.

A good question was followed by another.

"What are we trying to do with AI?" asked Andrew Strait, Associate Director of Research Partnerships at the Ada Lovelace Institute and formerly of DeepMind and Google. The goal of a strategy should not be to have more AI for the sake of having more AI, he said, but an articulation of values and goals. (I'd suggest the government may in fact be in favour of exactly that, more AI for its own sake, where its application is seen as a growth market.) Interestingly, he suggested that the Scottish strategy has a more values-based model, including fairness. [I had, it seems, wrongly assumed that a *national* AI strategy to come would include all of the UK.]

The arguments on fairness are well worn in AI discussion and getting old. And yet they still too often fail to ask whether these tools are accurate, or even work at all. Look at the education sector and one company's product, ClassCharts, which claimed AI as its USP for years, until the ICO found in 2020 that the company didn't actually use any AI at all. If company claims are not honest, or not accurate, then they're not fair to anyone, never mind fair across everyone.

Fairness is still too often thought of in terms of explainability of a computer algorithm, not the entire process it operates in. As I wrote back in 2019, “yes we need fairness accountability and transparency. But we need those human qualities to reach across thinking beyond computer code. We need to restore humanity to automated systems and it has to be re-instated across whole processes.”

Strait went on to say that safe and effective AI would be something people can trust. And he asked the important question: who gets to define what a harm is? He rightly identified that the harm identified by the developer of a tool may be very different from the harms experienced by the people affected by it. (No one on the panel attempted to define or limit what AI is, in these discussions.) He suggested that the carbon footprint of AI may counteract the benefit of applying AI in the pursuit of climate-change goals. "The world we want to create with AI" was a very interesting position, and I'd have liked to hear him address what he meant by that, who the "we" is, and the assumptions within it.

Lord Clement-Jones asked him about some of the work Ada Lovelace has done on harms such as facial recognition, and also asked whether some sector technologies are so high risk that they must be regulated. Strait suggested that we lack adequate understanding of what harms are. I'd suggest academia and civil society have done plenty of work on identifying those harms; they've just too often been ignored until after the harm is done and there are legal challenges. Strait also suggested he thought the Online Harms agenda was 'a fantastic example' of both horizontal and vertical regulation. [Hmm, let's see. Many people would contest that, and we'll see what the Queen's Speech brings.]

Maria Axente then went on to talk about children and AI. Her focus was on big platforms, but she also mentioned a range of other application areas. She spoke of the data governance work going on at UNICEF. She included the need to drive awareness of the risks of AI for children, and digital literacy; the potential to limit child development; the exacerbation of the digital divide; and risks in public spaces, but also hoped-for opportunities. She suggested that the AI strategy may therefore be the place for including children.

This of course was something I would want to discuss at more length, but in summary: the last decade of Westminster policy affecting children, even the Children's Commissioner's most recent Big Ask survey, bypasses the question of children's *rights* completely. If the national AI strategy, by contrast, were to address rights [the foundation upon which data laws are built], and create the mechanisms in public sector interactions with children that would enable them to be told if and how their data is being used (in AI systems or otherwise) and to exercise the choices that public engagement time and time again says people want, then that would be a *huge* and positive step forward for effective data practice across the public sector and for the use of AI. Otherwise I see a risk that a strategy on AI and children will ignore children as rights holders across the full range of rights in the digital environment, focus only on the role of AI in child protection, a key DCMS export aim, and ignore the invasive nature of safety tech tools and their harms.

Next, Dr Jim Weatherall from AstraZeneca tied together leveraging "the UK unique strengths of the NHS" and "data collected there", wanting a close knitting together of the national AI strategy and the national data strategy, so that the healthcare, life sciences and biomedical sector can become "an international renowned asset." He'd like to see students doing data science modules in their studies, and international access to talent to work for AZ.

Lord Clement-Jones then asked him how to engender public trust in data use. Weatherall said a number of false starts in the past are hindering progress, but that he saw the way forward was data trusts and citizen juries.

His answer ignores the most obvious solution: respect existing law and human rights, using data only in the ways that people want and have given their permission for. Then show them that you did that, and nothing more. In short, what medConfidential first proposed in 2014: the creation of data usage reports.

The infrastructure for managing personal data controls in the public sector, as well as among its private partners, must be the basic building block of any national AI strategy. Views from public engagement work, polls, and outreach have not changed significantly since those done in 2013-14, but ask for the same things over and over again: respect for 'red lines', and control and choice. Won't government please make it happen?

If the government fails to put in place those foundations, whatever strategy it builds will fall in the same ways they have done to date, as care.data did, by assuming it was acceptable to use data in the way that the government wanted, without a social licence, in the name of "innovation". Those aims were championed by companies such as Dr Foster, which profited from reusing personal data from the public sector in a "hole and corner deal", as described by the chairman of the House of Commons committee of public accounts in 2006. Such deals put industry and "innovation" ahead of what the public want in terms of 'red lines' for acceptable re-uses of their own personal data, and of data re-used in the public interest versus for commercial profit. And "The Department of Health failed in its duty to be open to parliament and the taxpayer." That openness and accountability are still missing nearly ten years on, in the scope creep of national datasets and commercial reuse, and in expanding data policies and research programmes.

I disagree with the suggestion made that Data Trusts will somehow be more empowering to everyone than the mechanisms we have today for data management. I believe Data Trusts will further stratify who is included and who is excluded, benefit those who have the capacity to participate, and disadvantage those who cannot choose. They are also a figleaf of acceptability that does not solve the core challenge. Citizen juries cannot do more than give a straw poll. Every person whose data is used has rights in law, and the views of a jury or Trust cannot speak for everyone or override the rights protected in law.

Tabitha Goldstaub spoke next and outlined some of what the AI Council Roadmap had published. She suggested looking at removing barriers to best support the AI start-up community.

As I wrote when the roadmap report was published, there are basics missing in government's own practice that could be solved. It had an ambition to "Lead the development of data governance options and its uses. The UK should lead in developing appropriate standards to frame the future governance of data," but the Roadmap largely ignored the governance infrastructures that already exist. One can only read into that a desire to change and redesign what those standards are.

I believe that there should be no need to change the governance of data, but instead to make today's rights exercisable and to deliver enforcement that makes existing governance actionable. Any genuine "barriers" to data use in data protection law are designed as protections for people; the people the public sector, its staff and these arm's-length bodies are supposed to serve.

Blaming AI and algorithms, blaming lack of clarity in the law, blaming “barriers” is often avoidance of one thing. Human accountability. Accountability for ignorance of the law or lack of consistent application. Accountability for bad policy, bad data and bad applications of tools is a human responsibility. Systems you choose to apply to human lives affect people, sometimes forever and in the most harmful ways, so those human decisions must be accountable.

I believe that some simple changes in practice when it comes to public administrative data could bring huge steps forward there (a rough sketch of what a published register entry might contain follows the list):

  1. An audit of existing public admin data held, by national and local government, and consistent published registers of databases and algorithms / AI / ML currently in use.
  2. Identify the lawful basis for each set of data processes, the earliest dates of their records, and their content.
  3. Publish the resulting ROPA (Record of Processing Activities) and storage limitations.
  4. Assign accountable owners to databases, tools and the registers.
  5. Sort out how you will communicate with people whose data you unlawfully process, so as to meet the law, or else stop processing it.
  6. And above all, publish a timeline for data quality processes, and show that you understand how the degradation of data accuracy and quality affects the rights and responsibilities in law, which change over time as a result.
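To make the register idea concrete, here is a minimal sketch, in Python, of what a single published register entry might contain. The field names and values are my own assumptions for illustration only, not any department's actual schema.

```python
from dataclasses import dataclass
from datetime import date
from typing import List, Optional

# Illustrative only: a hypothetical entry for a published register of
# databases and automated tools, loosely modelled on ROPA-style fields.
# Field names are assumptions, not any department's real schema.
@dataclass
class RegisterEntry:
    name: str                      # e.g. "National Pupil Database"
    accountable_owner: str         # step 4: a named, accountable owner
    lawful_basis: str              # step 2: identified lawful basis
    purposes: List[str]            # specified purposes for processing
    earliest_record: date          # step 2: earliest record date held
    retention_limit_years: int     # step 3: published storage limitation
    uses_automated_tools: bool     # step 1: any algorithm / AI / ML in use
    last_quality_review: Optional[date] = None  # step 6: data quality timeline

entry = RegisterEntry(
    name="Example administrative dataset",
    accountable_owner="Named post-holder",
    lawful_basis="Public task (Article 6(1)(e) UK GDPR)",
    purposes=["service delivery", "statistical research"],
    earliest_record=date(2002, 1, 1),
    retention_limit_years=7,
    uses_automated_tools=False,
)
print(entry)
```

Nothing in such a structure is technically difficult; the work is in the audit and the accountability behind each field.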

Goldstaub went on to say, on ethics and inclusion, that if it's not diverse, it's not ethical. Perhaps the next panel and similar events could take a lesson from that, as such APPG panel events are not as diverse as they could or should be. Some of the biggest harms from the use of AI are, after all, felt by the communities least represented, and panels like this tend to ignore lived reality.

The Rt Rev Croft then wrapped up the introductory talks on that more human note, and by exploding some myths. He importantly talked about the consequences he expects from the increasing use of AI, its deployment in 'the future of work' for example, and its effects on our humanity. He proposed five topics for inclusion in the strategy and suggested it is essential to engage a wide cross-section of society. And most importantly, to ask: what is this doing to us as people?

There were then some of the usual audience questions asked on AI, transparency, garbage-in garbage-out, challenges of high risk assessment, and agreements or opposition to the EU AI regulation.

What frustrates me most in these discussions is that the technology is an assumed given, and the bias that gives to the discussion is itself ignored. A holistic national AI strategy should be asking if and why AI at all. What are the consequences of this focus on AI, and what policy-making oxygen and capacity does it take away from other areas of what government could or should be doing? The questioner who asks how adaptive learning could use AI for better learning in education fails to ask what good learning looks like, and if and how adaptive tools, analogue or digital, fit into that at all.

I would have liked to ask panellists whether they agree that proposals for public engagement and digital literacy distract from the lack of human accountability for bad policy decisions that use machine-made support. Taking examples from 2020 alone, three applications of algorithms and data in the public sector were challenged by civil society because of their harms: the Home Office dropping its racist visa algorithm, the court finding DWP's Universal Credit decisions 'irrational and unlawful', and the "mutant algorithm" of the summer 2020 exams. Digital literacy does nothing to help people in those situations. What AI has done is to increase the speed and scale of the harms caused by harmful policy, such as the 'Hostile Environment', which is harmful by design.

Any Roadmap, AI Council recommendations, or national strategy that is serious about what good looks like must answer how such harms would be prevented in the public sector *before* tools are applied. It's not about the tech, AI or not, but about misuse of power. If the strategy, or a Roadmap, or an ethics code fails to state how it would prevent such harms, then it isn't serious about ethics in AI; it is ethics washing its aims under the guise of saying the right thing.

One unspoken problem right now is the focus on the strategy solely for the delivery of a pre-determined tool (AI). Who cares what the tool is? Public sector data comes from the relationship between people and the provision of public services by government at various levels, and its AI strategy seems to have lost sight of that.

What good would look like in five years is the end of siloed AI discussion, as if AI were a desirable silver bullet bringing mythical numbers of 'economic growth'; instead AI would be treated like any other tech, and its role in end-to-end processes or service delivery discussed proportionately. Panellists would stop suggesting that the GDPR is hard to understand or that people cannot apply it. Almost all of the same principles in UK data laws have applied for over twenty years. And regardless of the GDPR, Convention 108 applies to the UK post-Brexit unchanged, including the associated Council of Europe Guidelines on AI, data protection, privacy and profiling.

Data laws. AI regulation. Profiling. Codes of Practice on children, online safety, or biometrics and emotion or gait recognition. There *are* gaps in data protection law when it comes to biometric data not used for unique identification purposes. But much of this is already rolled into other law and regulation for the purposes of upholding human rights and the rule of law. The challenge in the UK is often not having the law, but its lack of enforcement. There are concerns in civil society that the DCMS is seeking to weaken core ICO duties even further. Recent government, council and think tank roadmaps talk of the UK leading on new data governance, but in reality they simply want to see established laws rewritten to be less favourable to rights. To be less favourable towards people.

Data laws are *human* rights-based laws. We will never get a workable UK national data strategy or national AI strategy if government continues to ignore the very fabric of what they are to be built on. Policy failures will be repeated over and over until a strategy supports people to exercise their rights and have them respected.

Imagine if the next APPG on AI asked what human rights-respecting practice and policy would look like, and what infrastructure the government would need to fund or build to make it happen. In public-private sector areas (like edTech). Or in the justice system, health, welfare, children's social care. What could that Roadmap look like, and how could we make it happen, over what timeframe? Strategies that could win public trust *and* get the sectoral wins the government and industry are looking for. Then we might actually move forwards on a functional strategy that would work for delivering public services, and on where both AI and data fit into that.

The Rise of Safety Tech

At the CRISP-hosted Rise of Safety Tech event this week, the moderator asked an important question: what is Safety Tech? Very honestly, Graham Francis of the DCMS answered, among other things, "It's an answer we are still finding a question to."

From ISP level to individual users, from the limitations of mobile phone battery power to app size compatibility, a variety of aspects of a range of technology were discussed. There is a wide spread of products across this conflated set, packaged under the same umbrella term. Each can be very different from the other, even within one set of similar applications, such as school Safety Tech.

It worries me greatly that, in parallel with the run-up to the Online Harms legislation, the promotion of these products appears to have assumed the character of a done deal. Some of these tools are toxic to children's rights because of the policy that underpins them. Legislation should not be gearing up to make the unlawful lawful, but to fix what is broken.

The current drive is towards the normalisation of the adoption of such products in the UK, and to make them routine. It contrasts with the direction of travel of critical discussion outside the UK.

Some Safety Tech companies have human staff reading flagged content and making decisions on it, while others claim to use only AI. Both might be subject to any future EU AI Regulation for example.

In the U.S. they also come under more critical scrutiny. "None of these things are actually built to increase student safety, they're theater," Lindsay Oliver, project manager for the Electronic Frontier Foundation, was quoted as saying in an article just this week.

Here in the U.K. their regulatory oversight is not only startlingly absent, but the government is becoming deeply invested in cultivating the sector’s growth.

The big questions include who watches the watchers, with what scrutiny and safeguards? Is it safe, lawful, ethical, and does it work?

Safety Tech isn’t only an answer we are still finding a question to. It is a world view, with a particular value set. Perhaps the only lens through which its advocates believe the world wide web should be seen, not only by children, but by anyone. And one that the DCMS is determined to promote with “the UK as a world-leader” in a worldwide export market.

As an example, one of the companies the DCMS champions in its May 2020 report, "Safer technology, safer users", already claims to export globally. eSafe Global now provides a service to about 1 million students and schools throughout the UK, UAE, Singapore and Malaysia, and has been used in schools in Australia since 2011.

But does the Department understand what they are promoting? The DCMS Minister responsible, Oliver Dowden said in Parliament on December 15th 2020: “Clearly, if it was up to individuals within those companies to identify content on private channels, that would not be acceptable—that would be a clear breach of privacy.”

He’s right. It is. And yet he and his Department are promoting it.

So how is this going to play out, if at all, in the Online Harms legislation expected soon, which he owns together with the Home Office? Sadly the needed level of understanding, from the Minister, in the third sector and in much of the policy debate in the media, is not only missing but actively suppressed by the moral panic whipped up in emotive personal stories around a Duty of Care and social media platforms. Discussion is siloed around identifying CSAM, or grooming, or bullying, or self harm, and actively ignores the joined-up, wider context within which Safety Tech operates.

That context is the world of the Home Office. Of anti-terrorism efforts. Of mass surveillance and efforts to undermine encryption that are nearly as old as the Internet. The efforts to combat CSAM or child grooming online operate in the same space. WePROTECT, for example, sits squarely amid it all, established in 2014 by the UK Government and the then UK Prime Minister, David Cameron. Scrutiny of UK breaches of human rights law is well documented in ECHR rulings. Other state members of the alliance, including the UAE, stand accused of buying spyware to breach activists' encrypted communications. It is disingenuous for any school Safety Tech actors to talk only of child protection without mention of this context. School Safety Tech products, while all different, operate by tagging digital activity with categories of risk, and these tags can include terrorism and extremism.

Once upon a time, school filtering and blocking services meant only denying access to online content that had no place in the classroom. Now it can mean monitoring all the digital activity of individuals, online and offline, on school or personal devices, working around encryption, whenever they are connected to the school network. And it's not all about in-school activity. No matter where a child's account is connected to the school network, or who is actually using it, their activity might be monitored 24/7, 365 days a year. A user's activity that matches the thousands of words or phrases on watchlists and in keyword libraries gets logged, and profiles individuals with 'vulnerable' behaviour tags, sometimes creating alerts. The scope has crept from flagging up content, to flagging up children. Some schools create permanent records, including false positives, because they retain everything in a risk-averse environment, even things typed that a child subsequently deleted; and these records may be distributed to and accessible by an indefinite number of school IT staff, and stored in further third-party systems like CPOMS or Capita SIMS.
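As a purely illustrative sketch, not any vendor's actual implementation, the core mechanism described above, matching typed text against a keyword library, retaining a copy, and tagging a profile, can be reduced to something as crude as this. The categories, phrases and field names below are invented:

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone
from typing import Dict, List

# Illustrative only: a crude reduction of keyword-library monitoring.
# Real products use libraries of thousands of phrases in multiple
# languages and proprietary, sometimes AI-based, scoring.
KEYWORD_LIBRARY: Dict[str, List[str]] = {
    "self_harm": ["hurt myself", "want to disappear"],
    "extremism": ["join the cause"],
}

@dataclass
class Flag:
    user_id: str
    category: str               # the 'vulnerable' behaviour tag applied
    matched_phrase: str
    captured_text: str          # the full typed text is retained, even if later deleted
    timestamp: datetime = field(default_factory=lambda: datetime.now(timezone.utc))

def scan(user_id: str, typed_text: str) -> List[Flag]:
    """Return a flag for every library phrase found in the typed text."""
    flags = []
    lowered = typed_text.lower()
    for category, phrases in KEYWORD_LIBRARY.items():
        for phrase in phrases:
            if phrase in lowered:
                flags.append(Flag(user_id, category, phrase, typed_text))
    return flags

# A single phrase match is enough to log the text and tag the profile,
# regardless of context and of who is actually typing on the account.
for flag in scan("pupil-123", "I want to disappear for a while"):
    print(flag.category, "->", flag.matched_phrase)
```

The point of the sketch is how little context such matching uses: one phrase, stripped of circumstance, is enough to create a retained record and a behaviour tag.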

A wide range of the rights of the child are breached by mass monitoring in the UK, such as outlined in the UN Committee on the Rights of the Child General Comment No.25 which states that, “Any digital surveillance of children, together with any associated automated processing of personal data, should respect the child’s right to privacy and should not be conducted routinely, indiscriminately or without the child’s knowledge or, in the case of very young children, that of their parent or caregiver; nor should it take place without the right to object to such surveillance, in commercial settings and educational and care settings, and consideration should always be given to the least privacy-intrusive means available to fulfil the desired purpose.” (para 75)

Even the NSPCC, despite its recent public policy opposing secure messaging using end-to-end encryption, recognises on its own Childline webpage the risk for children from content monitoring of children's digital spaces, and that such monitoring may make them less safe.

In my work at defenddigitalme in 2018, one school Safety Tech company accepted our objections that this monitoring went too far in its breach of children's confidentiality and safe spaces, and agreed to stop monitoring counselling services. But there are roughly fifteen active companies here in the UK, and the data protection regulator, the ICO, despite being publicly so keen to be seen to protect children's rights, has declined to act to protect children from the breach of their privacy and data protection rights across this field.

There are questions that should be straightforward to ask and answer, and while some CEOs are more willing than others to engage constructively with criticism and ideas for change, there is reluctance to address the key question: what is the lawful basis for monitoring children in school, at home, inside or outside school hours?

Another important question, often without an answer, is how these companies train their algorithms, whether in age verification or child safety tech. How accurate are the language inferences of an AI designed to catch out children being deceitful, and where are its assumptions, machine- or man-made, wrong or discriminatory? It is overdue that our regulator, the ICO, should do what the FTC did with Paravision, and require companies that develop tools through unlawful data processing to delete the output from it, the trained algorithm, plus products created from it.

Many of the harms from profiling children were recognised by the ICO in the Met Police gangs matrix: discrimination, conflation of victim and perpetrator, notions of ‘pre-crime’ without independent oversight,  data distributed out of context, and excessive retention.

Harm is, after all, why profiling of children should be prohibited. And where, in exceptional circumstances, States may lift this restriction, it is conditional on appropriate safeguards being provided for by law.

While I believe any of the Safety Tech generated category profiles could be harmful to a child, through mis-interventions, being treated differently by staff as a result, or harm to a trusted relationship, perhaps the most potentially devastating to a child's prospects are mistakes made under the Prevent duty.

The UK Home Office has pushed its Prevent agenda through schools since 2015, and it has been built into school Safety Tech by design. I know of schools that have terrorism-related flags attached to children's records, but where there has been no Prevent referral. There is no transparency of these numbers at all. There is no oversight to ensure children do not stay wrongly tagged with those labels. Families may never know.

Perhaps the DCMS needs to ask itself: are the values of the UK Home Office really what "the UK as a world-leader" should export to children globally, without independent legal analysis, without safeguards, and without taking accountability for their effects?

The Home Office's values are demonstrated in its approach to the life and death of migrants at sea, to children with no recourse to public funds, and to discriminatory stop and search; a Department that doesn't care enough even to understand or publish the impact of its interventions on children and their families.

The Home Office talk is of safeguarding children, but it is opposed to them having safe spaces online. School Safety Tech tools actively work around children's digital security, can act as a man-in-the-middle, and can create new risks. I have seen no evidence that convinces me, on balance, that school Safety Tech does in fact make children safer, but plenty of evidence that the Home Office wants to create the conditions in which such tools thrive, by weakening the security of digital activity through its assault on end-to-end encryption. My question is whether Online Harms is to be the excuse to give it a lawful basis.

Today there are zero statutory transparency obligations, testing or safety standards required of school Safety Tech before it can be procured in UK state education at scale.

So what would a safe and lawful framework for operation look like? It would be open to scrutiny and require regulatory action, and law.

There are no published numbers of how many records are created about how many school children each year. There are no safeguards in place to protect children's rights or to protect them from harm, in terms of false positives, erroneous retention, transfer of records to the U.S. or to third-party companies, or how many covert photos school staff have been enabled to take of children via webcam. There is no equivalent of the 'foreseeable misuse risk assessment' that a medical device standard such as ISO 14971 would require, despite systems being used for mental health monitoring with suicide risk flags. Children need to know what is on their record and to be able to seek redress when it is wrong. The law would set boundaries and safeguards, and both existing and future law would need to be enforced. And we need independent research on the effects of school surveillance, and its chilling effects on the mental health and behaviour of developing young people.

Companies may argue they are transparent, and seek to prove how accurate their tools are. Perhaps they may become highly accurate.

But no one in the school Safety Tech sector is yet willing to say: these are the thousands of words that, if your child types them, may trigger a flag; or indeed, here is an annual report of all the triggered flags and your own or your child's saved profile. A school's interactions with children's social care already offer a framework for dealing with information that could put a child at risk from family members, so reporting should be do-able.
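To illustrate how technically trivial such reporting would be, here is a minimal sketch, using invented flag records and field names, of producing a per-child annual summary from data a product already retains:

```python
from collections import Counter
from datetime import date
from typing import Dict, List

# Illustrative only: given flag records of the kind such products already
# retain (field names invented here), a per-child annual summary is trivial
# to produce. Transparency is a policy choice, not a technical obstacle.
flags: List[Dict] = [
    {"pupil": "pupil-123", "category": "self_harm", "date": date(2020, 11, 3)},
    {"pupil": "pupil-123", "category": "extremism", "date": date(2021, 2, 17)},
    {"pupil": "pupil-456", "category": "self_harm", "date": date(2021, 3, 9)},
]

def annual_report(pupil: str, year: int) -> Dict:
    """Summarise the flags retained against one pupil in one calendar year."""
    relevant = [f for f in flags if f["pupil"] == pupil and f["date"].year == year]
    return {
        "pupil": pupil,
        "year": year,
        "total_flags": len(relevant),
        "by_category": dict(Counter(f["category"] for f in relevant)),
    }

print(annual_report("pupil-123", 2021))
```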

At the end of the event this week, the CRISP event moderator said of their own work, outside schools, that, “we are infiltrating bad actor networks across the globe and we are looking at everything they are saying. […] We have a viewpoint that there are certain lines where privacy doesn’t exist anymore.”

Their company website says their work involves "uncovering and predicting the actions of bad actor, activist, agenda-driven and interest groups". That's a pretty broad conflation right there. Their case studies include countering social media activism against a luxury apparel brand. And their legal basis of 'legitimate interests' for such data processing seems flimsy at best, for such a wide-ranging surveillance activity where 'privacy doesn't exist anymore'.

I must often remind myself that the people behind Safety Tech may epitomise the very best of what some believe is making the world safer online, as they see it. But it is *as they see it*. And if policy makers or CEOs have convinced themselves that breaking the law is OK because 'we are doing it for good, for social impact, or to safeguard children', then it should be a red flag that these self-appointed 'good guys' appear to think themselves above the law.

My takeaway, time and time again, is that companies alongside governments, policy makers, and a range of lobbying interests globally want to redraw the lines around human rights so that they can overstep them. There are "certain lines" that don't suit their own business models or agenda. The DCMS may talk about seeing its first safety tech unicorn, but not about the private equity funding, or where these companies pay their taxes. Children may be the only thing they talk about protecting, but they never talk of protecting children's rights.

In the school Safety Tech sector there is activity that I believe is unsafe, or unethical, or unlawful. There is no appetite or motivation so far to fix it. If in the upcoming Online Harms legislation the government seeks to make lawful what is unlawful today, I wonder who will be held accountable for the unsafe and the unethical that come with the package deal, and whether the Minister will run that reputational risk.


Ethics washing in AI. Any colour as long as it’s dark blue?

The opening discussion at the launch of the Institute for Ethics in AI in the Schwarzman Centre for Humanities in Oxford both asked many questions and left many open.

The panel event is available to watch on YouTube.

The Director recognised in his opening remarks where he expected their work to differ from the kind of talk of ethics in AI that can become 'matters of facile mottos hard to distinguish from corporate PR', like "Don't be evil." I would like to have heard him go on to spell out the reasons why, because I fear this whole enterprise is founded on just that.

My first question is whether the Institute will ever challenge its own need for existence. It is funded, therefore it is. An acceptance of the technological value and inevitability of AI is after all, built into the name of the Institute.

As Powles and Nissenbaum wrote in 2018, "the endgame is always to "fix" A.I. systems, never to use a different system or no system at all."

My second question is on the three drivers they went on to identify in the same article: "Artificial intelligence… is backed by real-world forces of money, power, and data."

So let’s follow the money.

The funder of the Schwarzman Centre for Humanities, the home of the new Institute, is also funding AI ethics work across the Atlantic, at Harvard, Yale and other renowned institutions that you might expect to lead in the publication of influential research. The intention at the MIT Schwarzman College of Computing is that his investment "will reorient MIT to address the opportunities and challenges presented by the rise of artificial intelligence including critical ethical and policy considerations to ensure that the technologies are employed for the common good." Quite where does that 'reorientation' seek to end up?

The panel discussed power.

The idea of 'citizens representing citizens rather than an elite class representing citizens' should surely itself be applied to challenge who funds the work that shapes public debate. How much influence is democratic for one person to wield?

“In 2007, Mr. Schwarzman was included in TIME’s “100 Most Influential People.” In 2016, he topped Forbes Magazine’s list of the most influential people in finance and in 2018 was ranked in the Top 50 on Forbes’ list of the “World’s Most Powerful People.” [Blackstone]

The panel also talked quite a bit about data.

So I wonder what work the Institute will do in this area and the values that might steer it.

In 2020, Schwarzman's private equity company Blackstone acquired a majority stake in Ancestry, a provider of 'digital family history services with 3.6 million subscribers in over 30 countries'. DNA. The Chief Financial Officer of Alphabet Inc. and Google Inc. sits on Blackstone's board. Big data. The biggest. Bloomberg reported in December 2020 that 'Blackstone's Next Product May Be Data From Companies It Buys'. "Blackstone, which holds stakes in about 97 companies through its private equity funds, ramped up its data push in 2015."

It was Nigel Shadbolt who picked up the issues of data and of representation as they relate to putting human values at the centre of design. He suggested there is growing disquiet that, rather than everyday humans' self-governance or the agency of individuals, this can mean the values of 'organised group interests' assert control. He picked up on the values we most prize as things that matter in values-based computing and, later on, on transparency of data flows as a form of power that is important to understand. Perhaps the striving for open data as revealing power should also apply to funding, in a more transparent, publicly accessible model?

AI in a democratic culture.

Those whose lives are most influenced by AI are often those most excluded from discussing its harms, and rarely involved in its shaping or application. Prof Hélène Landemore (Yale University) asked perhaps the most important question in the discussion, given its wide-ranging dance around the central theme of AI and its role or effects in a democratic culture, which included Age Appropriate Design, technical security requirements, surveillance capitalism and fairness: do we in fact have democracy or agency today at all?

It is, after all, not technology itself that has any intrinsic ethics, but those who wield its power: those who design it and shape the future through it, the human accountability-owners who need to uphold ethical standards in how technology controls others' lives.

The present is already one in which human rights are infringed by machine-made and data-led decisions about us without us, without fairness, without recourse, and without redress. It is a world that includes a few individuals in control of a lot. A world in which Yassen Aslam this week said, “the conditions of work, are being hidden behind the technology.”

The ethics of influence.

I want to know what's in it for this funder to pivot from his work life, past and present, to funding ethics in AI, and why now. He's not renowned for his ethical approach in the world. Rather, from his past at Lehman Brothers to the funding of Donald Trump, he is better known for his reported "inappropriate analogy" on Obama's tax policies, or for when he reportedly compared 'Blackstone's unsuccessful attempt to buy a mortgage company in the midst of the subprime homeloans crisis to the devastation wreaked by an atomic bomb dropped on Hiroshima in 1945.'

In the words of the 2017 International Business Times article, How Billionaire Trump Adviser Evades Ethics Law While Shaping Policies That Make Money For His Wall Street Firm, "Schwarzman has long been a fixture in Republican politics." "Despite Schwarzman's formal policy role in the Trump White House, he is not technically on the White House payroll." Craig Holman of Public Citizen was reported as saying, "We've never seen this type of abuse of the ethics laws". While politics may have moved on, we are arguably now in a time Schwarzman described as a golden age that arrives "when you have a mess."

The values behind the money, power, and data matter in particular because it is Oxford. Emma Briant has raised her concerns in Wired about the report from the separate Oxford Internet Institute, Industrialized Disinformation: 2020 Global Inventory of Organized Social Media Manipulation, because of how influential the institute is.

Will the work alone at the new ethics Institute be enough to prove that its purpose is not for the funder or his friends to use their influence to have their business interests ethics-washed in Oxford blue? Or might what the Institute chooses not to research say just as much? It is going to have to prove its independence and its own ethical position in everything it does, and does not do, indefinitely. The panel covered a wide range of already well-discussed, popular but interesting topics in the field, so we can only wait and see.

I still think, as I did in 2019, that corporate capture is unhealthy for UK public policy. Done at scale, with added global influence, it is unhealthy not only for the future of public policy but also for academia. In this case it has the potential in practice to be at best irrelevant corporate PR, and at worst harmful to the direction of travel in shaping global attitudes towards a whole field of technology.

Thinking to some purpose