
Waste products: bodily data and the datafied child

Recent conversations, and the passage of the Data Protection and Digital Information Bill through parliament, have made me think once again about what the future vision for UK children’s data could be.

Some argue that processing and governance should be akin to a health model: first do no harm, professional standards, training, ISO lifecycle oversight, audits, and governance bodies to approve exceptional releases and re-use.

Education data is health and body data

Children’s personal data in the educational context is remarkably often health data, whether directly (social care, injury, accident, self-harm, mental health) or indirectly (mood and emotion, or eating patterns).

Children’s data in education is increasingly bodily data. An AI education company CEO was even reported to have considered “bone-mapping software to track pupils’ emotions”, linking a child’s bodily data and data of the mind. For a report written by Pippa King and myself in 2021, The State of Biometrics 2022: A Review of Policy and Practice in UK Education, we mapped the emerging prevalence of biometrics in educational settings. Published on the ten-year anniversary of the Protection of Freedoms Act 2012, we challenged the presumption that data protection law is well complied with, or that it is effective enough on its own to protect children’s data or digital rights.

We mustn’t forget, when talking about data in education, that children do not go to school in order to produce data or to have their lives recorded, monitored or profiled through analytics. That is not the purpose of their activity. They go to school to exercise their right in law to receive an education; data production is a by-product of the activity they are doing.

Education data as a by-product of the process

Thinking of these together, children’s lives turned into by-products used by others, reminded me of the Alder Hey scandal, reported over twenty years ago but going back decades. In particular, the inquiry considered the huge store of body parts and residual human tissue of dead children accumulated between 1988 and 1995.

“It studied the obligation to establish ‘lack of objection’ in the event of a request to retain organs and tissue taken at a Coroner’s post-mortem for medical education and research.” (2001)

Thinking about the parallels between children’s personal data produced and extracted in education as a by-product, and organ and tissue waste as a by-product of routine medical procedures in the living, highlights several lessons that we could be drawing today about the digital processing of children’s lives in data, and about child and parental rights.

Digital bodies of the dead less protected than their physical parts

It also exposes a gap in today’s arrangements: the bodily tissue and the bodily data of deceased children could be being treated differently, since the data protection regime only applies to the living. We should be forward looking and include rights here for all that go beyond living “natural persons”, because our data does, and that may affect those we leave behind. It is insufficient for researchers and others who wish to use data without restriction to object, because this merely pushes off the problem, increasing the risk of public rejection of ‘hidden’ plans later (see the DDM second reading briefing on recital 27, p 30/32).

What could we learn from handling body parts for the digital body?

In the children’s organ and tissue scandal, management failed to inform families or to provide the suitable advice and support they needed.

Recommendations were made for changes to consent to post-mortem examinations of children, and a new approach to consent, together with an NHS hospital post-mortem consent form for children and all residual tissue, was adopted sector-wide.

The retention and the destruction of genetic material are considered in the parental consent process required for any testing that continues to use the bodily material from the child. In the Alder Hey debate this was about deceased children, but similar processes are in place now for obtaining parental consent to research re-use and retention of waste or ‘surplus’ tissue that comes from everyday operations on the living.

But the new law in the Data Protection and Digital Information Bill is going to undermine current protections for genetic material in the future, and has experts in that field extremely worried.

The DPDI Bill will consider the data of the dead for the first time

To date, data protection law only covers the data of, or related to, the living, “natural persons”. It is ironic that the rest of the Bill does the polar opposite, not about living and dead, but by redefining both personal data and research purposes: it takes what is today personal data in scope of data protection law and places it out of scope and beyond its governance, through exemptions or changes in controller responsibility over time. That means a whole lot of data about children (and the rest of us) will not be covered by data protection law at all. (Yes, those are bad things in the Bill.)

Separately, the new law as drafted will also diverge from this generally accepted scope, and will start to bring the ‘personal data’ of the dead into scope.

Perhaps as a result of limited parliamentary time, the DPDI Bill (see col. 939) is being used to include amendments on “Retention of information by providers of internet services in connection with death of child”, to amend the Online Safety Act 2023 “to enable OFCOM to give internet service providers a notice requiring them to retain information in connection with an investigation by a coroner (or, in Scotland, procurator fiscal) into the death of a child suspected to have taken their own life. The new clause also creates related offences.”

While this is primarily for the purposes of formal investigation into the role of social media in children’s suicide, with directions from Ofcom to social media companies to retain information for a period of one year beginning with the date of the notice, it highlights the difficulty of dealing with data after the death of a loved one.

This problem is perhaps no less acute where a child or adult has left no ‘digital handover’ via a legacy contact (at Apple, for example, you can assign someone to this role in the event of your own death from any cause). But what happens if your relation has not set this up and was the holder of the digital key to your entire family photo history stored on a company’s cloud? Is this a question of data protection, or digital identity management, or of physical product ownership?

Harvesting children’s digital bodies is not what people want

In our DDM research report, “The Words We Use in Data Policy: Putting People Back in the Picture”, we explored how the language used to talk about personal data has a profound effect on how people think about it.

In the current digital landscape, personal data is often seen as a commodity: a product to mine, extract, exploit and pass around to others. That is more of an ownership and IP question, and broadly the U.S. approach. Data collection is excessive, piling up in “Big Data” mountains and “data lakes”, described much like the EU food surpluses of the 1970s. Extraction and use without effective controls creates toxic waste, is polluting, and is met with resistance. This environment is not sustainable and not what young people want. Enforcement of the data protection principles of purpose limitation and data minimisation should be helping here, but young people don’t see it.

When personal data is instead considered as ‘of the body’ or bodily residue, data as part of our life, the resulting view is that data is something that needs protecting. That need is generally held to be true, and is represented in European human rights-based data laws and regulation. A key aim of protecting data is to protect the person.

In a workshop during the preparation of that report, teenagers expressed unease that data about them is being ‘harvested’ to be exploited as human capital, and find their rights are not adequately enabled or respected. They find data can be used to replace conversation with them, and that they are misrepresented by it; at the same time there is a paradox that a piece of data can be your ‘life story’ and the single source of truth advocating on your behalf.

Parental and children’s rights are grafted together and need recognised processes that respect this, as managed in health

Children’s competency and parental rights are grafted together in many areas of a child’s life and death, so why not by default in the digital environment? What additional mechanisms are needed in a process where both views carry legal weight? What specific challenges need extra attention in data protection law, given the characteristics of data that can be about more than one person, can be controlled by someone other than the child it is about, and must accommodate parental rights?

What might we learn for regulating the handling of a child’s digital footprint from how health manages residual tissue? Who is involved, what are the steps of the process, and how is it communicated onwards, accompanying data as it flows around a system?

Where data protection rules do not apply, certain activities may still constitute an interference with Article 8 of the European Convention on Human Rights, which protects the right to private and family life. (WP 29 Opinion 4/2007 on the concept of personal data p24).

Undoubtedly the datafied child is an inseparable ‘data double’ of the child. Those who use data about children without their permission, without informing them or their families, and without giving children and parents the tools to exercise their rights to have a say and to control their digital footprint in life and in death, might soon find themselves treated in the same way as the individuals held accountable in the Alder Hey scandal were, many years after the events took place.

 


Minor edits and section sub-headings added on 18/12 for clarity plus a reference to the WP29 opinion 04/2007 on personal data.

Automated suspicion is always on

In the Patrick Ness trilogy, Chaos Walking, the men can hear each other’s every thought, but not the women’s.

That exposure of their bodily data and thought means privacy is almost impossible, and there is no autonomy over their own bodily control of movement or of action. Any man who tries to block access to his thoughts is treated with automatic suspicion.

It has been on my mind since last week’s get-together at FIPR. We were tasked before the event to present what we thought would be the greatest risk to rights [each pertinent to the speaker’s focus area] in the next five years.

Wendy Grossman said at the event and in her blog, “I’d look at the technologies being deployed around European and US borders to surveil migrants. Migrants make easy targets for this type of experimentation because they can’t afford to protest and can’t vote. “Automated suspicion,” Euronews.next calls it. That habit of mind is dangerous.” Those tools often focus on control of humans’ bodies. They infringe on freedom of movement.

In education, technology companies sell automated suspicion detection tools to combat plagiarism and cheating in exams. Mood detection to spot outliers in concentration. Facial detection to bar the excluded from premises or the lunch queue, to pick out behavioural anomalies, and to control physical attendance and mental presence. Automated suspicion is the opposite of building trusted human relationships.

I hadn’t had much space to think in the weeks before the event, between legislation, strategic litigation and overdue commitments to reports, events, and to others. But on reflection, I failed to explain why the topic area I picked above all others matters. It really matters.

It is the combination of the growth in children’s bodily data processing and the SafetyTech deployed in schools. It’s not only that such tools normalise the surveillance of everything children do, send, share or search for on a screen, or that many enable the taking of covert webcam photos, or even the profiles and labels they can create on terrorism and extremism, or that they can out LGBTQ+ teens. It is that at their core lies automated suspicion and automated control. Not only of bodily movement and actions, but of thought. And all without any research into, or challenge to, what that does to child development or to children’s experience of social interactions and of authority.

First let’s take suspicion.

Suspicion of harms to self, harms to others, harms from others.

The software, systems and tools inspect the text or screen content that users enter into devices (including text the users delete, and text before it is encrypted), assuming a set of risks all of the time. When a potential risk is detected, the tools can capture and store a screenshot of the user’s screen. Depending on the company design and the option bought, human company moderators may or may not first review the screenshots (recorded on a rolling basis, including without any trigger, so as to have context ahead of the event) and text captures to verify the triggered events before sending them to the school’s designated safeguarding lead. An estimated 1% of all triggered material might be sent on to a school to review and choose whether or not to act on. But regardless of that, the children’s data (including screenshots, text, and redacted text) may be stored for more than a year by the company before being deleted. That includes content not seen as necessary: “content which poses no risk on its own but is logged in case it becomes relevant in the future”.
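For illustration only, the mechanism described above can be reduced to a short sketch in Python: a keyword library, a capture with a screenshot, a human-review step that escalates only a fraction of triggers, and a long retention window. The names, thresholds and the placeholder review rule are all assumptions, not taken from any real vendor’s product.

# A purely illustrative sketch of a keyword-triggered monitoring pipeline.
# All names, thresholds and retention periods are assumptions.
from dataclasses import dataclass, field
from datetime import datetime, timedelta

KEYWORD_LIBRARY = {"self harm", "extremism"}   # real libraries run to thousands of terms, in many languages
RETENTION = timedelta(days=400)                # i.e. "more than a year"

@dataclass
class Capture:
    user_id: str
    text: str
    screenshot: bytes
    taken_at: datetime
    matched: set[str] = field(default_factory=set)

def inspect(user_id: str, typed_text: str, screenshot: bytes) -> "Capture | None":
    """Flag a capture if any library keyword appears in what the user typed
    (including text later deleted, or text captured before encryption)."""
    matched = {w for w in KEYWORD_LIBRARY if w in typed_text.lower()}
    if not matched:
        return None
    return Capture(user_id, typed_text, screenshot, datetime.utcnow(), matched)

def human_review(capture: Capture) -> bool:
    """Stand-in for company moderator judgement on whether to escalate a capture
    to the school's designated safeguarding lead; in practice only a small
    fraction (an estimated 1%) is sent on."""
    return "extremism" in capture.matched  # placeholder rule, not a real policy

def purge(store: list[Capture], now: datetime) -> list[Capture]:
    """Everything captured, escalated or not, stays in storage until the
    retention window expires, including 'no risk on its own' content."""
    return [c for c in store if now - c.taken_at < RETENTION]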

Predictive threat, automated suspicion

In-school technology is not only capturing what is done by children but what they say they do, or might do, or think of doing. SafetyTech enables companies and school staff to police what children do and what they think, and it is quite plainly designed to intervene in actions and thoughts before things happen. It is predictively policing pupils in schools.

Safeguarding-in-schools systems were already one of my greatest emerging concerns, but I suspect that, coinciding with recent wars, keywords on topics seen as connected to the Prevent programme will be matching at their highest rate since 2016, and the risks the system brings when it is wrong will have increased with it. And while we now have various company CEOs talking about shared concerns, not least the outing of LGBTQ students as the CDT reported this year in the U.S., and a whistleblower who wanted to talk about the sensitive content staff can see from the company side, there is not yet appetite to fix this across the sector. The ICO returned our case for sectoral attention, with no enforcement. DfE guidance still ignores the at-home, out-of-hours contexts, and those systems that can enable school staff or company staff to take photos of the children without anyone knowing. We’ve had lawyers write letters and submitted advice in consultations, and yet it’s ignored to date.

Remember the fake bomb detectors that were re-badged golf ball finders? That’s the potential scenario we’ve got in “safeguarding in schools” tech: automated decision-making in black boxes that no one has publicly tested and no one can see inside, with no data on its discriminatory effects through language matching, on false negatives or false positives, or on the harms it is or is not causing. We have risk-averse institutions made vulnerable to scams. It may be utterly brilliant technology, with companies falling over themselves to point to independent testing that proves it ‘works’. I’ve just not seen any.

Some companies themselves say they need better guidance and agree there are significant gaps. Opendium, a leading provider of internet filtering and monitoring solutions, blogged about views expressed at a 2019 conference held by the Police Service’s Counter Terrorism Internet Referral Unit that schools need better advice.

Freedom of Thought

But it’s not just about what children do; it is about any mention of what they *might* do, or their opinions of themselves, of others, or of anything else. We have installed systems of thought surveillance into schools, looking for outliers or ‘extremists’ in different senses, including its now everyday sense underpinned by the Prevent programme and British Values. These systems do not only expose and create controls over children’s behaviours, what they do, but over their thoughts: their searches, what they type and share and send, or even what they don’t send and what they delete.

Susie Alegre, human rights lawyer, describes Freedom of Thought as “protected absolutely in international human rights law. This means that, if an activity interferes with our right to think for ourselves inside our heads (the so-called “forum internum”) it can never be justified for any reason. The right includes three elements:

the right to keep our thoughts private
the right to keep our thoughts free from manipulation, and
the right not to be penalised for our thoughts.”

These SafetyTech systems don’t respect any of that. They infringe on freedom of thought.

Bodily data and contextual collapse

Depending on the company, SafetyTech may be built on keyword matching technology commonly used in the gaming tech industry.

Gaming data collected from children is a whole field in its own right: bodily data from haptics, and neuro data. Personal data from immersive environments that in another sector would clearly be classified as “health” data will, in the gaming sector too, fall under the same “special category” or “sensitive data” designation due to its nature, not its context. But it is being collected at scale by companies that aren’t used to dealing with the demands of professional confidentiality and the concept of ‘first do no harm’ on which the health sector is founded. Perhaps we’re not quite at the everyday-for-everyone, Ready Player One stage yet, but for those communities who are creating a vast amount of data about themselves, the questions over its oversight, its retention, and perhaps its redistribution to authorities, in particular to policing, should be of urgent consideration. And those tools are on the way into the classroom.

At school level, the enormous growth in the transfer of bodily data is not yet about haptics but about bodily harm. A vast sector has grown up to support the digitisation of children’s safety: physical harms picked up at home and noticed by staff, or accidents and incidents recorded at school, often including marking full-body outlines with where the injury has been.

The issues here, again, are created in part by taking this data beyond the physical environment of a child’s direct care and beyond the digital firewalls of child protection agencies and professionals. There are no clear universal policies on sealed records, i.e. not releasing the data of children at risk, or of those who undergo a name change, once it has been added into school information management systems or into commercial company products like CPOMS, MyConcern, or Tootoot.

Similarly, there is no clear national policy on the onward distribution into the National Pupil Database of the records of children in need (CiN) of child protection, which, in my opinion, are inadequately shielded. The CiN census is a statutory social care data return made by every Local Authority to the Department for Education (DfE). It captures information about all children who have been referred to children’s social care, regardless of whether further action is taken or not.

As of September 2022, there were only 70 individuals flagged for shielding in the entire database, including both current and former pupils. There were 23 shielded pupil records collected by the Department via the January 2022 censuses alone (covering early years, schools and alternative provision).

No statement or guidance is given directly to settings about excluding children from returns to the DfE. As of September 2022, there were 2,538,656 distinct CiN (any ‘child in need’ referred to children’s social care services within the year) / LAC ([state] looked-after child) records going back to 2006, regardless of at-risk status, all included in the NPD and able to be matched to some home address information via other (non-CiN/LAC) sources. The data is highly sensitive and detailed, including “categories of abuse”, not only monitoring and capturing what has been done to children, but what is done by children.

Always on, always watching

The challenge for rights work in this sector is not primarily a technical problem but one of mindset. Do you think this is what schools are for? Is it aligned with the aims of education? One SafetyTech company CEO at a conference certainly marketed their tool as something that employers want children to get used to: normalising the gaze of authority and the monitoring of your attention span. In real Black Mirror stuff, you could almost hear him say, “their eyeballs belong to me for fifteen million merits”.

Monitoring of in-class attendance is moving beyond checking whether you are physically in school, towards whether you are present in focus as well.

Education is moving towards an always-on mindset for many, whether through data monitoring and collection with the stated aims of personalising learning, or through the claims of companies that have trialled mood and emotion tech on pupils in England. Facial scanning is sold as a way of seeing whether the class mood is “on point” with learning. Are they ‘engaged’? After Pippa King spotted a live trial in the wild starting in UK schools, we at Defend Digital Me had a chat with one company CEO who agreed, after discussion and the ICO blogpost on ‘emotion tech’ hype, to stop that product rollout and cut it from their portfolio altogether. Under the EU AI Act it would soon be banned too, to protect children from its harms (UK children included, were Britain still under EU laws; post-Brexit, they’re not).

The Times Education Commission reported in 2021 that Priya Lakhani told one of the Education Commission’s oral evidence sessions that Century Tech, “decided against using bone-mapping software to track pupils’ emotions through the cameras on their computers. Teachers were unhappy about pupils putting their cameras on for safeguarding reasons but there were also moral problems with supplying such technology to autocratic regimes around the world.”

But would you even consider this in an educational context at all?

Apps that blame and shame behaviours using RAG scores exposed to peers on wall-projected charts are certainly already here. How long before such ‘emotion’ and ‘mood’ tech emerges in Britain seeking a market beyond the ban in the EU, joined up with tools that can blame and shame for lapses in concentration?

Is this simply the world now, that children are supposed to normalise third-party bodily surveillance and behavioural nudge?

That same kind of thinking about ‘estimation’, ‘safety’ and ‘blame’ may well soon be seen in the eye-scanning of drivers in “advanced driver distraction warning systems”. Keeping drivers ‘on track’ may be one area where we will be expected to get used to having our eyeballs monitored, but will it be used to differentiate and discriminate between drivers for insurance purposes, or to redirect blame for accidents? What about monitoring workers at computer desks, with smoking breaks and distraction costing you in your wage packet?

Body and Mind belong ‘on track’ and must be overseen

This routine monitoring of your face is expanding at pace in policing, but policing the everyday to restrict access will potentially affect the average person far more than the use of facial detection and recognition in every public space. Your face is your passport, and the computer can say no. Age as the gatekeeper of identity for participation in public and private spaces is already very much here online, and will be expanded online in the UK by the Online Safety Act (noting other countries have realised its flaws and foolishness). Age verification and age assurance, if given any weight, will inevitably lead to the balkanisation of the Internet, to throttling of content through prioritisation of who is permitted to do or see what, and to control of content moderation.

In UK nightclubs, age verification is being normalised through facial recognition. If the Data Protection and Digital Information Bill passes as drafted, the only permitted digital ID for what are (for now) purposes limited to rental and employment checks will soon be accredited government ID. But scope creep will inevitably move from what is possible to what is required, across every aspect of our lives where identity is made an obligation for proof of eligibility.

Why all this matters is that we see the same direction of travel over and over again. Once “the data” is collected and retained, there is an overwhelming desire down the line to say: well, now we’ve got it, how can we use it? Increasingly that means joining it all up, and then passing it around to others. And the DPDI Bill takes away the safeguards around that over time (see KC opinion para 20, p.6).

It is something that data protection law, and the lack of its enforcement, are already failing to protect us from adequately: excessive data retention should be impossible under the data minimisation and purpose limitation principles, but controllers argue that linked data ‘is not new data’. What we should see is enforcement against the excessive retention of data that creates ‘new knowledge’ going beyond our reasonable expectations; instead we see the government and companies gaining ever greater power to intervene in the lives of the data subjects, the people. The draft new law does the opposite.

Who decides what ‘on track’ looks like?

School SafetyTech is therefore the current embodiment of my greatest areas of concern for children’s rights in educational settings, because it is an overlapping technology that monitors both what you do and when, and claims to be able to put the thinking behind it in context. Tools in schools are moving towards prediction and intervention, and towards combinations of bodily control, thought, mood and emotion. They are shifting from on the server to on the device, and go everywhere your phone goes. ‘Interventions’ bring a whole new horizon of potential infringements of rights and outcomes, and questions of who decides what can be used for what purposes in a classroom, in loco parentis.

Filtering and monitoring technology in school “SafetyTech” blocks content and profiles the user over time. This monitoring of bodily behaviours, actions and thoughts leads to staff acting on automated suspicion. It can lead to imposing control over bodily movement and over thoughts and actions. It is adopted at scale for millions of children and students across the UK, without oversight or published universal safety standards.

This is not a single technology, it’s a market and a mindset.

Who decides what is ‘suitable’, what is ‘on track’, and where ‘intervention’ is required, is built into the design. It is not a problem of technology causing harm, but of social and political choices and values embodied in technology that can be used to cause harm: for example, in identifying and enabling the persecution of Muslim students who are fasting during Ramadan, based on their dining records. In the UK we already have all the same tools in place.

Who does any technology serve? It is a question we have not yet resolved in education in England. The best interests of the child, the teacher, the institution, the State, or the company that built it? Interests and incentives may overlap or may be contradictory. But who decides, and who is given the knowledge of how that was decided? As tech is increasingly designed to run without any human intervention, the effects of the automated decisions can, in turn, be significant, and happen at speed and scale.

Patrick Ness coined the phrase, “The Noise is a man unfiltered, and without a filter, a man is just chaos walking”. Controlling chaos may be a desirable government aim, but at what cost, and to whose freedoms?

AI in the public sector today is the RAAC of the future

Reinforced Autoclaved Aerated Concrete (RAAC) used in the school environment is giving our Education Minister a headache. Having been the first to address the problem most publicly, she’s coming under fire as responsible for failure: for Ministerial failure to act on it across thirteen years of Conservative government since 2010, and for the failure of the fabric of educational settings itself.

Decades after buildings’ infrastructure started using RAAC, there is now a parallel digital infrastructure in educational settings. It’s worth thinking about what caused the RAAC problem and how it was identified. Could we avoid the same things in the digital environment, and in the design, procurement and use of edTech products, in particular Artificial Intelligence?

Where has it been used?

In the procurement of school infrastructure, RAAC has been integrated into some parts of the everyday school system, especially in large flat roofs built around the 1960s-80s. It is now hard to detect, and hard to remedy or remove without significant effort. There was short-term thinking, short-term spending, and no strategy for its full life cycle or end-of-life expectations. It is going to be expensive, slow, and difficult to find and fix.

Where is the risk and what was the risk assessment?

Both of the most well-known recent cases, the 2016 Edinburgh school masonry collapse and the 2018 school flat roof collapse, happened in the early morning when no pupils were present, but, according to the 2019 safety alert by SCOSS, “in either case, the consequences could have been more severe, possibly resulting in injuries or fatalities. There is therefore a risk, although its extent is uncertain.”

That risk has been known for a long time, as today’s education minister Gillian Keegan rightly explained in that interview before airing her frustration. Perhaps it was not seen as a pressing priority because it was not seen as a new problem. In fact, locally it often isn’t seen much at all, as it is either hidden behind front-end facades or built into hard-to-see places, like roofs. But already ‘in the 1990s structural deficiencies became apparent’ (discussed in papers by the Building Research Establishment (BRE) in the 1990s and again in 2002).

What has changed, according to expert reports, is that the problems are no longer showing themselves visibly in advance and giving time for mitigation, as they did in what had previously been one-off catastrophic incidents. What affected only a few could now affect the many, at scale and without warning. The most recent failures show there is no longer a reliable margin in which to act before parts of the mainstream state education infrastructure pose children a threat to life.

Where is the similarity in the digital environment?

AI is the RAAC of another Minister’s future: it is often similarly sold today as cost-saving, quick and easy to put in place. You might need fewer people to install and run it than the available alternatives.

AI is being introduced widely and at speed into children’s private and family life in England through its procurement and application in the infrastructure of public services: in education, children’s services, policing and welfare. Some companies claim to be able to identify mood or autism, or to be able to profile and influence mental health. Children rarely have any choice or agency to control its often untested effects or outcomes on them, in non-consensual settings.

If you’re working in AI “safety” right now, consider this a parable.

  • There are plenty of people pointing out risk in the current adoption of AI into UK public sector infrastructure; in schools, in health, in welfare, and in prisons and the justice system;
  • There are plenty of cases where harm is very real, but is first seen by those in power as affecting only the marginalised and minorities;
  • There are no consistent published standards or obligations on transparency or accountability to which AI sellers must hold their products before procurement and before they affect people;
  • And there are no easily accessible records of what type of AI is being procured and built into which public infrastructure, making tracing and remedy even harder in the case of a product recall.

The objectives of any company, State, service users, the public and investors may not be aligned. Do investors have a duty to ensure that artificial intelligence is developed in an ethical and responsible way? Prioritising short-term economic gain and convenience ahead of human impact or the long-term public interest has resulted in parts of schools’ infrastructure collapsing. And some AI is already going the same way.

The Data Justice Lab at Cardiff University, together with the Carnegie UK Trust, has published numerous examples of cancelled systems across public services. “Pressure on public finances means that governments are trying to do more with less. Increasingly, policymakers are turning to technology to cut costs. But what if this technology doesn’t work as it should?” they asked.

In places where similar technology has been in place longer, we already see the impact and harm to people. In 2022, the Chicago Sun-Times published an article noting that, “Illinois wisely stopped using algorithms in child welfare cases, but at least 26 states and Washington, D.C., have considered using them, and at least 11 have deployed them. A recent investigation found they are often unreliable and perpetuate racial disparities.” And the author wrote, “Government agencies that oversee child welfare should be prohibited from using algorithms.”

Where are the parallels in the problem and its fixes?

It’s also worth considering how AI can be “removed” or stopped from working in a system. Often it is not removed at all, but simply throttled, shutting off that functionality. The problematic parts of the infrastructure remain in situ, but can’t easily be taken out after being designed in. Whole products may also be difficult to remove.

The 2022 Institution of Structural Engineers report summarises the challenge now of how to fix the current RAAC problems. Think about what the equivalent would mean for fixing a failure of digital infrastructure:

  • Positive remedial supports and Emergency propping, to mitigate against known deficiencies or unknown/unproven conditions
  • Passive, fail safe supports, to mitigate catastrophic failure of the panels if a panel was to fail
  • Removal of individual panels and replacement with an alternative solution
  • Entire roof replacement to remove the ongoing liabilities
  • Periodic monitoring of the panels for their remaining service life

RAAC has not become a risk to life; it already was one from the design stage. While still recognised as a ‘good construction material for many purposes’, it has been widely used in unsafe ways in the wrong places.

RAAC planks made fifty years ago did not have the same level of quality control as we would demand today, and yet the material was procured and put in place for decades after it was known to be unsafe for some uses, with risk assessments saying so.

RAAC was given an exemption from the commonly used codes of practice of reinforced concrete design (RC).

RAAC is scattered among non-RAAC infrastructure, making finding and fixing it, or removing it, very much harder than if it had been recorded in a register making it easily traceable.

RAAC developers and sellers may no longer exist or have gone out of business without any accountability.

Current AI discourse should be asking not only for retrospective accountability, or even life-cycle accountability, but also what accountable AI looks like by design, and how you guarantee it.

  • How do we prevent the risk of harm to people from the poor quality of systems designed to support them? What will protect people from being affected by unsafe products in those settings in the first place?
  • Are the incentives in procurement right to enable adequate risk assessment to be carried out by those who choose to use it?
  • Rather than accepting risk and retroactively expecting remedial action across all manner of public services in future, ignoring a growing number of ticking time bombs, what should public policy makers be doing to avoid putting them in place at all?
  • How will we know where unsafe products were built in, if they are permitted and then later found to be a threat to life?
  • How is safety or accountability upheld for the lifecycle of the product if companies stop making it, or go out of business?
  • How does anyone working with systems applied to people assess their ongoing use and ensure it promotes human flourishing?

In the digital environment we still have margin to act, to ensure the safety of everyday parts of institutional digital infrastructure in mainstream state education and prevent harm to children, whether that comes from parts of a product’s code, from use in the wrong way, or from entire products. AI is already used in the infrastructure of schools’ curriculum planning and curriculum content, and in steering children’s self-beliefs and behaviours, and the values of the adult society these pupils will become. Some products have been oversold as AI when they weren’t; others are overhyped, overused and under-explained, their design hidden away and kept from sight or independent scrutiny, some with real risks and harms. Right now, some companies and policy makers are making familiar errors and ‘safety-washing’ AI harms, ignoring criticism and pushing it off as someone else’s future problem.

In education, they could learn lessons from RAAC.


Background references

BBC Newsnight Timeline: reports from as far back as 1961 about aerated concrete concerns. 01/09/2023

BBC Radio 4 The World At One: Was RAAC mis-sold? 04/09/2023

CROSS (2020), Failure of RAAC planks in schools: pre-1980 RAAC roof planks are now past their expected service life.

A 2019 safety alert by SCOSS, “Failure of Reinforced Autoclaved Aerated Concrete (RAAC) Planks” following the sudden collapse of a school flat roof in 2018.

The Local Government Association (LGA) and the Department for Education (DfE) then contacted all school building owners and warned of ‘risk of sudden structural failure.’

In February 2022, the Institution of Structural Engineers published a report, Reinforced Autoclaved Aerated Concrete (RAAC) Panels: Investigation and Assessment, with a follow-up in April 2023, including a proposed approach to the classification of these risk factors and how they may impact the proposed remediation and management of RAAC (p.11).

Image credit: OpenAI DALL·E 2, generated using the prompt “a model of Artificial Intelligence made from concrete slabs”.

 

Ensuring people have a say in future data governance

Based on a talk prepared for an event in parliament hosted by Connected By Data and chaired by Lord Tim Clement-Jones, focusing on the Data Protection and Digital Information Bill, on Monday 5th December, 17:00-19:00: “Ensuring people have a say in future data governance”.

Some reflections on data in schools: (a) general issues; (b) the direction of travel the Government is going in; and (c) what should happen, in the Bill or more widely.

Following Professor Sonia Livingstone, who focussed primarily on the issues connected with edTech, I focussed on the historical and political context of where we are today, on ‘having a say’ in education data and its processing in, across, and out of, the public sector.


What should be different with or without this Bill?

Since I ran out of time yesterday, I’m going to put first what I didn’t get around to: the key conclusions that point to what is possible with or without new data protection law. We should be better at enabling the realisation of existing data rights in the education sector today. The state and extended services could build tools for schools to help them act as controllers, and for children to realise their rights, like a PEGE (a personalised exam grade explainer to show exam candidates what data was used to calculate their grade and how). Data usage reports should be made available from schools at least annually, to help families understand what data about their children has gone where; and methods that enable the child or family to correct errors or exercise a Right to Object should be mandatory in schools’ information management systems. Supplier standards on accuracy and error notifications should be made explicit and statutory, and supplier service level agreements should be affected by repeated failures.
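By way of illustration only, here is a minimal sketch in Python of what such an annual data usage report could contain; the field names, example recipient and lawful basis shown are assumptions, not an existing DfE, school or supplier format.

# Illustrative sketch only: a minimal annual "data usage report" for one pupil.
from dataclasses import dataclass
from datetime import date

@dataclass
class Disclosure:
    recipient: str       # who received the data, e.g. a national database or supplier
    items: list[str]     # which data items about the child were shared
    purpose: str         # the stated purpose of the transfer
    lawful_basis: str    # the basis the school relied on
    shared_on: date

def annual_report(pupil_name: str, disclosures: list[Disclosure], year: int) -> str:
    """Render a plain-text summary a family could read and act on,
    e.g. to correct errors or object to a particular re-use."""
    lines = [f"Data usage report for {pupil_name}, {year}"]
    for d in disclosures:
        if d.shared_on.year != year:
            continue
        lines.append(
            f"- {d.shared_on.isoformat()}: {', '.join(d.items)} "
            f"sent to {d.recipient} for '{d.purpose}' (lawful basis: {d.lawful_basis})"
        )
    return "\n".join(lines)

if __name__ == "__main__":
    example = [Disclosure("National Pupil Database", ["name", "attainment"],
                          "statistics and research", "public task", date(2023, 1, 19))]
    print(annual_report("A. Pupil", example, 2023))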

Where is the change needed to create the social license for today’s practice, even before we look to the future?

“Ensuring people have a say in future data governance”. There has been a lot of asking lots of people for a say in the last decade. When asked, the majority of people generally want the same things, both those who are willing and those less willing to have personal data about them re-used that was collected for administrative purposes in the public sector: to be told what data is collected for and how it is used, to opt in to re-use, to be able to control distribution, and to have protections for redress and against misuse strengthened in legislation.

Read Doteveryone’s public attitudes work, or the Ipsos MORI polls, or the work by Wellcome (see below). Or even the care.data summaries.

The red lines in the “Dialogues on Data” report, from workshops carried out across different devolved regions of the UK for the 2013 ADRN, remain valid today (and that was about the re-use of de-identified, linked public administrative datasets by qualified researchers in safe settings, not even raw identifying data), in particular in relation to:

  • Creating large databases containing many variables/data from a large number of public sector sources
  • Allowing administrative data to be linked with business data
  • Linking of passively collected administrative data, in particular geo-location data

“All of the above were seen as having potential privacy implications or allowing the possibility of reidentification of individuals within datasets. The other ‘red-line’ for some participants was allowing researchers for private companies to access data, either to deliver a public service or in order to make profit. Trust in private companies’ motivations were low.”

Much of this reflects what children and young people say as well. The RAEng (2010) carried out engagement work with children on health data, Privacy and Prejudice: young people’s views on the development and use of Electronic Patient Records. They were very clear about wanting to keep their medical details under their own control and away from the ‘wrong hands’, which include potential employers, commercial companies and parents.

Our own small-scale engagement work with a youth group aged 14-25 was published in 2020 in our report, The Words We Use in Data Policy: Putting People Back in the Picture, and reflected what the Office for Statistics Regulation went on to publish in its own 2022 report, Visibility, Vulnerability and Voice (a framework to explore whether current statistics are helping society to understand the experiences of children and young people in all aspects of their lives). Young people worry about misrepresentation, about data being used in place of conversations about them to take decisions that affect their lives, and about the power imbalance this creates without practical routes for complaint or redress. We all agree children’s voice is left out of the debate on data about them.

Parents are left out too. Defenddigitalme commissioned a parental survey via Survation (2018): under 50% felt they had sufficient control of their child’s digital footprint, and two-thirds had not heard of the National Pupil Database or its commercial re-use.

So why is it that the public voice, loud and clear, is ignored in public policy and ignored in the drafting of the Data Protection and Digital Information Bill?

When it comes to education, debate should start with children’s and family rights in education, and with education policy, not with data produced as its by-product.

Article 26 of the Universal Declaration of Human Rights grafts a parent’s right onto the child’s right to education, the prior right to choose the kind of education given, and it defines the purposes of education.

“Education shall be directed to the full development of the human personality and to the strengthening of respect for human rights and fundamental freedoms. It shall promote understanding, tolerance and friendship among all nations, racial or religious groups, and shall further the activities of the United Nations for the maintenance of peace.” Becoming a set of data points for product development or research is not the reason children go to school and hand over their personal details in the admissions process at all.

The State of the current landscape
To realise change, we must accept the current state of play and current practice. This includes a backdrop of trying to manage data well in the perilous state of public infrastructure, shrinking legal services and legal aid for children, ever-shrinking educational services in and beyond mainstream education, staff shortages and retention issues, and the lack of ongoing training or suitable and sustainable IT infrastructure for staff and learners.

Current institutional guidance and national data policy in the field are poor, and take the perspective of the educational setting rather than the person.

Three key issues are problems from the top down and across systems:

  • Data repurposing: e.g. SATs Key Stage 2 tests, which are supposed to be measures of school performance rather than of individual attainment, are re-used as risk indicators in Local Authority datasets used to identify families for intervention, something they were not designed for.
  • Vast amounts of data distribution and linkage with other data: policing, economic drivers (LEO), and broad Local Authority data linkage without consent, for purposes that exceed the original collection purpose parents are told about, used, as in Kent or Camden, “for profiling the needs of the 38,000 families across the borough”, plus further automated decision-making.
  • Accuracy in education data is a big issue, in part because families never get to see the majority of data created about a child, much of which is opinion and not submitted by them: e.g. the Welsh government fulfilled a Subject Access Request from one parent concerned with their own child’s record, and ended up revealing that, thanks to a Capita SIMS coding error, every child in 2010 had been wrongly recorded as having been in care at some point in the past. Procurement processes should build penalties for systemic mistakes, and lessons learned like this, into service level agreements, but instead we seem to allow the same issues to repeat over and over again.

What the DfE Does today

Government needs to embrace the fact that it can only get data right if it does the right thing. That includes policy that upholds the law by design. This needs change in the government’s own purposes and practice.

National pupil data is a bad example from the top down. The ICO’s 2019-20 audit of the Department for Education has not yet been published in full, but its findings included failings such as no Record of Processing Activity (ROPA), an inability to demonstrate compliance, and no fair processing. All of which will be undermined further by the Bill.

The Department for Education has been giving away 15 million people’s personal confidential data since 2012 and has never told them. They know this. They choose to ignore it. And on top of that, they didn’t inform the people who have been in school since then that Mr Gove changed the law. So now over 21 million people’s pupil records are being given away to companies and other third parties, for use in ways we do not expect, and the data is misused too. In 2015, more secret data sharing began, with the Home Office. And another pilot followed in 2018, with the DWP.

Government wanted to change the law on education admin data in 2012, did so, and got it wrong. Education data alone is a sin bin of bad habits and a complete lack of public and professional engagement, before even starting to address data quality and accuracy, or backwards-looking policy built on bad historic data.

“The Commercial department do not have appropriate controls in place to protect personal data being processed on behalf of the DfE by data processors.” (ICO audit of the DfE, 2020)

Gambling companies ended up misusing access to learner records for over two years, as exposed in 2020 by journalists at the Sunday Times.

The government wanted nationality data to be collected by the Department for Education for the purposes of another department (the Home Office) and got it very wrong. People boycotted the collection until it was killed off and the data later destroyed.

Government changed the law on Higher Education in 2017 and got it wrong. Now third parties pass around named equality monitoring records, like religion, sexual orientation, and disability, and they are stored forever on named national pupil records. The Department for Education (DfE) now holds sexual orientation data on almost 3.2 million people, and religious belief data on 3.7 million.

After the summary findings published by the ICO from their compulsory audit of the Department for Education, the question now is what the Department and government will do to address the 139 recommendations for improvement, over 60% of which were classified as urgent or high priority. Is the government intentional about change? We don’t think so at Defend Digital Me, so we are, and we welcome any support for our legal challenge.

Before we write new national law we must recognise and consider UK inconsistency and differences across education

Existing frameworks of law, statutory guidance and recommendations need to be understood in the round: for example, devolved education, including the age at which a child has capacity to undertake a contract in Scotland (16); the geographical application of the Protection of Freedoms Act 2012; and the Prevent Duty since 2015 and its wider effects of profiling children in counter-terrorism, which reach beyond poor data protection and impacts on privacy (see the UN Special Rapporteur’s 2014 report on children’s rights and freedom of expression). A plethora of Council of Europe work applies to education in the UK as a member state: guidelines on data protection, AI, human rights, the rule of law, and the role of education in the promotion of democratic citizenship and as a protection against authoritarian regimes and extreme nationalism.

The Bill itself
The fundamental principles of the GDPR and data protection law are undermined further from an already weak starting point, since the 2018 legislation adopted exemptions in immigration and law enforcement that were not introduced by other countries.

  • The very definitions of personal and biometric data need close scrutiny.
  • Accountability is weakened (DPOs, DPIAs and prior consultation for high-risk processing are no longer necessary, nor is a ROPA).
  • Purpose limitation is weakened (legitimate interests and additional conditions for LI).
  • Redress is missing (children and routes to child justice).
  • Henry VIII powers on customer data and business data must go.
  • And of course it only covers the living. What about misuse of children’s data that causes distress and harms to human dignity but is not strictly covered by UK data protection law, such as the children whose identities were used by undercover police in the SpyCops scandal? Recital 27 of the GDPR permits a possible change here.

Where are the Lessons Learned reflected in the Bill?

This Bill should be able to look at recent ICO enforcement action or judicial reviews to learn where and what is working, and not working, in data protection law. Lessons learned should be plentiful on public communications and fair processing, on the definitions of research, on discrimination, accuracy and bad data policy decisions. But where in the Bill are the lessons learned from health data sharing: why the care.data programme ran into trouble, and similar failures repeated in the most recent GP patient data grab, or Google DeepMind and the Royal Free? In policing, from the Met Police Gangs Matrix? In Home Affairs, from the judicial review launched to challenge the lawfulness of an algorithm used by the Home Office to process visa applications? Or in education, from the summer 2020 exams fiasco?

The major data challenges resulting from government policy are not about data at all, but about bad policy decisions, which invariably involve data because of ubiquitous digital-first policy, public administration, and the nature of digital record-keeping. In education, examples include:

  • Partisan political agendas: e.g. the narrative on absence numbers makes no attempt to disaggregate the “expected” absence rate from anything on top, and presenting as fact the idea that 100,000 children have not returned to school “as a result of all of this” is badly misleading, to the point of being a lie.
  • Policy that ignores the law: the biggest driver of profiling children in the state education sector, despite the law saying that profiling children should not be routine, is the Progress 8 measure, about which Leckie and the late Harvey Goldstein (2017) concluded, in their work on the evolution of school league tables in England 1992-2016 (‘Contextual value-added’, ‘expected progress’ and ‘progress 8’), that “all these progress measures and school league tables more generally should be viewed with far more scepticism and interpreted far more cautiously than have often been to date.”

The Direction of Travel
Can any new consultation or debate on the changes promised in data protection reform ensure people have a say in future data governance, the topic for today? And what difference, if any, would it make?

Children’s voice, and the framing of children, in the National Data Strategy is woeful: they are projected either as victims or as potential criminals. That must change.

Data protection law has existed in a form much like today’s since 1984. Yet scant attention is paid to it in ways that meet public expectations, fulfil parental and children’s expectations, or respect the basic principles of the law today. In England we have enabled technologies to enter classrooms without any grasp of scale or risk, in a way that even Scotland has not, with its Local Authority oversight and controls over procurement standards. Emerging technologies, tools that claim to identify emotion and mood or use brain scanning, the adoption of e-proctoring, and mental health prediction apps that are treated very differently from how they would be in the NHS digital environment, with its ethical oversight and quality standards to meet: these are all in classrooms, interfering with real children’s lives and development now, not in some far-off imagined future.

This goes beyond data protection into procurement, standards, safety, understanding of pedagogy, behavioural influence, policy design and digital strategy. It is, furthermore, naive to think this legislation, if it happens at all, is going to be the piece of law that promotes children’s rights when the others in play from the current government do not: the revision of the Human Rights Act, the recent PCSC Bill clauses on data sharing, and the widespread use of exemptions and excuses around data for immigration enforcement.

Conclusion
If policymakers who want more data usage treat people as producers of a commodity, and continue to ignore the public’s “say in future data governance”, then we’ll keep seeing the boycotts and the opt-outs, and create mistrust in government as well as in data conveners and controllers, widening the data trust deficit. The culture must change in education and other departments.

Overall, we must reconcile the focus of the UK National Data Strategy with a rights-based governance framework, to move the conversation forward in ways that work for the economy and research, with the human flourishing of our future generations at its heart. Education data plays a critical role in social, economic, democratic and even security policy today, and should be recognised as needing urgent and critical attention.


References:

Local Authority algorithms

The Data Justice Lab has researched how public services are increasingly automated and how government institutions at different levels are using data systems and AI. Their latest report, Automating Public Services: Learning from Cancelled Systems, looks at another current development: the cancellation of automated decision-making systems (ADS) that did not fulfil their goals, led to serious harm, or met significant opposition through community mobilization, investigative reporting, or legal action. The report provides the first comprehensive overview of systems being cancelled across western democracies.

New Research Report: Learning from Cancelled Systems

Policing thoughts, proactive technology, and the Online Safety Bill

“Former counter-terrorism police chief attacks Rishi Sunak’s Prevent plans“, reads a headline in today’s Guardian. Former counter-terrorism chief Sir Peter Fahy […] said: “The widening of Prevent could damage its credibility and reputation. It makes it more about people’s thoughts and opinions.” Fahy added: “The danger is the perception it creates that teachers and health workers are involved in state surveillance.”

This article leaves out that today’s reality is already far ahead of those proposals or that perception. School children and staff are already surveilled in these ways. Not only is what people think, type, read or search for, online and offline in the digital environment, monitored, but copies may be collected and retained by companies, and interventions made.

The products don’t only permit monitoring of trends in aggregated overviews of student activity, but also of the behaviours of individual students. And these can be deeply intrusive and sensitive when you are talking about self harm, abuse, and terrorism.

(For more on the safety tech sector, often using AI in proactive monitoring, see my previous post (May 2021) The Rise of Safety Tech.)

Intrusion through inference and interventions

From 1 July 2015 all schools have been subject to the Prevent duty under section 26 of the Counter-Terrorism and Security Act 2015: in the exercise of their functions, to have “due regard to the need to prevent people from being drawn into terrorism”. While these products monitor far more than the remit of Prevent, many companies actively market online filtering, blocking and monitoring safety products as a way of meeting that duty in the digital environment. For example: “Lightspeed Filter™ helps you meet all of the Prevent Duty’s online regulations…”

Despite there being no obligation to date to fulfil this duty through technology, the way some companies sell such tools could be read as a threat of what happens if schools don’t use them. Like this example:

“Failure to comply with the requirements may result in intervention from the Prevent Oversight Board, prompt an Ofsted inspection or incur loss of funding.”

Such products may create and send real-time alerts to company or school staff when children attempt to reach sites or type “flagged words” related to radicalisation or extremism on any online platform.
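To make the mechanism concrete, here is a minimal, hypothetical sketch in Python of how keyword-based flagging and alerting of this kind could work in principle. The library terms, category labels and field names are invented for illustration; vendors do not publish their actual lists or logic, and real systems are far more elaborate.

```python
from dataclasses import dataclass
from datetime import datetime, timezone

# A hypothetical keyword library: term -> risk category label.
# Real products use proprietary lists of thousands of terms in many languages.
KEYWORD_LIBRARY = {
    "example flagged term": "radicalisation",
    "another flagged term": "extremism",
}

@dataclass
class Alert:
    user_id: str
    matched_term: str
    category: str
    captured_text: str  # often the typed text is retained verbatim
    timestamp: str

def scan_activity(user_id: str, text: str) -> list[Alert]:
    """Match typed or searched text against the keyword library and raise alerts."""
    lowered = text.lower()
    alerts = []
    for term, category in KEYWORD_LIBRARY.items():
        # A simple substring match has no sense of context, intent or irony,
        # which is one reason benign phrases can be flagged.
        if term in lowered:
            alerts.append(Alert(
                user_id=user_id,
                matched_term=term,
                category=category,
                captured_text=text,
                timestamp=datetime.now(timezone.utc).isoformat(),
            ))
    # In real products, alerts would be routed to school staff or company moderators.
    return alerts
```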

Under the auspices of safeguarding-in-schools data sharing and web monitoring in the Prevent programme, children may be labelled with terrorism or extremism tags, data which may be passed on to others or stored outside the UK without their knowledge. The drift in what is considered significant has been from terrorism into the vaguer and broader terms of extremism and radicalisation; away from some assessment of intent and capability of action, and into interception and interventions for potentially insignificant vulnerabilities and inferred assumptions of a disposition towards such ideas. This is not something that might come to police thoughts, as Fahy suggested of Sunak’s plans. It is already doing so. Policing thoughts in the developing child, and holding them accountable for it in ways that are unforeseeable, is inappropriate and requires thorough investigation into its effects on children, including on mental health.

But it’s important to understand that these libraries of thousands of words, ever changing and in multiple languages, and what the systems look for and flag, often claiming to do so using Artificial Intelligence, go far beyond Prevent. ‘Legal but harmful’ content is their bread and butter: self harm, harm to or from others.

While companies have no obligation to publish how the monitoring or flagging operates, what the words, phrases or blocked websites are, their error rates (false positives and false negatives), or the effects on children or school staff and their behaviour as a result, these companies have a great deal of influence over what gets inferred from what children do online, and over who decides what to act on.

Why does it matter?

Schools have normalised the premise that the systems they introduce should monitor activity outside the school network and outside school hours. And that strangers, or private companies’ automated systems, should be involved in inferring or deciding what children are ‘up to’ before the school staff who know the children in front of them.

In a defenddigitalme report, The State of Data 2020, we included a case study on one company that has since been bought out. And bought again. As of August 2018, eSafe was monitoring approximately one million school children plus staff across the UK. This case study, which the company used in its own public marketing, raised all sorts of questions about professional confidentiality and school boundaries, personal privacy, ethics, and companies’ role and technical capability, as well as the lack of any safety tech accountability.

“A female student had been writing an emotionally charged letter to her Mum using Microsoft Word, in which she revealed she’d been raped. Despite the device used being offline, eSafe picked this up and alerted John and his care team who were able to quickly intervene.”

Their then CEO told the House of Lords Communications Committee’s 2016 inquiry on Children and the Internet how the products monitor children well beyond school premises and school hours:

“Bearing in mind we are doing this throughout the year, the behaviours we detect are not confined to the school bell starting in the morning and ringing in the afternoon, clearly; it is 24/7 and it is every day of the year. Lots of our incidents are escalated through activity on evenings, weekends and school holidays.”

Similar products offer a feature that captures photos of users (pupils, while using the monitored device), described as “common across most solutions in the sector” by this company:

When a critical safeguarding keyword is copied, typed or searched for across the school network, schools can turn on NetSupport DNA’s webcams capture feature (this feature is turned-off by default) to capture an image of the user (not a recording) who has triggered the keyword.

How many webcam photos have been taken of children by school staff or others through those systems, for what purposes, and kept by whom? In the U.S. in 2010, Lower Merion School District, near Philadelphia, settled a lawsuit over using laptop webcams to take photos of students. Thousands of photos had been taken, even at home, out of hours, without their knowledge.

Who decides what does and does not trigger interventions across different products? In the month of December 2017 alone, eSafe claimed to have added 2,254 words to its threat libraries.

Famously, Impero’s system even included the word “biscuit”, which they say is a slang term for a gun. Their system was used by more than “half a million students and staff in the UK” in 2018. And students had better not talk about “taking a wonderful bath”. Currently there is no understanding or oversight of the accuracy of this kind of software, and black-box decision-making is often trusted without openness to human question or correction.

Aside from how this range of very different tools works, there are basic questions about whether such policies and tools help or harm children at all. The UN Special Rapporteur’s 2014 report on children’s rights and freedom of expression stated:

“The result of vague and broad definitions of harmful information, for example in determining how to set Internet filters, can prevent children from gaining access to information that can support them to make informed choices, including honest, objective and age-appropriate information about issues such as sex education and drug use. This may exacerbate rather than diminish children’s vulnerability to risk.” (2014)

U.S. safety tech creates harms

Today in the U.S. the CDT published a report on school monitoring systems there, many of which are also used over here. The report revealed that 13 percent of students knew someone who had been outed as a result of student-monitoring software. Another conclusion the CDT draws is that monitoring is used for discipline more often than for student safety.

We don’t have that same research for the UK, but we’ve seen IT staff openly admit to using the webcam feature to take photos of young boys who are “mucking about” on the school library computer.

The Online Safety Bill scales up problems like this

The Online Safety Bill seeks to expand the use of such ‘behaviour identification technology’ outside schools.

“Proactive technology include content moderation technology, user profiling technology or behaviour identification technology which utilises artificial intelligence or machine learning.” (p151 Online Safety Bill, August 3, 2022)

The “proactive technology requirement” is as yet rather open-ended, left to OFCOM to set out in Codes of Practice, but the scope creep of such AI-based tools has become ever more intrusive in education. What is ‘legal but harmful’ is decided by companies, the IWF, and any number of opaque third parties whose processes and decision-making we know little about. It’s important not to conflate filtering and blocking lists of ‘unsuitable’ websites that can be accessed in schools with the monitoring and tracking of individual behaviours.

‘Technological developments that have the capacity to interfere with our freedom of thought fall clearly within the scope of “morally unacceptable harm,”‘ according to Alegre (2017), and yet this individual interference is at the very core of school safeguarding tech and policy by design.

In 2018, the ‘lawful but harmful’ list of activities in the Online Harms White Paper was nearly identical to the terms used by school Safety Tech companies. The Bill now appears to be trying to create a new legitimate basis for these practices, more about underpinning a developing market than supporting children’s safety or rights.

Chilling speech is itself controlling content

While a lot of debate about the Bill has focused on the free speech impacts of content removal, there has been less about what is unwritten: how it will operate to prevent speech and participation in the digital environment for children. The chilling effect of surveillance on access and participation online is well documented. Younger people and women are more likely to be negatively affected (Penney, 2017). The chilling effect on thought and opinion is worsened by these types of tools, which trigger an alert even when what is typed is quickly deleted, or remains unsent or unshared. Thoughts are no longer private.

The ability to use end-to-end encryption on private messaging platforms is simply worked around by these kinds of tools, trading security for claims of children’s safety. Anything on screen may be read in the clear by some systems, even capturing passwords and bank details.

Graham Smith has written, “It may seem like overwrought hyperbole to suggest that the [Online Harms] Bill lays waste to several hundred years of fundamental procedural protections for speech. But consider that the presumption against prior restraint appeared in Blackstone’s Commentaries (1769). It endures today in human rights law. That presumption is overturned by legal duties that require proactive monitoring and removal before an independent tribunal has made any determination of illegality.”

More than this, there is no determination of illegality in legal but harmful activity. It’s opinion. The government is prone to argue that “nothing in the Bill says X…”, but you need to understand the context: such proactive behavioural monitoring tools work through threat and the resultant chilling effect, imposing unwritten control. This Bill does not create a safer digital environment; it creates threat models for users and companies, to control how we think and behave.

What do children and parents think?

Young people’s own views that don’t fit the online harms narrative have been ignored by Westminster scrutiny Committees. A 2019 survey by the Australian eSafety Commissioner found that over half (57%) of child respondents were uncomfortable with background monitoring processes, and 43% were unsure about these tools’ effectiveness in ensuring online safety.

And what of the role of parents? Article 3(2) of the UNCRC says: “States Parties undertake to ensure the child such protection and care as is necessary for his or her wellbeing, taking into account the rights and duties of his or her parents, legal guardians, or other individuals  legally responsible for him or her, and, to this end, shall take all appropriate legislative and administrative measures.” (my emphasis)

In 2018, 84% of 1,004 parents in England whom we polled through Survation agreed that children and guardians should be informed how this monitoring activity works, and wanted to know what the keywords were. (We didn’t ask whether it should happen at all.)

The wide-ranging nature [of general monitoring], rather than targeted and proportionate interference, has previously been judged to be in breach of the law and a serious interference with rights. Neither policy makers nor companies should assume parents want safety tech companies to remove autonomy, or to make inferences about our children’s lives. Parents, if asked, reject the secrecy in which it happens today and demand transparency and accountability. Teachers can feel anxious talking about it at all. There are no clear routes for error correction; in fact corrections are often not made, because some claim that in building up profiles staff should not delete anything and should ignore claims of errors, in case a pattern of behaviour is missed. There are no independent assessments available to evidence that these tools work or are worth the costs. There are no routes for redress, and no responsibility is taken for tech-made mistakes. None of which makes children safer online.

Before broadening out where such monitoring tools are used, their use and effects on school children need to be understood and openly debated. Policy makers may justify turning a blind eye to harms created by one set of technology providers while claiming that only the other tech providers are the problem, because it suits political agendas or industry aims, but children’s rights and wellbeing should not be sacrificed in doing so. Opaque, unlawful and unsafe practice must stop. A quid pro quo for getting access to millions of children’s intimate behaviour should be transparent access to how these products work, and acceptance of universal standards for safe, accountable practice. Families need to know what’s recorded, and to have routes for redress when a daughter researching ‘cliff walks’ gets flagged as a suicide risk, or an environmentally interested teenage son searching for information on ‘black rhinos’ is asked about his potential gang membership. The tools sold as solutions to online harms shouldn’t create more harms, like these reported real-life case studies.

Teachers are ‘involved in state surveillance’, as Fahy put it, through Prevent. Sunak was wrong to point away from the threats of the far right in his comments. But the far broader, unspoken surveillance of children’s personal lives, behaviours and thoughts through general monitoring in schools, and what will be imposed through the Online Safety Bill more broadly, should concern us far more than was said.

Man or machine: who shapes my child? #WorldChildrensDay 2021

A reflection for World Children’s Day 2021. In ten years’ time my three children will be in their twenties. What will they and the world around them have become? What will shape them in the years in between?


Today when people talk about AI, we hear fears of consciousness in AI. We see I, Robot. The reality of any AI that will touch their lives in the next ten years is very different. The definition may be contested, but artificial intelligence in schools already involves automated decision-making at speed and scale, without compassion or conscience, but with outcomes that affect children’s lives for a long time.

The guidance of today, in policy documents, well-intentioned toolkits and guidelines and, oh yes, yet another ‘ethics’ framework, is all fairly same-y in terms of the issues identified.

Bias in training data. Discrimination in outcomes. Inequitable access or treatment. Lack of understandability or transparency of decision-making. Lack of routes for redress. More rarely, thoughts on exclusion, disability and accessible design, and the digital divide. In seeking to fill that gap, the call can conclude with a cry to ensure ‘AI for all’.

Most of these issues fail to address the key questions in my mind, with regards to AI in education.

Who gets to shape a child’s life and the environment they grow up in? The special case of children is often used for special pleading in government tech issues. Despite this, in policy discussion and documents, government fails over and over again to address children as human beings.

Children are still developing: physically, emotionally, in their sense of fairness and justice, of humour, of politics, and of who they are.

AI is shaping children in ways that schools and parents cannot see. And the issues go beyond limited agency and autonomy. Beyond UNCRC Articles 8 and 18, the role of the parent and the lost boundaries between school and home, and Articles 23 and 29. (See the articles in detail at the end.)

Published concerns about AI and accessibility are often about the individual and inclusion, in terms of design that enables participation. But once children can participate, where is the independent measurement and evaluation of the impact on their educational progress, or on their physical and mental development? What is the effect of these tools?

From the overhyped, like Edgenuity, to the oversold, like ClassCharts (which didn’t actually have any AI in it but still won Bett Show Awards), frameworks often mention, but still have no meaningful solutions for, the products that don’t work and fail.

But what about the harms from products that work as intended? These can fail human dignity or create a chilling effect, like exam proctoring tech. The safety tech tools that infer things and cause staff to intervene even if the child was only chatting about ‘a terraced house’. Punitive systems that keep profiles of behaviour points long after a teacher would have let it go. What about those that shape the developing child’s emotions and state of mind by design and claim to operate within data protection law? Those that measure and track mental health, or make predictions for interventions by school staff?

Brain headbands that transfer neurosignals aren’t even biometric data in data protection terms, if the data is not used to, or able to, uniquely identify a child.

“Wellbeing” apps are not being regulated as medical devices and yet are designed to profile and influence mental health and mood and schools adopt them at scale.

If AI is being used to deliver a child’s education, but only in the English language, what risk does this tech-colonialism create of evangelising children in non-native English speaking families through AI, not only in access to teaching, but in reshaping culture and identity?

At the institutional level, concerns are only addressed after the fact. But how should they be assessed as part of procurement, when many AI products are marketed on the basis that the AI never stops “learning about your child”? Tech needs full life-cycle oversight, but what companies claim their products do is often only assessed to pass accreditation at a single point in time.

But the biggest gap in governance is not going to be fixed by audits or accreditation of algorithmic fairness. It is the failure to recognise the redistribution not only of agency but of authority: from individuals to companies (the teacher doesn’t decide what you do next, the computer does); from public interest institutions to companies (company X determines the curriculum content, not the school); and from the State to companies (accountability for outcomes falls through the gap when activity is outsourced to the AI company). We are automating authority, and with it shirking responsibility and liability for the machine’s flaws, and accepting that this is the only way, thanks to our automation bias. Accountability must be human, but whose?

Around the world, the rush to regulate AI, or related tech in Online Harms, Digital Services, or biometrics law, is going to embed power, not redistribute it, through regulatory capitalism.

We have regulatory capture, including on the government boards and bodies that shape the agenda; unrealistic expectations that competition will shape the market; and we’re ignoring the transnational colonisation of whole schools, or even regions and countries, shaping the delivery of education at scale.

We’re not regulating the questions: who does the AI serve, and how do we deal with conflicts of interest between the child’s rights, the family, school staff, the institution or State, and the company’s wants? Where do we draw the line between public interest and private interests, and who decides what the best interests of each child are?

We’re not managing the implications of the datafied child being mined and analysed in order to train companies’ AI. Is it ethical or desirable to use children’s behaviour as a source of business intelligence, as free labour performed in school systems for companies to profit from, without any choice (see UNCRC Art. 32)?

We’re barely aware, as parents, whether a company will decide how a child is tested, asked certain questions about their mental health, or given nudges to ‘improve’ their performance or mood. It’s not only a question of ‘is it in the best interests of a child’, but rather: who designs it, and can schools assess its compatibility with a child’s fundamental rights and freedoms to develop free from interference?

It’s not about protection of ‘the data’, although data protection should be about the protection of the person, not only about enabling data flows for business.

It’s about protection from strangers engineering a child’s development in closed systems.

It is about protecting children from an unknown and unlimited number of persons interfering with who they will become.

Today’s laws and debate are too often about regulating someone else’s opinion of how it should be done, not whether it should be done at all.

It is rare we read any challenge of the ‘inevitability’ of AI [in education] narrative.

Who do I ask my top two questions on AI in education:
(a) who gets and grants permission to shape my developing child, and
(b) what happens to the duty of care in loco parentis as schools outsource authority to an algorithm?


UNCRC

Article 8

1. States Parties undertake to respect the right of the child to preserve his or her identity, including nationality, name and family relations as recognised by law without unlawful interference.

Article 18

1. States Parties shall use their best efforts to ensure recognition of the principle that both parents have common responsibilities for the upbringing and development of the child. Parents or, as the case may be, legal guardians, have the primary responsibility for the upbringing and development of the child. The best interests of the child will be their basic concern.

Article 29

1. States Parties agree that the education of the child shall be directed to:

(a) The development of the child’s personality, talents and mental and physical abilities to their fullest potential;

(c) The development of respect for the child’s parents, his or her own cultural identity, language and values, for the national values of the country in which the child is living, the country from which he or she may originate, and for civilizations different from his or her own;

Article 30

In those States in which ethnic, religious or linguistic minorities or persons of indigenous origin exist, a child belonging to such a minority or who is indigenous shall not be denied the right, in community with other members of his or her group, to enjoy his or her own culture

 

The Rise of Safety Tech

At the CRISP-hosted Rise of Safety Tech event this week, the moderator asked an important question: what is Safety Tech? Very honestly, Graham Francis of the DCMS answered, among other things, “It’s an answer we are still finding a question to.”

From ISP-level tools to individual users, and from the limits of mobile phone battery power to app size compatibility, a variety of aspects across a range of technology were discussed. There is a wide range of technology within this conflated set of products packaged under the same umbrella term. Each can be very different from the others, even within one set of similar applications, such as school Safety Tech.

It worries me greatly that, in the run-up to the Online Harms legislation, their promotion appears to have assumed the character of a done deal. Some of these tools are toxic to children’s rights because of the policy that underpins them. Legislation should not be gearing up to make the unlawful lawful, but to fix what is broken.

The current drive is towards the normalisation of the adoption of such products in the UK, and to make them routine. It contrasts with the direction of travel of critical discussion outside the UK.

Some Safety Tech companies have human staff reading flagged content and making decisions on it, while others claim to use only AI. Both might be subject to any future EU AI Regulation for example.

In the U.S. they also come under more critical scrutiny. “None of these things are actually built to increase student safety, they’re theater,” Lindsay Oliver, project manager for the Electronic Frontier Foundation, was quoted as saying in an article just this week.

Here in the U.K. their regulatory oversight is not only startlingly absent, but the government is becoming deeply invested in cultivating the sector’s growth.

The big questions include who watches the watchers, with what scrutiny and safeguards? Is it safe, lawful, ethical, and does it work?

Safety Tech isn’t only an answer we are still finding a question to. It is a world view, with a particular value set. Perhaps the only lens through which its advocates believe the world wide web should be seen, not only by children, but by anyone. And one that the DCMS is determined to promote with “the UK as a world-leader” in a worldwide export market.

As an example, one of the companies the DCMS champions in its May 2020 report, “Safer technology, safer users”, already claims to export globally. eSafe Global now provides a service to about 1 million students and schools throughout the UK, UAE, Singapore and Malaysia, and has been used in schools in Australia since 2011.

But does the Department understand what it is promoting? The DCMS Minister responsible, Oliver Dowden, said in Parliament on December 15th 2020: “Clearly, if it was up to individuals within those companies to identify content on private channels, that would not be acceptable—that would be a clear breach of privacy.”

He’s right. It is. And yet he and his Department are promoting it.

So how is this going to play out, if at all, in the Online Harms legislation expected soon, which he owns together with the Home Office? Sadly, the needed level of understanding from the Minister, in the third sector, and in much of the policy debate in the media is not only missing, but actively suppressed by the moral panic whipped up in emotive personal stories around a Duty of Care and social media platforms. Discussion is siloed into identifying CSAM, or grooming, or bullying, or self harm, and actively ignores the joined-up, wider context within which Safety Tech operates.

That context is the world of the Home Office. Of anti-terrorism efforts. Of mass surveillance and efforts to undermine encryption that are nearly as old as the Internet. The efforts to combat CSAM or child grooming online operate in the same space. WePROTECT, for example, sits squarely amid it all, established in 2014 by the UK Government and the then UK Prime Minister, David Cameron. Scrutiny of UK breaches of human rights law is well documented in ECHR rulings. Other state members of the alliance, including the UAE, stand accused of buying spyware to breach activists’ encrypted communications. It is disingenuous for any school Safety Tech actors to talk only of child protection without mention of this context. School Safety Tech products, while all different, operate by tagging digital activity with categories of risk, and these tags can include terrorism and extremism.

Once upon a time, school filtering and blocking services meant only denying access to online content that had no place in the classroom. Now it can mean monitoring all the digital activity of individuals, online and offline, using school or personal devices, working around encryption, whenever connected to the school network. And it’s not all about in-school activity. No matter where a child’s account connects to the school network from, or who is actually using it, their activity might be monitored 24/7, 365 days a year. A user’s activity that matches any of the thousands of words or phrases on watchlists and in keyword libraries gets logged, profiles the individual with ‘vulnerable’ behaviour tags, and sometimes creates alerts. The scope has crept from flagging up content to flagging up children. Some schools create permanent records, including false positives, because they retain everything in a risk-averse environment, even things typed that a child subsequently deleted, and these records may be distributed to and accessible by an indefinite number of school IT staff, and stored in further third parties’ systems like CPOMS or Capita SIMS.
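As an illustration of that drift from flagging content to flagging children, the sketch below shows, in simplified hypothetical form, how matches could accumulate as tags on a per-child profile and be retained and passed on, even where the child deleted the text. The tag names, fields and export step are assumptions for illustration, not a description of any named product.

```python
from collections import defaultdict

class MonitoringProfileStore:
    """A simplified, hypothetical store of per-pupil 'vulnerability' tags."""

    def __init__(self):
        # One growing list of tagged events per pupil account; nothing is ever deleted
        # in this model, mirroring the risk-averse 'retain everything' approach described above.
        self.profiles = defaultdict(list)

    def log_match(self, pupil_id: str, tag: str, snippet: str, deleted_by_child: bool = False):
        """Append a risk tag to the pupil's profile, even if the text was later deleted."""
        self.profiles[pupil_id].append({
            "tag": tag,                      # e.g. a 'terrorism' or 'self-harm' style category
            "snippet": snippet,              # the captured text, retained verbatim
            "deleted_by_child": deleted_by_child,
        })

    def export(self, pupil_id: str) -> list[dict]:
        """What might be distributed to school IT staff or onward third-party systems."""
        return list(self.profiles[pupil_id])

store = MonitoringProfileStore()
store.log_match("pupil-123", "extremism", "text the child typed and then deleted", deleted_by_child=True)
print(store.export("pupil-123"))  # a false positive persists in the exported profile
```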

A wide range of the rights of the child are breached by mass monitoring in the UK, such as outlined in the UN Committee on the Rights of the Child General Comment No.25 which states that, “Any digital surveillance of children, together with any associated automated processing of personal data, should respect the child’s right to privacy and should not be conducted routinely, indiscriminately or without the child’s knowledge or, in the case of very young children, that of their parent or caregiver; nor should it take place without the right to object to such surveillance, in commercial settings and educational and care settings, and consideration should always be given to the least privacy-intrusive means available to fulfil the desired purpose.” (para 75)

Even the NSPCC, despite its recent public policy that opposes secure messaging using end-to-end encryption, recognises on its own Childline webpage the risk to children from content monitoring of children’s digital spaces, and that such monitoring may make them less safe.

In my work in 2018, one school Safety Tech company accepted our objections from defenddigitalme that this monitoring went too far in its breach of children’s confidentiality and safe spaces, and it agreed to stop monitoring counselling services. But there are roughly fifteen active companies here in the UK, and the data protection regulator, the ICO, despite being publicly so keen to be seen to protect children’s rights, has declined to act to protect children from the breach of their privacy and data protection rights across this field.

There are questions that should be straightforward to ask and answer, and while some CEOs are more willing than others to engage constructively with criticism and ideas for change, there is reluctance to address the key question: what is the lawful basis for monitoring children in school, at home, inside or outside school hours?

Another important question, often without an answer, is how these companies train their algorithms, whether in age verification or child safety tech. How accurate are the language inferences of an AI designed to catch out children who are being deceitful, and where are its assumptions, machine- or man-made, wrong or discriminatory? It is overdue that our regulator, the ICO, should do what the FTC did with Paravision, and require companies that develop tools through unlawful data processing to delete the output from it: the trained algorithm, plus products created from it.

Many of the harms from profiling children were recognised by the ICO in the Met Police gangs matrix: discrimination, conflation of victim and perpetrator, notions of ‘pre-crime’ without independent oversight,  data distributed out of context, and excessive retention.

Harm is, after all, why profiling of children should be prohibited. And where, in exceptional circumstances, States may lift this restriction, it is conditional on appropriate safeguards being provided for by law.

While I believe any of the Safety Tech-generated category profiles could be harmful to a child, through mis-interventions, being treated differently by staff as a result, or harm to a trusted relationship, perhaps the most potentially devastating to a child’s prospects are mistakes made under the Prevent duty.

The UK Home Office has pushed its Prevent agenda through schools since 2015, and it has been built into school Safety Tech by design. I know of schools where children’s records carry terrorism-related flags even though there has been no Prevent referral. But there is no transparency of these numbers at all. There is no oversight to ensure children do not stay wrongly tagged with those labels. Families may never know.

Perhaps the DCMS needs to ask itself, are the values of the UK Home Office really what the UK should export to children globally from “the UK as a world-leader” without independent legal analysis, without safeguards, and without taking accountability for their effects?

The Home Office’s values are demonstrated in its approach to the life and death of migrants at sea, to children with no recourse to public funds, and to discriminatory stop and search: a Department that doesn’t care enough even to understand or publish the impact of its interventions on children and their families.

The Home Office talks of safeguarding children, but it is opposed to them having safe spaces online. School Safety Tech tools actively work around children’s digital security, can act as a man-in-the-middle, and can create new risks. I have seen no evidence that convinces me, on balance, that school Safety Tech does in fact make children safer. But there is plenty of evidence that the Home Office wants to create the conditions in which such tools can thrive, making children less secure by weakening the security of digital activity through its assault on end-to-end encryption. My question is whether Online Harms is to be the excuse to give it a lawful basis.

Today there are zero statutory transparency obligations, testing or safety standards required of school Safety Tech before it can be procured in UK state education at scale.

So what would a safe and lawful framework for operation look like? It would be open to scrutiny and require regulatory action, and law.

There are no published numbers of how many records are created about how many school children each year. There are no safeguards in place to protect children’s rights, or to protect them from harm, in terms of false positives, error retention, transfer of records to the U.S. or to third-party companies, or the covert photos these tools have enabled school staff to take of children via webcam. There is no equivalent of the medical device ‘foreseeable misuse risk assessment’ that ISO 14971 would require, despite systems being used for mental health monitoring with suicide risk flags. Children need to know what is on their record and to be able to seek redress when it is wrong. The law would set boundaries and safeguards, and both existing and future law would need to be enforced. And we need independent research on the effects of school surveillance, and its chilling effects on the mental health and behaviour of developing young people.

Companies may argue they are transparent, and seek to prove how accurate their tools are. Perhaps they may become highly accurate.

But no one in the school Safety Tech sector is yet willing to say: these are the thousands of words that, if your child types them, may trigger a flag; or indeed, here is an annual report of all the flags triggered, and your own or your child’s saved profile. A school’s interactions with children’s social care already offer a framework for dealing with information that could put a child at risk from family members, so such reporting should be doable.

At the end of the event this week, the CRISP event moderator said of their own work, outside schools, that, “we are infiltrating bad actor networks across the globe and we are looking at everything they are saying. […] We have a viewpoint that there are certain lines where privacy doesn’t exist anymore.”

Their company website says their work involves “uncovering and predicting the actions of bad actor, activist, agenda-driven and interest groups“. That’s a pretty broad conflation right there. Their case studies include countering social media activism against a luxury apparel brand. And their legal basis of ‘legitimate interests‘ for their data processing might seem flimsy at best, for such a wide-ranging surveillance activity where ‘privacy doesn’t exist anymore’.

I must often remind myself that the people behind Safety Tech may epitomise the very best of what some believe is making the world safer online, as they see it. But it is *as they see it*. And if policy makers or CEOs have convinced themselves that breaking the law is OK because ‘we are doing it for good, for social impact, or to safeguard children’, then it should be a red flag that these self-appointed ‘good guys’ appear to think themselves above the law.

My takeaway, time and time again, is that companies, alongside governments, policy makers, and a range of lobbying interests globally, want to redraw the lines around human rights so that they can overstep them. There are “certain lines” that don’t suit their own business models or agenda. The DCMS may talk about seeing its first safety tech unicorn, but not about the private equity funding, or where these companies pay their taxes. Children may be the only thing they talk about protecting, but they never talk of protecting children’s rights.

In the school Safety Tech sector, there is activity that I believe is unsafe, or unethical, or unlawful. There is no appetite or motivation so far to fix it. If in the upcoming Online Harms legislation the government seeks to make lawful what is unlawful today, I wonder who will be held accountable for the unsafe and the unethical that come with the package deal, and will the Minister run that reputational risk?


Ethics washing in AI. Any colour as long as it’s dark blue?

The opening discussion from the launch of the Institute for Ethics in AI, in the Schwarzman Centre for the Humanities in Oxford, both asked many questions and left many open.

The panel event is available to watch on YouTube.

The Director recognised in his opening remarks where he expected their work to differ from the talk of ethics in AI that can become ‘matters of facile mottos hard to distinguish from corporate PR’, like “Don’t be evil.” I would like to have heard him go on to point out the reasons why, because I fear this whole enterprise is founded on just that.

My first question is whether the Institute will ever challenge its own need for existence. It is funded, therefore it is. An acceptance of the technological value and inevitability of AI is, after all, built into the name of the Institute.

As Powles and Nissenbaum, wrote in 2018, “the endgame is always to “fix” A.I. systems, never to use a different system or no system at all.”

My second question is on the three drivers they went on to identify in the same article: “Artificial intelligence… is backed by real-world forces of money, power, and data.”

So let’s follow the money.

The funder of the Schwarzman Centre for the Humanities, the home of the new Institute, is also funding AI ethics work across the Atlantic, at Harvard, Yale and other renowned institutions that you might expect to lead in the publication of influential research. The intention at the MIT Schwarzman College of Computing is that his investment “will reorient MIT to address the opportunities and challenges presented by the rise of artificial intelligence including critical ethical and policy considerations to ensure that the technologies are employed for the common good.” Quite where does that ‘reorientation’ seek to end up?

The panel discussed power.

The idea of ‘citizens representing citizens rather than an elite class representing citizens’ should surely itself be applied to challenge who funds the work that shapes public debate. How much influence is democratic for one person to wield?

“In 2007, Mr. Schwarzman was included in TIME’s “100 Most Influential People.” In 2016, he topped Forbes Magazine’s list of the most influential people in finance and in 2018 was ranked in the Top 50 on Forbes’ list of the “World’s Most Powerful People.” [Blackstone]

The panel also talked quite a bit about data.

So I wonder what work the Institute will do in this area and the values that might steer it.

In 2020 Schwarzman’s private equity company Blackstone acquired a majority stake in Ancestry, a provider of ‘digital family history services with 3.6 million subscribers in over 30 countries’. DNA. The Chief Financial Officer of Alphabet Inc. and Google Inc. sits on Blackstone’s board. Big data. The biggest. Bloomberg reported in December 2020 that ‘Blackstone’s Next Product May Be Data From Companies It Buys’. “Blackstone, which holds stakes in about 97 companies through its private equity funds, ramped up its data push in 2015.”

It was Nigel Shadbolt who picked up the issues of data and of representation as they relate to putting human values at the centre of design. He suggested that there is growing disquiet that, rather than everyday humans’ self-governance or the agency of individuals, this can mean the values of ‘organised group interests’ assert control. He picked up on the values we most prize as things that matter in value-based computing and, later on, on transparency of data flows as a form of power that is important to understand. Perhaps the striving for open data as a way of revealing power should also apply to funding, in a more transparent, publicly accessible model?

AI in a democratic culture.

Those whose lives are most influenced by AI are often those most excluded from discussing its harms, and rarely involved in its shaping or application. Prof Hélène Landemore (Yale University) asked perhaps the most important question in the discussion, given its wide-ranging dance around the central theme of AI and its role or effects in a democratic culture, which included Age Appropriate Design, technical security requirements, surveillance capitalism and fairness. Do we in fact have democracy or agency today at all?

It is, after all, not technology itself that has any intrinsic ethics, but those who wield its power: those who design it and shape the future through it, the human accountability-owners who need to uphold ethical standards in how technology controls others’ lives.

The present is already one in which human rights are infringed by machine-made and data-led decisions about us without us, without fairness, without recourse, and without redress. It is a world that includes a few individuals in control of a lot. A world in which Yaseen Aslam said this week, “the conditions of work, are being hidden behind the technology.”

The ethics of influence.

I want to know what’s in it for this funder to pivot from his work life, past and present, to funding ethics in AI, and why now? He’s not renowned for his ethical approach in the world. Rather, from his past at Lehman Brothers to the funding of Donald Trump, he is better known for his reported “inappropriate analogy” on Obama’s tax policies, or when he reportedly compared ‘Blackstone’s unsuccessful attempt to buy a mortgage company in the midst of the subprime homeloans crisis to the devastation wreaked by an atomic bomb dropped on Hiroshima in 1945.’

In the words of the 2017 International Business Times article, How Billionaire Trump Adviser Evades Ethics Law While Shaping Policies That Make Money For His Wall Street Firm, “Schwarzman has long been a fixture in Republican politics.” “Despite Schwarzman’s formal policy role in the Trump White House, he is not technically on the White House payroll.” Craig Holman of Public Citizen was reported as saying, “We’ve never seen this type of abuse of the ethics laws”. While politics may have moved on, we are arguably now in a time Schwarzman described as “a golden age” that arrives “when you have a mess.”

The values behind the money, power, and data matter in particular because it is Oxford. Emma Briant has raised her concerns in Wired about the report from the separate Oxford Internet Institute, Industrialized Disinformation: 2020 Global Inventory of Organized Social Media Manipulation, because of how influential the institute is.

Will the work alone at the new ethics Institute be enough to prove that its purpose is not for the funder or his friends to use their influence to have their business interests ethics-washed in Oxford blue? Or might what the Institute chooses not to research say just as much? It is going to have to prove its independence and its own ethical position in everything it does, and does not do, indefinitely. The panel covered a wide range of already well-discussed, popular but interesting topics in the field, so we can only wait and see.

I still think, as I did in 2019, that corporate capture is unhealthy for UK public policy. If done at scale, with added global influence, it is unhealthy not only for the future of public policy, but also for academia. In this case it has the potential in practice to be at best irrelevant corporate PR, and at worst harmful to the direction of travel in the shaping of global attitudes towards a whole field of technology.

“Michal Serzycki” Data Protection Award 2021

It is a privilege to be a joint-recipient in the fourth year of the “Michal Serzycki” Data Protection Award, and I thank the Data Protection Authority in Poland (UODO) for the recognition of work for the benefit of promoting data protection values and the right to privacy.

I appreciate the award in particular as the founder of an NGO, and the indirect acknowledgement of the value of NGOs to be able to contribute to public policy, including openness towards international perspectives, standards, the importance of working together, and our role in holding the actions of state authorities and power to account, under the rule of law.

The award is shared with Mrs Barbara Gradkowska, Director of the Special School and Educational Center in Zamość, whose work in Poland has been central to the initiative, Your Data — Your Concern, an educational Poland-wide programme for schools that is supported and recognized by the UODO. It offers support to teachers in vocational training centres, primary, middle and high schools related to personal data protection and the right to privacy in education.

And it is also shared with Mr Maciej Gawronski, Polish legal advisor and authority in data protection, information technology, cloud computing, cybersecurity, intellectual property and business law.

The UODO has long been a proactive advocate in the schools’ sector in Poland for the protection of children’s data rights, including recent enforcement after finding unlawful the processing of children’s biometric data using fingerprint readers for access to a school canteen, and ensuring the destruction of pupil data obtained unlawfully.

In the rush to remote learning in 2020 in response to school closures in COVID-19, the UODO warmly received our collective international call for action, a letter in which over thirty organisations worldwide called on policy makers, data protection authorities and technology providers to take action, and encouraged international collaboration to protect children around the world during the rapid adoption of digital educational technologies (“edTech”). The UODO issued statements and a guide on school IT security and data protection.

In September 2020, I worked with their Data Protection Office at a distance, in delivering a seminar for teachers, on remote education.

The award also acknowledges my part in the development of the Guidelines on Children’s Data Protection in an Education Setting adopted in November 2020, working in collaboration with country representatives at the Council of Europe Committee for Convention 108, as well as with observers, and the Committee’s incredible staff.

2020 was a difficult year for people around the world under COVID-19 to uphold human rights and hold the space to push back on encroachment, especially for NGOs, and in community struggles from the Black Lives Matter movement, to environmental action, to UK students on the streets of London protesting algorithmic unfairness. In Poland the direction of travel is to reduce women’s rights in particular. Poland’s ruling Law and Justice (PiS) party has been accused of politicising the constitutional tribunal and using it to push through its own agenda on abortion, and the government appears set on undermining the rule of law, creating a ‘chilling effect’ for judges. The women of Poland are again showing the world what it means, and what it can cost, to lose progress made.

In England at defenddigitalme, we are waiting to hear later this month what our national Department for Education will do to better protect millions of children’s rights in the management of national pupil records, after the audit and intervention by our data protection regulator, the ICO. Among other sensitive content, the National Pupil Database holds sexual orientation data on almost 3.2 million students’ named records, and religious belief on 3.7 million.

defenddigitalme is a call to action to protect children’s rights to privacy across the education sector in England, and beyond. Data protection has a role to play within the broader rule of law to protect and uphold the right to privacy, to prevent state interference in private and family life, and in the protection of the full range of human rights necessary in a democratic society. Fundamental human rights must be universally protected to foster human flourishing, to protect the personal dignity and freedoms of every individual, and to promote social progress and better standards of life in larger freedoms.


The award was announced at the conference, “Real personal data protection in remote reality,” organized by the Personal Data Protection Office (UODO), as part of the celebration of the 15th Data Protection Day on 28th January 2021, with an award ceremony held on its eve in Warsaw.

Is the Online Harms ‘Dream Ticket’ a British Green Dam?

The legal duty in Online Harms government proposals is still vague.

For some it may sound like the “dream ticket”. A framework of censorship to be decided by companies, enabled through the IWF and the government in Online Safety laws. And ‘free’ to all. What companies are already doing today in surveillance of all outgoing and *incoming* communications, which is unlawful, made lawful. Literally, the nanny state could decide what content will be blocked, if such software should “absolutely” be pre-installed on all devices for children at point of sale and “…people could run it the other side to measure what people are doing as far as uploading content.”

From Parliamentary discussion it was clear that the government will mandate platforms, “to use automated technology…, including, where proportionate, on private channels,” even when services are encrypted.

No problem, others might say, there’s an app for that. “It doesn’t matter what program the user is typing in, or how it’s encrypted.”

But it was less clear in the consultation outcome updated yesterday, for a consultation that closed in July 2019 and still says, “we are consulting on definitions of private communications, and what measures should apply to these services.” (4.8)

Might government really be planning to impose or incentivise surveillance on [children’s] mobile phones at the point of sale in the UK? This same ‘dream ticket’ company was the only company mentioned by the Secretary of State for DCMS yesterday. After all, it is feasible. In 2009 Chinese state media reported that the Green Dam Youth Escort service had been installed on 20 million computers in internet cafes and schools alone.

If government thinks it would have support for such proposals, it may have overlooked the outrage that people feel about companies prying into our everyday lives. Or it has already forgotten the summer 2020 student protests over the ‘mutant algorithm’.

There is, conversely, already incidental harm, and there are opaque error rates, from profiling UK children’s behaviour while monitoring their online and offline computer activity, logged against thousands of words in opaque keyword libraries. School safeguarding services are already routine in England, and are piggybacked by the Prevent programme. Don’t forget that one third of referrals to Prevent come from education, and over 70% are not followed through with action. Your child and mine might already be labelled with ‘extremism’, ‘terrorism’, ‘suicide’ or ‘cyberbullying’, or have had their photo taken by the webcam of their device an unlimited number of times, thanks to some of this ‘safeguarding’ software and these services, and the child and parents never know.

Other things that were not clear yesterday, but will matter, include whether the ‘harm’ in the Online Harms proposals will be measured by intent, or by the response to it. What is or is not harm or hate is contested across different groups online, and weaponised, at scale.

The wording of the Law Commission consultation on communications offences, closing on Friday, also matters. It asks about intention to harm a likely audience, where harm is defined as any non-trivial emotional, psychological, or physical harm, but should not require proof of actual harm. This, together with any changes on hate crime and on intimate images, in effect proposes changes to ‘what’ can be said, how, and ‘to whom’, and to what is considered ‘harmful’ or ‘hateful’ conduct. It will undoubtedly have massive implications for the digital environment once all joined up. It matters when online ‘culture wars’ can catch children in the crossfire.

I’ve been thinking about all this against the backdrop of the Bell v Tavistock [2020] EWHC 3274 judgement, with its implications from the consideration of psychological harm, children’s evolving capacity, the right to be heard and their autonomy; a case in which a parent involved reportedly has not even told their own child.

We each have a right to respect for our private life, our family life, our home and our correspondence. Children are rights holders in their own right. Yet it appears the government and current changes in lawmaking may soon interfere with that right in a number of ways, while children are used at the heart of everyone’s defence.

In order to find that an interference is “necessary in a democratic society” any interference with rights and freedoms should be necessary and proportionate for each individual, not some sort of ‘collective’ harm that permits a rolling, collective interference.

Will the proposed outcomes prevent children from exercising their views or full range of rights, and restrict online participation? There may be a chilling effect on speech. There is in schools. Sadly these effects may well be welcomed by those who believe not only that some rights are more equal than others, but that some children are, too.

We’ll have to wait for more details. As another MP in debate noted yesterday, “The Secretary of State rightly focused on children, but this is about more than children; it is about the very status of our society ….”