In Patrick Ness’s trilogy, Chaos Walking, the men can hear each other’s every thought, but the women’s thoughts cannot be heard.
That exposure of their bodily data and thought makes privacy almost impossible, and leaves them no autonomy over their own movement or action. Any man who tries to block access to his thoughts is treated with automatic suspicion.
It has been on my mind since last week’s get-together at FIPR. Before the event, we were each asked to present what we thought would be the greatest risk to rights [pertinent to the speaker’s focus area] in the next five years.
Wendy Grossman said at the event and in her blog, “I’d look at the technologies being deployed around European and US borders to surveil migrants. Migrants make easy targets for this type of experimentation because they can’t afford to protest and can’t vote. “Automated suspicion,” Euronews.next calls it. That habit of mind is dangerous.” Those tools often focus on control of humans’ bodies. They infringe on freedom of movement.
In education, technology companies sell automated suspicion-detection tools to combat plagiarism and cheating in exams. Mood detection to spot outliers in concentration. Facial detection to bar the excluded from premises or the lunch queue, to flag behavioural anomalies, and to control physical attendance and mental presence. Automated suspicion is the opposite of building trusted human relationships.
I hadn’t had much space to think in the weeks before the event, between legislation, strategic litigation and overdue commitments to reports, events, and to others. But on reflection, I failed to explain why the topic area I picked above all others matters. It really matters.
It is the combination of the growth in processing children’s bodily data and the SafetyTech deployed in schools. It’s not only that such tools normalise the surveillance of everything children do, send, share or search for on a screen, or that many enable covert webcam photos to be taken, or even the profiles and labels they can create on terrorism and extremism, or that can out LGBTQ+ teens. It is that at their core lies automated suspicion and automated control. Not only of bodily movement and action, but of thought. Without any research into, or challenge to, what that does to child development or to children’s experience of social interaction and of authority.
First let’s take suspicion.
Suspicion of harms to self, harms to others, harms from others.
The software inspects the text or screen content users enter into their devices (including text the users delete, and text before it is encrypted), scanning for a pre-set list of risks all of the time. When a potential risk is detected, the tools can capture and store a screenshot of the user’s screen. Depending on the company’s design and the options bought, human moderators at the company may or may not first review the screenshots (which are also recorded on a rolling basis without any trigger, so as to provide context ahead of an event) and text captures to verify the triggered events before sending them to the school’s designated safeguarding lead. An estimated 1% of all triggered material might be sent on to a school to review and choose whether or not to act on. But regardless of that, the children’s data (including screenshots, text, and redacted text) may be stored by the company for more than a year before being deleted, even content not seen as necessary: “content which poses no risk on its own but is logged in case it becomes relevant in the future”.
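To make that data flow concrete, here is a minimal sketch in Python of the kind of capture, escalate and retain pipeline described above. Everything in it is illustrative: the class names, the one-in-a-hundred forwarding rate and the retention period are assumptions drawn from the description, not any vendor’s actual code, which remains closed to inspection.

```python
from dataclasses import dataclass
from datetime import datetime, timedelta

RETENTION = timedelta(days=400)   # "stored for more than a year" before deletion
FORWARD_RATE = 100                # roughly 1 in 100 triggered items reaches the school

@dataclass
class Capture:
    pupil_id: str
    text: str            # typed text, captured even if later deleted or pre-encryption
    screenshot: bytes
    taken_at: datetime
    flagged: bool = False

class MonitoringPipeline:
    """Hypothetical sketch of an on-device SafetyTech agent."""

    def __init__(self, looks_risky):
        self.looks_risky = looks_risky   # the vendor's risk detector, a black box here
        self.store: list[Capture] = []   # rolling log, kept with or without a trigger

    def observe(self, pupil_id: str, text: str, screenshot: bytes) -> None:
        c = Capture(pupil_id, text, screenshot, datetime.utcnow())
        c.flagged = self.looks_risky(text)   # automated suspicion happens here
        self.store.append(c)                 # logged "in case it becomes relevant"

    def send_to_safeguarding_lead(self) -> list[Capture]:
        flagged = [c for c in self.store if c.flagged]
        return flagged[:max(1, len(flagged) // FORWARD_RATE)]  # ~1% passed on

    def purge(self, now: datetime) -> None:
        self.store = [c for c in self.store if now - c.taken_at < RETENTION]
```

Even in this toy version the design choice is visible: everything is stored first and judged later, and deletion is a retention policy, not a default.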
Predictive threat, automated suspicion
In-school technology is not only capturing what is done by children but what they say they do, or might do, or think of doing. SafetyTech enables companies and school staff to police what children do and what they think, and it is quite plainly designed to intervene in actions and thoughts before things happen. It is predictively policing pupils in schools.
Safeguarding-in-schools systems were already one of my greatest emerging concerns, but I suspect that, coinciding with recent wars, keywords on topics seen as connected to the Prevent programme will be matching at their highest rate since 2016, and the risks of getting it wrong will have increased with them. While we now have various company CEOs talking about shared concerns, not least the outing of LGBTQ students that the CDT reported this year in the U.S., and a whistleblower who wanted to talk about the sensitive content staff can see from the company side, there is not yet appetite to fix this across the sector. The ICO returned our case for sectoral attention, with no enforcement. DfE guidance still ignores the at-home, out-of-hours contexts, and those systems that can enable school staff or company staff to take photos of children without anyone knowing. We’ve had lawyers write letters and we’ve submitted advice in consultations, and yet it has been ignored to date.
Remember the fake bomb detectors that were repackaged golf ball finders? That’s the potential scenario we’ve got in “safeguarding in schools” tech. Automated decision-making in black boxes that no one has publicly tested, that no one can see inside, with no data on their discriminatory effects through language matching, on their false negative or false positive rates, or on the harms they are or are not causing. We have risk-averse institutions made vulnerable to scams. It may be utterly brilliant technology, with companies falling over themselves to offer independent testing that proves it ‘works’. I’ve just not seen any.
Some companies themselves say they need better guidance and agree there are significant gaps. Opendium, a leading provider of internet filtering and monitoring solutions, blogged about views expressed at a 2019 conference held by the Police Service’s Counter Terrorism Internet Referral Unit: that schools need better advice.
Freedom of Thought
But it’s not just about what children do; it is about any mention of what they *might* do, or their opinions of themselves, of others, or of anything else. We have installed systems of thought surveillance into schools, looking for outliers or ‘extremists’ in various senses, including its now everyday sense underpinned by the Prevent programme and British Values. These systems do not only expose and control children’s behaviour in what they do, but in their thoughts: their searches, what they type, share and send, or even what they don’t send and delete.
Susie Alegre, a human rights lawyer, describes Freedom of Thought as “protected absolutely in international human rights law. This means that, if an activity interferes with our right to think for ourselves inside our heads (the so-called “forum internum”) it can never be justified for any reason. The right includes three elements:
the right to keep our thoughts private
the right to keep our thoughts free from manipulation, and
the right not to be penalised for our thoughts.”
These SafetyTech systems don’t respect any of that. They infringe on freedom of thought.
Bodily data and contextual collapse
Depending on the company, SafetyTech may be built on keyword matching technology commonly used in the gaming tech industry.
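A hedged illustration of why that matters: keyword matching built for game chat has no sense of a classroom’s context. The word list and the examples below are invented, but they show how a match-anywhere approach flags a history essay or a revision message as readily as a genuine threat.

```python
# Illustrative only: an invented keyword list and naive substring matching,
# of the kind that context-free chat moderation relies on.
RISK_TERMS = {"bomb", "kill", "attack"}

def flag(text: str) -> set[str]:
    lowered = text.lower()
    return {term for term in RISK_TERMS if term in lowered}

samples = [
    "The atomic bomb was dropped on Hiroshima in 1945.",   # history homework
    "I'm going to kill it in tomorrow's maths exam!",      # revision chat
    "Watch the attack on goal from last night's match.",   # football talk
]

for s in samples:
    print(flag(s), "<-", s)   # every one of these innocuous lines is flagged
```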
Gaming data collected from children is a whole field in its own right: bodily data from haptics, and neuro data. Personal data from immersive environments that in another sector would clearly be classified as “health” data and which, in the gaming sector too, falls under the same “special category” or “sensitive data” protections due to its nature, not its context. But it is being collected at scale by companies that aren’t used to dealing with the demands of professional confidentiality and the concept of ‘first do no harm’ on which the health sector is founded. Perhaps we’re not quite at the everyday-for-everyone, Ready Player One stage yet, but for those communities already creating vast amounts of data about themselves, the questions over its oversight, its retention, and perhaps its redistribution to authorities, in particular the police, need urgent consideration. And those tools are on the way into the classroom.
At school level the enormous growth in the transfer of bodily data is not yet about haptics, but about bodily harm. A vast sector has grown up to support the digitisation of children’s safety: physical harm sustained at home and noticed on children by staff, or accidents and incidents recorded at school, often including full-body outlines marked with where the injury has been.
The issues here, again, are in part created by taking this data beyond the physical environment of a child’s direct care and beyond the digital firewalls of child protection agencies and professionals. There are no clear universal policies on sealed records, i.e. not releasing the data of children at risk, or of those who undergo a name change, once it has been added into school information management systems or into commercial products like CPOMS, MyConcern, or Tootoot.
Similarly, there is no clear national policy on the onward distribution into the National Pupil Database of the records of children in need (CiN) of child protection, which, in my opinion, are inadequately shielded. The CiN census is a statutory social care data return made by every Local Authority to the Department for Education (DfE). It captures information about all children who have been referred to children’s social care, regardless of whether further action is taken.
As of September 2022, there were only 70 individuals flagged for shielding in the entire database, and that includes both current and former pupils. Just 23 shielded pupil records were collected by the Department via the January 2022 censuses alone (covering early years, schools and alternative provision).
No statement or guidance is given directly to settings about excluding children from returns to the DfE. As of September 2022, there were 2,538,656 distinct CiN (any ‘child in need’ referred to children’s social care services within the year) / LAC ([state] looked-after child) records in the NPD, going back to 2006 and regardless of at-risk status, able to be matched to some home address information via other (non-CiN/LAC) sources. Set against the 70 shielded records above, that means at most 70 of those 2,538,656 records are shielded: fewer than three in every hundred thousand. The data is highly sensitive and detailed, including “categories of abuse”, capturing not only what has been done to children, but what is done by children.
Always on, always watching
The challenge for rights work in this sector is not primarily a technical problem but one of mindset. Do you think this is what schools are for? Are these tools aligned with the aims of education? One SafetyTech company CEO at a conference certainly marketed their tool as something employers want children to get used to: normalising the gaze of authority and the monitoring of your attention span. In true Black Mirror style, you could almost hear him say, “their eyeballs belong to me for fifteen million merits”.
Monitoring in-class attendance is moving not only towards checking whether you are physically in school, but whether you are mentally present and focused as well.
Education is moving towards an always-on mindset for many, whether it be data monitoring and collection with the stated aim of personalising learning, or the claims of companies that have trialled mood and emotion tech on pupils in England. Facial scanning is sold as a way of seeing whether the class mood is “on point” with learning. Are they ‘engaged’? After Pippa King spotted a live trial in the wild starting in UK schools, we at Defend Digital Me had a chat with one company CEO who agreed, after that discussion and the ICO’s blogpost on ’emotion tech’ hype, to stop the product rollout and cut it from their portfolio altogether. Under the EU AI Act it would soon be banned too, to protect children from its harms (UK children would have been included were Britain still under EU law, but post-Brexit they are not).
The Times Education Commission reported in 2021 that Priya Lakhani told one of its oral evidence sessions that Century Tech “decided against using bone-mapping software to track pupils’ emotions through the cameras on their computers. Teachers were unhappy about pupils putting their cameras on for safeguarding reasons but there were also moral problems with supplying such technology to autocratic regimes around the world.”
But would you even consider this in an educational context at all?
Apps that blame and shame behaviour using RAG (red-amber-green) scores, exposed to peers on wall-projected charts, are certainly already here. How long before such ’emotion’ and ‘mood’ tech emerges in Britain, seeking a market beyond the EU ban, joined up with tools that can blame and shame for lapses in concentration?
Is this simply the world now, that children are supposed to normalise third-party bodily surveillance and behavioural nudges?
That same kind of thinking about ‘estimation’, ‘safety’ and ‘blame’ may soon be seen in the eye-scanning of drivers by “advanced driver distraction warning systems”. Keeping drivers ‘on track’ may be one area where we will be expected to get used to having our eyeballs monitored, but will it also be used to differentiate and discriminate between drivers for insurance purposes, or to redirect blame for accidents? What about monitoring workers at computer desks, with smoking breaks and distraction costing you in your wage packet?
Body and Mind belong ‘on track’ and must be overseen
This routine monitoring of your face is expanding at pace in policing, but policing the everyday to restrict access is going to affect the average person potentially far more than the use of facial detection and recognition in every public space. Your face is your passport, and the computer can say no. Age as the gatekeeper of identity, of participation, and of access to public and private spaces is already very much here online, and will be expanded online in the UK by the Online Safety Act (noting that other countries have realised its flaws and foolishness). Age verification and age assurance, if given any weight, will inevitably lead to the balkanisation of the Internet, to the throttling of content through prioritisation of who is permitted to do or see what, and to control of content moderation.
In UK nightclubs, age verification is being normalised through facial recognition. If the Data Protection and Digital Information Bill passes as drafted, the only permitted digital ID for what are (for now) purposes limited to rental and employment checks will soon be the accredited government ID. But scope creep will inevitably move from what is possible to what is required, across every aspect of our lives where identity is made an obligation for proof of eligibility.
Why all this matters is that we see the same direction of travel over and over again. Once “the data” is collected and retained, there is an overwhelming desire down the line to say: well, now we’ve got it, how can we use it? Increasingly that means joining it all up, and then passing it around to others. And the DPDI Bill takes away the safeguards around that over time (see KC opinion, para 20, p.6).
It is something data protection law, and the lack of its enforcement, is already failing to protect us from adequately: excessive data retention should be impossible under the data minimisation and purpose limitation principles, but controllers argue that linked data ‘is not new data’. What we should see is enforcement against the excessive retention of data that creates ‘new knowledge’ going beyond our reasonable expectations. Instead, we see the government and companies gaining ever greater power to intervene in the lives of the data subjects, the people. The new draft law does the opposite.
Who decides what ‘on track’ looks like?
School SafetyTech is therefore the current embodiment of my greatest concerns for children’s rights in educational settings. It is an overlapping technology that monitors both what you do and when, and claims to be able to put the thinking behind it in context. Tools in schools are moving towards prediction and intervention, and towards combinations of bodily control, thought, mood and emotion. They are shifting from the server to the device, and go with you everywhere your phone goes. ‘Interventions’ bring a whole new horizon of potential infringements of rights and outcomes, and questions of who decides what can be used for what purposes in a classroom, in loco parentis.
Filtering and monitoring technology in school “SafetyTech” blocks content and profiles the user over time. This monitoring of bodily behaviours, actions and thoughts leads to staff acting on automated suspicion. It can lead to the imposition of control over bodily movement, thought and action. It is adopted at scale for millions of children and students across the UK, without oversight or published universal safety standards.
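What “profiles the user over time” means in practice can be sketched simply: individual flags, each trivial on its own, accumulate against a pupil’s name into a running risk label. The thresholds and labels below are invented for illustration; the point is that the label only ratchets upwards and travels with the record.

```python
from collections import defaultdict

# Invented thresholds: flags accumulated per pupil map onto an escalating label.
LABELS = [(0, "no concern"), (3, "monitor"), (6, "vulnerable"), (10, "refer")]

flag_counts: dict[str, int] = defaultdict(int)

def record_flag(pupil_id: str) -> str:
    flag_counts[pupil_id] += 1
    label = "no concern"
    for threshold, name in LABELS:
        if flag_counts[pupil_id] >= threshold:
            label = name          # the label only ever moves up, never back down
    return label

# Five minor flags over a term and the pupil is already 'monitor' grade.
for _ in range(5):
    status = record_flag("pupil_042")
print(status)   # -> "monitor"
```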
This is not a single technology, it’s a market and a mindset.
Who decides what is ‘suitable’, what is ‘on track’, and where ‘intervention’ is required, is built into the design. It is not a problem of technology causing harm, but of social and political choices and values embodied in technology that can be used to cause harm: for example, in identifying and enabling the persecution of Muslim students who are fasting during Ramadan, based on their dining records. In the UK we already have all the same tools in place.
Who does any technology serve? It is a question we have not yet resolved in education in England. The best interests of the child, the teacher, the institution, the State, or the company that built it? Interests and incentives may overlap or may be contradictory. But who decides, and who is given the knowledge of how that was decided? As tech is increasingly designed to run without any human intervention, the effects of its automated decisions can in turn be significant, and happen at speed and scale.
Patrick Ness coined the phrase, “The Noise is a man unfiltered, and without a filter, a man is just chaos walking.” Controlling chaos may be a desirable government aim, but at what cost, and to whose freedoms?