
Pirates and their stochastic parrots

It’s a privilege to have a letter published in the FT as I do today, and thanks to the editors for all their work in doing so.

I’m a bit sorry that it lost the punchline, which was supposed to bring a touch of AI humour about pirates and their stochastic parrots. And its rather key point was cut, that:

“Nothing in current European laws, including Convention 108 for the UK, prevents companies developing AI lawfully.”

So for the record, and since it’s behind a paywall (£), my agreed edited version was:

“The multi-signatory open letter advertisement, paid for by Meta, entitled “Europe needs regulatory certainty on AI” (September 19) was fittingly published on International Talk Like a Pirate Day.

It seems the signatories believe they cannot do business in Europe without “pillaging” more of our data and are calling for new law.

Since many companies lobbied against the General Data Protection Regulation or for the EU AI Act to be weaker, or that the Council of Europe’s AI regulation should not apply to them, perhaps what they really want is approval to turn our data into their products without our permission.

“Nothing in current European laws, including Convention 108 for the UK, prevents companies developing AI lawfully. If companies want more consistent enforcement action, I suggest Data Protection Authorities comply and act urgently to protect us from any pirates out there, and their greedy stochastic parrots.”

Prior to print they asked to cut out a middle paragraph too.

“In the same week, LinkedIn sneakily switched on a ‘use me for AI development’ feature for UK users without telling us (paused the next day); Larry Ellison suggested at Oracle’s Financial Analyst Meeting that more AI should usher in an era of mass citizen surveillance, and our Department for Education has announced it will allow third parties to exploit school children’s assessment data for AI product building, and can’t rule out it will include personal data.”

It is in fact the cumulative effect of the recent flurry of AI activities by various parties, state and commercial, that deserves greater attention, rather than this Meta-led complaint alone. Who is grabbing what data and what infrastructure contracts, and creating what state dependencies and strengths, to what end game? While some present the “AI race” as China or India versus the EU or the US to become AI “super powers”, is what “Silicon Valley” offers, that their way is the only way, really a better offer?

It’s not, in fact, “Big Tech” I’m concerned about, but the arrogance of so many companies that, in the middle of regulatory scrutiny, would align themselves with one that would rather put out PR omitting the fact that it is under such scrutiny, calling only for the law to be changed, and frankly misleading the public by suggesting it is all for our own good, rather than talking about how this serves their own interests.

Who do they think they are to dictate what new laws must look like when they seem simply unwilling to stick to those we have?

Perhaps this open letter serves as a useful starting point to direct DPAs to the companies most in need of scrutiny around their data practices. They seem to be saying they want weaker laws or more enforcement. Some are already well known for challenging both. Who could forget Meta (Facebook’s) secret emotional contagion study involving children, in which friends’ postings were manipulated to influence moods, or the case of letting third parties, including Cambridge Analytica, access users’ data? Then there are the data security issues, or the fine over international transfers, or the anti-trust issues. And there are the legal problems with their cookies. And all of this built from humble beginnings by the same founder as Facemash, “a prank website” to rate women as hot or not.

As Congressman Long reportedly told Zuckerberg in 2018, “You’re the guy to fix this. We’re not. You need to save your ship.”

The Meta-led ad called for “harmonisation enshrined in regulatory frameworks like the GDPR” and I absolutely agree. The DPAs need to stand tall and stand up to OpenAI and friends (ever dwindling in number so it seems) and reassert the basic, fundamental principles of data protection laws from the GDPR to Convention 108 to protect fundamental human rights. Our laws should do so whether companies like them or not. After all, it is often abuse of data rights by companies, and states, that populations need protection from.

Data protection ‘by design and by default’ is not optional under European data laws established for decades. It is not enough to argue that processing is necessary because you have chosen to operate your business in a particular way, nor that it is a necessary part of your chosen methods.

The Netherlands DPA is right to say scraping is almost always unlawful. A legitimate interest cannot simply be plucked from thin air by anyone who is neither an existing data controller nor processor, who has no prior relationship with the data subjects, and whose re-use of data posted online for entirely different purposes goes beyond any reasonable expectation, without fair processing information or the offer of an opt-out. Instead, the only possible lawful basis for this kind of brand new controller should be consent. Having to break the law hardly screams ‘innovation’.

Regulators do not exist to pander to wheedling, but to independently uphold the law in a democratic society in order to protect people, not to prioritise the creation of products. The principles are clear:

  • Lawfulness, fairness and transparency.
  • Purpose limitation.
  • Data minimisation.
  • Accuracy.
  • Storage limitation.
  • Integrity and confidentiality (security), and
  • Accountability.

In my view, it is the lack of dissuasive enforcement as part of checks-and-balances on big power like this, regardless of where it resides, that poses one of the biggest data-related threats to humanity.

Not AI, nor being “left out” of being used to build it for their profit.

The “new normal” is not inevitable. (1/2)

Today Keir Starmer talked about us having more control in our lives. He said, “markets don’t give you control – that is almost literally their point.”

This week we’ve seen it embodied in a speech given by Oracle co-founder Larry Ellison at their Financial Analyst Meeting 2024. He said that AI is on the verge of ushering in a new era of mass behavioural surveillance, of police and citizens. Oracle, he suggested, would be the technological backbone for such applications, keeping everyone “on their best behaviour” through constant real-time machine-learning-powered monitoring. (LE FAQs 1:09:00)

Ellison’s sense of unquestionable entitlement to decide that *his* should be the company to control how all citizens (and police) behave-by-design, and his omission of any consideration of a democratic mandate for that, should shock us. Not least because he’s wrong in some of his claims. (There is no evidence that having a digital dystopia makes this difference in school safety, in particular given the numbers of people already known to the school.)

How is society to trust this direction of travel, in how our behaviour is shaped and how corporations impose their choices? How can a government promise society more control in our lives, and yet enable a digital environment, one which plays a large part in our everyday life, over which we seem to have ever less control?

The new government sounds keen on public infrastructure investment as a route to structural transformation. But the risk is that cost constraints mean they seek the results expected from the same plays as an industrial development strategy of old, only now using new technology and tools. It’s a big mistake, huge. And nothing less than national democracy is at stake, because individuals cannot meaningfully hold corporations to account. The economic and political context in which an industrial strategy is defined is now behind paywalls, and without parliamentary consensus, oversight or societal legitimacy in a formal democratic environment. The nature of the constraints on businesses’ power was once more tangible and localised, and their effects easier to see in one place. Power has moved considerably from government to corporations in the time Labour was out of it. We are now more dependent on being users of multiple private techno-solutions to everyday things that we often hate using, from paying for the car park, to laundry, to job hunting. All with indirect effects on national security and on service provision at scale, as well as direct everyday effects for citizens and, increasingly, our disempowerment and lack of agency in our own lives.

LinkedIn this week first chose not to ask users at all, in what has become the OpenAI modus operandi of take first and ask forgiveness later, before grabbing our personal data from the platform to train “content creation AI models.” (And then it did a U-turn.)

Data Protection law is supposed to offer people protection from such misuse, but without enforcement, ever more companies are starting to copy each other on rinse and repeat.

Convention 108 requires respect for rights and fundamental freedoms, and in particular the right to privacy, and provides that special categories of data may not be processed automatically unless domestic law provides appropriate safeguards. Data must be obtained fairly. Many emerging [Generative] AI companies have disregarded the fundamentals of European DP laws: purpose limitation breached by incompatible uses, no relationship between the subject and the company, a lack of accuracy, and no right to object actively offered when in fact the basis should be consent; together that means unfair and unlawful processing. If we are to accept that anyone at all can take any personal data online and use it for an entirely different purpose, to turn it into commercial products while ignoring all this, then frankly the Data Protection authorities may as well close. Are these commercial interests simply so large that they believe they can get away with steam-rollering over democratic voice and human rights, as well as the rule of (data protection) law? Unless there is consistent ‘cease and desist’ type enforcement, it seems to be rapidly becoming the new normal.

If instead regulators were to bring meaningful enforcement that is dissuasive, as the law is supposed to be, what would change? What will shift practice on facial recognition in the world as foreseen by Larry Ellison, and shift public policy towards sourcing responsibly? How is democracy to be saved from technocratic authoritarianism going global? If the majority of people living in democracies are never asked for their views, and have changes imposed on their lives that they do not want, how do we raise a right to object and take control?

While institutions such as Oracle wield growing influence in our political systems, and their financial interests lie in ever more, and ever larger, AI models, the interests of our affected communities are not represented in state decisions at national or global levels.

While tools are being built to resist content scraping from artists, what is there for faces and facts, or even errors, about our lives?

Ted Chiang asked in the New Yorker in 2023 whether an alternative is possible to the current direction of travel. “Some might say that it’s not the job of A.I. to oppose capitalism. That may be true, but it’s not the job of A.I. to strengthen capitalism, either. Yet that is what it currently does.” The greatest current risk of AI is not what we imagine from I, Robot, he suggested, but “A.I.-supercharged corporations destroying the environment and the working class in their pursuit of shareholder value.”

I remember giving a debate talk as a teen thirty years ago, about the risk of rising sea levels in Vanuatu. It is a reality that causes harm. Satellite data indicates the sea level has risen there by about 6 mm per year since 1993. Someone told me in an interesting recent Twitter exchange, when it comes to climate impacts and AI, that they are “not a proponent of ‘reducing consumption’”. The reality of climate change in the UK today only hints at the long-term consequences for the world from the effects of migration and extreme weather events. Only by restraining consumption might we change those knock-on effects of climate change in anything like the necessary timeframe, or the next thirty years will see them worsen more quickly.

But instead of hearing meaningful assessment of what we need in public policy, we hear politicians talk about growth as what they want and that bigger can only be better. More AI. More data centres. What about more humanity in this machine-led environment?

While some argue for an alternative, responsible, rights-respecting path as the only sustainable path forwards, like Meeri Haataja, Chief Executive and Founder, Saidot, Helsinki, Finland, in her August letter to the FT, some of the largest companies appear to be suggesting this week, publishing ads in various European press, that because they seem to struggle to follow the law (like Meta), Europe needs a new one. And to paraphrase, they suggest it’s for our own good.

And that’s whether we like it, or not. You might not choose to use any Meta products but might still be being used by them, your activity online used to create advertising market data that can be sold. Researchers at the University of Oxford analysed a million smartphone apps and found that “the average one contains third‑party code from 10 different companies that facilitates this kind of tracking. Nine out of 10 of them sent data to Google. Four out of 10 of them sent data to Facebook. In the case of Facebook, many of them sent data automatically without the individual having the opportunity to say no to it.” Reuse for training AI is far more explicit and wrong, and “it is for Meta [or any other company] to ensure and demonstrate ongoing compliance.” We have rights, and the rule of law, and at stake are our democratic processes.

“The balance of the interests of working people” must include respect for their fundamental rights and freedoms in the digital environment, as well as supporting the interests of economic growth; they are mutually achievable, not mutually exclusive.

But in the UK we put the regulator on a leash in 2015, constrained by a duty towards “economic growth”. It’s a constraint that should not apply to industry and market regulators, like the ICO.

While some companies plead to be let off for their bad behaviour, others expect to profit from encouraging the state to increase the monitoring of ours, or from asking that the law be written ever in their favour. Regulators need to stand tall and stand up to it, and the government needs to remove their leash.

Even in 1959, Labour MPs included housewives concerned with being misled by advertisers and manufacturers (06:40). Many of the electoral issues have stayed the same for over 65 years. But the power grab going on in this information age is unprecedented.

We need not accept this techno-authoritarianism as a “new normal” and inevitable. If indeed as Starmer concluded, “Britain belongs to you“, then it needs MPs to act like it and defend fundamental rights and freedoms to uphold values like the rule of law, even with companies who believe it does not apply to them. With a growing swell of nationalism and plenty who may not believe Britain belongs to all of us, but that, ‘Tomorrow belongs to Me‘, it is indeed a time when “great forces demand a decisive government prepared to face the future.”


See also Part 2: Farming out our Children. AI AI Oh.

The video referenced above is of Dr. Sasha Luccioni, the research scientist and climate lead at HuggingFace, an open-source community and machine-learning platform for AI developers, and is part 1 of the TED Radio Hour episode, Our tech has a climate problem.

Farming out our children. AI AI Oh. (2/2)

Today Keir Starmer talked about us having more control in our lives. “Taking back control is a Labour argument”, he said. So let’s see it in education tech policy, where in 2018 fewer than half of parents told us they felt they had sufficient control of their child’s digital footprint.

Not only has the UK lost control of which companies control large parts of the state education infrastructure and its delivery, the state is *literally* giving away control of our children’s lives recorded in identifiable data at national level, and since 2012 that has included giving it to journalists, think tanks, and companies.

Why it matters is less about the data per se than about what is done with it without our permission, and how that affects our lives.

Politicians’ love affair with AI (undefined) seems to be as ardent as under the previous government. The State appears to have chosen to further commercialise children’s lives in data, having announced towards the end of the school summer holidays that the DfE and DSIT will give pupils’ assessment data to companies for AI product development. I get angry about this, because the data is badly misunderstood: it is not a product to pass around, but the stories of children’s lives in data, and that belongs to them to control.

Are we asking the right questions today about AI and education? In 2016, in a post for Nesta, Sam Smith foresaw the algorithmic fiasco that would happen in the summer of 2020, pointing out that exam-marking algorithms, like any other decisions, have unevenly distributed consequences. What prevents that happening daily, but behind closed doors and in closed systems? The answer is, nothing.

Both the adoption of AI in education and education about AI are unevenly distributed. Driven largely by commercial interests, some are co-opting teaching unions for access to the sector; others, more cautious, have focused on the challenges of bias, discrimination and plagiarism. As I recently wrote in Schools Week, the influence of corporate donors and their interests in shaping public sector procurement, such as the Tony Blair Institute’s backing by Oracle owner Larry Ellison, therefore demands scrutiny.

Should society allow its public sector systems and laws to be shaped primarily to suit companies? The users of the systems are shaped by how those companies work, so who keeps the balance in check?

In a 2021 reflection here on World Children’s Day, I asked the question, Man or Machine, who shapes my child? Three years later, I am still concerned about the failure to recognise and address the question of the redistribution of not only pupils’ agency but teachers’ authority: from individuals to companies (pupils and the teacher don’t decide what it is ‘right’ to do next, the ‘computer’ does). From public interest institutions to companies (company X determines the curriculum content of what the computer does and how, not the school). And from State to companies (accountability for outcomes falls through the gap in outsourcing activity to the AI company).

Why it matters is that these choices do not only influence how we teach and learn, but how children feel about it and develop.

The human response to surveillance (and that is what much of AI relies on: massive data-veillance and dashboards) is a result of the chilling effect of being ‘watched’ by the known or unknown persons behind the monitoring. We modify our behaviours to be compliant with their expectations. We try not to stand out from the norm, to protect ourselves from the resulting effects.

The second reason we modify our behaviours is to be compliant with the machine itself. Thanks to the lack of a responsible human in the interaction mediated by the AI tool, we are forced to change what we do to comply with what the machine can manage. How AI is changing human behaviour is not confined to where we walk, meet, play and are overseen in outdoor or indoor spaces. It is in how we respond to it, and ultimately, how we think.

In the simplest examples, using voice assistants shapes how children speak, and in prompting generative AI applications, we can see how we are forced to adapt how we think in order to frame the questions best suited to getting the output we want. We are changing how we behave to suit machines. How we change behaviour is therefore determined by the design choices of the company behind the product.

There is limited public debate yet on the effects of this for education, on how children act, interact, and think using machines, and no consensus in the UK education sector on whether it is desirable to introduce these companies and their steering, which bring changes to teaching and learning and, as a result, to the future of society.

And since 2021, I would go further. The neo-liberal approach to education, with its emphasis on the efficiency of human capital and productivity, on individualism and personalisation, all about producing ‘labour market value’ and measurable outcomes, is commonly at the core of AI in teaching and learning platforms.

Many tools dehumanise children into data dashboards, rank and spank their behaviours and achievements, punish outliers and praise norms, and expect nothing but strict adherence to rules (sometimes incorrect ones, like mistakes in maths apps). As some companies have expressly said, the purpose of this is to normalise such behaviours ready to be the employees of the future, and the reason their tools are free is to normalise their adoption for life.

AI, through the normalisation of values built into tools by design, is even seen by some as encouraging fascistic solutions to social problems.

But the purpose of education is not only about individual skills and producing human capital to exploit. Education is a vital gateway to rights and the protection of a democratic society. Education must not only be about skills as an economic driver, talking about AI and learners in terms of human capital, but include rights: championing the development of a child’s personality to their fullest potential, intercultural understanding, digital citizenship on dis- and misinformation and discrimination, and the promotion and protection of democracy and the natural world. “It shall promote understanding, tolerance and friendship among all nations, racial or religious groups, and shall further the activities of the United Nations for the maintenance of peace.”

Peter Kyle, the UK DSIT’s Secretary of State said last week, that, “more than anything else, it is growth that will shape those young people’s future.” But what will be used to power all this growth in AI, at what environmental and social costs, and will we get a say?

Don’t forget, in this project announcement the Minister said, “This is the first of many projects that will transform how we see and use public sector data.” That’s our data, about us. And when it comes to schools, that’s not only the millions of learners who’ve left already but those who are school children today. Are we really going to accept turning them into data fodder for AI without a fight? As Michael Rosen summed up so perfectly in 2018, “First they said they needed data about the children to find out what they’re learning… then the children became data.” If this is to become the new normal, where is the mechanism for us to object? And why this, now, in such a hurry?

Purpose limitation should also prevent retrospective reuse of learners’ records and data, but so far it hasn’t, whether for general identifying and sensitive data distribution from the NPD at national level or from edTech in schools. The project details, scant as they are, suggest parents were asked for consent in this particular pilot, but the Faculty AI notice seems legally weak for schools, and when it comes to using pupil data for building into AI products the question is whether consent can ever be valid, since it cannot be withdrawn once given, and since the nature of being ‘freely given’ is affected by the power imbalance.

So far there is no field to record an opt-out in any schools’ Information Management Systems, though many discussions suggest it would be relatively straightforward to make it happen. However, it’s important to note that DSIT’s own public engagement work on that project says that opt-in is what those parents told the government they would expect. And there is a decade of UK public engagement on data telling government that opt-in is what we want.

The regulator has been silent so far on the DSIT/DfE announcement, despite the lack of fair processing and failures under Articles 12, 13 and 14 of the GDPR being among the key findings in its 2020 DfE audit. I can use a website to find children’s school photos, scraped without our permission. What about our school records?

Will the government consult before commercialising children’s lives in data to feed AI companies and ‘the economy’ or any of the other “many projects that will transform how we see and use public sector data“?  How is it different from the existing ONS, ADR, or SAIL databank access points and processes? Will the government evaluate the impact on child development, behaviour or mental health of increasing surveillance in schools? Will MPs get an opt-in or even -out, of the commercialisation of their own school records?

I don’t know about ‘Britain belongs to us‘, but my own data should.


See also Part 1: The New Normal is Not Inevitable.

Automated suspicion is always on

In the Patrick Ness trilogy, Chaos Walking, the men can hear each other’s every thought, but not the women’s.

That exposure of their bodily data and thought means privacy is almost impossible, and they have no autonomy over their own bodily control of movement or of action. Any man who tries to block access to his thoughts is treated with automatic suspicion.

It has been on my mind since last week’s get together at FIPR. We were tasked before the event to present what we thought would be the greatest risk to rights [each pertinent to the speaker’s focus area] in the next five years.

Wendy Grossman said at the event and in her blog, “I’d look at the technologies being deployed around European and US borders to surveil migrants. Migrants make easy targets for this type of experimentation because they can’t afford to protest and can’t vote. “Automated suspicion,” Euronews.next calls it. That habit of mind is dangerous.” Those tools often focus on control of humans’ bodies. They infringe on freedom of movement.

In education, technology companies sell automated suspicion detection tools to combat plagiarism and cheating in exams. Mood detection to spot outliers in concentration. Facial detection to bar the excluded from premises or the lunch queue, or normalise behavioural anomalies, control physical attendance and mental presence. Automated suspicion is the opposite of building trusted human relationships.

I hadn’t had much space to think in the weeks before the event, between legislation, strategic litigation and overdue commitments to reports, events, and to others. But on reflection, I failed to explain why the topic area I picked above all others matters. It really matters.

It is the combination of a growth in children’s bodily data processing and SafetyTech deployed in schools. It’s not only because such tools normalise the surveillance of everything children do, send, share or search for on a screen, or that many enable the taking of covert webcam photos, or even the profiles and labels they can create on terrorism and extremism, or that they can out LGBTQ+ teens. But that at its core lies automated suspicion and automated control. Not only of bodily movement and actions, but of thought. Without any research or challenge to what that does to child development or to their experience of social interactions and of authority.

First let’s take suspicion.

Suspicion of harms to self, harms to others, harms from others.

The software / systems / tools inspect the text or screen content the users enter into devices (including text the users delete and text before it is encrypted), assuming a set of risks all of the time. When a potential risk is detected, the tools can capture and store a screenshot of the user’s screen. Depending on the company design and the option bought, human company moderators may or may not first review the screenshots (recorded on a rolling basis even ‘without’ any trigger, so as to have context ahead of the event) and text captures, to verify the triggered events before sending them to the school’s designated safeguarding lead. An estimated 1% of all triggered material might be sent on to a school to review and choose whether or not to act on. But regardless of that, the children’s data (including screenshots, text, and redacted text) may be stored for more than a year by the company before being deleted. Even content not seen as necessary may be kept: “content which poses no risk on its own but is logged in case it becomes relevant in the future”.
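To make that pipeline concrete, here is a minimal, illustrative sketch in Python of the kind of keyword-triggered monitoring loop described above. It is not any vendor’s actual code: the class names, the keyword list, the retention period and the escalation logic are all hypothetical assumptions, included only to show the shape of the design.

```python
# Illustrative sketch only (not any vendor's real code) of a keyword-triggered
# monitoring pipeline: everything typed is inspected, matches trigger a screen
# capture, captures go to human moderation, and data is retained long-term.
from dataclasses import dataclass, field
from datetime import datetime, timedelta
from typing import List, Optional

KEYWORD_LIBRARY = {"example-flagged-term"}   # real libraries hold thousands of terms, in many languages
RETENTION = timedelta(days=400)              # assumed: stored for more than a year before deletion

@dataclass
class CapturedEvent:
    timestamp: datetime
    typed_text: str                          # includes deleted and pre-encryption text
    screenshot: Optional[bytes]              # screen capture kept for "context"
    escalated_to_school: bool = False

@dataclass
class MonitoringAgent:
    events: List[CapturedEvent] = field(default_factory=list)

    def observe(self, typed_text: str, screenshot: bytes) -> None:
        """Inspect everything typed on the device against the keyword library."""
        hit = any(term in typed_text.lower() for term in KEYWORD_LIBRARY)
        event = CapturedEvent(datetime.utcnow(), typed_text, screenshot if hit else None)
        self.events.append(event)            # logged even without a trigger
        if hit:
            self.send_to_human_moderation(event)

    def send_to_human_moderation(self, event: CapturedEvent) -> None:
        # In the model described above, company moderators review captures and only
        # a small fraction (an estimated 1%) reaches the school's safeguarding lead.
        if self.moderator_confirms_risk(event):
            event.escalated_to_school = True

    @staticmethod
    def moderator_confirms_risk(event: CapturedEvent) -> bool:
        return True  # placeholder for an opaque human or automated judgement

    def purge_expired(self, now: datetime) -> None:
        """Delete events only once the (long) retention period has passed."""
        self.events = [e for e in self.events if now - e.timestamp < RETENTION]
```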

Predictive threat, automated suspicion

In-school technology is not only capturing what is done by children but what they say they do, or might do, or think of doing. SafetyTech enables companies and school staff to police what children do and what they think, and it is quite plainly designed to intervene in actions and thoughts before things happen. It is predictively policing pupils in schools.

Safeguarding-in-schools systems were already one of my greatest emerging concerns, but I suspect, coinciding with recent wars, that the keywords in topics seen as connected to the Prevent programme will be finding a match rate at an all-time high since 2016, and that the risks being wrong brings will have increased with it. But while we have now got various company CEOs talking about shared concerns, not least the outing of LGBTQ students as the CDT reported this year in the U.S., and a whistleblower who wanted to talk about the sensitive content that staff can see from the company side, there is not yet appetite to fix this across the sector. The ICO returned our case for sectoral attention, with no enforcement. DfE guidance still ignores the at-home, out-of-hours contexts, and those systems that can enable school staff or company staff to take photos of the children without anyone knowing. We’ve had lawyers write letters and submitted advice in consultations, and yet it’s ignored to date.

Remember the fake bomb detectors that were repackaged golf ball finders? That’s the potential scenario we’ve got in education in “safeguarding in schools” tech. Automated decision-making in black boxes that no one has publicly tested, that no one can see inside, and we’ve no data on its discriminatory effects through language matching, on its false negative or false positive rates, or on the harms it is or is not causing. We’ve risk-averse institutions made vulnerable to scams. It may be utterly brilliant technology, with companies falling over themselves to offer independent testing that proves it ‘works’. I’ve just not seen any.

Some companies themselves say they need better guidance and agree there are significant gaps. Opendium, a leading provider of internet filtering and monitoring solutions, blogged about views expressed at a 2019 conference held by the Police Service’s Counter Terrorism Internet Referral Unit, that schools need better advice.

Freedom of Thought

But it’s not just about what children do; it is about any mention of what they *might* do, or their opinions of themselves, others or anything else. We have installed systems of thought surveillance into schools, looking for outliers or ‘extremists’ in different senses, including its now everyday sense, underpinned by the Prevent programme and British Values. These systems do not only expose and create controls of children’s behaviours in what they do, but in their thoughts, their searches, what they type and share and send, or even what they don’t, and delete.

Susie Alegre, human rights lawyer, describes Freedom of Thought as “protected absolutely in international human rights law. This means that, if an activity interferes with our right to think for ourselves inside our heads (the so-called “forum internum”) it can never be justified for any reason. The right includes three elements:

the right to keep our thoughts private
the right to keep our thoughts free from manipulation, and
the right not to be penalised for our thoughts.”

These SafetyTech systems don’t respect any of that. They infringe on freedom of thought.

Bodily data and contextual collapse

Depending on the company, SafetyTech may be built on keyword matching technology commonly used in the gaming tech industry.

Gaming data collected from children is a whole field in its own right: bodily data from haptics, and neuro data. Personal data from immersive environments that in another sector would be classified clearly as “health” data will, in the gaming sector too, fall under the same “special category” or “sensitive data” protections due to its nature, not its context. But it is being collected at scale by companies that aren’t used to dealing with the demands of professional confidentiality and the concept of ‘first do no harm’ that the health sector is founded on. Perhaps we’re not quite at the everyday-for-everyone, Ready Player One stage yet, but for those in communities who are creating a vast amount of data about themselves, the questions over its oversight, its retention, and perhaps its redistribution to authorities, in particular to policing, should be of urgent consideration. And those tools are on the way into the classroom.

At school level the enormous growth in the transfer of bodily data is not yet haptics but records of bodily harm. A vast sector has grown up to support the digitisation of children’s safety: physical harms noticed by staff on children, whether picked up at home, or accidents and incidents recorded at school. Often including marking full body outlines with where the injury has been.

The issues here, again, are in part created by taking this data beyond the physical environment of a child’s direct care and beyond the digital firewalls of child protection agencies and professionals. There are no clear universal policies on sealed records, i.e. not releasing the data of children at risk, or those who undergo a name change, once it’s been added into school information management systems or into commercial company products like CPOMS, MyConcern, or Tootoot.

Similarly, there is no clear national policy on the onward distribution into the National Pupil Database of the records of children in need (CiN) of child protection, which, in my opinion, are inadequately shielded. The CiN census is a statutory social care data return made by every Local Authority to the Department for Education (DfE). It captures information about all children who have been referred to children’s social care, regardless of whether further action is taken or not.

As of September 2022, there were only 70 individuals flagged for shielding and that includes both current and former pupils in the entire database. There were 23 shielded pupil records collected by the Department via the 2022 January censuses alone (covering early years, schools and alternative provision).

No statement or guidance is given directly to settings about excluding children from returns to the DfE. As of September 2022, there were 2,538,656 distinct CiN (any ‘child in need’ referred to children’s social care services within the year) / LAC ([state] looked after child) child records (going back to 2006), regardless of at-risk status, able to be matched to some home address information via other (non-CiN/LAC) sources, all included in the NPD. The data is highly sensitive and detailed, including “categories of abuse”, not only monitoring and capturing what has been done to children, but what is done by children.

Always on, always watching

The challenge for rights work in this sector is not primarily a technical problem but one of mindset. Do you think this is what schools are for? Are they aligned with the aims of education? One SafetyTech company CEO at a conference certainly marketed their tool as something that employers want children to get used to, to normalise the gaze of authority and monitoring of your attention span. In real Black Mirror stuff, you could almost hear him say, “their eyeballs belong to me for fifteen million merits”.

Monitoring in-class attendance is moving not only towards checking whether you are physically in school, but whether you are mentally present and focused as well.

Education is moving towards an always-on mindset for many, whether it be data monitoring and collection with the stated aims of personalising learning, or the claims by companies that have trialled mood and emotion tech on pupils in England. Facial scanning is sold as a way of seeing if the class mood is “on point” with learning. Are they ‘engaged’? After Pippa King spotted a live trial in the wild starting in UK schools, we at Defend Digital Me had a chat with one company CEO who, after that discussion and the ICO blogpost on ‘emotion tech’ hype, agreed to stop that product rollout and cut it altogether from their portfolio. Under the EU AI Act it would soon be banned too, to protect children from its harms (children in the UK would have been included were Britain still under EU laws, but post-Brexit, they’re not).

The Times Education Commission reported in 2021 that Priya Lakhani told one of the Education Commission’s oral evidence sessions that Century Tech, “decided against using bone-mapping software to track pupils’ emotions through the cameras on their computers. Teachers were unhappy about pupils putting their cameras on for safeguarding reasons but there were also moral problems with supplying such technology to autocratic regimes around the world.”

But would you even consider this in an educational context at all?

Apps that blame and shame behaviours, using RAG scores exposed to peers on wall-projected charts, are certainly already here. How long before such ’emotion’ and ‘mood’ tech emerges in Britain seeking a market beyond the ban in the EU, joined up with tools that can blame and shame for lapses in concentration?

Is this simply the world now, that children are supposed to normalise third-party bodily surveillance and behavioural nudge?

That same kind of thinking in ‘estimation’, ‘safety’ and ‘blame’ might well be seen soon in the eye scanning of drivers in “advanced driver distraction warning systems”. Drivers staying ‘on track’ may be one area where we will be expected to get used to the monitoring of our eyeballs, but will it be used to differentiate and discriminate between drivers for insurance purposes, or to redirect blame for accidents? What about monitoring workers at computer desks, with smoking breaks and distraction costing you in your wage packet?

Body and Mind belong ‘on track’ and must be overseen

This routine monitoring of your face is expanding at pace in policing, but policing the everyday to restrict access is going to affect the average person potentially far more than the use of facial detection and recognition in every public space. Your face is your passport and the computer can say no. Age as the gatekeeper of identity, to participation and to public and private spaces, is already very much here online and will be expanded in the UK by the Online Safety Act (noting other countries have realised its flaws and foolishness). Age verification and age assurance, if given any weight, will inevitably lead to the balkanisation of the Internet, to throttling of content through prioritisation of who is permitted to do or see what, and to control of content moderation.

In UK night clubs, age verification is being normalised through facial recognition. Soon the only permitted digital ID, for what are (for now) purposes limited to rental and employment checks, will be accredited government ID, if the Data Protection and Digital Information Bill passes as drafted. But scope creep will inevitably move from what is possible to what is required, across every aspect of our lives where identity is made an obligation for proof of eligibility.

Why all this matters is that we see a direction of travel over and over again. Once “the data” is collected and retained there is an overwhelming desire down the line to say, well now we’ve got it, how can we use it? Increasingly that means joining it all up. And then passing it around to others. And the DPDI Bill takes away the safeguards around that over time (See KC opinion para 20, p.6).

It is something data protection law, and the lack of its enforcement, is already failing to protect us from adequately: excessive data retention should be impossible under the data minimisation principle and purpose limitation, but controllers argue that linked data ‘is not new data’. What we should see instead is enforcement against the excessive retention of data that creates ‘new knowledge’ going beyond our reasonable expectations. Without it, we see the government and companies gaining ever greater power to intervene in the lives of the data subjects, the people. The draft new law does the opposite.

Who decides what ‘on track’ looks like?

School SafetyTech is therefore the embodiment of my greatest area of concern for children’s rights in educational settings right now. Because it is an overlapping tech that monitors both what you do and when, and claims to be able to put the thinking behind it in context. Tools in schools are moving towards prediction and interventions, and combinations of bodily control, thought, mood and emotion. They are shifting from the server to the device, and go with you everywhere your phone goes. ‘Interventions’ bring a whole new horizon of potential infringements of rights and outcomes, and questions of who decides what can be used for what purposes in a classroom, in loco parentis.

Filtering and monitoring technology in school “SafetyTech” blocks content and profiles the user over time. This monitoring of bodily behaviours, actions and thoughts leads to staff acting on automated suspicion. It can lead to imposing control of bodily movement and of thoughts and actions. It is adopted at scale for millions of children and students across the UK, without oversight or published universal safety standards.

This is not a single technology, it’s a market and a mindset.

Who decides what is ‘suitable’, what is ‘on track’, and where ‘intervention’ is required, is built into the design. It is not a problem of technology causing harm, but of social and political choices and values embodied in technology that can be used to cause harm. For example, in identifying and enabling the persecution of Muslim students fasting during Ramadan, based on their dining records. In the UK we have all the same tools already in place.

Who does any technology serve? It is a question we have not yet resolved in education in England. The best interests of the child, the teacher, the institution, the State, or the company that built it? Interests and incentives may overlap or may be contradictory. But who decides, and who is given the knowledge of how that was decided? As tech is increasingly designed to run without any human intervention, the effects of the automated decisions, in turn, can be significant, and happen at speed and scale.

Patrick Ness coined the phrase, “The Noise is a man unfiltered, and without a filter, a man is just chaos walking”. Controlling chaos may be a desirable government aim, but at what cost, to whose freedoms?

Policing thoughts, proactive technology, and the Online Safety Bill

“Former counter-terrorism police chief attacks Rishi Sunak’s Prevent plans”, reads a headline in today’s Guardian. Former counter-terrorism chief Sir Peter Fahy […] said: “The widening of Prevent could damage its credibility and reputation. It makes it more about people’s thoughts and opinions.” Fahy said: “The danger is the perception it creates that teachers and health workers are involved in state surveillance.”

This article leaves out that today’s reality is already far ahead of proposals or perception. School children and staff are already surveilled in these ways. Not only are the things that people think, type, read or search for monitored, online and offline in the digital environment, but copies may be collected and retained by companies, and interventions made.

The products don’t only permit monitoring of trends on aggregated data in overviews of student activity, but also of the behaviours of individual students. And these can be deeply intrusive and sensitive when you are talking about self harm, abuse, and terrorism.

(For more on the safety tech sector, often using AI in proactive monitoring, see my previous post (May 2021) The Rise of Safety Tech.)

Intrusion through inference and interventions

From 1 July 2015 all schools have been subject to the Prevent duty under section 26 of the Counter-Terrorism and Security Act 2015, in the exercise of their functions, to have “due regard to the need to prevent people from being drawn into terrorism”. While these products monitor far more than the remit of Prevent, many companies actively market online filtering, blocking and monitoring safety products as a way of meeting that duty in the digital environment. Such as, “Lightspeed Filter™ helps you meet all of the Prevent Duty’s online regulations…”

Despite there being no obligation to date to fulfil this duty through technology, some companies’ way of selling such tools could be interpreted as a threat of what may happen if schools don’t use them. Like this example:

“Failure to comply with the requirements may result in intervention from the Prevent Oversight Board, prompt an Ofsted inspection or incur loss of funding.”

Such products may create and send real-time alerts to company or school staff when children attempt to reach sites or type “flagged words” related to radicalisation or extremism on any online platform.

Under the auspices of safeguarding-in-schools data sharing and web monitoring in the Prevent programme, children may be labelled with terrorism or extremism labels, data which may be passed on to others or stored outside the UK without their knowledge. The drift in what is considered significant has been from terrorism into the now more vague and broad terms of extremism and radicalisation. Away from some assessment of intent and capability of action, into interception and interventions for potentially insignificant vulnerabilities and inferred assumptions of disposition towards such ideas. This is not something that might police thoughts at some future point, as Fahy suggested of Sunak’s plans. It is already doing so. Policing thoughts in the developing child, and holding them accountable for it like this in ways that are unforeseeable, is inappropriate and requires thorough investigation into its effects on children, including on mental health.

But it’s important to understand that these libraries of thousands of words, ever changing and in multiple languages, and what the systems look for and flag, often claiming to do it using Artificial Intelligence, go far beyond Prevent. ‘Legal but harmful’ is their bread and butter. Self harm, harm to or from others.

While companies have no obligations to publish how the monitoring or flagging operates, what the words or phrases or blocked websites are, their error rates (false positive and false negative), or the effects on children or school staff and their behaviour as a result, these companies have a great deal of influence over what gets inferred from what children do online, and over who decides what to act on.
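Those unpublished error rates matter because of scale. As a back-of-the-envelope sketch only, using the roughly one million monitored pupils figure cited for one vendor below, and an entirely hypothetical false-positive rate:

```python
# Back-of-the-envelope sketch of why unpublished error rates matter at this scale.
# The ~1 million monitored pupils echoes the eSafe figure cited later in this post;
# the daily false-positive rate and the 190 school days are hypothetical assumptions.
monitored_pupils = 1_000_000
daily_false_flag_rate = 0.001        # assume 0.1% of pupils wrongly flagged on a given day

false_alerts_per_day = monitored_pupils * daily_false_flag_rate
false_alerts_per_school_year = false_alerts_per_day * 190   # ~190 school days in England

print(f"{false_alerts_per_day:,.0f} wrongly flagged pupils per day")
print(f"{false_alerts_per_school_year:,.0f} wrongly flagged pupil-days per school year")
```

Even a rate that sounds small would mean a thousand wrongly flagged children every school day; without published figures, no one outside the companies can check whether the real numbers are better or worse.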

Why does it matter?

Schools have normalised the premise that the systems they introduce should monitor activity outside the school network and outside school hours. And that strangers, or their private companies’ automated systems, should be involved in inferring or deciding what children are ‘up to’, before the school staff who know the children in front of them.

In a defenddigitalme report, The State of Data 2020, we included a case study on one company that has since been bought out. And bought again. As of August 2018, eSafe was monitoring approximately one million school children plus staff across the UK. This case study, which they used in their public marketing, raised all sorts of questions about professional confidentiality and school boundaries, personal privacy, ethics, and companies’ role and technical capability, as well as the lack of any safety tech accountability.

“A female student had been writing an emotionally charged letter to her Mum using Microsoft Word, in which she revealed she’d been raped. Despite the device used being offline, eSafe picked this up and alerted John and his care team who were able to quickly intervene.”

Their then CEO had told the House of Lords Communications Committee’s 2016 inquiry on Children and the Internet how the products are not only monitoring children in school or during school hours:

“Bearing in mind we are doing this throughout the year, the behaviours we detect are not confined to the school bell starting in the morning and ringing in the afternoon, clearly; it is 24/7 and it is every day of the year. Lots of our incidents are escalated through activity on evenings, weekends and school holidays.”

Similar products offer a feature capturing photos of users (pupils, while using the device being monitored), described as “common across most solutions in the sector” by this company:

When a critical safeguarding keyword is copied, typed or searched for across the school network, schools can turn on NetSupport DNA’s webcams capture feature (this feature is turned-off by default) to capture an image of the user (not a recording) who has triggered the keyword.

How many webcam photos have been taken of children by school staff or others through those systems, for what purposes, and kept by whom? In the U.S. in 2010, Lower Merion School District, near Philadelphia, settled a lawsuit over using laptop webcams to take photos of students. Thousands of photos had been taken, even at home, out of hours, without their knowledge.

Who decides what does and does not trigger interventions across different products? In the month of December 2017 alone, eSafe claims they added 2254 words to their threat libraries.

Famously, Impero’s system even included the word “biscuit”, which they say is a term used to refer to a gun. Their system was used by more than “half a million students and staff in the UK” in 2018. And students had better not talk about “taking a wonderful bath.” Currently there is no understanding or oversight of the accuracy of this kind of software, and black-box decision-making is often trusted without openness to human question or correction.
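As a toy illustration only of how context-free keyword matching over-flags, here is a short sketch. The keyword set is hypothetical, seeded only with the terms mentioned above; why those particular phrases trigger real products is exactly the kind of thing the vendors do not publish.

```python
# A toy illustration (not any vendor's real word list or algorithm) of why naive,
# context-free keyword matching over-flags. The keyword set here is hypothetical.
KEYWORDS = {"biscuit", "bath"}

def flags(text: str) -> set:
    """Return every keyword found as a plain substring of the text, ignoring context."""
    lowered = text.lower()
    return {keyword for keyword in KEYWORDS if keyword in lowered}

print(flags("Anyone want a biscuit with their tea?"))    # {'biscuit'} - flagged
print(flags("I'm taking a wonderful bath tonight."))     # {'bath'} - flagged
print(flags("Revising for my chemistry exam."))          # set() - not flagged
```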

Aside from how the range of different tools work, there are very basic questions about whether such policies and tools help or harm children in various ways at all. The UN Special Rapporteur’s 2014 report on children’s rights and freedom of expression stated:

“The result of vague and broad definitions of harmful information, for example in determining how to set Internet filters, can prevent children from gaining access to information that can support them to make informed choices, including honest, objective and age-appropriate information about issues such as sex education and drug use. This may exacerbate rather than diminish children’s vulnerability to risk.” (2014)

U.S. safety tech creates harms

Today in the U.S. the CDT published a report on school monitoring systems there, many of which are also used over here. The report revealed that 13 percent of students knew someone who had been outed as a result of student-monitoring software. Another conclusion the CDT draws is that monitoring is used for discipline more often than for student safety.

We don’t have that same research for the UK, but we’ve seen IT staff openly admit to using the webcam feature to take photos of young boys who are “mucking about” on the school library computer.

The Online Safety Bill scales up problems like this

The Online Safety Bill seeks to expand how such ‘behavioural identification technology’ can be used outside schools.

“Proactive technology include content moderation technology, user profiling technology or behaviour identification technology which utilises artificial intelligence or machine learning.” (p151 Online Safety Bill, August 3, 2022)

The “proactive technology requirement” is as yet rather open-ended, left to OFCOM in Codes of Practice, but the scope creep of such AI-based tools has become ever more intrusive in education. ‘Legal but harmful’ is decided by companies and the IWF and any number of opaque third parties whose processes and decision-making we know little about. It’s important not to conflate filtering, blocking lists of ‘unsuitable’ websites that can be accessed in schools, with monitoring and tracking individual behaviours.

‘Technological developments that have the capacity to interfere with our freedom of thought fall clearly within the scope of “morally unacceptable harm,”‘ according to Alegre (2017), and yet this individual interference is at the very core of school safeguarding tech and policy by design.

The ‘lawful but harmful’ list of activities in the 2019 Online Harms White Paper was nearly identical to the terms used by school SafetyTech companies. The Bill now appears to be trying to create a new legitimate basis for these practices, more about underpinning a developing market than supporting children’s safety or rights.

Chilling speech is itself controlling content

While a lot of the debate about the Bill has been about the free speech impacts of content removal, there has been less about what is unwritten: how it will operate to prevent speech and participation in the digital environment for children. The chilling effect of surveillance on access and participation online is well documented. Younger people and women are more likely to be negatively affected (Penney, 2017). The chilling effect on thought and opinion is worsened in these types of tools, which trigger an alert even when what is typed is quickly deleted, or remains unsent or unshared. Thoughts are no longer private.

The ability to use end-to-end encryption on private messaging platforms is simply worked around by these kinds of tools, trading security for claims of children’s safety. Anything on screen may be read in the clear by some systems, even capturing passwords and bank details.

Graham Smith has written, “It may seem like overwrought hyperbole to suggest that the [Online Harms] Bill lays waste to several hundred years of fundamental procedural protections for speech. But consider that the presumption against prior restraint appeared in Blackstone’s Commentaries (1769). It endures today in human rights law. That presumption is overturned by legal duties that require proactive monitoring and removal before an independent tribunal has made any determination of illegality.”

More than this, there is no determination of illegality in legal but harmful activity. It’s opinion. The government is prone to argue that, “nothing in the Bill says X…”, but you need to understand this context: such proactive behavioural monitoring tools work through threat and the resultant chilling effect, imposing unwritten control. This Bill does not create a safer digital environment; it creates threat models for users and companies, to control how we think and behave.

What do children and parents think?

Young people’s own views that don’t fit the online harms narrative have been ignored by Westminster scrutiny committees. A 2019 survey by the Australian eSafety Commissioner found that over half (57%) of child respondents were uncomfortable with background monitoring processes, and 43% were unsure about these tools’ effectiveness in ensuring online safety.

And what of the role of parents? Article 3(2) of the UNCRC says: “States Parties undertake to ensure the child such protection and care as is necessary for his or her wellbeing, taking into account the rights and duties of his or her parents, legal guardians, or other individuals legally responsible for him or her, and, to this end, shall take all appropriate legislative and administrative measures.” (my emphasis)

In 2018, 84% of 1,004 parents in England whom we polled through Survation agreed that children and guardians should be informed how this monitoring activity works, and wanted to know what the keywords were. (We didn’t ask whether it should happen at all.)

The wide-ranging nature [of general monitoring], rather than targeted and proportionate interference, has previously been judged to be in breach of law and a serious interference with rights. Neither policy makers nor companies should assume parents want safety tech companies to remove autonomy, or to make inferences about our children’s lives. Parents, if asked, reject the secrecy in which it happens today and demand transparency and accountability. Teachers can feel anxious talking about it at all. There are no clear routes for error correction; in fact corrections are not made, because some claim that in building up profiles staff should not delete anything and should ignore claims of errors, in case a pattern of behaviour is missed. But there are no independent assessments available to evidence that these tools work or are worth the costs. There are no routes for redress, or responsibility taken for tech-made mistakes. None of which makes children safer online.

Before broadening out where such monitoring tools are used, their use and effects on school children need to be understood and openly debated. Policy makers may justify turning a blind eye to harms created by one set of technology providers while claiming that only the other tech providers are the problem, because it suits political agendas or industry aims, but children’s rights and wellbeing should not be sacrificed in doing so. Opaque, unlawful and unsafe practice must stop. A quid pro quo for getting access to millions of children’s intimate behaviour should be transparent access to their product workings, and acceptance of standards for universal, safe, accountable practice. Families need to know what’s recorded, and to have routes for redress when a daughter researching ‘cliff walks’ gets flagged as a suicide risk, or an environmentally interested teenage son searching for information on ‘black rhinos’ is asked about his potential gang membership. The tools sold as solutions to online harms shouldn’t create more harm, as in these reported real-life cases.

Teachers are ‘involved in state surveillance’ as Fahy put it, through Prevent. Sunak was wrong to point away from the threats of the far right in his comments. But the far broader unspoken surveillance of children’s personal lives, behaviours and thoughts through general monitoring in schools, and what will be imposed through the Online Safety Bill more broadly, should concern us far more than was said.

Facebook View and Ray-Ban glasses: here’s looking at your kid

Ray-Ban (EssilorLuxottica) is selling glasses with ‘Facebook View’. Questions have already been asked about whether they can be lawful in Europe, including in the UK, in particular as regards enabling the processing of children’s personal data without consent.

The Italian data authority has asked the company to explain via the Irish regulator:

  • the legal basis on which Facebook processes personal data;
  • the measures in place to protect people recorded by the glasses, children in particular;
  • questions of anonymisation of the data collected; and
  • the voice assistant connected to the microphone in the glasses.

While the first questions in Europe may be bound to data protection law and privacy, there are also questions of why Facebook has gone ahead despite the example of Google Glass, which was withdrawn from sale in 2015. You can see a pair displayed in a surveillance exhibit at the Victoria and Albert Museum (September 2021).

“We can’t wait to see the world from your perspective”, says Ray-Ban Chief Wearables Officer Rocco Basilico in the promotional video together with Mark Zuckerberg. I bet. But not as much as Facebook.

With cameras and microphones built in, up to around 30 videos or 500 photos can be stored on the glasses and shared via the Facebook companion app. While the teensy light on one corner is supposed to indicate that recording is in progress, the glasses look much like any others, indistinguishable within the Ray-Ban range. You can even buy them as prescription glasses, which intrigues me as to how that recording looks on playback, or when shared via the companion app.

While the Data Policy doesn’t explicitly mention Facebook View in the wording on how it uses data to “personalise and improve our Products,” and the privacy policy is vague on Facebook View, it seems pretty clear that Facebook will use the video capture to enhance its product development in augmented reality.

“We believe this is an important step on the road to developing the ultimate augmented reality glasses”, says Mark Zuckerberg (05:46).

The company needs a lawful basis to be able to process the data it receives for those purposes. It determines those purposes, and is therefore a data controller for that processing.

In the supplemental policy the company says that “Facebook View is intended solely for users who are 13 or older.” Data protection law does not care about the age of the product user as such, but it does regulate the basis on which a child’s data may be processed, and that child may be the user setting up an account. It is also concerned with the data of the children who are recorded. By recognising the legal limitations on who can be an account owner, the company has a bit of a self-own here on what the law says about children’s data.

Personal privacy may have weak protection in data protection laws that offer the wearer exemptions for domestic** or journalistic purposes, but neither the user nor the company can avoid the fact that processing video and audio recordings may happen without (a) adequately informing the people whose data are processed, or (b) appropriate purpose limitation for any processing that Facebook the company performs across all of its front-end apps and platforms or back-end processes.

I’ve asked Facebook how I would, as a parent or child, be able to get a wearer to destroy a child’s images and video or voice recorded in a public space, to which I did not consent. How would I get to see that content once held by Facebook, or request its processing be restricted by the company, or user, or the data destroyed?

Testing the Facebook ‘contact our DPO’ process as if I were a regular user fails. It sent me round the houses via automated forms.

Facebook is clearly wrong here on privacy grounds, but if you can afford the best privacy lawyers in the world, why would you go ahead anyway? Might they believe, after nearly twenty years of privacy-invasive practice and a booming bottom line, that there is no risk to reputation, no risk to their business model, and no real risk to the company from regulation?

It’s an interesting partnership, since Ray-Ban has no history in understanding privacy and Facebook has a well-known, controversial one. Reputational risk shared will not be reputational risk halved. And EssilorLuxottica has a share price to consider. I wonder if they carried out any due diligence risk assessment for their investors?

If and when enforcement catches up and the product is withdrawn, regulators must act as the FTC did on the development of a product (in that case, algorithms) from “ill-gotten data” (In the Matter of Everalbum and Paravision, Commission File No. 1923172).

Destroy the data, destroy the knowledge gained, and remove it from any product development to date. All “Affected Work Product.”

Otherwise any penalty Facebook gets from this debacle will be just the cost of doing business, having bought itself a very nice training dataset for its AR product development.

Ray-Ban, of course, will take all the reputational hit if found enabling strangers to take covert video of our kids. No one expects any better from Facebook. After all, we all know, Facebook takes your privacy, seriously.


Reference:  Rynes: On why your ring video doorbell may make you a controller under GDPR.

https://medium.com/golden-data/rynes-e78f09e34c52 (Golden Data, 2019)

Judgment of the Court (Fourth Chamber), 11 December 2014, František Ryneš v Úřad pro ochranu osobních údajů, Case C‑212/13.

img: exhibits from the Victoria and Albert Museum (September 2021)

When the gold standard no longer exists: data protection and trust

Last week the DCMS announced that consultation on changes to Data Protection laws is coming soon.

  • UK announces intention for new multi-billion pound global data partnerships with the US, Australia and Republic of Korea
  • International privacy expert John Edwards named as preferred new Information Commissioner to oversee shake-up
  • Consultation to be launched shortly to look at ways to increase trade and innovation through data regime.

The Telegraph reported that Mr Dowden argues that, combined, they will enable Britain to set the “gold standard” in data regulation, “but do so in a way that is as light touch as possible”.

It’s an interesting mixture of metaphors. What is a gold standard? What is light touch? These phrases rely on the reader to supply meaning, but don’t convey any actual content. Whether or not there will be substantive changes, we need to wait for the full announcement this month.

Oliver Dowden’s recent briefing to the Telegraph (August 25) was not the first trailer for changes that are yet to be announced. He wrote in the FT in February this year, that, “the UK has an opportunity to be at the forefront of global, data-driven growth,” and it looks like he has tried to co-opt the rights’ framing as his own.  …”the beginning of a new era in the UK — one  where we start asking ourselves not just whether we have the right to use data, but whether,  given its potential for good, we have the right not to.”

There was nothing more on that in this week’s announcement, but the focus was on international trade. The Government says it is prioritising six international agreements with “the US, Australia, Colombia, Singapore, South Korea and Dubai” but in the future it also intends to target “the world’s fastest growing economies, among them, India, Brazil, Kenya and Indonesia.” (my bold)

Notably absent from the ‘fastest growing’ list is China. What those included in the list have in common is that they are countries not especially renowned for protecting human rights.

Human rights like privacy. The GDPR, and in turn the UK GDPR, recognised that rights matter. In other regimes, data protection is designed not to prioritise the protection of rights but the harmonisation of data in trade, and that may be where we are headed. If so, it would be out of step with how the digital environment has changed since those older laws were seen as satisfactory, but weren’t; which is why the EU countries moved towards both better harmonisation *and* rights protection.

At the same time, while data protection laws increasingly align towards a highly interoperable and global standard, data sovereignty and protectionism are growing too, where transfers to the US remain unprotected from government surveillance.

Some countries are establishing stricter rules on the cross-border transfer of personal information, in the name of digital sovereignty, security or business growth, such as Hessen’s decision on Microsoft and “bring the data home” moves to German-based data centres.

In the big focus on the post-Brexit data-for-trade fire sale, the DCMS appears to be ignoring these risks of data distribution, despite having had a good domestic case study on its doorstep in 2020. The Department for Education has been giving away sensitive pupil data since 2012. Millions of people, including my own children, have no idea where it’s gone. The lack of respect for current law makes me wonder how I will trust that our own government, and those others we trade with, will respect our rights and risks in future trade deals.

Dowden complains in the Telegraph about the ICO that “you don’t know if you have done something wrong until after you’ve done it”. Isn’t that the way enforcement usually works? Should the 2019-20 ICO audit have turned a blind eye to the Department for Education’s failure to prioritise the rights of the people behind the named records of over 21 million pupils? Don’t forget that even gambling companies had access to learners’ records, of which the Department for Education claimed to be unaware. To be ignorant of law that applies to you is a choice.

Dowden claims the changes will enable Britain to set the “gold standard” in data regulation. It’s an ironic analogy to use, since the gold standard, while once a measure of global trust between countries, isn’t used by any country today. Our government sold off our physical gold over 20 years ago, after being the centre of the global gold market for over 300 years. The gold standard is a meaningless thing of the past that sounds good. A true international gold standard existed for fewer than 50 years (1871 to 1914). Why did we even need it? Because we needed a consistent, trusted measure of monetary value, backed by trust in a commodity. “We have gold because we cannot trust governments,” President Herbert Hoover famously said in 1933 in his statement to Franklin D. Roosevelt. The gold standard was all about trust.

At defenddigitalme we’ve very recently been talking with young people about politicians’ use of language in debating national data policy.  Specifically, data metaphors. They object to being used as the new “oil” to “power 21st century Britain” as Dowden described it.

A sustainable national data strategy must respect human rights to be in step with what young people want. It must not go back to old-fashioned data laws only  shaped by trade and not also by human rights; laws that are not fit for purpose even in the current digital environment. Any national strategy must be forward-thinking. It otherwise wastes time in what should be an urgent debate.

In fact, such a strategy is the wrong end of the telescope from which to look at personal data at all. Government should be focussing on the delivery of quality public services to support people’s interactions with the State, and on managing the administrative data that comes out of digital services as a by-product and externality. Accuracy. Interoperability. Registers. Audit. Rights-management infrastructure. Admin data quality is quietly ignored while we package it up hoping no one will notice it’s really. not. good.

Perhaps Dowden is doing nothing innovative at all. If these agreements are to be about admin data given away in international trade deals, he is simply continuing a long tradition of selling off the family silver. The government may have got to the point where there is little left to sell. The question now would be whose family it comes from.

To use another bad metaphor, Dowden is playing with fire here if they don’t fix the issue of the future of trust. Oil and fire don’t mix well. Increased data transfers, without meaningful safeguards including minimised data collection to start with, will increase risk, and transfer that risk to you and me.

Risks of a lifetime of identity fraud are not just minor personal externalities in short term trade. They affect nation state security. Digital statecraft. Knowledge of your public services is business intelligence. Loss of trust in data collection creates lasting collective harm to data quality, with additional risk and harm as a result passed on to public health programmes and public interest research.

I’ll wait and see what the details of the plans are when announced. We might find it does little more than package up recommendations on Codes of Practice, Binding Corporate Rules and other guidance that the EDPB has issued in the last 12 months. But whatever it looks like, so far we are yet to see any intention to put in place the necessary infrastructure of rights management that admin data requires. While we need data registers, those we had have been axed. Few new ones under the Digital Economy Act replaced them. Transparency and controls for people to exercise rights are needed if the government wants our personal data to be part of new deals.

 

img: René Magritte The False Mirror Paris 1929

=========

Join me at the upcoming lunchtime online event, on September 17th from 13:00 to talk about the effect of policy makers’ language in the context of the National Data Strategy: ODI Fridays: Data is not an avocado – why it matters to Gen Z https://theodi.org/event/odi-fridays-data-is-not-an-avocado-why-it-matters-to-gen-z/

The Rise of Safety Tech

At the CRISP-hosted Rise of Safety Tech event this week, the moderator asked an important question: what is Safety Tech? Very honestly, Graham Francis of the DCMS answered, among other things, “It’s an answer we are still finding a question to.”

From ISP level to individual users, and from the limits of mobile phone battery power to app size compatibility, a variety of aspects of the technology were discussed. A wide range of products is conflated and packaged under the same umbrella term. Each can be very different from the others, even within one set of similar applications, such as school Safety Tech.

It worries me greatly that, in the run-up to the Online Harms legislation, their promotion appears to have assumed the character of a done deal. Some of these tools are toxic to children’s rights because of the policy that underpins them. Legislation should not be gearing up to make the unlawful lawful, but to fix what is broken.

The current drive is towards the normalisation of the adoption of such products in the UK, and to make them routine. It contrasts with the direction of travel of critical discussion outside the UK.

Some Safety Tech companies have human staff reading flagged content and making decisions on it, while others claim to use only AI. Both might be subject to any future EU AI Regulation for example.

In the U.S. they also come under more critical scrutiny. “None of these things are actually built to increase student safety, they’re theater,” Lindsay Oliver, project manager for the Electronic Frontier Foundation, was quoted as saying in an article just this week.

Here in the U.K. their regulatory oversight is not only startlingly absent, but the government is becoming deeply invested in cultivating the sector’s growth.

The big questions include who watches the watchers, with what scrutiny and safeguards? Is it safe, lawful, ethical, and does it work?

Safety Tech isn’t only an answer we are still finding a question to. It is a world view, with a particular value set. Perhaps the only lens through which its advocates believe the world wide web should be seen, not only by children, but by anyone. And one that the DCMS is determined to promote with “the UK as a world-leader” in a worldwide export market.

As an example, one of the companies the DCMS champions in its May 2020 report, “Safer technology, safer users”, already claims to export globally. eSafe Global now provides a service to about 1 million students and schools throughout the UK, UAE, Singapore and Malaysia, and has been used in schools in Australia since 2011.

But does the Department understand what they are promoting? The DCMS Minister responsible, Oliver Dowden said in Parliament on December 15th 2020: “Clearly, if it was up to individuals within those companies to identify content on private channels, that would not be acceptable—that would be a clear breach of privacy.”

He’s right. It is. And yet he and his Department are promoting it.

So how is this going to play out, if at all, in the Online Harms legislation expected soon, which he owns together with the Home Office? Sadly, the needed level of understanding from the Minister, in the third sector and in much of the policy debate in the media is not only missing, but actively suppressed by the moral panic whipped up in emotive personal stories around a Duty of Care and social media platforms. Discussion is siloed into identifying CSAM, or grooming, or bullying, or self-harm, and actively ignores the joined-up, wider context within which Safety Tech operates.

That context is the world of the Home Office. Of anti-terrorism efforts. Of mass surveillance and efforts to undermine encryption that are nearly as old as the Internet. The efforts to combat CSAM or child grooming online operate in the same space. WePROTECT, for example, sits squarely amid it all, established in 2014 by the UK Government and the then UK Prime Minister, David Cameron. UK breaches of human rights law are well documented in ECHR rulings. Other state members of the alliance, including the UAE, stand accused of buying spyware to breach activists’ encrypted communications. It is disingenuous for any school Safety Tech actors to talk only of child protection without mention of this context. School Safety Tech products, while all different, operate by tagging digital activity with categories of risk, and these tags can include terrorism and extremism.

Once upon a time, school filtering and blocking services meant only denying access to online content that had no place in the classroom. Now it can mean monitoring all the digital activity of individuals, online and offline, on school or personal devices, working around encryption, whenever they are connected to the school network. And it’s not all about in-school activity. No matter where a child’s account is connected to the school network, or who is actually using it, the activity might be monitored 24/7, 365 days a year. Activity that matches any of the thousands of words or phrases on watchlists and in keyword libraries gets logged, profiles individuals with ‘vulnerable’ behaviour tags, and sometimes creates alerts. Their scope has crept from flagging up content, to flagging up children. Some schools create permanent records, including false positives, because they retain everything in a risk-averse environment, even things a child typed and subsequently deleted; these records may be distributed to and accessible by an indefinite number of school IT staff, and stored in further third-party systems like CPOMS or Capita SIMS.
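
To make that mechanism concrete, here is a minimal, purely illustrative sketch of keyword-library flagging of the kind described above. The categories, library entries and the scan function are my own invented assumptions; no real vendor’s keyword lists or matching logic are represented, and real products are far more complex.

```python
# Illustrative sketch only: a simplified keyword-flagging loop of the kind
# described above. Categories, terms and thresholds are invented; no real
# vendor's product is represented.
from dataclasses import dataclass, field
from datetime import datetime

KEYWORD_LIBRARY = {
    "self_harm": {"cliff walks", "overdose"},       # hypothetical entries
    "gang_related": {"black rhinos", "postcode"},   # shows how innocuous terms collide
}

@dataclass
class Flag:
    user_id: str
    category: str
    matched_term: str
    captured_text: str        # note: the full typed text is retained
    timestamp: datetime = field(default_factory=datetime.utcnow)

def scan(user_id: str, typed_text: str) -> list[Flag]:
    """Return a Flag for every library term found in the captured text,
    regardless of whether the user later deletes or never sends it."""
    text = typed_text.lower()
    return [
        Flag(user_id, category, term, typed_text)
        for category, terms in KEYWORD_LIBRARY.items()
        for term in terms
        if term in text
    ]

# A search for a school geography project triggers a 'self_harm' flag:
print(scan("pupil-123", "Best cliff walks near Dover for our field trip"))
```

The point of the sketch is that matching is purely lexical: the flag, and the full captured text, persist in the profile whether or not the inference was right.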

A wide range of the rights of the child are breached by mass monitoring in the UK, such as outlined in the UN Committee on the Rights of the Child General Comment No.25 which states that, “Any digital surveillance of children, together with any associated automated processing of personal data, should respect the child’s right to privacy and should not be conducted routinely, indiscriminately or without the child’s knowledge or, in the case of very young children, that of their parent or caregiver; nor should it take place without the right to object to such surveillance, in commercial settings and educational and care settings, and consideration should always be given to the least privacy-intrusive means available to fulfil the desired purpose.” (para 75)

Even the NSPCC, despite its recent public policy opposing secure messaging using end-to-end encryption, recognises on its own Childline webpage the risk for children from content monitoring of children’s digital spaces, and that such monitoring may make them less safe.

In my work in 2018, one school Safety Tech company accepted our objections from defenddigitalme that this monitoring went too far in its breach of children’s confidentiality and safe spaces, and it agreed to stop monitoring counselling services. But there are roughly fifteen active companies here in the UK, and the data protection regulator, the ICO, despite being publicly so keen to be seen to protect children’s rights, has declined to act to protect children from the breach of their privacy and data protection rights across this field.

There are questions that should be straightforward to ask and answer, and while some CEOs are more willing than others to engage constructively with criticism and ideas for change, there is reluctance to address the key question: what is the lawful basis for monitoring children in school, at home, inside or outside school hours?

Another important question, often without an answer, is how these companies train their algorithms, whether in age verification or child safety tech. How accurate are the language inferences of an AI designed to catch out children who are being deceitful, and where are its assumptions, machine- or man-made, wrong or discriminatory? It is overdue that our regulator, the ICO, should do what the FTC did with Paravision, and require companies that develop tools through unlawful data processing to delete the output from it: the trained algorithm, plus products created from it.
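
Accuracy claims also need to be read against the scale of the monitoring. As a back-of-envelope illustration only, with every figure assumed rather than taken from any vendor, even a tool that wrongly flags just one in a thousand benign items produces an enormous absolute number of mistaken flags across a large monitored population:

```python
# Purely illustrative arithmetic; all figures are assumptions, not vendor data.
monitored_pupils = 1_000_000        # assumed size of a monitored population
items_per_pupil_per_year = 10_000   # assumed searches/messages scanned per pupil
false_positive_rate = 0.001         # assumed: 99.9% of benign items pass cleanly

false_flags_per_year = monitored_pupils * items_per_pupil_per_year * false_positive_rate
print(f"{false_flags_per_year:,.0f} mistaken flags per year")  # 10,000,000 on these assumptions
```

Each of those mistaken flags is a ‘cliff walks’ or ‘black rhinos’ moment attached to a real child’s record.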

Many of the harms from profiling children were recognised by the ICO in the Met Police gangs matrix: discrimination, conflation of victim and perpetrator, notions of ‘pre-crime’ without independent oversight,  data distributed out of context, and excessive retention.

Harm is, after all, why profiling of children should be prohibited. And where, in exceptional circumstances, States may lift this restriction, it is conditional on appropriate safeguards being provided for by law.

While I believe any of the Safety Tech-generated category profiles could be harmful to a child, through mistaken interventions, being treated differently by staff as a result, or damage to a trusted relationship, perhaps the most potentially devastating to a child’s prospects are mistakes that could be made under the Prevent duty.

The UK Home Office has pushed its Prevent agenda through schools since 2015, and it has been built into school Safety Tech by design. I know of schools where terrorism-related flags are attached to children’s records even though there has been no Prevent referral. But there is no transparency of these numbers at all. There is no oversight to ensure children do not stay wrongly tagged with those labels. Families may never know.

Perhaps the DCMS needs to ask itself, are the values of the UK Home Office really what the UK should export to children globally from “the UK as a world-leader” without independent legal analysis, without safeguards, and without taking accountability for their effects?

The Home Office’s values are demonstrated in its approach to the life and death of migrants at sea, to children with no recourse to public funds, and to discriminatory stop and search; a Department that doesn’t care enough even to understand or publish the impact of its interventions on children and their families.

The Home Office talk is of safeguarding children, but it is opposed to them having safe spaces online. School Safety Tech tools actively work around children’s digital security, can act as a man-in-the-middle, and can create new risks. I have seen no evidence that, on balance, convinces me that school Safety Tech does in fact make children safer. But there is plenty of evidence that the Home Office appears to want to create the conditions in which such tools can thrive, making children less secure by weakening the security of digital activity through its assault on end-to-end encryption. My question is whether Online Harms is to be the excuse to give it a lawful basis.

Today there are zero statutory transparency obligations, testing or safety standards required of school Safety Tech before it can be procured in UK state education at scale.

So what would a safe and lawful framework for operation look like? It would be open to scrutiny and require regulatory action, and law.

There are no published numbers of how many records are created about how many school children each year. There are no safeguards in place to protect children’s rights, or to protect them from harm in terms of false positives, error retention, transfer of records to the U.S. or to third-party companies, or the covert photos these tools have enabled school staff to take of children via webcam. There is no equivalent of the medical device ‘foreseeable misuse risk assessment’ that ISO 14971 would require, despite systems being used for mental health monitoring with suicide risk flags. Children need to know what is on their record and to be able to seek redress when it is wrong. The law would set boundaries and safeguards, and both existing and future law would need to be enforced. And we need independent research on the effects of school surveillance, and its chilling effects on the mental health and behaviour of developing young people.

Companies may argue they are transparent, and seek to prove how accurate their tools are. Perhaps they may become highly accurate.

But no one in the school Safety Tech sector is yet willing to say: these are the thousands of words that, if your child types them, may trigger a flag; or indeed, here is an annual report of all the triggered flags and your own or your child’s saved profile. A school’s interactions with children’s social care already offer a framework for dealing with information that could put a child at risk from family members, so reporting should be doable.

At the end of the event this week, the CRISP event moderator said of their own work, outside schools, that, “we are infiltrating bad actor networks across the globe and we are looking at everything they are saying. […] We have a viewpoint that there are certain lines where privacy doesn’t exist anymore.”

Their company website says their work involves “uncovering and predicting the actions of bad actor, activist, agenda-driven and interest groups”. That’s a pretty broad conflation right there. Their case studies include countering social media activism against a luxury apparel brand. And their legal basis of ‘legitimate interests’ for their data processing might seem flimsy at best, for such a wide-ranging surveillance activity where ‘privacy doesn’t exist anymore’.

I must often remind myself that the people behind Safety Tech may epitomise the very best of what some believe is making the world safer online, as they see it. But it is *as they see it*. And if policy makers or CEOs have convinced themselves that because ‘we are doing it for good, for social impact, or to safeguard children’ breaking the law is OK, then it should be a red flag that these self-appointed ‘good guys’ appear to think themselves above the law.

My takeaway time and time again, is that companies alongside governments, policy makers, and a range of lobbying interests globally, want to redraw the lines around human rights, so that they can overstep them. There are “certain lines” that don’t suit their own business models or agenda. The DCMS may talk about seeing its first safety tech unicorn, but not about the private equity funding, or where they pay their taxes. Children may be the only thing they talk about protecting but they never talk of protecting children’s rights.

In the school Safety Tech sector, there is activity that I believe is unsafe, or unethical, or unlawful. There is no appetite or motivation so far to fix it. If, in upcoming Online Harms legislation, the government seeks to make lawful what is unlawful today, I wonder who will be held accountable for the unsafe and the unethical that come with the package deal, and whether the Minister will run that reputational risk.


Mutant algorithms, roadmaps and reports: getting real with public sector data

The CDEI has published ‘new analysis on the use of data in local government during the COVID-19 crisis’ (the Report), and its discussion of data has some similarities with the Office for AI roadmap (the Roadmap) published in January on machine learning.

A notable feature is that the CDEI work includes a public poll. Nearly a quarter of 2,000 adults said that the most important thing for them, to trust the council’s use of data, would be “a guarantee that information is anonymised before being shared, so your data can’t be linked back to you.”

Both the Report and the Roadmap shy away from or avoid that problematic gap in their conclusions, between public expectations and reality in the application of data used at scale in public service provision, especially in identifying vulnerability and risk prediction.

Both seek to provide vision and aims around the future development of data governance in the UK.

The fact is that everyone must take off their rose-tinted spectacles on data governance to accept this gap, and get the basics fixed in existing practice to address it. In fact, as academic Michael Veale wrote, often the public sector is looking for the wrong solution entirely: “The focus should be on taking off the ‘tech goggles’ to identify problems, challenges and needs, and to not be afraid to discover that other policy options are superior to a technology investment.”

But as things stand, public sector procurement and use of big data at scale, whether in AI and Machine Learning or other systems, require significant changes in approach.

The CDEI poll asked: “If an organisation is using an algorithmic tool to make decisions, what do you think are the most important safeguards that they should put in place?” 68% rated “that humans have a key role in overseeing the decision-making process, for example reviewing automated decisions and making the final decision” in their top three safeguards.

So what is this post about? Why our arm’s length bodies’ and various organisations’ work on data strategy is hindering the attainment of the goals they claim to promote, and what needs fixed to get back on track. Accountability.

Framing the future governance of data

On Data Infrastructure and Public Trust, the AI Council Roadmap stated an ambition to, “Lead the development of data governance options and its uses. The UK should lead in developing appropriate standards to frame the future governance of data.”

To suggest not only that we should be a world leader but that we have the capability to be one suggests a disconnect with current reality, none of which was mentioned in the Roadmap, though it is drawn out a little more in the CDEI Report from local authority workshops.

When it comes to data policy and Artificial Intelligence (AI) or Machine Learning (ML) based on data processing, and therefore dependent on its infrastructure, suggesting we should lead on data governance as if it were separate from the existing standards and frameworks set out in law would be disastrous for the UK and the businesses in it. Exports need to meet standards in the receiving countries. You cannot just ‘choose your own adventure’ here.

The CDEI Report says both that participants in their workshops found a lack of legal clarity “in the collection and use of data” and, “Participants finished the Forum by discussing ways of overcoming the barriers to effective and ethical data use.”

Lack of understanding of the law is a lack of competence and capability that I have seen and heard time and again over the last five years, in participants at workshops, events and webinars, some of whom are in charge of deciding what tools are procured and how public policy is implemented using administrative data. The law on data processing is accessible and generally straightforward.

If your work involves “overcoming barriers”, then either there is not the competence to understand what is lawful and to proceed with confidence, applying data protections appropriately, or you are trying to avoid doing so. Neither is a good place for public authorities to be in, and both bode badly for the safe, fair, transparent and lawful use of our personal data by them.

But it is also a lack of data infrastructure that increases the skills gap and leaves a bigger need to know what is lawful or not, because if your data is held in an “excessive use of excel spreadsheets”, then ‘sharing’ means making decisions about distributing copies of the data. Data access can be more easily controlled through role-based access models, which make it clear when someone is working outside their assigned security role and create an audit trail of access. You reduce risk by distributing access, not by distributing data.
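
As a minimal sketch of that idea, under my own assumed roles, dataset names and log fields (nothing here reflects any authority’s actual system), access is granted or refused in place rather than by sending out copies, and every attempt is recorded:

```python
# A minimal sketch of role-based access with an audit trail, as described above.
# Roles, datasets and names are invented for illustration only.
from datetime import datetime

ROLE_PERMISSIONS = {
    "social_worker": {"case_notes"},
    "analyst": {"aggregate_stats"},
}

audit_log: list[dict] = []

def request_access(user: str, role: str, dataset: str) -> bool:
    """Grant or refuse access in place, rather than sending a copy of the data,
    and record every attempt so out-of-role requests are visible."""
    allowed = dataset in ROLE_PERMISSIONS.get(role, set())
    audit_log.append({
        "when": datetime.utcnow().isoformat(),
        "user": user,
        "role": role,
        "dataset": dataset,
        "granted": allowed,
    })
    return allowed

request_access("j.smith", "analyst", "case_notes")       # refused, but logged
request_access("a.khan", "social_worker", "case_notes")  # granted, and logged
print(audit_log)
```

The design point is the audit trail: a spreadsheet emailed onwards leaves no record of who looked at it, whereas controlled access does.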

The CDEI Report quotes as a ‘concern’ that data access granted under emergency powers in the pandemic will be taken away. This is a mistaken view that should be challenged. That access was *always* conditional and time limited. It is not something that will be ‘taken away’ but an exceptional use only granted because it was temporary, for exceptional purposes in exceptional times. Had it not been time limited, you wouldn’t have had access. Emergency powers in law are not ‘taken away’, but can only be granted at all in an emergency. So let’s not get caught up in artificial imaginings of what could change and what ifs, but change what we know is necessary.

We would do well to get away from the hyperbole of being world-leading, and aim for a minimum high standard of competence and capability in all staff who have any data decision-making roles and invest in the basic data infrastructure they need to do a good job.

Appropriate standards to frame the future governance of data

The AI Council Roadmap suggested that, “The UK should lead in developing appropriate standards to frame the future governance of data.”  Let’s stop and really think for a minute, what did the Roadmap writers think they meant by that?

Because we have law that frames ‘appropriate standards.’ The UK government just seems unable or unwilling to meet it. And not only in these examples; in fact, I’d challenge all the business owners on the AI Council to prove their own products meet it.

You could start with the Guidelines on Automated individual decision-making and Profiling for the purposes of Regulation 2016/679 (wp251rev.01). Or consider any of the policy, recommendations, declarations, guidelines and other legal instruments issued by Council of Europe bodies or committees on artificial intelligence. Or, valuable for export standards, ensure respect for the Convention 108 standards, to which we are a signed-up State party among more than 50 countries, and growing. That’s all before the simplicity of the UK Data Protection Act 2018 and the GDPR.

You could start with auditing current practice for lawfulness. The CDEI Report says, “The CDEI is now working in partnership with local authorities, including Bristol City Council, to help them maximise the benefits of data and data-driven technologies.” I might suggest that includes a good legal team, as I think the Council needs one.

The UK is already involved in supporting the development of guidelines (as I was, alongside UK representatives of government and the data regulator, the ICO, among hundreds of participants in drawing out the Convention 108 Guidelines on data processing in education), but to suggest as a nation state that we have the authority to speak on the future governance of data, without acknowledging what we should already be doing and where we get it wrong, is an odd place to start.

The current state of reality in various sectors

Take for example the ICO audit of the Department for Education.

Failures to meet basic principles of data protection law include not knowing what data they’ve got, lacking appropriate controls on distribution, and failure to process fairly (to tell people you process their data). This is no small stuff. And these are only highlights from the eight-page summary.

The DfE doesn’t adequately understand what data it holds, and not having a record of processing is a direct breach of #GDPR. Did you know the Department is not able to tell you to which third parties your own or your child’s sensitive, identifying personal data (from over 21m records) was sent, among thousands of releases?

The approach to data releases has been to find a way to fit the law to suit data requests, rather than to assess whether data distribution should be approved at all. This ICO assessment covered only 400 applications; closer to 2,000 have been approved since 2012. One refusal was to the US. Another, the MOD.


For too long, the DfE’s ‘internal cultural barriers and attitudes’ have meant it hasn’t cared about your rights and freedoms, or about meeting its lawful obligations. This is a national government Department in charge of over fifty such mega databases, of which the NPD is only one. This is a systemic and structural set of problems, the direct result of Ministerial decisions that changed the law in 2012 to give away our personal data from state education. It was a choice not to tell the people whom the data were about. This continues to be in breach of the law. And it is the same across many government departments.

Why does it even matter some still ask? Because there is harm to people today. There is harm in history that must not be possible to repeat. And some of the data held could be used in dangerous ways.

You only need to glance at other applications in government departments and public services to see bad policy, bad data and bad AI or machine learning outcomes. All of those lead to breakdowns in trust and relations between people and the systems meant to support them, which in turn lead to bad data, and bad policy.

Unless government changes its approach, the direction of travel is towards less trust; in public health, for example, we see the consequences in disastrous responses, from people not attending for vaccination based on mistrust of proven data sharing, to COVID conspiracy theories.

Commercial reuse of public admin data is a huge mistake, and the direction of travel is damaging.

“Survey responses collected from more than 3,000 people across the UK and US show that in late 2018, some 95% of people were not willing to share their medical data with commercial industries. This contrasts with a Wellcome study conducted in 2016 which found that half of UK respondents were willing to do so.” (July 2020, Imperial College)

Mutant algorithms

Summer 2020 first saw no human accountability for grades “derailed by a mutant #algorithm”, then the resignation of two Ofqual executives. What aspects of the data governance failures will be addressed this year? Where’s the *fairness*? There is a legal duty to tell people how their data is used, especially in its automated aspects.

Misplaced data and misplaced policy aims

In June 2020 the DWP argued in a court case that “to change the way the benefit’s online computer calculation system worked in line with the original court ruling would undermine the principle of universal credit”. Not only does it fail its public interest purpose and do harm, but it is lax on its own #data governance controls. World-leading is far, far, far away.

Entrenched racism

In August 2020, “The Home Office [has] agreed to stop using a computer algorithm to help decide visa applications after allegations that it contained ‘entrenched racism’.” How did it ever get approved for use?

That entrenched racism is found in policing too. The Gangs Matrix’s use of data required an Enforcement Notice from the ICO, and how it continues to operate at all, given its recognised discrimination and harm to young lives, is shocking.

Policy makers seem fixated on quick fixes that for the most part exist only in the marketing speak of the sellers of the products, while ignoring real problems in ethics and law, and denying harm.

“Now is a good time to stop.”

The most obvious case for me, where the Office for AI should step in, and where the CDEI Report from workshops with Local Authorities was most glaringly remiss, is where there is evidence of failure of efficacy and proven risk of danger to life through the procurement of technology in public policy. Don’t forget to ask what doesn’t work.

In January 2020, researchers at The Turing Institute, the Rees Centre and the What Works Centre published a report on ethics in Machine Learning in Children’s Social Care (CSC), raising the “dangerous blind spots” and “lurking biases” in the application of machine learning in UK children’s social care; totally unsuitable for life and death situations. Its later evidence showed models that do not work, or would not reach the threshold they set for defining ‘success’.

Of the thirty-four councils which said, in a December 2020 Local Government survey, that they had acute difficulties in recruiting children’s social workers, 50 per cent said they had both difficulty recruiting generally and difficulty recruiting the required expertise, experience or qualifications. Can staff in such challenging circumstances really have the capacity to understand the limitations of developing technology on top of their everyday expertise?

And when it comes to focusing on the data, there are problems too. By focusing on the data held, and using only that to make policy decisions rather than on-the-ground expertise, we end up in situations where only “those who get measured, get helped”.

As Michael Sanders wrote, on CSC, “Now is a good time to stop. With the global coronavirus pandemic, everything has been changed, all our data scrambled to the point of uselessness in any case.”

There is no short cut

If the Office for AI Roadmap is to be taken seriously outside its own bubble, the board needs to be, and be seen to be, independent of government. It must engage with the reality of applied AI in public services, getting the basics fixed first. Otherwise all its talk of “doubling down”, and suggestions that the UK government can build public trust and position the UK as a ‘global leader’ on data governance, is misleading and a waste of everyone’s time and capacity.

I appreciate that it says, “This Roadmap and its recommendations reflects the views of the Council as well as 100+ additional experts.” All of whom I imagine are more expert than me. If so, which of them is working on fixing the basic underlying problems with data governance within public sector data, how and by when? If they are not, why are they not, and who is?

The CDEI report published today identified in local authorities that “public consultation can be a ‘nice to have’, as it often involves significant costs where budgets are already limited.” If the CDEI does not say that position is flawed, it may as well pack up and go home. On page 27 it reports, “When asked about their understanding of how their local council is currently using personal data and presented with a list of possible uses, 39% of respondents reported that they do not know how their personal data is being used.” The CDEI should be flagging this with a great big red pen as an indicator of unlawful practice.

The CDEI Report also draws on the GDS Ethical Framework, but that will be forever flawed as long as its own users, not the used, are the principal focus underpinning its aims. It starts with “Define and understand public benefit and user need.” There’s very little about ethics, and much more about “justifying our project”.

The Report did not appear to have asked the attendees what impact they think their processes have on everyday lives, and social justice.

Without fixes in these approaches, we will never be world-leading, but will lag behind, because we haven’t built the safe infrastructure necessitated by our vast public administrative data troves. We must end bad data practice, which includes getting right the basic principles of retention, data minimisation and security (all of which would be helped if we started by reducing those ‘vast public administrative data troves’, much of which ranges from poor to abysmal data quality anyway). Start proper governance and oversight procedures. And put in place all the communication channels, tools, policy and training needed to make telling people how data are used, and fair processing, happen. It is not a ‘nice to have’, but is required by data processing laws around the world.

Any genuine “barriers” to data use in data protection law are designed as protections for people; the people the public sector, its staff and these arm’s length bodies are supposed to serve.

Blaming algorithms, blaming lack of clarity in the law, blaming “barriers” is avoidance of one thing. Accountability. Accountability for bad policy, bad data and bad applications of tools is a human responsibility. The systems you apply to human lives affect people, sometimes forever and in the most harmful ways.

What would I love to see led from any of these arms length bodies?

  1. An audit of existing public admin data held, by national and local government, and consistent published registers of databases and algorithms / AI / ML currently in use.
  2. Expose where your data system is nothing more than excel spreadsheets and demand better infrastructure.
  3. Identify the lawful basis for each set of data processes, their earliest record dates and content.
  4. Publish that resulting ROPA and the retention schedule (a minimal sketch of a register entry follows this list).
  5. Assign accountable owners to databases, tools and the registers.
  6. Sort out how you will communicate with people whose data you unlawfully process to meet the law, or stop processing it.
  7. And above all, publish a timeline for data quality processes and show that you understand how the degradation of data accuracy, quality, and storage limitations all affect the rights and responsibilities in law that change over time, as a result.
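
For points 1 and 3 to 5, a published register entry does not need to be complicated. The following is a hedged sketch of what one entry might record; every field name and value here is an illustrative assumption, not any department’s actual schema:

```python
# A minimal sketch of one entry in a published register of databases / ROPA.
# Field names and values are illustrative assumptions only.
from dataclasses import dataclass
from datetime import date

@dataclass
class RegisterEntry:
    database_name: str            # point 1: the database, named in the public register
    accountable_owner: str        # point 5: a named, accountable owner
    lawful_basis: str             # point 3: the lawful basis for processing
    earliest_record: date         # point 3: how far back the records go
    data_categories: list[str]    # what is held, including special category data
    recipients: list[str]         # to whom data has been distributed
    retention_until: date         # point 4: the published retention schedule
    uses_automated_tools: bool    # point 1: flag any algorithmic / ML use

entry = RegisterEntry(
    database_name="example_admin_database",
    accountable_owner="named.official@example.gov.uk",
    lawful_basis="public task (UK GDPR Art. 6(1)(e))",
    earliest_record=date(2012, 1, 1),
    data_categories=["identifiers", "attainment", "special category: health"],
    recipients=["named third parties, published per release"],
    retention_until=date(2030, 1, 1),
    uses_automated_tools=False,
)
```

None of this is technically hard; the missing ingredient is the will to publish it and keep it accurate.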

There is no short cut to doing a good job; only to doing a bad one.

If organisations and bodies are serious about “good data” use in the UK, they must stop passing the buck and spreading the hype. Let’s get on with what needs fixed.

In the words of Gavin Freeguard, then let’s see how it goes.

Is the Online Harms ‘Dream Ticket’ a British Green Dam?

The legal duty in Online Harms government proposals is still vague.

For some it may sound like the “dream ticket”: a framework of censorship to be decided by companies, enabled through the IWF and the government in Online Safety laws. And ‘free’ to all. What companies are already doing today in surveillance of all outgoing and *incoming* communications, which is unlawful, made lawful. Literally, the nanny state could decide what content will be blocked, if “such software should ‘absolutely’ be pre-installed on all devices for children at point of sale” and “…people could run it the other side to measure what people are doing as far as uploading content.”

From Parliamentary discussion it was clear that the government will mandate platforms, “to use automated technology…, including, where proportionate, on private channels,” even when services are encrypted.

No problem, others might say, there’s an app for that. “It doesn’t matter what program the user is typing in, or how it’s encrypted.”

But it was less clear in the consultation outcome updated yesterday, from the consultation that closed in July 2019, which still says, “we are consulting on definitions of private communications, and what measures should apply to these services.” (4.8)

Might government really be planning to impose or incentivise surveillance on [children’s] mobile phones at the point of sale in the UK? This same ‘dream ticket’ company was the only one mentioned by the Secretary of State for DCMS yesterday. After all, it is feasible: in 2009 Chinese state media reported that the Green Dam Youth Escort service was installed in 20 million computers in internet cafes and schools alone.

If government thinks it would have support for such proposals, it may have overlooked the outrage that people feel about companies prying into our everyday lives. Or it has already forgotten the summer 2020 student protests over the ‘mutant algorithm’.

There is, conversely, already incidental harm, and there are opaque error rates, from profiling UK children’s behaviour while monitoring their online and offline computer activity, logged against thousands of words in opaque keyword libraries. School safeguarding services are already routine in England, and are piggy-backed by the Prevent programme. Don’t forget that one third of referrals to Prevent come from education, and over 70% are not followed through with action. Your child and mine might already be labelled with ‘extremism’, ‘terrorism’, ‘suicide’ or ‘cyberbullying’, or have had their photos taken by the webcam of their device an unlimited number of times, thanks to some of this ‘safeguarding’ software and these services, and the child and parents never know.

Other things that were not clear yesterday, but will matter, include whether the ‘harm’ of the Online Harms proposals will be measured by intent, or by the response to it. What is harm or hate or not is contested across different groups online, and weaponised, at scale.

The wording of the Law Commission consultation on communications offences, closing on Friday, also matters. It asks about intention to harm a likely audience, where harm is defined as any non-trivial emotional, psychological, or physical harm, but should not require proof of actual harm. This, together with any changes on hate crime and on intimate images, in effect proposes changes to ‘what’ can be said, how, and ‘to whom’, and to what is considered ‘harmful’ or ‘hateful’ conduct. It will undoubtedly have massive implications for the digital environment once all joined up. It matters when online ‘culture wars’ can catch children in the crossfire.

I’ve been thinking about all this, against the backdrop of the Bell v Tavistock [2020] EWHC 3274 judgement with implications from the consideration of psychological harm, children’s evolving capacity, the right to be heard and their autonomy, a case where a parent involved reportedly has not even told their own child.

We each have a right to respect for our private life, our family life, our home and our correspondence. Children are rights holders in their own right. Yet it appears the government and current changes in lawmaking may soon interfere with that right in a number of ways, while children are used at the heart of everyone’s defence.

In order to find that an interference is “necessary in a democratic society” any interference with rights and freedoms should be necessary and proportionate for each individual, not some sort of ‘collective’ harm that permits a rolling, collective interference.

Will the proposed outcomes prevent children from exercising their views or full range of rights, and restrict online participation? There may be a chilling effect on speech. There is in schools. Sadly these effects may well be welcomed by those who believe not only that some rights are more equal than others, but some children, more than others.

We’ll have to wait for more details. As another MP in debate noted yesterday, “The Secretary of State rightly focused on children, but this is about more than children; it is about the very status of our society ….”