All posts by jenpersson

The contest and clash of child rights and parent power

What does the U.S. election outcome mean for education here? One aspect is that while the ‘Christian right’ in the UK may not be as powerful as its US counterpart, it still exerts influence on public policy. While far from new, it has become more prominent in parliament since the 2019 election. But even in 2008, Channel 4 Dispatches broadcast an investigation into the growth of Christian fundamentalism in the UK. The programme, “In God’s Name”, highlighted the political lobbying by pro-life groups behind changes to tighten abortion law in the Human Fertilisation and Embryology Bill, including the work between their then key lobbyist and the MP Nadine Dorries.

The programme highlighted the fears of some of their members based on the “great replacement” conspiracy theory: that the rising power of Islam from the East is replacing Christianity in the West. And it also showed how the U.S.-based ADF was funding UK strategic litigation to challenge and change UK laws, including in McClintock v Department of Constitutional Affairs [2008].

The work of Sian Norris today highlights why this U.S. election result is likely to see more of all of that over here. As the rights environment moves towards an ever greater focus on protection and protectionism, I make the case why this is all relevant for the education sector in England, and why we must far better train and support school staff in practice to manage competing sources of authority, interests and rights.


Child rights supported by parent power

Over the last ten years, since I began working in this field, there has been a noticeable shift in public discourse in the UK parliament and media around child rights, shaping public policy. It is visible in the established print, radio and TV media. In social media. It is in the language used, the funding available, and the parliamentary space and time taken up by new stakeholder groups and individuals, crowding out more moderate or established voices. On one hand, this is greater pluralism and democracy in action. On the other, where the organisation is orchestrated, are the aims and drivers transparent and in the public interest?

When it comes to parents, those behind many seemingly grassroots, small-p “parent power” groups are opaque, often with large, well-funded and frequently U.S.-based organisations behind them.

The challenge for established academics and think tanks in this closed and crowded policy advisory space is that these new arrivals, astroturf ‘grassroots’ groups and offshoots from existing ones, bring with them loud voices who co-opt the language of child rights and who are adept in policy and media spaces previously given to expert, evidence-based child rights academics.

Emerging voices are given authority by a very narrow group of parliamentarians, and are lent support by institutional capture, through a growing number of individuals embedded from industry or with conservative religious views hired into positions of authority. As a result, there is a shift in the weight given to views and opinions compared with facts and research, and cherry-picked evidence informs institutional positions and consultations.

The new players bring no history of being interested in children’s rights. In fact, many act in opposition to equality rights, or access to information, and appear more interested in control of children than universal human rights and fundamental freedoms. The shift in the balance of discussion on child rights towards child protection above all else is happening not only in the UK but in mainland Europe, the U.S. and Australia, the latest to plan a ban on under-16s’ access to social media.

Whose interests do these groups really serve, packaged as they are in the language of child rights?

Taking back parent and teacher control

Two parallel arguments have grown in the public sphere. The first is that authority must be taken away from parents and teachers and returned to the State, over fears of loss of parental control of children’s access to information and of children’s ‘safety’, including calls for state-imposed bans on mobile phones for children or enforced parental surveillance tools. At the same time, parents want fewer state interventions. Arguments include that, “over the last few years the State has been assuming ever greater control, usurping the rights of parents over their children.”

The political football of the day seems to move regularly from ‘ban mobile phones in schools‘, or altogether, to the content of classroom materials: ‘give parents a right to withdraw children from access to sex ed and relationships teaching’ (RSE, not biology). But perhaps more important even than the substance is that the essence of what the Brexit vote tapped into, a sense that Big Tech and the State, ‘others’, interfere with everyday life in ways from which people want to ‘take back control’, is not going away.

Opening up classroom content opens a can of worms

The challenge for teachers plays out in their schools every day. Parents have a right to request that their child is withdrawn from sex education, but not from relationships education. In 2023, the DfE published refreshed guidance saying, “parents should be able to see what their children are being taught in RSHE lessons. Schools must share teaching materials with parents.”

I often argue that there is too little transparency and parental control over what is taught and how, and that parents should be able to see what is being taught and its sources, not with regard to RSE but when it comes to edTech. We need a more open classroom when it comes to content from companies of all kinds.

But this also means addressing how far the rights of parents and the rights of the child complement or compete with one another, when it comes to Article 26(3) of the UDHR on education: “Parents have a prior right to choose the kind of education that shall be given to their children.” And how does this affect teachers’ agency and authority?

These clashes are starting to overlap, from a troubling lack of ethical oversight of intrusive national pupil data gathering exercises in England and in Scotland, both of which have left parents furious, to the data grab planned from GPs in Wales. Complaints will without a doubt become louder and more widespread, and public trust will be lost.

When interests are contested and not aligned, who decides what is in a child’s best interests for their protection in a classroom?

When does the public interest kick in, alongside individual interests, in the public good from having children attend school and present to health services, and how are collective losses taken into account?

In the law today, responsibility for fulfilling a child’s right to education rests with parents, not schools. So what happens when decisions by schools interfere with parents’ views? When I think about children in the context of AI, automated decisions and design in edTech shaping child development, I think about child protection from strangers engineering a child’s development in closed systems.  It matters to protect a child from an unknown and unlimited number of persons interfering with who they will become.

But even the most basic child protections are missing in the edtech environment today without any public sector standards or oversight. I might object to the school about a product. My child might have a right to object in data protection law. But in practice, objection is impossible to exercise.

The friction this creates is going to grow and there is no good way to deal with it right now. Because the education sector is being opened up to a wider range of commercial, outside parties, it is also being opened up to the risks and challenges that brings. It can no longer be something put in the box marked ‘too difficult’ but needs attention.

The backlash will only grow if the sense of ‘overreach’ continues.

Built-in political and cultural values

The values coming over here from the U.S. are not only arriving through parents’ grassroots groups, the religious right, or anti-LGBTQ voices in media of all kinds, but directly in the classroom, embedded into edTech products. The values underpinning AI or any other technology used in the classroom are opaque because the people behind the product are usually hidden. We cannot therefore separate the products from their designers’ politics. If those products are primarily U.S.-made, then it is unsurprising if the values of their education and political systems are those embedded into their pedagogy. Many of these seem less about the UNCRC Article 29 aims of education, and far more about purposes of education centred on creating human capital via, “an emphasis on the knowledge economy that can reduce both persons and education to economic actors and be detrimental to wider social and ethical goals.”

This is nothing new.

In 2013, Michael Gove gave a keynote speech in the U.S. to the National Summit for Education Reform, an organisation set up by Governor Jeb Bush. He talked about edTech too, and the knowledge economy of education, and needing “every pair of hands” to “rebuild our economies”. Aside from normalising the acceptance of ‘badging’ children in the classroom with failure (32:15) (“rankings of the students in the test were posted with the students name with colour codes… and some of the lower performers would wear a sticker on a ribbon with the colour code of their performance“), he also shared a view with echoes of the “great replacement theory” that, “the 20th century may be the last American Century we face the fact that the West and the values that we associate with it, liberalism, openness, decency, democracy, the rule of law, risks being eclipsed by a Rising Sun from the East.” We could well ask, whose flavour of ‘liberalism’ is that?

The fight for or against a progressive future

Today, anti-foreign, anti-abortion, and pro-natalist, pro-conservative Christian values all meet in a Venn diagram of organisations pushing to undermine classically liberal aspects of teaching in England’s education system. And before this sounds a bit extreme, consider how these conspiracy theories and polarised views have been normalised. Listen (25:00) to the end of the discussion on “the nation state” at the 2023 NatCon UK Conference, co-badged with the Edmund Burke Foundation. Becoming a parent is followed by discussion of housing pressure *from migrants*, as well as a more-than-slightly eugenic-themed discussion of longevity, and then, in passing, AI. At the same event, the MP Miriam Cates claimed the UK’s low birthrate is the most pressing policy issue of the generation and is caused in part by “cultural Marxism”, as reported by the Guardian. In 2022, Orbán in Hungary claimed he was fighting against “the great European population exchange … a suicidal attempt to replace the lack of European, Christian children with adults from other civilisations – migrants”.

These debates are inextricably linked in a fight for or against a progressive future. We have a Westminster Opposition now fighting for its own future, and the ‘culture wars’ have routinely been part of its frontbenchers’ media discussions for some time. Much of this is likely to continue to be played out in the education system, starting with the challenge to the Higher Education (Freedom of Speech) Act 2023, which always seemed to me more about the control of content on campus than its freedoms.

In today’s information society, Castells’ arguments that cultural battles for power are primarily fought in the media, where identity plays a critical role in influencing public policy and societal norms, where politics becomes theatre and “citizens around the world react defensively, voting to prevent harm from the state in place of entrusting it with their will,” seem timely (End of Millennium, p.383). Companies and vested interests have actual power, and elected leaders are left only with influence. This undermines the spirit of a democratic society.

The future of authority and competing interests

After the U.S. election result, that influence coming from across the Pond into UK public policy will not only find itself more motivated and more empowered, but likely, better funded.

Why all this matters for schools is that we are likely to see more of these polarised value sets imported from the U.S., and there is no handbook for school governors or staff of all backgrounds to manage parents and the strong feelings it can all create. Nor does the sector understand the legal framework it needs to withstand it.

Having opened up classrooms to outside interests in classroom content, the system is seeing some families pull children out of school because of fundamental disagreements with those values and the vehicles for their delivery: from the contents of teaching, to intrusive data surveys, to concerns over the commercialisation and screen time of tech-based tools without proven beneficial outcomes. Whose best interests does the system serve, and who decides whose interests come first when they are in conflict? How are these to be assessed and explained to parents and children, together with their rights?

How do teachers remain in authority where they are perceived as overstepping what parents reasonably expect, or where AI manages curriculum content and teachers cannot explain its assessment scoring or its benchmarking profile of a pupil? What should the boundaries be, especially as edTech blurs them between school and home, teachers and parents? We need to far better train and support educational staff in practice, to be prepared to manage competing sources of authority, and the emerging fight over interests and rights.

Pirates and their stochastic parrots

It’s a privilege to have a letter published in the FT as I do today, and thanks to the editors for all their work in doing so.

I’m a bit sorry that it lost the punchline, which was supposed to bring a touch of AI humour about pirates and their stochastic parrots. And its rather key point was cut, that:

“Nothing in current European laws, including Convention 108 for the UK, prevents companies developing AI lawfully.”

So for the record, and since it’s (£), my agreed edited version was:

“The multi-signatory open letter advertisement, paid for by Meta, entitled “Europe needs regulatory certainty on AI” (September 19) was fittingly published on International Talk Like a Pirate Day.

It seems the signatories believe they cannot do business in Europe without “pillaging” more of our data and are calling for new law.

Since many companies lobbied against the General Data Protection Regulation or for the EU AI Act to be weaker, or that the Council of Europe’s AI regulation should not apply to them, perhaps what they really want is approval to turn our data into their products without our permission.

Nothing in current European laws, including Convention 108 for the UK, prevents companies developing AI lawfully. If companies want more consistent enforcement action, I suggest Data Protection Authorities comply and act urgently to protect us from any pirates out there, and their greedy stochastic parrots.”

Prior to print they asked to cut out a middle paragraph too.

“In the same week, LinkedIn sneakily switched on a ‘use me for AI development’ feature for UK users without telling us (paused the next day); Larry Ellison suggested at Oracle’s Financial Analyst Meeting  that more AI should usher in an era of mass citizen surveillance, and our Department for Education has announced it will allow third parties to exploit school children’s assessment data for AI product building, and can’t rule out it will include personal data.”

It is in fact the cumulative effect of the recent flurry of AI activities by various parties, state and commercial, that deserves greater attention, rather than this being only about the Meta-led complaint. Who is grabbing what data and what infrastructure contracts, and creating what state dependencies and strengths, to what end game? While some present the “AI race” as China or India versus the EU or the US to become AI “super powers”, is what “Silicon Valley” offers, that their way is the only way, a better offer?

It’s not in fact “Big Tech” I’m concerned about, but the arrogance of so many companies that, in the middle of regulatory scrutiny, would align themselves with one that would rather put out PR omitting the fact it is under such scrutiny, calling only for the law to be changed, and frankly misleading the public by suggesting it is all for our own good rather than talking about how this serves their own interests.

Who do they think they are to dictate what new laws must look like when they seem simply unwilling to stick to those we have?

Perhaps this open letter serves as a useful starting point to direct DPAs to the companies most in need of scrutiny of their data practices. They seem to be saying they want weaker laws or more enforcement. Some are already well known for challenging both. Who could forget Meta (Facebook’s) secret emotional contagion study involving children, in which friends’ postings were moved to influence moods, or the case of letting third parties, including Cambridge Analytica, access users’ data? Then there are the data security issues, the fine over international transfers, and the anti-trust issues. And there are the legal problems with their cookies. And all of this built from humble beginnings by the same founder of Facemash, “a prank website” to rate women as hot or not.

As Congressman Long reportedly told Zuckerberg in 2018, “You’re the guy to fix this. We’re not. You need to save your ship.”

The Meta-led ad called for “harmonisation enshrined in regulatory frameworks like the GDPR” and I absolutely agree. The DPAs need to stand tall and stand up to OpenAI and friends (ever dwindling in number so it seems) and reassert the basic, fundamental principles of data protection laws from the GDPR to Convention 108 to protect fundamental human rights. Our laws should do so whether companies like them or not. After all, it is often abuse of data rights by companies, and states, that populations need protection from.

Data protection ‘by design and by default’ is not optional under European data laws established for decades. It is not enough to argue that processing is necessary because you have chosen to operate your business in a particular way, nor a necessary part of your chosen methods.

The Netherlands DPA is right to say scraping is almost always unlawful. A legitimate interest cannot simply be plucked from thin air by anyone who is neither an existing data controller nor processor and who has no prior relationship to the data subjects; those data subjects have no reasonable expectation of the re-use of data they posted online for purposes other than those for which the scraper has grabbed it, with no fair processing information and no offer of an opt-out. Instead, the only possible basis for this kind of brand new controller should be consent. Having to break the law hardly screams ‘innovation’.

Regulators do not exist to pander to wheedling, but to independently uphold the law in a democratic society in order to protect people, not prioritise the creation of products:

  • Lawfulness, fairness and transparency.
  • Purpose limitation.
  • Data minimisation.
  • Accuracy.
  • Storage limitation.
  • Integrity and confidentiality (security), and
  • Accountability.

In my view, it is the lack of dissuasive enforcement as part of checks and balances on big power like this, regardless of where it resides, that poses one of the biggest data-related threats to humanity.

Not AI, nor being “left out” of being used to build it for their profit.

The “new normal” is not inevitable. (1/2)

Today Keir Starmer talked about us having more control in our lives. He said, “markets don’t give you control – that is almost literally their point.”

This week we’ve seen it embodied in a speech given by Oracle co-founder Larry Ellison at the company’s Financial Analyst Meeting 2024. He said that AI is on the verge of ushering in a new era of mass behavioural surveillance, of police and citizens alike. Oracle, he suggested, would be the technological backbone for such applications, keeping everyone “on their best behaviour” through constant, real-time, machine-learning-powered monitoring (LE FAQs 1:09:00).

Ellison’s sense of unquestionable entitlement to decide that *his* company should be the one to control how all citizens (and police) behave-by-design, and his omission of any consideration of a democratic mandate for that, should shock us. Not least because he is wrong in some of his claims. (There is no evidence that having a digital dystopia makes this difference to school safety, in particular given the numbers of those involved who are already known to the school.)

How does society come to trust this direction of travel in how our behaviour is shaped and how corporations impose their choices? How can a government promise society more control in our lives and yet enable a digital environment, which plays a large part in our everyday life, over which we seem to have ever less control?

The new government sounds keen on public infrastructure investment as a route to structural transformation. But the risk is that cost constraints mean they seek the results expected from the same plays as an industrial development strategy of old, but now using new technology and tools. It’s a big mistake, huge. And nothing less than national democracy is at stake, because individuals cannot meaningfully hold corporations to account. The economic and political context in which an industrial strategy is defined is now behind paywalls, without parliamentary consensus, oversight or societal legitimacy in a formal democratic environment. The constraints on businesses’ power were once more tangible and localised, and their effects easier to see in one place. Power has moved away from government to corporations considerably in the time Labour was out of it. We are now more dependent on being users of multiple private techno-solutions to everyday things, often ones we hate using, from paying for the car park, to laundry, to job hunting. All with indirect effects on national security and on service provision at scale, as well as direct everyday effects for citizens and, increasingly, our disempowerment and lack of agency in our own lives.

LinkedIn this week chose not to ask users at all, in what has become the OpenAI modus operandi of take first and ask forgiveness later, before grabbing our personal data from the platform to train “content creation AI models.” (And then it did a U-turn.)

Data Protection law is supposed to offer people protection from such misuse, but without enforcement, ever more companies are starting to copy each other, rinse and repeat.

Convention 108 requires respect for rights and fundamental freedoms, and in particular the right to privacy, and special categories of data may not be processed automatically unless domestic law provides appropriate safeguards. Data must be obtained fairly. Many emerging [generative] AI companies have disregarded these fundamentals of European DP laws: purpose limitation breached by incompatible uses, no relationship between the subject and the company, lack of accuracy, and no active offer of a right to object when in fact the basis should be consent, all of which means the processing is unfair and unlawful. If we are to accept that anyone at all can take any personal data online and use it for an entirely different purpose, turning it into commercial products and ignoring all this, then frankly the Data Protection authorities may as well close. Are these commercial interests simply so large that they believe they can get away with steam-rollering over democratic voice and human rights, as well as the rule of (data protection) law? Unless there is consistent ‘cease and desist’ type enforcement, it seems to be rapidly becoming the new normal.

If instead regulators were to bring meaningful enforcement that is dissuasive, as the law is supposed to be, what would change? What will shift practice on facial recognition in the world as foreseen by Larry Ellison, and shift public policy towards sourcing responsibly? How is democracy to be saved from technocratic authoritarianism going global? If the majority of people living in democracies are never asked for their views, and have changes imposed on their lives that they do not want, how do we raise a right to object and take control?

While institutions such as Oracle exert influence in our political systems, and their financial interests lie in ever more, and ever larger, AI models, the interests of our affected communities are not represented in state decisions at national or global levels.

While tools are being built to resist content scraping from artists, what is there for faces and facts, or even errors, about our lives?

Ted Chiang asked in the New Yorker in 2023, whether an alternative is possible to the  current direction of travel. “Some might say that it’s not the job of A.I. to oppose capitalism. That may be true, but it’s not the job of A.I. to strengthen capitalism, either. Yet that is what it currently does.” The greatest current risk of AI is not what we imagine from I, Robot he suggested, but “A.I.-supercharged corporations destroying the environment and the working class in their pursuit of shareholder value.”

I remember giving a debate talk as a teen thirty years ago, about the risk of rising sea levels in Vanuatu. It is a reality that causes harm. Satellite data indicates the sea level there has risen by about 6 mm per year since 1993. Someone told me in an interesting recent Twitter exchange that, when it comes to climate impacts and AI, they are “not a proponent of ‘reducing consumption’”. The reality of climate change in the UK today only hints at the long-term consequences for the world from the effects of migration and extreme weather events. Only by restraining consumption might we change those knock-on effects of climate change in anything like the necessary timeframe, or the next thirty years will see them worsen more quickly.

But instead of hearing meaningful assessment of what we need in public policy, we hear politicians talk about growth as what they want and that bigger can only be better. More AI. More data centres. What about more humanity in this machine-led environment?

While some argue for an alternative, responsible, rights-respecting path as the only sustainable path forwards, like Meeri Haataja, Chief Executive and Founder of Saidot, Helsinki, Finland, in an August letter to the FT, some of the largest companies appear to be suggesting this week, publishing ads in various European press, that since they seem to struggle to follow the law (like Meta), Europe needs a new one. And to paraphrase, they suggest it’s for our own good.

And that’s whether we like it or not. You might not choose to use any of Meta’s products but might still be being used by them, your activity online used to create advertising market data that can be sold. Researchers at the University of Oxford analysed a million smartphone apps and found that “the average one contains third‑party code from 10 different companies that facilitates this kind of tracking. Nine out of 10 of them sent data to Google. Four out of 10 of them sent data to Facebook. In the case of Facebook, many of them sent data automatically without the individual having the opportunity to say no to it.” Reuse for training AI is far more explicit, and wrong, and “it is for Meta [or any other company] to ensure and demonstrate ongoing compliance.” We have rights, and the rule of law, and at stake are our democratic processes.

“The balance of the interests of working people” must include respect for their fundamental rights and freedoms in the digital environment as well as supporting the interests of economic growth; they are mutually achievable, not mutually exclusive.

But in the UK we put the regulator on a leash in 2015, constrained by a duty towards “economic growth”. It’s a constraint that should not apply to industry and market regulators, like the ICO.

While some companies plead to be let off for their bad behaviour, others expect to profit from encouraging the state to increase the monitoring of ours, or ask that the law be written ever more in their favour. Regulators need to stand tall and stand up to it, and the government needs to remove their leash.

Even in 1959, Labour MPs included housewives concerned with being misled by advertisers and manufacturers (06:40). Many of the electoral issues have stayed the same for over 65 years. But the power grab going on in this information age is unprecedented.

We need not accept this techno-authoritarianism as a “new normal” and inevitable. If indeed as Starmer concluded, “Britain belongs to you“, then it needs MPs to act like it and defend fundamental rights and freedoms to uphold values like the rule of law, even with companies who believe it does not apply to them. With a growing swell of nationalism and plenty who may not believe Britain belongs to all of us, but that, ‘Tomorrow belongs to Me‘, it is indeed a time when “great forces demand a decisive government prepared to face the future.”


See also Part 2: Farming out our Children. AI AI Oh.

The video referenced above is of Dr. Sasha Luccioni, the research scientist and climate lead at HuggingFace, an open-source community and machine-learning platform for AI developers, and is part 1 of the TED Radio Hour episode, Our tech has a climate problem.

Farming out our children. AI AI Oh. (2/2)

Today Keir Starmer talked about us having more control in our lives. “Taking back control is a Labour argument”, he said. So let’s see it in education tech policy, where in 2018 fewer than half of parents told us they felt they had sufficient control of their child’s digital footprint.

Not only has the UK lost control of which companies control large parts of the state education infrastructure and its delivery, the state is *literally* giving away control of our children’s lives recorded in identifiable data at national level; since 2012 this has included giving it to journalists, think tanks, and companies.

Why it matters is less about the data per se, but what is done with it without our permission and how that affects our lives.

Politicians’ love affair with AI (undefined) seems to be as ardent as under the previous government. The State appears to have chosen to further commercialise children’s lives in data, having announced towards the end of the school summer holidays that the DfE and DSIT will give pupils’ assessment data to companies for AI product development. I get angry about this, because the data is badly misunderstood: it is not a product to pass around but the stories of children’s lives in data, and that belongs to them to control.

Are we asking the right questions today about AI and education? In 2016, in a post for Nesta, Sam Smith foresaw the algorithmic fiasco that would happen in the summer of 2020, pointing out that exam-marking algorithms, like any other decisions, have unevenly distributed consequences. What prevents that happening daily, but behind closed doors and in closed systems? The answer is, nothing.

Both the adoption of AI in education and education about AI are unevenly distributed. Driven largely by commercial interests, some are co-opting teaching unions for access to the sector; others, more cautious, have focused on the challenges of bias, discrimination and plagiarism. As I recently wrote in Schools Week, the influence of corporate donors and their interests in shaping public sector procurement, such as the Tony Blair Institute’s backing by Oracle owner Larry Ellison, therefore demands scrutiny.

Should society allow its public sector systems and laws to be shaped primarily to suit companies? The users of the systems are shaped by how those companies work, so who keeps the balance in check?

In a 2021 reflection here on World Children’s Day, I asked the question, Man or Machine, who shapes my child? Three years later, I am still concerned about the failure to recognise and address the question of the redistribution of not only pupils’ agency but teachers’ authority: from individuals to companies (pupils and the teacher don’t decide what is the ‘right’ thing to do next, the ‘computer’ does); from public interest institutions to companies (company X determines the curriculum content of what the computer does and how, not the school); and from State to companies (accountability for outcomes falls through the gap in outsourcing activity to the AI company).

Why it matters is that these choices do not only influence how we are teaching and learning, but how children feel about it and develop.

The human response to surveillance (and that is what much of AI relies on: massive data-veillance and dashboards) is a result of the chilling effect of being ‘watched‘ by known or unknown persons behind the monitoring. We modify our behaviours to be compliant with their expectations. We try not to stand out from the norm, to protect ourselves from the resulting effects.

The second reason we modify our behaviours is to be compliant with the machine itself. Thanks to the lack of a responsible human in the interaction mediated by the AI tool, we are forced to change what we do to comply with what the machine can manage. How AI is changing human behaviour is not confined to where we walk, meet, play and are overseen in outdoor or indoor spaces. It is in how we respond to it, and ultimately, how we think.

In the simplest examples, using voice assistants shapes how children speak, and in prompting generative AI applications we can see how we are forced to adapt how we think, to put questions in the way best suited to getting the output we want. We are changing how we behave to suit machines. How we change behaviour is therefore determined by the design of the company behind the product.

There is limited public debate yet on the effects of this for education, on how children act, interact, and think using machines, and no consensus in the UK education sector on whether it is desirable to introduce these companies, and their steering, that bring changes to teaching and learning and, as a result, to the future of society.

Since 2021, I would go further. The neo-liberal approach to education and its emphasis on the efficiency of human capital and productivity, on individualism and personalisation, all about producing ‘labour market value’ and measurable outcomes, is commonly at the core of AI in teaching and learning platforms.

Many tools dehumanise children into data dashboards, rank and spank their behaviours and achievements, punish outliers and praise norms, and expect nothing but strict adherence to rules (sometimes incorrect ones, like mistakes in maths apps). As some companies have expressly said, the purpose of this is to normalise such behaviours ready to be employees of the future, and the reason their tools are free is to normalise their adoption for life.

AI, through the normalisation of values built into tools by design, is even seen by some as encouraging fascistic solutions to social problems.

But the purpose of education is not only about individual skills and producing human capital to exploit. Education is a vital gateway to rights and the protection of a democratic society. Education must not only be about skills as an economic driver, talking about AI and learners in terms of human capital, but must include rights: championing the development of a child’s personality to their fullest potential, intercultural understanding, digital citizenship on dis-/misinformation and discrimination, and the promotion and protection of democracy and the natural world. “It shall promote understanding, tolerance and friendship among nations, racial or religious groups, and shall further the activities of the United Nations for the maintenance of peace.”

Peter Kyle, the UK DSIT Secretary of State, said last week that, “more than anything else, it is growth that will shape those young people’s future.” But what will be used to power all this growth in AI, at what environmental and social costs, and will we get a say?

Don’t forget, in this project announcement the Minister said, “This is the first of many projects that will transform how we see and use public sector data.” That’s our data, about us. And when it comes to schools, that’s not only the millions of learners who’ve left already but those who are school children today. Are we really going to accept turning them into data fodder for AI without a fight? As Michael Rosen summed up so perfectly in 2018, “First they said they needed data about the children to find out what they’re learning… then the children became data.” If this is to become the new normal, where is the mechanism for us to object? And why this, now, in such a hurry?

Purpose limitation should also prevent retrospective reuse of learners’ records and data, but so far it has not prevented the distribution of general identifying and sensitive data from the NPD at national level, or from edTech in schools. The project details, scant as they are, suggest parents were asked for consent in this particular pilot, but the Faculty AI notice seems legally weak for schools, and when it comes to using pupil data for building into AI products the question is whether consent can ever be valid, since it cannot be withdrawn once given, and the nature of being ‘freely given’ is affected by the power imbalance.

So far there is no field to record an opt-out in any schools’ Information Management Systems, though many discussions suggest it would be relatively straightforward to make it happen. However, it’s important to note that DSIT’s own public engagement work on that project says opt-in is what those parents told the government they would expect. And there is a decade of UK public engagement on data telling government that opt-in is what we want.

The regulator has been silent so far on the DSIT/DfE announcement, despite a lack of fair processing and failures on Articles 12, 13 and 14 of the GDPR being among the key findings in its 2020 DfE audit. I can use a website to find children’s school photos, scraped without our permission. What about our school records?

Will the government consult before commercialising children’s lives in data to feed AI companies and ‘the economy’, or any of the other “many projects that will transform how we see and use public sector data“? How is it different from the existing ONS, ADR, or SAIL databank access points and processes? Will the government evaluate the impact of increasing surveillance in schools on child development, behaviour or mental health? Will MPs get an opt-in, or even an opt-out, of the commercialisation of their own school records?

I don’t know about ‘Britain belongs to us‘, but my own data should.


See also Part 1: The New Normal is Not Inevitable.

School absence: Bums-in-seats-thinking is the wrong problem, and AI the wrong solution

What school absence trends do you believe AI can spot that current methods cannot, to improve coordination between education, social care and the wider services that support families?

Since the pandemic, conservative-leaning organisations, most notably the CSJ, have been banging a drum very loudly about children not in school. This is not about private-school educated children, but children they expect to be in state educational settings aged 5-18. Often, other people’s children.

In recent debate, various sub-categories of children not in school are conflated together. In 2022, various stakeholders got together and, with Defend Digital Me, we worked to point out in a non-partisan way where that thinking was wrong.

It is wrong on who is not counted. (And who is.) It is wrong on the assumptions built into conflating the differences between children not in school and children ‘not in receipt of suitable education’. It is wrong on assumptions that children not in school are not already on registers and recorded on Local Authority databases. It is wrong to ignore that the definition of persistent absence means children are classified as persistently absent more quickly now than previously. The same number of children could be absent for the same number of school days as a decade ago, but it will be reported as having doubled.

And it is arguably morally wrong, in ignoring a very significant part of the problem as they see it: why children are not in school, once you remove all that is wrong with the counting and assumptions. And what are the consequences for children of forcing them to attend without their consent, or of not respecting families’ choice or agency?

Absence data (on school roll, not attending)

In the 2023/24 academic year up to 8 December 2023, DfE data shows that the attendance rate across the academic year to date was 93.4%. The absence rate was, therefore, 6.6% across all schools and the unauthorised rate was far less. By school type, the absence rates across the academic year 2023/24 to date were:

  • 5.1% in state-funded primary schools (3.7% authorised and 1.4% unauthorised)
  • 8.3% in state-funded secondary schools (5.2% authorised and 3.1% unauthorised)
  • 12.6% in state-funded special schools (9.5% authorised and 3.0% unauthorised)

Over 1.5 million pupils in England have special educational needs (SEN)

An increase of 87,000 from 2022. Both the number of pupils with an education, health and care (EHC) plan and the number of pupils [recorded] with SEN support have increased:

  • The percentage of pupils with an EHC plan has increased to 4.3%, from 4.0% in 2022.
  • The percentage of pupils with SEN but no EHC plan (SEN support) has increased to 13.0%, from 12.6% in 2022.

Both continue a trend of increases since 2016. As does the number of stories you hear of parents asked to bring children in only part-time because schools cannot get EHC plans approved (local councils have no money), without which schools cannot access the funds or allocate the staff and resources needed for that child in a school.

Which children are not in school?

The concept of children-not-in-school should have nothing at all to do with Elective Home Education (“EHE”). The premise of ‘not in school’ is that children are not attending school ‘but they should be’ and that action will be taken as a result. Elective Home Education (“EHE”) children are not on a school roll and are not expected to be. No action should be taken as a result to get them into schools.

A Guardian article today (Jan 9th) quotes Wendy Charles-Warner, chair of home education charity Education Otherwise, who sums up one problem here: “Yet again we see an inappropriate and frankly mangled conflation of [elective] home education and absenteeism.

“Home education is of equal legal status to school education and it is certainly not ‘non-attendance’. Home educated children are in full-time education, they are not school pupils let alone absent school pupils.

“A register of home-educated children will make no difference whatsoever to school absenteeism and, before proposing such a significant step, the Labour party should educate itself to the very basic facts of the matter.”

It was frustrating to hear Bridget Phillipson today give the same impression, using selective evidence, as many other MPs had in the 2022 debates on the Schools Bill: that no one knows how many children are home educated. Every Local Authority that replied to our requests for data in 2021-22 already had a register of EHE children.

Proposals to legislate for a new national register of children not in school were part of the Government’s now-scrapped Schools Bill and were hotly contested and debated in the House of Lords. There is no compelling case to have one.

We assessed the plans at Defend Digital Me as part of the Counting Children coalition, and not only were the policy issues pretty fundamentally flawed, but the practicalities were too. The legislation on the database as set out would have meant, for example, double counting a whole swathe of children already on school registers but in Alternative Provision or attending part-time.

The plans conflated Elective Home Education (“EHE”) (not on a school roll) with absenteeism (pupils registered on a school roll but absent), and would have double counted some children who attend alternative school settings part-time. They conflated these children with Children Missing Education (“CME”: not on a school roll, assessed as ‘not in receipt of a suitable education otherwise’, so should be on a school roll but are not, and known to the State); and further conflated those three groups with children not on any database at all (unknown to the state education system).

Piling elective home educators in with at-home children waiting for places or suitable state school services, and with children already on roll but part-time or truanting, would have conflated a toxic mix into a ‘victims’ and ‘perpetrators’ style narrative like the Met Police Gangs Matrix: people to be treated with suspicion and requiring additional (often centralised) state surveillance, all as a result of what would have been bad numbers.

The not-in-school register plans angered many, the Charedi community among others. Home Educators protested outside parliament and filled the public gallery in the House of Lords on the day it was most specifically debated in the Schools Bill.

I know of cases where children are wrongly labelled CME by Local Authorities. Some LAs cannot (for what seems like nothing but stubborn, jobsworth bureaucracy) accept that children who are home educated can be in receipt of suitable education, even if an LA’s opaque methods of measuring ‘suitability’ are so arbitrary, intrusive and out of step with the law as to be understandable only to them. Such records create fundamentally flawed and inaccurate family portraits, turning EHE records into CME records mid-year (spot the double counted child), without recourse or route for redress. Some of these data are therefore opinions, not facts. Automating any of this would be a worse mess.

Known-to-the-state CME children are on Local Authority databases even if they were never enrolled at a school. If they were in school and left, even if after doing so they ‘fall off the radar’, one of sixteen tick-box reasons-for-leaving is recorded on their detailed named record and kept. If they disappear without a known destination for the next educational setting, they will in addition be added to the part of the Common Transfer File system that posts children’s records into the Lost Pupils Database (LPD). They are pulled out of the LPD once a state school ‘receives’ them again.
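As a simplified illustration of that flow (a minimal sketch with invented names, not the real Common Transfer File system): a leaver keeps a reason-for-leaving code on their named record, is posted to the Lost Pupils Database only when no destination setting is known, and is removed again when a school ‘receives’ them.

```python
# Minimal sketch of the flow described above. Illustrative only: identifiers
# and structures are invented, not the real Common Transfer File system.

lost_pupils_database: set[str] = set()

def record_leaver(pupil_id: str, reason_code: str, destination_school: str | None) -> dict:
    """Record a leaver with one of the tick-box reasons-for-leaving.

    If no destination setting is known, also post the record to the
    Lost Pupils Database (LPD)."""
    record = {"pupil": pupil_id, "reason_for_leaving": reason_code, "destination": destination_school}
    if destination_school is None:
        lost_pupils_database.add(pupil_id)
    return record

def receive_pupil(pupil_id: str) -> None:
    """A state school 'receives' the pupil, so they are pulled out of the LPD."""
    lost_pupils_database.discard(pupil_id)

record_leaver("UPN-EXAMPLE", "reason_07", destination_school=None)
print("UPN-EXAMPLE" in lost_pupils_database)  # True: no known destination
receive_pupil("UPN-EXAMPLE")
print("UPN-EXAMPLE" in lost_pupils_database)  # False: received by a school again
```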

Children about whom nothing is known by the state, the so-called ‘invisible children’, cannot be magically added to any database. If they were known today, they would already be on the existing databases. It is believed there are very few of these but, of course, it is unknown. It is also, by definition, a number that cannot ever be known. The NCB and Children’s Commissioner have made guesstimates of around 3,000 individual children.

The CSJ has, in my view, whether accidentally or by intent, wrongly hyped up the perception of the numbers of those children by inventing the new term “ghost children”. This has made the everyday listener or MP think of these children as ‘unseen by the state’. The CSJ term has been used sweepingly in the media and parliament to cover any child not in school, which means the perceived “problem” is wrongly seen as (a) one and the same thing and (b) much larger than in reality.

That reshaping of reality matters. It has been a persistently retold half-truth since 2021 (the Telegraph published my letter to the editor on it in March 2022). Still, it seems not easily fixed by fact alone. The costs of new databases duplicating data that already exists would be far better spent on patching up the 70% cuts to Local Authorities’ youth services, CAMHS, or the Early Intervention Grant, or basically anything else for children, young people and families.

Which children do we know should be in school?

Remembering that the claims are that we need new registers of children not in school, how many children do you think are known to be missing education, recorded as CME, in any one Local Authority? Yes, these children are already recorded by name at LAs.

Local authorities have a duty under section 436A of the Education Act 1996 to make arrangements to identify, as far as it is possible to do so, children missing education. What is possible, is already done.

There are currently 152 local education authorities in England and, through the dedicated volunteer effort of the Counting Children coalition, we asked all of them for their data (as of June 30, 2021).

There were zero Children Missing Education (“CME”) in Powys, Wales. In Blackpool, by contrast, there were lots: 45 Children Missing Education (“CME”) (in the area waiting for provision to start, mainly recently arrived), 112 children missing “out” (left the area, being tracked, of which 61 had been located) and 307 in Elective Home Education (“EHE”). Across the academic year September 2020 – July 2021, the Isle of Wight recorded 49 children as Children Missing Education (“CME”); in East Riding there were 17. In Leicester they even noted that children on their registers have been recorded in these ways since 2003.

Do those numbers surprise you? Local Authorities also already collect a lot of data about each child out of school. For example, Harrow’s central database on children not in school already includes Family Name, Forename, Middle name, DOB, Unique Pupil Number (“UPN”), and Former UPN. (For adopted children and children-at-risk this should not be so, but who knows if it is respected, see 6.5 and 6.6, given the risks UPNs create for those children. The UPN is supposed to be a secure state identifier and, as such, has special protections including being a blind identifier, and it should lapse when children leave school (see pages 6-8).) There is also the Unique Learner Number (ULN).

Any new number policymakers suggest inventing, would need to be subject to the same protections for the child (and throughout their adult life), and therefore it would serve little purpose to create yet another new number.

The list goes on of what is collected in CME Local Authority databases on each named child: Address (multi-field), Chosen surname, Chosen given name, NCY (year group), Gender, Ethnicity, Ethnicity source, Home Language, First Language, EAL (English as additional language), Religion, Medical flag, Connexions Assent, School name, School start date, School end date, Enrol Status, Ground for Removal, Reason for leaving, Destination school, Exclusion reason, Exclusion start date, Exclusion end date, SEN Stage, SEN Needs, SEN History, Mode of travel, FSM History, Attendance, Student Service Family, Carer details, Carer address details, Carer contact details, Hearing Impairment and Visual Impairment, Education Psychology support, and Looked After status. (For in-school children, the list is even longer, it lasts a lifetime, and it’s given away too.)

Yet the Schools Bill would have granted Local Authorities powers to expand this already incredibly intrusive list to any further data at all of their choosing, without any limitation.

The CSJ, perhaps most accurately, states in one report that it is vulnerable children who are affected most by missing school time, but this must not be conflated with Children Missing Education.

“In Autumn 2022, the latest term for which data is available, children in receipt of Free School Meals (FSM) had a severe absence rate which was triple the rate for children who were not eligible for FSM. Children in receipt of special educational needs (SEN) support are also more likely to be severely absent than their peers.”

Absenteeism and Children Missing Education are NOT the same. From the numbers above, I hope it is clear why.

The perception of reality matters in this topic area specifically because it is portrayed by the CSJ as an outcome of the pandemic. The CSJ is not politically neutral, given its political founders, steering group and senior leadership with strong ties to the lockdown-sceptic COVID Recovery Group. That matters because it influences, and enables other influencers to set, the agenda on what is seen as the cause of and solution to a set of problems, and the public policy interventions that are taken or funded as a result. At the 2022 Tory party conference event on this subject, which I also wrote up afterwards here, Iain Duncan Smith failed to acknowledge, even once, that thousands of people in the UK have died and continue to die or have lasting effects as a result of and with COVID-19.

It was such a contrast and a welcome difference in tone that Bridget Phillipson MP, in her speech today at the CSJ, acknowledged what the pandemic reality was for thousands of families.

And after all, according to a King’s Fund report, “Overall, the number of people who have died from Covid-19 to end-July 2022 is 180,000, about 1 in 8 of all deaths in England and Wales during the pandemic.” Furthermore, in England and Wales, “The pandemic has resulted in about 139,000 excess deaths”. “Among comparator high-income countries (other than the US), only Spain and Italy had higher rates of excess mortality in the pandemic to mid-2021 than the UK.”

At the 2022 Conservative Conference fringe event, chaired by IDS, while the panel made several references to the impact of the pandemic on children’s poor mental health, no one mentioned the 70% cuts to youth services’ funding over ten years that have allowed CAMHS funding and service provision to wither and fail children since well before 2020. The pandemic exacerbated children’s pre-existing needs, which the government has not only failed to meet since, but for which it has actively rationed and reduced provision. Event chair Iain Duncan Smith is also the architect of Universal Credit. And this matters in this very closely connected policy area for measuring and understanding the effectiveness of all these interventions.

Poverty and school attendance can be, but are not always, linked by cause and correlation. But while we focus on the (inaccurately presented) number of children not in school, we fail to pay attention to the Big Picture and the conflated causes of children not being in school. Missing bums on seats is not the problem, but a symptom. In some cases, literally. Historically, the main driver of absence is illness. In 2020/21, this was 2.1% across the full year, a reduction on the rates seen before the pandemic (2.5% in 2018/19).

What does persistent absentee mean?

Another part of these numbers often presented in the media is the ‘persistent absence’ rate. But is it meaningful? In 2018/19 the rate of persistent absentees (missing 10% of possible sessions, or the equivalent of one morning or one afternoon every week) was 10.8%. Now it is reported as around double that. But bear in mind that, at any point in time, the label ‘persistent absentee’ may be misleading to the average man on the street.

Don’t forget this is also a movable numbers game: understanding the definition of persistent absence, and that it has changed three times since 2010, is critical to appreciating the numbers and data used in discussing this subject. Children are classified as persistently absent more quickly now than previously. The same number of children could be absent for the same number of school days as a decade ago, but will be reported as having doubled.

A child who misses a full day in the term for unauthorised absence (illness) now stays marked as ‘persistently’ absent until their missed sessions fall below 10% of their possible sessions. So if they have 3.5 days of a tummy bug or flu, which every parent knows is pretty much the norm at “Back-to-School”, then even the most dedicated pupils look ‘persistently’ absent for quite a while. It is not only guilt-inducing for the students who care; it creates stress to go in at all costs (including before recovery, with risk of infection) for the ill, and stigmatises the disabled and those with long-term health conditions.
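As a rough worked example of why that happens (a minimal sketch using illustrative numbers, assuming two sessions per school day and the ‘10% or more of possible sessions’ definition described above, not DfE code):

```python
# Illustrative only: assumes 2 sessions per school day and the 10%-of-possible-
# sessions persistent absence definition described above.

SESSIONS_PER_DAY = 2
THRESHOLD = 0.10

def persistently_absent(missed_sessions: int, possible_sessions: int) -> bool:
    """Flag a pupil whose missed sessions are 10% or more of possible sessions."""
    return missed_sessions / possible_sessions >= THRESHOLD

missed = int(3.5 * SESSIONS_PER_DAY)  # 3.5 days of illness = 7 sessions
for school_days_so_far in (10, 20, 35, 36):
    possible = school_days_so_far * SESSIONS_PER_DAY
    print(school_days_so_far, persistently_absent(missed, possible))
# 10 True, 20 True, 35 True, 36 False: with no further absence at all, the
# flag only clears after roughly seven school weeks of full attendance.
```

On these illustrative assumptions, one early bout of flu keeps the label attached for weeks of otherwise full attendance.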

A label unfit for purpose, it could usefully be re-named and the topic re-framed to rebuild trust with learners and families.

[Table: definitions of persistent absence by daily sessions missed. Credit: George Stephenson High School, Newcastle upon Tyne. https://www.gshs.org.uk/attendance/what-persistent-absence-student]

Another technical thing that could usefully be updated in terms of data collection is the inconsistency across Local Authorities over which age group they record which data for. For example, children missing education (“CME”) is often recorded from Reception through to age 16, but other categories are recorded for children to age 18, and to age 25 for young adults with a special educational needs and disability (SEND) plan. Many Local Authorities use the definition in section 8 of the Education Act 1996, which is out of step with the more recently revised school leaving age in England.

Data-led decisions are not smart solutions

This is fundamentally not about data, but children's lives. In these debates there is a grave risk that a focus on the numbers, because of the way the data is presented, perceived, or used, means that reducing the numbers themselves becomes the goal. The data is there. Joining it all up may feel like doing 'something', but it is not going to contribute anything to getting bums on seats or delivering a quality education to every child. It won't contribute to a solution, except perhaps to some AI company CEO's bottom line. And at what cost to children, both through what you choose not to fund instead, and through direct harm? AI is not the solution, or even a reliable tool, when it comes to children's social issues.

Entering data on a system in the hope of 'spotting patterns', without asking precise questions of the data, is rather like gazing at a crystal ball. The computer cannot 'guess' what you are looking for. Failing to design for the outcomes you want a system to achieve is symptomatic of the underlying analysis of the problem: a lack of human authority and accountability.

It was said in the Laming report of the Victoria Climbié Inquiry that referrals could be coming in by fax, streaming onto the floor, with nobody picking them up. "It was not my job to pick up the fax from the fax machine. It was not my role. I had other things to do." (5.22) Staff working in children and families' services do not all need to be on one database to record data; they can work on decentralised systems with role-based access. Systems can draw together data and present it without the need for a single record. Those are design questions, not a justification for building more databases and more national identifiers which may do nothing but duplicate the existing dysfunctions.
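As an illustration of that design point, here is a minimal sketch of role-based access across decentralised records, drawing together a view at the point of need without any single national database or identifier. The source names, roles and record contents are hypothetical, purely to show the shape of the design choice.

```python
# A minimal sketch: decentralised records, drawn together at the point of need
# under role-based access, with no new central database or national identifier.
# Sources, roles and record contents are hypothetical illustrations.

SOURCES = {
    "school": {"local-0042": "repeated unexplained absence"},
    "health": {"local-0042": "bruising noted at A&E visit"},
}

# Each role may query only the sources its job requires.
ROLE_SOURCES = {
    "designated_safeguarding_lead": ["school"],
    "social_worker": ["school", "health"],
}

def view_for(role: str, local_ref: str) -> dict:
    """Present existing records side by side without copying them into a single record."""
    allowed = ROLE_SOURCES.get(role, [])
    return {name: SOURCES[name][local_ref]
            for name in allowed if local_ref in SOURCES[name]}

print(view_for("social_worker", "local-0042"))                 # sees both sources
print(view_for("designated_safeguarding_lead", "local-0042"))  # sees only the school record
```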

The Laming Review found that by the late 1990s social services had lost all of its human resources and training staff. Workers were overworked and missed things the computer showed them. Computers cannot make people do their job. Access to information does not create accountability for action.

In Victoria's case, one example given was of a computer printout displaying a unique child reference number that noted physical bruising. The links were there to see, for anyone who had access:

“but it seems likely that as the administrative staff were struggling to cope with the backlog of work at the time, it was simply overlooked.”

Children can experience the concentrated harms of poverty more than many in society, some by government policy design. The data on child poverty is sometimes contradictory, but it is clear that child poverty is not only more widespread but deeper now than it was when the Conservative party took power in 2010. And it is important to consider a further factor in the overlap between children's school attendance and child poverty. Amos Toh, senior tech and human rights researcher at Human Rights Watch, wrote recently on AI and public policy:

“As part of the welfare system since 2010, the government has ceded control of the country’s social assistance system to algorithms that often shrink people’s benefits in unpredictable ways. The system, known as Universal Credit, was rolled out in 2013 to improve administrative efficiency and save costs, primarily through automating benefits calculations and maximizing digital “self-service” by benefit claimants.”

Universal Credit has had a range of contested outcomes, but what should be uncontested is that the algorithms it uses are flawed in various ways, in various parts of the system.

So when I heard Shadow Education Secretary Bridget Phillipson say today that, "artificial intelligence (AI) will be used by Labour to spot absence trends to improve coordination between education, social care and the wider services that support families," and announce "plans to legislate for a new register of children in home education," what I think to myself is this. Regardless of which political party is in power, imagine if there were a (duplicated) new database at national level of children who are home educated and/or known to be missing education. Once these statistics are available at national level in one database (after all, only statistics, not named records, might be seen as necessary and proportionate beyond direct care, and in practice the data will always be out of sync with local data), and imagine the money has not been spent on youth services or Early Years intervention, but on 'fix-it-all' AI: what will change?

Fixing Britain is a People Problem

If you have followed the BBC Radio 4 Louise Casey 'Fixing Britain' series, you may or may not agree with all its suggestions, but the summary of the episode on Universal Credit is relevant for all of them. A public policy focus on money and legislation often forgets what policy is about: people. "Policy disconnected from its purpose [people] is going to fail." Policy makers often fail to understand the lives of most of the people the policy is intended to affect. Some of that today looks like this:

On poverty: there are far more food banks in the UK than branches of McDonald's.

On school leavers' aspirations and opportunity: one third of children fail to get the pass in maths and English GCSE that is the gatekeeper to many jobs. AI-supported recruiting tools simply sort and remove applicants who don't have the qualifications. Today one third of children are excluded from education and job opportunities not because they are necessarily unsuitable applicants, but because the grade boundaries are set so that one third get a D or lower.

On bad parenting: too often conflated into this debate by the Children's Commissioner, a child not in school is not a sign of bad parenting, any more than a child sent into school is a sign of good parenting. Even joined-up professional services and home visits can still be fobbed off and still fail to act on signs of neglect and abuse. Parents and children on the radar of social services, on school rolls and in school are known to the system, and yet still it fails due to 'underfunding of social services and the court system'.

On cuts to human support: social worker vacancies were reported at a record high of 7,900 in 2022, a 21% rise on 2021. "The risks have been shown in safeguarding reviews after a series of scandals. A review of Bradford's children's services following Star Hobson's death found record levels of vacancies and sickness among social workers." No amount of data or artificial intelligence can plug that hole in human capacity.

On policy aims: Much of this has been debated again and again. From twenty years ago, to the 2023 House of Commons Committee report on Persistent absence and support for disadvantaged pupils.

On children: above all, contrary to some narratives, what is in a child's best interests is not always being in school. If you ask primary children what they like and don't like about school, the answers may have changed little over time, because what matters most to them is how school makes them feel. Some love sport, drama, music and art and are frustrated there is so little of it, and none at all from age 13, where the curriculum narrows to KS4 too early. For many school is not safe or supportive of their needs. Some are not fine in school and some need specialist support. Expert individuals and organisations identify those needs and are there to help. Children are rarely offered a choice or asked whether they want to be in school, but without a consensual part in it, it doesn't work.

On fault: blame is then too often laid at parents' feet, whether the narrative is that parents are feckless or that they fail to teach children to brush their teeth. In the year of a General Election, how will this land with people who voted for the narrative, "Take back control"?

Control and choice

It was refreshing to hear Phillipson move at least a little away from blame, towards responsibility and trust. Responsibility and trust in the system are, however, unevenly distributed and possibly under-appreciated. Perhaps coincidentally, many of today's parents and teachers are the first who paid the highest costs for, and still owe the largest debts from, their Higher Education. As reported by politics.co.uk, "the results of the annual Higher Education Policy Unit and the Higher Education Academy student experience study in 2017 showed that just 35% of respondents believed their higher education experience represented 'good' or 'very good' value for money." If parents see and act as if education is less of a gift in life or a public good, and more of a package that comes with consumer rights attached, can you blame them? It was Labour that introduced the first student fees for Higher Education.

It was the Conservatives who made 'choice' in the schools market central to their messaging on the role of parents in education for a decade. In the US they are now seeing the results of that 'choice' message, made even more extreme through per-pupil cash transfers and by the political culture-war divisions driven between communities and state schools, which have helped steer state money away from the mainstream state school system.

Phillipson is right on why the current government approach isn’t working, “that broader reality is why the government’s approach – an Attendance Action Alliance – falls so far short of the challenge. Insofar as it tackles anything, it tackles the symptom, not the causes.”

But tackling things with different but equally wrong tools won't be any better.

Failing to heed lessons from infrastructure projects, from pupil data, and from AI past and present, dooms us to repeat the same mistakes. Who is in school is an outcome of the experience of the system at individual level, and of whether it delivers in the context of each child's family and community life and the aims and quality of education. Focusing only on getting children into the classroom is of little value without understanding what the experience is like for them once there. The outcome of children not in school is not only a societal question, but one of long-term sustainability for England's state school system as a whole.


 

Waste products: bodily data and the datafied child

Recent conversations, and the passage of the Data Protection and Digital Information Bill through parliament, have made me think once again about what the future vision for UK children's data could be.

Some argue that processing and governance should be akin to a health model: first do no harm, professional standards, training, ISO lifecycle oversight, audits, and governance bodies to approve exceptional releases and re-use.

Education data is health and body data

Children’s personal data in the educational context is remarkably often health data directly (social care, injury, accident, self harm, mental health) or indirectly (mood and emotion or eating patterns).

Children's data in education is increasingly bodily data. An AI education company CEO was even reported to have considered "bone-mapping software to track pupils' emotions", linking a child's bodily data with data of the mind. In a report written with Pippa King in 2021, The State of Biometrics 2022: A Review of Policy and Practice in UK Education, we mapped the emerging prevalence of biometrics in educational settings. Published on the ten-year anniversary of the Protection of Freedoms Act 2012, it challenged the presumption that data protection law is complied with well, or is effective enough alone to protect children's data or digital rights.

We mustn't forget, when talking about data in education, that children do not go to school in order to produce data or to have their lives recorded, monitored or profiled through analytics. That is not the purpose of their activity. They go to school to exercise their right in law to receive an education; data production is a by-product of the activity they are doing.

Education data as a by-product of the process

Thinking of these together, as children's lives turned into by-products used by others, reminded me of the Alder Hey scandal, whose inquiry reported over twenty years ago on events going back decades. In particular, the inquiry considered the huge store of body parts and residual human tissue of dead children accumulated between 1988 and 1995.

“It studied the obligation to establish ‘lack of objection’ in the event of a request to retain organs and tissue taken at a Coroner’s post-mortem for medical education and research.” (2001)

Thinking about the parallels between children's personal data produced and extracted in education as a by-product, and organ and tissue waste as a by-product of routine medical procedures on the living, highlights several lessons that we could be drawing today about the digital processing of children's lives in data, and about child and parental rights.

Digital bodies of the dead less protected than their physical parts

It also exposes a gap in the actual scenario today: the bodily tissue and the bodily data of deceased children could be being treated differently, since the data protection regime applies only to the living. We should be forward-looking and include rights here that go beyond living "natural persons", because our data does, and that may affect those we leave behind. It is insufficient for researchers and others who wish to use data without restriction to object, because this merely pushes off the problem, increasing the risk of public rejection of 'hidden' plans later (see the DDM second reading briefing on recital 27, p. 30/32).

What could we learn from handling body parts for the digital body?

In the children's organ and tissue scandal, management failed to inform families or to provide them with the suitable advice and support they needed.

Recommendations were made for change on consent to post-mortem examinations of children, and a new approach to consent and an NHS hospital post-mortem consent form for children and all residual tissue were adopted sector-wide.

The retention and the destruction of genetic material are considered in the parental consent process required for any testing that continues to use the bodily material from the child. In the Alder Hey debate this was about deceased children, but similar processes are in place now for obtaining parental consent to the research re-use and retention of waste or 'surplus' tissue from everyday operations on the living.

But new law in the Data Protection and Digital Information Bill is going to undermine current protections for genetic material in the future and has experts in that subject field extremely worried.

The DPDI Bill will consider the data of the dead for the first time

To date, data protection law only covers the data of, or related to, the living: "natural persons". It is ironic that the rest of the Bill does the polar opposite, not in terms of living and dead, but by redefining both personal data and research purposes: it takes what is today personal data 'in scope' of data protection law and places it out of scope, beyond its governance, through exemptions or changes in controller responsibility over time. That means a whole lot of data (about children and the rest of us) will not be covered by data protection law at all. (Yes, those are bad things in the Bill.)

Separately, the new law as drafted will also diverge from that generally accepted scope, and start to bring the 'personal data' of the dead into scope.

Perhaps as a result of limited parliamentary time, the DPDI Bill (see col. 939) is being used to include amendments on the "Retention of information by providers of internet services in connection with death of child", amending the Online Safety Act 2023 to enable Ofcom to give internet service providers a notice requiring them to retain information in connection with an investigation by a coroner (or, in Scotland, procurator fiscal) into the death of a child suspected to have taken their own life. "The new clause also creates related offences."

While this is primarily for the purposes of formal investigation into the role of social media in children's suicide, with directions from Ofcom to social media companies to retain information for one year from the date of the notice, it highlights the difficulty of dealing with data after the death of a loved one.

This problem is perhaps no less acute where a child or adult has left no 'digital handover' via a legacy contact (at Apple, for example, you can assign someone this role in the event of your own death from any cause). But what happens if your relative has not set this up and was the holder of the digital key to your entire family photo history stored on a company's cloud? Is this a question of data protection, of digital identity management, or of physical product ownership?

Harvesting children’s digital bodies is not what people want

In our DDM research and report, "The Words We Use in Data Policy: Putting People Back in the Picture", we explored how the language used to talk about personal data has a profound effect on how people think about it.

In the current digital landscape personal data can often be seen as a commodity: a product to mine, extract, exploit and pass around to others; more of an ownership and IP question, and broadly the U.S. approach. Data collection is excessive, amassed in "Big Data" mountains and "data lakes" described just like the EU food surpluses of the 1970s. Extraction and use without effective controls creates toxic waste, is polluting, and is met with resistance. This environment is not sustainable and not what young people want. Enforcement of the data protection principles of purpose limitation and data minimisation should be helping here, but young people don't see it.

When personal data was instead considered as 'of the body' or bodily residue, as part of our life, the resulting view was that data is something that needs protecting. That need is generally held to be true, and is represented in European human rights-based data laws and regulation. A key aim of protecting data is to protect the person.

In a workshop during that report's preparation, teenagers expressed unease that data about them is being 'harvested' and exploited as human capital, and found that their rights are not adequately enabled or respected. They find data can be used to replace conversation with them and that they are misrepresented by it; at the same time there is a paradox that a piece of data can be your 'life story', a single source of truth advocating on your behalf.

Parental and children’s rights are grafted together and need recognised processes that respect this, as managed in health

Children's competency and parental rights are grafted together in many areas of a child's life and death, so why not by default in the digital environment? What additional mechanisms are needed in a process where both views carry legal weight? What specific challenges need extra attention in data protection law, given that data can be about more than one person, and can be controlled by, and not only be about, the child, with parental rights alongside?

What might we learn for regulating the handling of a child's digital footprint from how health manages residual tissue processing? Who is involved, what are the steps of the process, and how is it communicated onwards as data flows around a system?

Where data protection rules do not apply, certain activities may still constitute an interference with Article 8 of the European Convention on Human Rights, which protects the right to private and family life. (WP 29 Opinion 4/2007 on the concept of personal data p24).

Undoubtedly the datafied child is an inseparable 'data double' of the child. Those who use data about children without their permission, without informing them or their families, and without giving children and parents the tools to exercise their rights to have a say and control their digital footprint in life and in death, might soon find themselves being treated in the same way as the accountable individuals in the Alder Hey scandal were, many years after the events took place.

 


Minor edits and section sub-headings added on 18/12 for clarity plus a reference to the WP29 opinion 04/2007 on personal data.

Automated suspicion is always on

In the Patrick Ness trilogy Chaos Walking, the men can hear each other's every thought, but not the women's.

That exposure of their bodily data and thought makes privacy almost impossible, and leaves no autonomy over their own movement or action. Any man who tries to block access to his thoughts is treated with automatic suspicion.

It has been on my mind since last week's get-together at FIPR. We were tasked before the event to present what we thought would be the greatest risk to rights [each pertinent to the speaker's focus area] in the next five years.

Wendy Grossman said at the event and in her blog, “I’d look at the technologies being deployed around European and US borders to surveil migrants. Migrants make easy targets for this type of experimentation because they can’t afford to protest and can’t vote. “Automated suspicion,” Euronews.next calls it. That habit of mind is dangerous.” Those tools often focus on control of humans’ bodies. They infringe on freedom of movement.

In education, technology companies sell automated suspicion detection tools to combat plagiarism and cheating in exams. Mood detection to spot outliers in concentration. Facial detection to bar the excluded from premises or the lunch queue, or normalise behavioural anomalies, control physical attendance and mental presence. Automated suspicion is the opposite of building trusted human relationships.

I hadn’t had much space to think in the weeks before the event, between legislation, strategic litigation and overdue commitments to reports, events, and to others. But on reflection, I failed to explain why the topic area I picked above all others matters. It really matters.

It is the combination of the growth of children's bodily data processing and the SafetyTech deployed in schools. It's not only that such tools normalise the surveillance of everything children do, send, share or search for on a screen, or that many enable the taking of covert webcam photos, or even the profiles and labels they can create on terrorism and extremism, or that they can out LGBTQ+ teens. It is that at their core lie automated suspicion and automated control: not only of bodily movement and actions, but of thought. And this without any research into, or challenge to, what that does to child development or to children's experience of social interactions and of authority.

First let’s take suspicion.

Suspicion of harms to self, harms to others, harms from others.

The software / systems / tools inspect the text or screen content users enter into devices (including text the users delete, and text before it is encrypted), assuming a set of risks all of the time. When a potential risk is detected, the tools can capture and store a screenshot of the user's screen. Depending on the company's design and the option bought, human company moderators may or may not first review the screenshots (recorded on a rolling basis, also 'without' any trigger, so as to have context ahead of the event) and text captures to verify the triggered events before sending them to the school's designated safeguarding lead. An estimated 1% of all triggered material might be sent on to a school to review and choose whether or not to act on. But regardless of that, the children's data (including screenshots, text, and redacted text) may be stored for more than a year by the company before being deleted. Even content not seen as necessary is kept: "content which poses no risk on its own but is logged in case it becomes relevant in the future".
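To make that mechanism concrete, here is a minimal sketch of a keyword-triggered capture pipeline of the kind described above. The keyword list, the roughly 1% forwarding rate and the retention period are hypothetical illustrations drawn from the paragraph, not any vendor's actual implementation.

```python
from dataclasses import dataclass
from datetime import datetime, timedelta
from typing import List, Optional

KEYWORDS = {"self harm", "bomb", "run away"}     # hypothetical watchlist
RETENTION = timedelta(days=400)                  # illustrative: stored for over a year

@dataclass
class Capture:
    user_id: str
    text: str            # may include deleted or pre-encryption text, per vendor claims
    screenshot: bytes    # rolling screenshots kept for 'context', even without a trigger
    captured_at: datetime
    forwarded_to_school: bool = False

def check_event(user_id: str, text: str, screenshot: bytes) -> Optional[Capture]:
    """Flag any text containing a watched keyword; everything flagged is stored."""
    if any(keyword in text.lower() for keyword in KEYWORDS):
        return Capture(user_id, text, screenshot, datetime.utcnow())
    return None

def moderate(captures: List[Capture]) -> List[Capture]:
    """Company moderators review the flags; only a small fraction (about 1 in 100
    in this sketch) is passed to the school's designated safeguarding lead.
    Everything else is still retained until the retention window expires."""
    forwarded = []
    for i, capture in enumerate(captures):
        if i % 100 == 0:  # stand-in for human review selecting roughly 1%
            capture.forwarded_to_school = True
            forwarded.append(capture)
    return forwarded

def purge(captures: List[Capture], now: datetime) -> List[Capture]:
    """Data is only deleted once the (long) retention period has passed."""
    return [c for c in captures if now - c.captured_at < RETENTION]
```

Even in this toy version, the choices that matter sit outside the code: what goes on the watchlist, who reviews the flags, and how long everything that is never forwarded is kept.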

Predictive threat, automated suspicion

In-school technology is not only capturing what is done by children but what they say they do, or might do, or think of doing. SafetyTech enables companies and school staff to police what children do and what they think, and it is quite plainly designed to intervene in actions and thoughts before things happen. It is predictively policing pupils in schools.

Safeguarding-in-schools systems were already one of my greatest emerging concerns, but I suspect that, coinciding with recent wars, keywords on topics seen as connected to the Prevent programme will be matching at their highest rate since 2016, and the risks that being wrong brings will have increased with it. And while we now have various company CEOs talking about shared concerns, not least the outing of LGBTQ students that the CDT reported this year in the U.S., and a whistleblower who wanted to talk about the sensitive content staff can see from the company side, there is not yet appetite to fix this across the sector. The ICO returned our case for sectoral attention, with no enforcement. DfE guidance still ignores the at-home, out-of-hours contexts, and those systems that can enable school staff or company staff to take photos of children without anyone knowing. We've had lawyers write letters and have submitted advice in consultations, and yet it is ignored to date.

Remember the fake bomb detectors that were really golf ball finders? That's the potential scenario we've got in education with "safeguarding in schools" tech: automated decision-making in black boxes that no one has publicly tested and no one can see inside, with no data on its discriminatory effects through language matching, on its false negatives or false positives, or on the harms it is or is not causing. We have risk-averse institutions made vulnerable to scams. It may be utterly brilliant technology, with companies falling over themselves to offer independent testing that proves it 'works'. I've just not seen any.

Some companies themselves say they need better guidance and agree there are significant gaps. Opendium, one leading provider of internet filtering and monitoring solutions, blogged about views expressed at a 2019 conference held by the Police Service's Counter Terrorism Internet Referral Unit that schools need better advice.

Freedom of Thought

But it's not just about what children do; it is about any mention of what they *might* do, or their opinions of themselves, others or anything else. We have installed systems of thought surveillance in schools, looking for outliers or 'extremists' in different senses, including its now everyday sense, underpinned by the Prevent programme and British Values. These systems do not only expose and create controls over children's behaviours in what they do, but over their thoughts, their searches, what they type and share and send, or even what they don't send and delete.

Susie Alegre, human rights lawyer, describes Freedom of Thought as "protected absolutely in international human rights law. This means that, if an activity interferes with our right to think for ourselves inside our heads (the so-called "forum internum") it can never be justified for any reason. The right includes three elements:

the right to keep our thoughts private
the right to keep our thoughts free from manipulation, and
the right not to be penalised for our thoughts.”

These SafetyTech systems don’t respect any of that. They infringe on freedom of thought.

Bodily data and contextual collapse

Depending on the company, SafetyTech may be built on keyword matching technology commonly used in the gaming tech industry.

Gaming data collected from children is a whole field in its own right: bodily data from haptics, and neuro data. Personal data from immersive environments that in another sector would clearly be classified as "health" data and, in the gaming sector too, falls under the same "special category" or "sensitive data" protections due to its nature, not its context. But it is being collected at scale by companies that are not used to dealing with the demands of professional confidentiality and the concept of 'first do no harm' on which the health sector is founded. Perhaps we're not quite at the everyday-for-everyone, Ready Player One stage yet, but for those communities creating a vast amount of data about themselves, the questions over its oversight, its retention, and perhaps its redistribution to authorities, in particular the police, should be of urgent consideration. And those tools are on the way into the classroom.

At school level, the enormous growth in the transfer of bodily data is not yet from haptics but about bodily harm. A vast sector has grown up to support the digitisation of children's safety: physical harms picked up at home and noticed by staff, or accidents and incidents recorded at school, often including marking a full body outline with where the injury has been.

The issues here, again, are in part created by taking this data beyond the physical environment of a child's direct care and beyond the digital firewalls of child protection agencies and professionals. There are no clear universal policies on sealed records, i.e. not releasing the data of children at risk, or of those who undergo a name change, once it has been added to school information management systems or to commercial company products like CPOMS, MyConcern, or Tootoot.

Similarly, there is no clear national policy on the onward distribution into the National Pupil Database of the records of children in need (CiN) of child protection, which, in my opinion, are inadequately shielded. The CiN census is a statutory social care data return made by every Local Authority to the Department for Education (DfE). It captures information about all children who have been referred to children's social care, regardless of whether further action is taken.

As of September 2022, there were only 70 individuals flagged for shielding, including both current and former pupils, in the entire database. There were 23 shielded pupil records collected by the Department via the January 2022 censuses alone (covering early years, schools and alternative provision).

No statement or guidance is given directly to settings about excluding children from returns to the DfE. As of September 2022, there were 2,538,656 distinct CiN (any 'child in need' referred to children's social care services within the year) / LAC ([state] looked after child) records in the NPD, going back to 2006 and regardless of at-risk status, able to be matched to some home address information via other (non-CiN/LAC) sources. The data is highly sensitive and detailed, including "categories of abuse", capturing not only what has been done to children but what is done by children.

Always on, always watching

The challenge for rights work in this sector is not primarily a technical problem but one of mindset. Do you think this is what schools are for? Are these tools aligned with the aims of education? One SafetyTech company CEO at a conference certainly marketed their tool as something employers want children to get used to: normalising the gaze of authority and the monitoring of your attention span. In true Black Mirror style, you could almost hear him say, "their eyeballs belong to me for fifteen million merits".

Monitoring in-class attendance is moving beyond checking whether you are physically in school, towards checking whether you are mentally present and focused as well.

Education is moving towards an always-on mindset for many, whether through data monitoring and collection with the stated aim of personalising learning, or through the claims of companies that have trialled mood and emotion tech on pupils in England. Facial scanning is sold as a way of seeing whether the class mood is "on point" with learning. Are they 'engaged'? After Pippa King spotted a live trial in the wild starting in UK schools, we at Defend Digital Me had a chat with one company CEO who, after that discussion and the ICO blogpost on 'emotion tech' hype, agreed to stop that product rollout and cut it altogether from their portfolio. Under the EU AI Act it would soon be banned too, to protect children from its harms (UK children would have been included were Britain still under EU law, but post-Brexit they are not).

The Times Education Commission reported in 2021 that Priya Lakhani told one of the Education Commission’s oral evidence sessions that Century Tech, “decided against using bone-mapping software to track pupils’ emotions through the cameras on their computers. Teachers were unhappy about pupils putting their cameras on for safeguarding reasons but there were also moral problems with supplying such technology to autocratic regimes around the world.”

But would you even consider this in an educational context at all?

Apps that blame and shame behaviours using RAG scores exposed to peers on wall-projected charts are certainly already here. How long before such 'emotion' and 'mood' tech emerges in Britain, seeking a market beyond the ban in the EU, joined up with tools that can blame and shame for lapses in concentration?

Is this simply the world now, in which children are supposed to normalise third-party bodily surveillance and behavioural nudge?

That same kind of thinking about 'estimation', 'safety' and 'blame' might well soon be seen in the eye-scanning of drivers in "advanced driver distraction warning systems". Keeping drivers 'on track' may be one area where we will be expected to get used to our eyeballs being monitored, but will it also be used to differentiate and discriminate between drivers for insurance purposes, or to redirect blame for accidents? What about monitoring workers at computer desks, with smoking breaks and distraction costing you in your wage packet?

Body and Mind belong ‘on track’ and must be overseen

This routine monitoring of your face is expanding at pace in policing, but policing the everyday to restrict access will potentially affect the average person far more than the use of facial detection and recognition in every public space. Your face is your passport, and the computer can say no. Age as the gatekeeper of identity for participation in public and private spaces is already very much here online, and will be expanded online in the UK by the Online Safety Act (noting that other countries have realised its flaws and foolishness). Age verification and age assurance, if given any weight, will inevitably lead to the balkanisation of the Internet, to the throttling of content through prioritisation of who is permitted to do or see what, and to control of content moderation.

In UK night clubs, age verification is being normalised through facial recognition. Soon, if the Data Protection and Digital Information Bill passes as drafted, the only permitted Digital ID, for purposes (for now) limited to rental and employment checks, will be the accredited government ID. But scope creep will inevitably move from what is possible to what is required, across every aspect of our lives where identity is made an obligation for proof of eligibility.

Why all this matters is that we see the same direction of travel over and over again. Once "the data" is collected and retained, there is an overwhelming desire down the line to say, well, now we've got it, how can we use it? Increasingly that means joining it all up, and then passing it around to others. And the DPDI Bill takes away the safeguards around that over time (see KC opinion, para 20, p.6).

It is something data protection law, and the lack of its enforcement, is already failing to protect us from adequately, because excessive data retention should be impossible under the data minimisation principle and purpose limitation, but controllers argue linked data 'is not new data'. What we should see is enforcement against the excessive retention of data that creates 'new knowledge' going beyond our reasonable expectations; instead we see the government and companies gaining ever greater power to intervene in the lives of the data subjects, the people. The draft new law does the opposite.

Who decides what ‘on track’ looks like?

School SafetyTech is therefore the current embodiment of my greatest area of concern for children's rights in educational settings right now, because it is an overlapping technology that monitors both what you do and when, and claims to be able to put the thinking behind it in context. Tools in schools are moving towards prediction and intervention, and towards combinations of bodily control, thought, mood and emotion. They are shifting from the server to the device, and go with you everywhere your phone goes. 'Interventions' bring a whole new horizon of potential infringements of rights and outcomes, and questions of who decides what can be used for what purposes in a classroom, in loco parentis.

Filtering and monitoring technology in school "SafetyTech" blocks content and profiles the user over time. This monitoring of bodily behaviours, actions and thoughts leads to staff acting on automated suspicion. It can lead to imposing control over bodily movement and over thoughts and actions. It is adopted at scale for millions of children and students across the UK, without oversight or published universal safety standards.

This is not a single technology, it’s a market and a mindset.

Who decides what is 'suitable', what is 'on track', and where 'intervention' is required, is built into the design. It is not a problem of technology causing harm, but of social and political choices and values embodied in technology that can be used to cause harm: for example, in identifying and enabling the persecution of Muslim students who are fasting during Ramadan, based on their dining records. In the UK we already have all the same tools in place.

'Who does any technology serve?' is a question we have not yet resolved in education in England. The best interests of the child, the teacher, the institution, the State, or the company that built it? Interests and incentives may overlap or may be contradictory. But who decides, and who is given the knowledge of how that was decided? As tech is increasingly designed to run without any human intervention, the effects of the automated decisions can, in turn, be significant, and happen at speed and scale.

Patrick Ness coined the phrase, "The Noise is a man unfiltered, and without a filter, a man is just chaos walking." Controlling chaos may be a desirable government aim, but at what cost, and to whose freedoms?

AI in the public sector today is the RAAC of the future

Reinforced Autoclaved Aerated Concrete (RAAC) used in the school environment is giving our Education Minister a headache. Having been the first to address the problem most publicly, she is coming under fire as responsible for failure: for ministerial failure to act on it over thirteen years of Conservative government since 2010, and for the failure of the fabric of educational settings itself.

Decades after buildings’ infrastructure started using RAAC, there is now a parallel digital infrastructure in educational settings. It’s worth thinking about what’s caused the RAAC problem and how it was identified. Could we avoid the same things in the digital environment and in the design, procurement and use of edTech products, and in particular, Artificial Intelligence?

Where has it been used?

In the procurement of school infrastructure, RAAC has been integrated into some parts of the everyday school estate, especially in large flat roofs built around the 1960s-80s. It is now hard to detect, and hard to remedy or remove without significant effort. There was short-term thinking, short-term spending, and no strategy for its full life cycle or end-of-life expectations. It is going to be expensive, slow, and difficult to find and fix.

Where is the risk and what was the risk assessment?

The two most well-known recent cases, the 2016 Edinburgh school masonry collapse and the 2018 roof incident, happened in the early morning when no pupils were present, but, according to the 2019 safety alert by SCOSS, "in either case, the consequences could have been more severe, possibly resulting in injuries or fatalities. There is therefore a risk, although its extent is uncertain."

That risk has been known for a long time, as today's education minister Gillian Keegan rightly explained in that interview, before airing her frustration. Perhaps it was not seen as a pressing priority because it was not seen as a new problem. In fact, locally it often isn't seen much at all, as it is either hidden behind front-end facades or built into hard-to-see places, like roofs. But already "in the 1990s structural deficiencies became apparent" (discussed in papers by the Building Research Establishment (BRE) in the 1990s and again in 2002).

What has changed, according to expert reports, is that the problems are no longer visibly behaving as expected in advance, giving time for mitigation, in what had previously been one-off catastrophic incidents. What once affected only a few could now affect the many, at scale and without warning. The most recent failures show there is no longer a reliable margin in which to act before parts of the mainstream state education infrastructure pose children a threat to life.

Where is the similarity in the digital environment?

AI is the RAAC of another Minister's future: it is often similarly sold today as cost-saving, quick and easy to put in place. You might need fewer people to run it than the available alternatives would require.

AI is being widely introduced, at speed, into children's private and family life in England, through its procurement and application in the infrastructure of public services: in education, children's services, policing, and welfare. Some companies claim to be able to identify mood or autism, or to profile and influence mental health. Children rarely have any choice or agency to control its often untested effects or outcomes on them, in non-consensual settings.

If you’re working in AI “safety” right now, consider this a parable.

  • There are plenty of people pointing out risk in the current adoption of AI into UK public sector infrastructure; in schools, in health, in welfare, and in prisons and the justice system;
  • There are plenty of cases where harm is very real, but first seen by those in power as affecting the marginalised and minority;
  • There are no consistent published standards or obligations on transparency or accountability to which AI sellers must hold their products before procurement and deployment on people;
  • And there are no easily accessible records of what type of AI is being procured and built into which public infrastructure, making tracing and remedy even harder in the case of product recall.

The objectives of any company, the State, service users, the public and investors may not be aligned. Do investors have a duty to ensure that artificial intelligence is developed in an ethical and responsible way? Prioritising short-term economic gain and convenience ahead of human impact or the long-term public interest has resulted in parts of schools' infrastructure collapsing. And some AI is already going the same way.

The Cardiff Data Justice Lab together with Carnegie Trust have published numerous examples of cancelled systems across public services. “Pressure on public finances means that governments are trying to do more with less. Increasingly, policymakers are turning to technology to cut costs. But what if this technology doesn’t work as it should?” they asked.

In places where similar technology has been in place longer, we already see the impact and harm to people. In 2022, the Chicago Sun Times published an article noting that, “Illinois wisely stopped using algorithms in child welfare cases, but at least 26 states and Washington, D.C., have considered using them, and at least 11 have deployed them. A recent investigation found they are often unreliable and perpetuate racial disparities.” And the author wrote, “Government agencies that oversee child welfare should be prohibited from using algorithms.”

Where are the parallels in the problem and its fixes?

It's also worth considering how AI can be "removed" or stopped from working in a system: often not through removal at all, but simply by throttling or shutting off that functionality. The problematic parts of the infrastructure remain in situ, but can't easily be taken out after being designed in. Whole products may also be difficult to remove.

The 2022 Institution of Structural Engineers' report summarises the challenge now: how to fix the current RAAC problems. Think about what each of these would mean for fixing a failure of digital infrastructure:

  • Positive remedial supports and Emergency propping, to mitigate against known deficiencies or unknown/unproven conditions
  • Passive, fail safe supports, to mitigate catastrophic failure of the panels if a panel was to fail
  • Removal of individual panels and replacement with an alternative solution
  • Entire roof replacement to remove the ongoing liabilities
  • Periodic monitoring of the panels for their remaining service life

RAAC has not become a risk to life; it already was one, by design. While still recognised as a 'good construction material for many purposes', it has been widely used in unsafe ways in the wrong places.

RAAC planks made fifty years ago did not have the level of quality control we would demand today, and yet the material was procured and put in place for decades after it was known to be unsafe for some uses, with risk assessments saying so.

RAAC was given an exemption from the commonly used codes of practice of reinforced concrete design (RC).

RAAC is scattered among non-RAAC infrastructure, making finding and fixing it, or removing it, very much harder than if it had been recorded in a register that made it easily traceable.

RAAC developers and sellers may no longer exist or have gone out of business without any accountability.

Current AI discourse should be asking not only for retrospective accountability, or even life-cycle accountability, but also what accountable AI looks like by design, and how you guarantee it.

  • How do we prevent the risk of harm to people from the poor quality of systems designed to support them? What will protect people from being affected by unsafe products in those settings in the first place?
  • Are the incentives in procurement correct, to enable adequate risk assessment to be carried out by those who choose to use it?
  • Rather than accepting risk and retroactively expecting remedial action across all manner of public services in future—ignoring a growing number of ticking time bombs—what should public policy makers be doing to avoid putting them in place?
  • How will we know where the unsafe products were built into, if they are permitted then later found to be a threat-to-life?
  • How is safety or accountability upheld for the lifecycle of the product if companies stop making it, or go out of business?
  • How does anyone working with systems applied to people, assess their ongoing use and ensure it promotes human flourishing?

In the digital environment we still have a margin in which to act, to ensure the safety of the everyday parts of institutional digital infrastructure in mainstream state education and prevent harm to children, whether that harm comes from parts of a product's code, from use in the wrong way, or from entire products. AI is already used in the infrastructure of schools' curriculum planning and curriculum content, and in steering children's self-beliefs and behaviours, and the values of the adult society these pupils will become. Some products have been oversold as AI when they weren't; overhyped, overused and under-explained, their design hidden away and kept from sight or independent scrutiny, some with real risks and harms. Right now, some companies and policy makers are making familiar errors, 'safety-washing' AI harms, ignoring criticism and pushing it off as someone else's future problem.

In education, they could learn lessons from RAAC.


Background references

BBC Newsnight Timeline: reports from as far back as 1961 about aerated concrete concerns. 01/09/2023

BBC Radio 4 The World At One: Was RAAC mis-sold? 04/09/2023

CROSS (2020), Failure of RAAC planks in schools: pre-1980 RAAC roof planks are now past their expected service life.

A 2019 safety alert by SCOSS, “Failure of Reinforced Autoclaved Aerated Concrete (RAAC) Planks” following the sudden collapse of a school flat roof in 2018.

The Local Government Association (LGA) and the Department for Education (DfE) then contacted all school building owners and warned of ‘risk of sudden structural failure.’

In February 2022, the Institution of Structural Engineers published a report, Reinforced Autoclaved Aerated Concrete (RAAC) Panels Investigation and Assessment with follow up in April 2023, including a proposed approach to the classification of these risk factors and how these may impact on the proposed remediation and management of RAAC. (p.11)

image credit: DALL·E 2 OpenAI generated using the prompt “a model of Artificial Intelligence made from concrete slabs”.

 

Ensuring people have a say in future data governance

Based on a talk prepared for an event in parliament, hosted by Connected By Data and chaired by Lord Tim Clement-Jones, focusing on the Data Protection and Digital Information Bill, on Monday 5th December 17:00-19:00. “Ensuring people have a say in future data governance”.

Some reflections on data in schools: (a) general issues; (b) the direction of travel the Government is going in; and (c) what should happen, in the Bill or more widely.

Following Professor Sonia Livingstone who focussed primarily on the issues connected with edTech, I focussed on the historical and political context of where we are today, on ‘having a say’ in education data and its processing in, across, and out of, the public sector.


What should be different with or without this Bill?

Since I ran out of time yesterday, I'm going to put first what I didn't get around to: the key conclusions that point to what is possible with or without new data protection law. We should be better at enabling the realisation of existing data rights in the education sector today. The state and extended services could build tools for schools to help them act as controllers, and for children to realise their rights, like a PEGE (a personalised exam grade explainer to show exam candidates what data was used to calculate their grade and how). Data usage reports should be made available at least annually from schools, to help families understand what data about their children has gone where, and methods that enable the child or family to correct errors or exercise a Right to Object should be mandatory in schools' information management systems. Supplier standards on accuracy and error notifications should be made explicit and statutory, and supplier service level agreements should be affected by repeated failures.
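To make the 'data usage report' idea concrete, here is a minimal sketch of what one annual, family-readable record might contain for a single pupil. The field names, recipient and example entry are hypothetical illustrations, not an existing schema or DfE specification.

```python
from dataclasses import dataclass, field
from datetime import date
from typing import List

@dataclass
class DataFlow:
    recipient: str          # who received the data (a supplier, a government department)
    purpose: str            # the stated purpose of the transfer
    categories: List[str]   # categories of data shared (attendance, attainment, SEND...)
    lawful_basis: str       # the basis the school relies on for the transfer
    shared_on: date

@dataclass
class PupilDataUsageReport:
    pupil_reference: str    # a local identifier, not a national one
    school_year: str
    flows: List[DataFlow] = field(default_factory=list)

    def summary(self) -> str:
        """A plain-language summary a family could be given once a year."""
        lines = [f"Data about pupil {self.pupil_reference} in {self.school_year}:"]
        for f in self.flows:
            lines.append(f"- {', '.join(f.categories)} sent to {f.recipient} on "
                         f"{f.shared_on} for '{f.purpose}' (basis: {f.lawful_basis})")
        return "\n".join(lines)

# Hypothetical example entry
report = PupilDataUsageReport("local-0042", "2022/23")
report.flows.append(DataFlow(
    recipient="Department for Education (school census)",
    purpose="statutory census return",
    categories=["attendance", "attainment"],
    lawful_basis="legal obligation",
    shared_on=date(2023, 1, 19),
))
print(report.summary())
```

The same record structure could feed the correction and Right to Object mechanisms mentioned above, since a family can only challenge data flows it can actually see.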

Where is the change needed to create the social licence for today's practice, even before we look to the future?

"Ensuring people have a say in future data governance". There has been a lot of asking lots of people for a say in the last decade. When asked, the majority of people generally want the same things, both those who are willing and those less willing to have personal data about them re-used that was collected for administrative purposes in the public sector: to be told what data is collected for and how it is used, to opt in to re-use, to be able to control distribution, and to have protections for redress and against misuse strengthened in legislation.

Read Doteveryone's public attitudes work, or the Ipsos MORI polls, or work by Wellcome (see below), or even the care.data summaries.

The red lines in the "Dialogues on Data" report, from workshops carried out across different devolved regions of the UK for the 2013 ADRN, remain valid today (and these concerned the reuse of de-identified, linked public administrative datasets by qualified researchers in safe settings, not even raw identifying data), in particular in relation to:

  • Creating large databases containing many variables/data from a large number of public sector sources
  • Allowing administrative data to be linked with business data
  • Linking of passively collected administrative data, in particular geo-location data

“All of the above were seen as having potential privacy implications or allowing the possibility of reidentification of individuals within datasets. The other ‘red-line’ for some participants was allowing researchers for private companies to access data, either to deliver a public service or in order to make profit. Trust in private companies’ motivations were low.”

Much of this reflects what children and young people say as well. The RAEng (2010) carried out engagement work with children on health data, Privacy and Prejudice: young people's views on the development and use of Electronic Patient Records. They were very clear about wanting to keep their medical details under their own control and away from the 'wrong hands', which include potential employers, commercial companies and parents.

Our own engagement work with a youth group aged 14-25, at a small scale, was published in 2020 in our report, The Words We Use in Data Policy: Putting People Back in the Picture, and reflected what the Office for Statistics Regulation went on to publish in its own 2022 report, Visibility, Vulnerability and Voice (a framework to explore whether current statistics are helping society to understand the experiences of children and young people in all aspects of their lives). Young people worry about misrepresentation, about the data being used in place of conversations about them to take decisions that affect their lives, and about the power imbalance it creates without practical routes for complaint or redress. We all agree children's voice is left out of the debate on data about them.

Parents are left out too. Defend Digital Me commissioned a parental survey via Survation (2018): under 50% felt they had sufficient control of their child's digital footprint, and two-thirds had not heard of the National Pupil Database or its commercial reuse.

So why is it that the public voice, loud and clear, is ignored in public policy and ignored in the drafting of the Data Protection and Digital Information Bill?

When it comes to education, debate should start with children’s and family rights in education, and education policy, not about data produced as its by-product.

Article 26 of the Universal Declaration of Human Rights grafts a parent's right, to choose the kind of education given to their child, onto the child's right to education, and it defines the purposes of education.

"Education shall be directed to the full development of the human personality and to the strengthening of respect for human rights and fundamental freedoms. It shall promote understanding, tolerance and friendship among all nations, racial or religious groups, and shall further the activities of the United Nations for the maintenance of peace." Becoming a set of data points for product development or research is not the reason children go to school and hand over their personal details in the admissions process at all.

The State of the current landscape

To realise change, we must accept the current state of play and current practice. This includes a backdrop of trying to manage data well in the perilous state of public infrastructure, shrinking legal services and legal aid for children, ever-shrinking educational services in and beyond mainstream education, staff shortages and retention issues, and the lack of ongoing training or suitable and sustainable IT infrastructure for staff and learners.

Current institutional guidance and national data policy in this field are poor, and take the perspective of the educational setting rather than the person.

Three key issues are problems from the top down and across systems:

  • Data repurposing: for example, SATs Key Stage 2 tests, which are supposed to be measures of school performance rather than individual attainment, are re-used as risk indicators in Local Authority datasets to identify families for intervention, which the data was never designed for.
  • Vast amounts of data distribution and linkage with other data: policing, economic drivers (LEO), and broad Local Authority data linkage without consent, for purposes that exceed the original collection purposes parents are told about, used, as in Kent or Camden, "for profiling the needs of the 38,000 families across the borough", plus further automated decision-making.
  • Accuracy in education data is a big issue, in part because families never get to see the majority of data created about a child, much of which is opinion and not submitted by them. For example, the Welsh government fulfilled a Subject Access Request from one parent concerned with their own child's record, and ended up revealing that, thanks to a Capita SIMS coding error, every child in 2010 had been wrongly recorded as having been in care at some point in the past. Procurement processes should build penalties for systemic mistakes, and lessons learned like this, into service level agreements; instead we seem to allow the same issues to repeat over and over again.

What the DfE Does today

Government needs to embrace the fact that it can only get data right if it does the right thing. That includes policy that upholds the law by design, and it requires change in government’s own purposes and practice.

National pupil data is a bad example from the top down. The ICO’s 2019-20 audit of the Department for Education has not yet been published in full, but its findings included failings such as no Record of Processing Activity (ROPA), an inability to demonstrate compliance, and no fair processing. All of which will be undermined further by the Bill.

The Department for Education has been giving away 15 million people’s personal confidential data since 2012 and has never told them. They know this. They choose to ignore it. And on top of that, they did not inform the people who have been in school since then that Mr Gove changed the law. So now over 21 million people’s pupil records are being given away to companies and other third parties, for use in ways we do not expect, and the data is misused too. In 2015, more secret data sharing began, with the Home Office, and another pilot followed in 2018 with the DWP.

Government set out to change the law on education admin data in 2012 and got it wrong. Education data alone is a sin bin of bad habits and a complete lack of public and professional engagement, before we even start to address data quality and accuracy, or backwards-looking policy built on bad historic data.

“The Commercial department do not have appropriate controls in place to protect personal data being processed on behalf of the DfE by data processors.” (ICO audit of the DfE, 2020)

Gambling companies misused access to learner records for over two years, as exposed in 2020 by journalists at the Sunday Times.

The government wanted the Department for Education to collect nationality data for the purposes of another department (the Home Office) and got it very wrong. People boycotted the collection until it was killed off and the data later destroyed.

Government changed the law on Higher Education in 2017 and got it wrong. Now third parties pass around named equality monitoring records covering religion, sexual orientation and disability, and they are stored forever on named national pupil records. The Department for Education (DfE) now holds sexual orientation data on almost 3.2 million people, and religious belief data on 3.7 million.

After the ICO published the summary findings of its compulsory audit of the Department for Education, the question now is what the Department and government will do to address the 139 recommendations for improvement, over 60% of which were classified as urgent or high priority. Is the government intentional about change? We don’t think so at defend digital me, so we are, and we welcome any support for our legal challenge.

Before we write new national law we must recognise and consider UK inconsistency and differences across education

Existing frameworks of law, statutory guidance and recommendations need to be understood in the round: for example devolved education, including the age at which a child has capacity to enter into a contract in Scotland (16); the geographical application of the Protection of Freedoms Act 2012; and the Prevent Duty since 2015 and its wider effects from profiling children in counter-terrorism, which reach beyond poor data protection and impacts on privacy (see the UN Special Rapporteur’s 2014 report on children’s rights and freedom of expression). A plethora of Council of Europe work applicable to education also applies to the UK as a member state: guidelines on data protection, AI, human rights, the rule of law, and the role of education in the promotion of democratic citizenship and as a protection against authoritarian regimes and extreme nationalism.

The Bill itself
The fundamental principles of the GDPR and data protection law are undermined further from an already weak starting point, since the 2018 Bill adopted exemptions in immigration and law enforcement that other countries did not introduce.

  • The very definitions of personal and biometric data need close scrutiny.
  • Accountability is weakened (DPOs, DPIAs, prior consultation for high-risk processing, and ROPAs no longer necessary)
  • Purpose limitation is weakened (legitimate interests and additional conditions for LI)
  • Redress is missing (Children and routes for child justice)
  • Henry VIII powers on customer data and business data must go.
  • And of course it only covers the living. What about misuse of children’s data that causes distress and harms human dignity but is not strictly covered by UK data protection law, such as the children whose identities were used by undercover police in the SpyCops scandal? Recital 27 of the GDPR permits a possible change here.

Where are the Lessons Learned reflected in the Bill?

This Bill should be able to look at recent ICO enforcement action or judicial reviews to learn where and what is working, and not working, in data protection law. Lessons learned should be plentiful on public communications and fair processing, on the definitions of research, on discrimination, accuracy and bad data policy decisions. But where in the Bill are the lessons learned from health data sharing: why the care.data programme ran into trouble and similar failures were repeated in the most recent GP patient data grab, or Google DeepMind and the Royal Free? In policing, from the Met Police Gangs Matrix? In home affairs, from the judicial review launched to challenge the lawfulness of an algorithm used by the Home Office to process visa applications? Or in education, from the summer 2020 exams fiasco?

The major data challenges that result from government policy are not about data at all, but about bad policy decisions, which invariably involve data because of ubiquitous digital-first policy, public administration, and the nature of digital record keeping. In education, examples include:

  • Partisan political agendas: e.g. the narrative around absence numbers makes no attempt to disaggregate the “expected” absence rate from anything on top, and presenting as fact the idea that 100,000 children have not returned to school, “as a result of all of this”, is badly misleading to the point of being a lie.
  • Policy that ignores the law: despite the law that profiling children should not be routine, the biggest driver of profiling children in the state education sector is the Progress 8 measure, about which Leckie and the late Harvey Goldstein (2017) concluded, in their work on the evolution of school league tables in England 1992-2016 (‘Contextual value-added’, ‘expected progress’ and ‘progress 8’), that “all these progress measures and school league tables more generally should be viewed with far more scepticism and interpreted far more cautiously than have often been to date.”

The Direction of Travel
Can any new consultation or debate on the changes promised in data protection reform ensure people have a say in future data governance, the topic for today, and what difference, if any, would it make?

Children’s voice in, and the framing of children by, the National Data Strategy is woeful: children are projected either as victims or as potential criminals. That must change.

Data protection law has existed in much the same form as today since 1984. Yet scant attention is paid to it in ways that meet public expectations, fulfil parental and children’s expectations, or respect the basic principles of the law today. In England we have allowed technologies to enter classrooms without any grasp of their scale or risks, in a way that even Scotland has not, given its Local Authority oversight and controls over procurement standards. Emerging technologies are all in classrooms interfering with real children’s lives and development now, not in some far-off imagined future: tools that claim to identify emotion and mood or to use brain scanning, the adoption of e-proctoring, and mental health prediction apps that are treated very differently from how they would be in the NHS digital environment, with its ethical oversight and quality standards to meet.

This goes beyond data protection into procurement, standards, safety, understanding pedagogy, behavioural influence, policy design and digital strategy. It is, furthermore, naive to think this legislation, if it happens at all, will be the piece of law that promotes children’s rights when the others in play from the current government do not: the revision of the Human Rights Act, the recent PCSC Bill clauses on data sharing, and the widespread use of exemptions and excuses around data for immigration enforcement.

Conclusion
If policymakers who want more data usage treat people as producers of a commodity, and continue to ignore the public’s “say in future data governance”, then we will keep seeing boycotts and opt-outs, and create mistrust in government as well as in data conveners and controllers, widening the data trust deficit**. The culture must change in education and other departments.

Overall, we must reconcile the focus of the UK National Data Strategy with a rights-based governance framework, to move the conversation forward in ways that work for the economy and research, with the human flourishing of our future generations at its heart. Education data plays a critical role in social, economic, democratic and even security policy today, and needs urgent and critical attention.


References:

Local Authority algorithms

The Data Justice Lab has researched how public services are increasingly automated and how government institutions at different levels are using data systems and AI. Their latest report, Automating Public Services: Learning from Cancelled Systems, looks at another current development: the cancellation of automated decision-making systems (ADS) that did not fulfil their goals, led to serious harm, or met significant opposition through community mobilization, investigative reporting, or legal action. The report provides the first comprehensive overview of systems being cancelled across western democracies.

New Research Report: Learning from Cancelled Systems

The Children of Covid: Where are they now? #CPC22

At Conservative Party Conference (“CPC22”) yesterday, the CSJ Think Tank hosted an event called The Children of Lockdown: Where are they now?

When the speakers were finished, and other questions had been asked, I had the opportunity to raise the following three points.

They matter to me because I am concerned that bad policy-making for children will come from the misleading narrative based on bad data. The data used in the discussion is bad data for a number of reasons, based on our research over the last 4 years at defenddigitalme, and previously as part of the Counting Children coalition with particular regard to the Schools Bill.

The first is a false fact that has often been bandied about over the last year in the media and in Parliamentary debate, and that the Rt Hon Sir Iain Duncan Smith MP repeated in opening the panel discussion: that 100,000 children have not returned to school, “as a result of all of this“.

Full Fact has sought to correct this misrepresentation by individuals and institutions in the public domain several times, including one year ago today, when a Sunday Times article published on 3 October 2021 claimed new figures showed “that between 95,000 and 135,000 children did not return to school in the autumn term”, figures credited to the Commission on Young Lives, a task force headed up by the former Children’s Commissioner for England. Anne Longfield then told Full Fact that on 16 September 2021, “the rate of absence was around 1.5 percentage points higher than would normally be expected in the autumn term pre-pandemic.”

Full Fact wrote, “This analysis attempts to highlight an estimated level of ‘unexplained absence’, and comes with a number of caveats—for example it is just one day’s data, and it does not record or estimate persistent absence.”

There was no attempt made in the CPC22 discussion to disaggregate the “expected” absence rate from anything on top, and presenting as fact the idea that 100,000 children have not returned to school “as a result of all of this” is misleading.

Suggesting this causation for 100,000 children is wrong for two reasons. The first is that it fails to account for the number of children within that figure who were out of school before the pandemic, and the reasons for that. The CSJ’s own report, published in 2021, said that, “In the autumn term of 2019, i.e pre-Covid 60,244 pupils were labeled as severely absent.”

Whether or not it is the same children who were out of school before and afterwards also matters if causation is to be claimed. This named, pupil-level absence data is already available for every school child at national level on a termly basis, alongside the other personal details collected termly in the school census, among other collections.

Full Fact went on to say, “The Telegraph reported in April 2021 that more than 20,000 children had “fallen off” school registers when the Autumn 2020 term began. The Association of Directors of Children’s Services projected that, as of October 2020, more than 75,000 children were being educated at home. However, as explained above, this is not the same as being persistently absent.”

The second point I made yesterday was that the definition of persistent absence has changed three times since 2010, so that children are now classified as persistently absent more quickly, at 10% of sessions missed, than when the threshold was 20% or more of sessions missed.

(It’s also worth noting that data are inconsistent over time in another way too. The 2019 Guide to Absence Statistics draws attention to the fact that, “Year on year comparisons of local authority data may be affected by schools converting to academies.”)

And third and finally, I pointed out a further problem we have found in counting children correctly: Local Authorities do this in different ways. Some count each actual child once in the year in their data; some count each time a child changes status (i.e. a move from mainstream into Alternative Provision and then to Elective Home Education could see the same child counted three times in total, once in each dataset across the same year); and some count full-time-equivalent funded places (i.e. if five children each have one day a week outside mainstream education, they would be counted as only one single full-time child in total in the reported data). A small sketch of how far these conventions can diverge follows below.
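
To make the divergence concrete, here is a minimal, hypothetical sketch; the children, settings and FTE fractions are invented for illustration and do not come from any Local Authority dataset:

```python
# Minimal, hypothetical sketch of three Local Authority counting conventions.
# All identifiers, settings and FTE fractions below are invented for illustration.

placements = [
    # Child "A" moves through three settings in the same year.
    ("A", "mainstream", 1.0),
    ("A", "alternative_provision", 1.0),
    ("A", "elective_home_education", 1.0),
    # Five children each spend one day a week (0.2 FTE) outside mainstream.
    ("B", "alternative_provision", 0.2),
    ("C", "alternative_provision", 0.2),
    ("D", "alternative_provision", 0.2),
    ("E", "alternative_provision", 0.2),
    ("F", "alternative_provision", 0.2),
]

# Convention 1: count each actual child once in the year.
children_once = len({child for child, _, _ in placements})

# Convention 2: count each change of status, once per dataset.
status_counts = len(placements)

# Convention 3: count full-time-equivalent funded places outside mainstream.
fte_places = sum(fte for _, setting, fte in placements if setting != "mainstream")

print(children_once, status_counts, fte_places)  # 6, 8 and 3.0 for the same cohort
```

Three conventions applied to one and the same small cohort give three different headline figures, which is why totals aggregated across authorities without harmonisation cannot be compared meaningfully.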

Put together, this all means not only that the counts are wrong, but that the very idea of “ghost children” who simply ‘disappear’ from school, with nothing known about them anywhere at all, is a fictitious and misleading presentation.

All schools (including academies and independent schools) must notify their local authority when they are about to remove a pupil’s name from the school admission register under any of the fifteen grounds listed in Regulation 8(1) a-n of the Education (Pupil Registration) (England) Regulations 2006. On top of that, children are recorded as Children Missing Education (“CME”) where the Local Authority decides a child is not in receipt of suitable education.

For those children, the processing of personal data of children not in school by Local Authorities is already required under s.436A of the Education Act 1996, the duty to make arrangements to identify children not receiving education.

Research done as part of the Counting Children coalition with regard to the Schools Bill has found that every Local Authority which has replied to date (a 67% response rate to an FOI sent on July 5, 2022) upholds its statutory duty to record the children who either leave state education or are otherwise found to be missing education. Every Local Authority has a record of these children, by name, together with much more detailed data.**  The GB News journalist on the panel said she had taken her children out of school and the Local Authority had not contacted her. But as a home-educating audience member then pointed out, that does not mean the LA did not know about her decision, since it would already have her children’s details recorded. There is already law in place on what LAs must track. Whether and how well an LA does its job was beyond this discussion, but the suggestion that more law is needed to make authorities collect the same data as is already required is superfluous.

This is not only about the detail of context and nuance in the numbers and their debate; it substantially alters the understanding of the facts. Getting this right matters, so that bad policy is not made based on bad data and a misunderstanding of conflated causes.

Despite this, in closing, Iain Duncan Smith asked the attendees to go out from the meeting and evangelise about these issues. If they do so based on his selection of ‘facts’, they will spread misinformation.

At the event, I did not mention two further parts of this context that matter if policymakers and the public are to find solutions to what is no doubt an important series of problems, problems that must not be manipulated and presented as if they were entirely the result of the pandemic; and not only the pandemic, but lockdowns specifically.

Historically, the main driver of absence is illness. In 2020/21, this was 2.1% across the full year, a reduction on the rates seen before the pandemic (2.5% in 2018/19).

A pupil on roll is identified as a persistent absentee if they miss 10% or more of their possible sessions (one school day has two sessions, morning and afternoon). 1.1% of pupil enrolments missed 50% or more of their possible sessions in 2020/21. Children with additional educational and health needs or disabilities have higher rates of absence. During Covid, the absence rate for pupils with an EHC plan was 13.1% across 2020/21.
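
As a rough worked example of that session arithmetic, and of how the changing threshold shifts who is labelled persistently absent, here is a minimal sketch; it assumes a 190-day school year and an invented absence count, not DfE figures:

```python
# Rough worked example of the persistent-absence arithmetic.
# Assumes a 190-day school year (two sessions per day); the 40 missed
# sessions are an invented figure for illustration only.

POSSIBLE_SESSIONS = 190 * 2  # morning and afternoon sessions across the year

def absence_rate(sessions_missed: int) -> float:
    return sessions_missed / POSSIBLE_SESSIONS

def persistently_absent(sessions_missed: int, threshold: float = 0.10) -> bool:
    # Current definition: 10% or more of possible sessions missed.
    return absence_rate(sessions_missed) >= threshold

missed = 40  # about 10.5% of 380 possible sessions
print(persistently_absent(missed))                  # True under the 10% definition
print(persistently_absent(missed, threshold=0.20))  # False under the earlier 20% definition
```

The same pupil record counts as persistently absent under the current 10% definition but not under the older 20% one, which is one reason naive comparisons across the definition changes mislead.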

“Authorised other reasons has risen to 0.9% from 0.3%, reflecting that vulnerable children were prioritised to continue attending school but where parents did not want their child to attend, schools were expected to authorise the absence.” (DfE data, academic year 2020/21)

While the panel made several references to the impact of the pandemic on children’s poor mental health, no one mentioned the 70% cut to youth services’ funding over ten years, which allowed CAMHS funding and service provision to wither and fail children well before 2020. The pandemic has exacerbated children’s pre-existing needs, needs the government has not only failed to meet since, but has actively reduced provision for.

As someone with Swedish relatives, it was further frustrating to hear Sweden’s pandemic approach presented as comparable with the UK’s, and the suggestion that, in effect, they managed it ‘better’. It seems absurd to me to compare the UK uncritically with a country with the population density of Sweden. But if we *are* going to make comparisons with other countries, they should be made with a fuller understanding of context, all of their data, and the caveats needed for comparison to be meaningful.

I was somewhat surprised that Iain Duncan Smith also failed to acknowledge, even once, that thousands of people in the UK have died and continue to die or have lasting effects as a result of and with COVID-19. According to the King’s Fund report, “Overall, the number of people who have died from Covid-19 to end-July 2022 is 180,000, about 1 in 8 of all deaths in England and Wales during the pandemic.” Furthermore, in England and Wales, “The pandemic has resulted in about 139,000 excess deaths“, and “Among comparator high-income countries (other than the US), only Spain and Italy had higher rates of excess mortality in the pandemic to mid-2021 than the UK.” I believe that if we are going to compare ‘lockdown success’ at all, we should look at the wider comparable data before doing so. He might also have chosen to mention, alongside this, the UK success story of research and discovery, and the NHS vaccination programme.

And no mention at all was made of the further context that, while much was made of the economic harm of the pandemic’s impact on children, “The Children of Lockdown” are also “The Children of Brexit”. It is non-partisan to point out this fact, and, I would suggest, disingenuous to leave it out entirely of any discussion of the reasons for, or impact of, the economic downturn in the UK over the last three years. In fact, the FT recently called it a “deafening silence.”

At defenddigitalme, we have raised the problem of this inaccurate “counting” narrative numerous times, including with MPs and members of the House of Lords in the Schools Bill debate as part of the Counting Children coalition, and in a letter to The Telegraph in March this year. More detail is here, in a blog from April.


Update May 23, 2023

Today I received the DfE-held figures of the number of children who leave an educational setting for an unknown onward destination. These records sit in a section of the Common Transfer Files holding space, in effect a digital limbo after a child leaves an educational setting until the child is ‘claimed’ by a destination. It is known as the Lost Pupils Database.

Furthermore, the DfE has published exploratory statistics on EHE and ad hoc statistics on CME too.

October 2022. More background:

The panel was chaired by the Rt Hon Sir Iain Duncan Smith MP, and other speakers included Fraser Nelson, Editor of The Spectator Magazine; Kieron Boyle, Chief Executive Officer of Guy’s & St Thomas’ Foundation; the Rt Hon Robert Halfon MP, Education Select Committee Chair; and Mercy Muroki, Journalist at GB News.

We have previously offered to share our original research data and to discuss it with the Department for Education, and we repeated this offer to the panel to help correct the false facts. I hope they will take it up.

** Data collected in the record by Local Authorities when children are deregistered from state education (including to move to private school) may include a wide range of personal details, including as an example in Harrow: Family Name, Forename, Middle name, DOB, Unique Pupil Number (“UPN”), Former UPN, Unique Learner Number, Home Address (multi-field), Chosen surname, Chosen given name, NCY (year group), Gender, Ethnicity, Ethnicity source, Home Language, First Language, EAL (English as an additional language), Religion, Medical flag, Connexions Assent, School name, School start date, School end date, Enrol Status, Ground for Removal, Reason for leaving, Destination school, Exclusion reason, Exclusion start date, Exclusion end date, SEN Stage, SEN Needs, SEN History, Mode of travel, FSM History, Attendance, Student Service Family, Carer details, Carer address details, Carer contract details, Hearing Impairment And Visual Impairment, Education Psychology support, and Looked After status.