Category Archives: AI

Leading AI literacy to further the common good

The UK Department for Science, Innovation and Technology (DSIT) has been criticised online for publishing a list of links to commercial AI resources, packaged as practical AI skills for work.

There are two major problems with allowing AI “literacy” and policy to be led this way. The first is the framing of AI literacy as something prioritised for employment. It is notable that many of these providers are themselves employers, often the very same companies seeking to increase their profits through cost reduction from increased efficiency, or from having fewer humans in their workforce, a position the UK government has accepted as caused by AI and as an inevitability.

The second is that the subject, and what society understands about its salience and meaning, is steered by the same hands of Big Tech that it plays into, reinforcing the consolidation of power.

To present ‘teaching about AI’ as being about skills for the workforce (and a narrow range of workplaces at that) is misguided, not only because it narrows learning to technical skills alone, but because it misdirects us all from looking, more broadly, at what “AI” is being used for, how, why, and by whom.

The critique is therefore important to understand not just as a question of course quality, but as a question of the narrowing of AI literacy itself.

AI literacy is, in fact, vital democratic infrastructure.

Problem 1: AI Literacy as Workforce Optimisation

The recommendation of the AI Skills for Life and Work: Rapid Evidence Review, published on January 28th, to involve professional organisations such as the British Computer Society (BCS) and the Royal Academy of Engineering (RAE) in defining and policing the standards that training courses should meet, seems not to have been taken up here. These expert organisations are notably absent from the list of new and founding partners.

Though the announcements claimed these courses were checked against Skills England’s AI foundation skills for work benchmark, also published on January 28th, something seems to have gone badly wrong in basic due diligence, which failed even to check that all the links worked. That due diligence should also have checked claims that free courses were actually at zero cost to users, before the public was steered towards those providers in media coverage.

If Skills England wants to restore both its own credibility and public trust in the providers, it could publish its criteria and findings on how the courses were chosen for the AI Skills Boost programme, its evaluation of them against Skills England’s new AI foundation skills for work benchmark, and how that benchmark was designed.

The second challenge is that the Westminster government is focussed only on skills for some kinds of work, while ‘the rest’ of life is addressed vaguely at best.

Problem 2: Narrative Capture by Big Tech distorts the big picture

Evidence from organisations that have scrutinised real-world AI in practice in the UK (one recent synthesis, by the Data Justice Lab for example, covers cancelled systems in the public sector) may not fit the narrow scope of AI skills for some types of work, but it offers valuable lessons for other areas, in particular how AI affects public sector services, which in turn affect so many of us on a daily basis.

The government has repeatedly disagreed on AI policy with recommendations from peers and from experts, and with what the public is saying. In stark contrast with other European countries’ approach, the UK refuses to legislate on unacceptable risk levels.

The public are already paying the price for this. The prioritisation of a move-fast-and-break-things “route to impact” has so far come at a cost to citizens and broken everyday lives in welfare systems. Loss of agency and everyday friction are making life harder, less efficient and more stressful in many ways, the opposite of what many felt was the promise of technology and the early Internet.

AI is already shaping the justice system, through police surveillance, legal research and citizens’ advice bots, and is being made the cornerstone of its approach, while the courts’ basic IT tools are totally dysfunctional and those in charge won’t listen and won’t invest in the infrastructure to fix them.

[Notable aside, don’t let this put you off having your say and speaking out. There are a few days left to have your say in a consultation on the Wild West of facial recognition used for law enforcement.]

The youth backlash to AI slop has become incessant, and the average older person in the street is fed up that they need a multitude of apps and a smartphone to perform everyday tasks that used to be simpler to get done. (40% of drivers said that paying for parking with cash was their preferred choice in a 2025 poll of 13,755 drivers for The AA.)

Thousands of workers are run ragged by the algorithmic slave drivers of gig-economy apps, in precarious jobs, and less protected than their European counterparts given weaker workers’ rights post-Brexit, as so tragically dramatised in the Ken Loach film, Sorry We Missed You.

The question is not, do we need literacy to live in a world of AI vs human? It is, how do we live everyday life well, under powerful, undemocratic, often unaccountable, corporate control that is being accelerated and intensified by tech tools we have no say over?

Any AI literacy approach that fails to address this, fails full stop.

Why we must prioritise AI Literacy as democratic infrastructure

“How do you democratize a technology that itself, in the form we’re seeing it now, is a product of concentrated power?”

The AI media narrative will, given time, be driven not by what government says about AI, but by how it makes us feel. Increasingly, that is: more vulnerable under uncertainty over income; fearful of losing our jobs; more surveilled; less free; indeed, feeling a loss of power over our everyday lives and a need to “take back control“. We saw where that led in 2016. The government will pay the price for those feelings again, if it does not act now to address them.

We now have choices about whose version of AI literacy we follow in the UK. I have the privilege of contributing to work at the Council of Europe, on an approach that I hope will be adopted by the UK later this year, and one we could lead on, instead of following ‘what tech says’.

It is an alternative, comprehensive framework that addresses all the dimensions of AI literacy, particularly the human dimension. It aims not only to train technologically skilled citizens to design or use AI more holistically, but to prepare everyone for living with AI, with a focus on the values of democracy, human rights, and the rule of law.

Being AI literate means understanding how technology and companies affect fundamental economic, human, social and political rights and how we can protect ourselves, so that we can act in ways we choose.

Our parliamentary sovereignty and democratic processes depend on the power to control our own national narratives and parliamentary procedures, including the outcome of elections.

The media’s and the public’s ability to be informed in an election and beyond depends on the ability to identify and challenge misinformation, to use independent critical thought, and to question power; and that depends on an informed and critical citizenry empowered with our own social agency.

We cannot centre these things if the government’s direction of travel is steered by the US-led OpenAI, Accenture, Google, IBM and Microsoft. Narrow media messaging is conflicted, saying ‘use AI for furthering economic growth’ while at the same time excusing those same companies for making job cuts, as if they really can’t help it and it is in fact they who have no choice thanks to AI: ‘Blame the AI, don’t blame us (but please forget we chose to build / buy / use it).’

Education and the role of AI and literacy in the Public Interest

The public interest depends on the state offering education free from commercial influence and gain, and on understanding the implications of AI objectively: not as products that may become obsolete from one day to the next, but with a human-centric, technology-neutral approach that looks to outcomes rather than product skills.

We also need a UK government that is committed to doing what it says it will do on AI, not one that simply tells others how to do it.

Whitehall departments are not adequately transparent over the ways they use AI and algorithms and the use of the (perhaps overly complex) AI register is low, despite it being “a requirement for all government departments”.

As AI systems become increasingly embedded in social, economic, and political systems, we must ensure everyone has the necessary level of awareness and critical understanding, to navigate an AI-transformed world in everyday life. Not only to use AI effectively, but to ensure that those responsible for AI development and deployment can respect and enhance human dignity, rights, and democratic values.

We need to protect people who are excluded in life, or over-policed, without the freedom that being fully human requires, especially those who are marginalised, “the outliers” in society, often excluded by race, language, gender, age, health or disability from the biometric training data from which AI is built.

We need to protect our biometric data, our faces and voices, to be able to show up and speak up when it matters.

As the Pope summed up in his recent World Communications Day message, AI literacy must prioritise understanding “how algorithms shape our perception of reality, how AI biases work, what mechanisms determine the presence of certain content in our feeds, what the economic principles and models of the AI economy are and how they might change.”

The future of freedom in society in the UK, our humanity, our democracy, our trust, depend not on a handful of companies who strive for a brave new world, nor on AI infrastructure they are selling us well-packaged in hype. Our collective future depends on one digital Minister having the courage to take a new direction.

Confusion over GenAI in the classroom

Apparently, there’s been online confusion recently around what Google does and does not use from Gmail to train Gemini, but it’s not really clear what the clarification means. “No Gmail users’ emails are used to train Google’s Gemini AI” is very specific wording and merits closer attention. I was certainly told in person by Google execs in a group meeting around two years ago that school pupil data was used to train and develop its products. (They also said, when I mentioned schools’ use of Forms to transfer pupils’ passport and health data, “oh I wouldn’t do that”.) That appears to still be true for some product lines, but it’s less clear for others.

Misunderstandings about pupils using GenAI in schools abound too. Mistaken claims that teachers can “consent on behalf of children”, waved through as the data protection lawful basis for using AI products. Omissions and inaccurate information on IP rights. Inaccurate definitions of closed and open AI systems, with blanket claims that pupils can more safely use the former over the latter.

Broadly speaking, UK guidance on using AI in the classroom has focussed on Generative AI. The same is true of many “How To” guides published as OpEds or articles, or even books by popular ex-RE teachers turned AI experts. But these often fail to state that it is highly likely many of these off-the-shelf GenAI tools cannot be used lawfully in a classroom by asking children to set up individual accounts and use them directly. Just as importantly, other edTech tools that integrate them into their front end might depend on the same GenAI company policies, which needs a thorough understanding of how both products work together. It is misleading for guidance to suggest that pupils can use these tools if schools just ensure they do so thoughtfully or under supervision.

So let’s take a look at the companies’ own published policies on how user data is used by the company and their publicly offered GenAI. Between Google Gemini, Anthropic Claude, and OpenAI ChatGPT, some are more complex or opaque than others. Interestingly, no company states why any service is not permitted for children.

Google Gemini and Education

Google’s policies around Gemini in education are extraordinarily complex and interlinked. Even after extensive reading, it’s still unclear how they’re meant to work, let alone how they could be understood by pupils. Google’s “responsible AI” training guidance cannot easily be reconciled with the many products and sub-products through which Gemini can be used, including Workspace for Education.

Using the tools requires understanding:

  • which version of Google products or Workspace you have,

  • the distinction between “core” and “additional” services,

  • how Gemini features layer on top of those, and

  • different age-gating rules, defaults, and admin-controlled settings.

The Gemini app is a standalone AI assistant. Google Workspace with Gemini, on the other hand, integrates AI directly into Google Workspace applications like Gmail, Docs, Sheets, Slides, and Meet. Since June, Gemini has been included in the Workspace for Education edition free of charge by default, as an admin-managed core Workspace service.

Only since June this year have Google Workspace for Education users had “added data protection” in the Gemini app, meaning their chats with Gemini are not human reviewed or used to train AI models. Qualifying [my added stress] Google Workspace for Education editions, including Education Standard and Education Plus, have the same privacy assurances.

What those data protection standards were prior to June, why they changed, and what they are for “non-qualifying” products, remain unclear.

1. Does Google not understand European data protection law?

However, before we even get into “the AI part”, Google’s own guidance for UK schools claims that data processing is lawful if schools collect consent from parents for minors’ use of “Additional Services” such as YouTube or Maps:

“Admins must provide or obtain consent for the use of the services by their minor users.”

“Additional Services (like YouTube, Google Maps, and Applied Digital Skills) are designed for consumer users and can optionally be used with Google Workspace for Education accounts if allowed for educational purposes by a school’s domain administrator. “ 

It is not explained why, but it might be because Google or its sub-processors use the data in these Additional Services to “provide, maintain, protect and improve” services and “to develop new ones”.

Source: https://support.google.com/a/answer/6356441?sjid=7831273918566805521-EU

However:

  • Consent in schools is rarely valid because it cannot be “freely given”: parents and pupils face a clear power imbalance, and opting out may disadvantage the child. Routine educational processing cannot rely on consent;
  • Developing “new” services is new product development, which requires valid consent under the EU/UK GDPR and therefore means current practice is without a lawful basis;
  • Google’s approach therefore sets up schools as well as itself, for unlawful practice under European data-protection law.

2. Unclear and overlapping terms

Google’s T&Cs vary between its education tiers and versions: Core and Additional, Free and Paid, Fundamentals, Standard, and Plus, across Google Workspace for Education and Gemini products, and its generic Workspace for Education terms. For staff, parents (or the school child themselves) it is very difficult to determine:

  • what data is processed where,

  • how it connects to Gemini and AI features (e.g., voice transcription or agentic AI), or

  • what changes at age 18.

For example, the Gemini Apps Privacy Hub states that for users aged 18+, call and text history may be imported into Gemini activity. It is unclear whether this includes data generated before turning 18, or whether it affects children who become 18 while using Workspace for Education.

3. Age controls depend on the administrator

Google relies heavily on institutions to understand, and even configure, users’ and organisational age settings. For example:

“Workspace for Education users designated as under 18 will not be able to use Gemini in Classroom…” (Source: Google support answers.)

This appears to conflict with the latest June 2025 product announcements above, but it’s hard to be sure.

Unlike in primary and secondary education, higher-education institution users not actively designated as under the age of 18 have no additional restrictions for Google services. Admins must ensure any under-18s are placed in an organisational unit with the correct age settings. This shifts responsibility, and risk, onto administrators who may not fully understand the implications.
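To illustrate what that administrator responsibility can look like in practice, here is a minimal sketch, not Google’s documented procedure, of a school domain admin moving a pupil account into an organisational unit to which an under-18 age setting has already been applied in the Admin console. The domain, file name, account names and OU path are hypothetical; the call itself uses Google’s Admin SDK Directory API, and the age-based access rules live on the OU in the console, not in this code.

```python
# Minimal sketch (assumptions flagged above): moving a pupil account into an
# organisational unit (OU) that the admin has already configured with
# under-18 age-based access settings in the Google Admin console.
from google.oauth2 import service_account
from googleapiclient.discovery import build

SCOPES = ["https://www.googleapis.com/auth/admin.directory.user"]

# Hypothetical service account with domain-wide delegation,
# impersonating a super-admin of the school domain.
creds = service_account.Credentials.from_service_account_file(
    "service-account.json", scopes=SCOPES
).with_subject("admin@school.example")

directory = build("admin", "directory_v1", credentials=creds)

# Move the pupil into the under-18 OU. Which services the account can then
# reach (Gemini features, Additional Services, and so on) is governed by the
# settings the admin applied to that OU, not by anything in this script.
directory.users().update(
    userKey="pupil@school.example",
    body={"orgUnitPath": "/Students/Under18"},
).execute()
```

The point is less the code than the dependency: whether a child is treated by Google’s services as under 18 hinges on an administrator getting this configuration right.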

In summary, it is unclear how the different education product offerings act together with the various Gemini offerings, and Google seems to want to push accountability down to the institutional admin. The simplistic answer appears to be that, if you don’t use a paid version of Google tools in education, Google reuses the activity of users of any age as training data for the company to provide, maintain, protect and improve additional services, and to develop new ones.

Given Google’s world-class legal and communications resources, it is striking how opaque both the legal basis and the explanations remain. Clearer, simpler company guidance is urgently needed, and any clarification and simplification would be welcome.

Claude is not intended for use by children under age 18.

“Our Services are not directed towards, and we do not knowingly collect, use, disclose, sell, or share any information from children under the age of 18. ” [Source https://www.anthropic.com/legal/privacy]

Any guidance seen elsewhere for educational settings may also be misleading where it suggests that if a school user does not directly “put” personal data “into” the LLM, the tool will not be processing personal data.

Claude’s policy, for example, contradicts that, because usage data collected indirectly rather than “put in” by the user is still personal data, such as IP address and other identifiers:

“Consistent with your device or browser permissions, your device or browser automatically sends us information about when and how you install, access, or use our Services. This includes information such as your device type, operating system information, browser information and web page referrers, mobile network, connection information, mobile operator or internet service provider (ISP), time zone setting, IP address (including information about the location of the device derived from your IP address), identifiers (including device or advertising identifiers, probabilistic identifiers, and other unique personal or online identifiers), and device location.” [source: https://www.anthropic.com/legal/privacy]
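To make concrete why data the user never “puts in” is still collected, here is a minimal sketch, purely illustrative and not any company’s actual code, of the request metadata an ordinary web service can record the moment a browser connects, before anything is typed into a prompt. The framework (Flask) and the field names are my own assumptions for the example.

```python
# Illustrative sketch only: metadata a web service can log from an ordinary
# HTTP request, before the user has typed anything into a prompt.
from flask import Flask, jsonify, request

app = Flask(__name__)

@app.route("/chat")
def chat():
    request_metadata = {
        # Network-level identifiers sent automatically with every request
        "ip_address": request.remote_addr,
        "user_agent": request.headers.get("User-Agent"),     # device / OS / browser hints
        "referer": request.headers.get("Referer"),           # web page referrer
        "language": request.headers.get("Accept-Language"),  # locale and language settings
        # Cookies or other identifiers the service previously set on the device
        "cookies": dict(request.cookies),
    }
    # A real service might also derive coarse location from the IP address and
    # attach device or advertising identifiers collected by its own SDKs.
    app.logger.info("request metadata: %s", request_metadata)
    return jsonify(ok=True)

if __name__ == "__main__":
    app.run()
```

All of that is personal data under the UK GDPR whether or not a pupil ever types a single word.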

OpenAI ChatGPT and children

The OpenAI terms of use require users to be at least 13 years old, and those under 18 must have parental or guardian permission.

“Our Service is not directed to children under the age of 13. OpenAI does not knowingly collect Personal Information from children under the age of 13. […] If you are 13 or older, but under 18, you must have permission from your parent or guardian to use our Services.”

Like Google’s Additional Services, this means it is unsuitable and unlawful to use in schools. If parents are told their child should, or is required to, use the LLM and the school asks them to tick a box, that may be an acknowledgement of use, but it is not consent.

The company website ‘help’ goes on to add, “We advise caution with exposure to kids, even those who meet our age requirements, and if you are using ChatGPT in the education context for children under 13, the actual interaction with ChatGPT must be conducted by an adult.”

Overall

To sum up, wording from the guidance for schools in Wales says, “The age ratings of generative AI tools must be considered before using them. Age ratings can vary, and some tools are only designed for use by over-18s. Many generative AI tools are not designed for education.”

Which raises the question: why does so much effort and guidance for school children’s AI use in the classroom focus on how to use Generative AI at all?

I am hopeful that instead we can soon include better guidance and knowledge as part of “digital literacy” or “citizenship” skills in the curriculum, starting with teacher training that is about, not with, “AI”.

Pirates and their stochastic parrots

It’s a privilege to have a letter published in the FT as I do today, and thanks to the editors for all their work in doing so.

I’m a bit sorry that it lost the punchline, which was supposed to bring a touch of AI humour about pirates and their stochastic parrots, and that its rather key point was cut:

“Nothing in current European laws, including Convention 108 for the UK, prevents companies developing AI lawfully.”

So for the record, and since it’s (£), my agreed edited version was:

“The multi-signatory open letter advertisement, paid for by Meta, entitled “Europe needs regulatory certainty on AI” (September 19) was fittingly published on International Talk Like a Pirate Day.

It seems the signatories believe they cannot do business in Europe without “pillaging” more of our data and are calling for new law.

Since many companies lobbied against the General Data Protection Regulation or for the EU AI Act to be weaker, or that the Council of Europe’s AI regulation should not apply to them, perhaps what they really want is approval to turn our data into their products without our permission.

Nothing in current European laws, including Convention 108 for the UK, prevents companies developing AI lawfully. If companies want more consistent enforcement action, I suggest Data Protection Authorities comply and act urgently to protect us from any pirates out there, and their greedy stochastic parrots. “

Prior to print they asked to cut out a middle paragraph too.

“In the same week, LinkedIn sneakily switched on a ‘use me for AI development’ feature for UK users without telling us (paused the next day); Larry Ellison suggested at Oracle’s Financial Analyst Meeting  that more AI should usher in an era of mass citizen surveillance, and our Department for Education has announced it will allow third parties to exploit school children’s assessment data for AI product building, and can’t rule out it will include personal data.”

It is in fact the cumulative effect of the recent flurry of AI activities by various parties, state and commercial, that deserves greater attention, rather than this Meta-led complaint alone. Who is grabbing what data and what infrastructure contracts, and creating what state dependencies and strengths, towards what end game? While some present the “AI race” as China or India versus the EU or the US to become AI “superpowers”, is what “Silicon Valley” offers, that their way is the only way, really a better offer?

It’s not, in fact, “Big Tech” I’m concerned about, but the arrogance of so many companies that, in the middle of regulatory scrutiny, would align themselves with one that would rather put out PR omitting the fact it is under that scrutiny, calling only for the law to be changed, and frankly misleading the public by suggesting it is all for our own good, rather than talking about how this serves their own interests.

Who do they think they are to dictate what new laws must look like when they seem simply unwilling to stick to those we have?

Perhaps this open letter serves as a useful starting point to direct DPAs to the companies most in need of scrutiny of their data practices. They seem to be saying they want weaker laws or more enforcement. Some are already well known for challenging both. Who could forget Meta (Facebook’s) secret emotional contagion study involving children, in which friends’ postings were moved to influence moods, or the case of letting third parties, including Cambridge Analytica, access users’ data? Then there are the data security issues, the fine over international transfers and the anti-trust issues. And there are the legal problems with their cookies. And all of this built from humble beginnings by the same founder of Facemash, “a prank website” to rate women as hot or not.

As Congressman Long reportedly told Zuckerberg in 2018, “You’re the guy to fix this. We’re not. You need to save your ship.”

The Meta-led ad called for “harmonisation enshrined in regulatory frameworks like the GDPR” and I absolutely agree. The DPAs need to stand tall and stand up to OpenAI and friends (ever dwindling in number so it seems) and reassert the basic, fundamental principles of data protection laws from the GDPR to Convention 108 to protect fundamental human rights. Our laws should do so whether companies like them or not. After all, it is often abuse of data rights by companies, and states, that populations need protection from.

Data protection ‘by design and by default’ is not optional under European data laws established for decades. It is not enough to argue that processing is necessary because you have chosen to operate your business in a particular way, nor a necessary part of your chosen methods.

The Netherlands DPA is right to say scraping is almost always unlawful. A legitimate interest cannot simply be plucked from thin air by anyone who is neither an existing data controller nor processor, who has no prior relationship to the data subjects, where those data subjects have no reasonable expectation of the re-use of data they posted online for purposes other than those for which the scraper has grabbed it, and where there is no fair processing information or offer of an opt-out. Instead, the only possible lawful basis for this kind of brand-new controller should be consent. Having to break the law hardly screams ‘innovation’.

Regulators do not exist to pander to wheedling, but to independently uphold the law in a democratic society in order to protect people, not prioritise the creation of products:

  • Lawfulness, fairness and transparency.
  • Purpose limitation.
  • Data minimisation.
  • Accuracy.
  • Storage limitation.
  • Integrity and confidentiality (security), and
  • Accountability.

In my view, it is the lack of dissuasive enforcement as part of checks and balances on big power like this, regardless of where it resides, that poses one of the biggest data-related threats to humanity.

Not AI, nor being “left out” of being used to build it for their profit.

Farming out our children. AI AI Oh. (2/2)

Today Keir Starmer talked about us having more control in our lives. “Taking back control is a Labour argument”, he said. So let’s see it in education tech policy, where parents told us in 2018 that less than half felt they had sufficient control of their child’s digital footprint.

Not only has the UK lost control of which companies control large parts of the state education infrastructure and its delivery, the state is *literally* giving away control of our children’s lives recorded in identifiable data at national level, which since 2012 has included giving it to journalists, think tanks, and companies.

Why it matters is less about the data per se than about what is done with it without our permission, and how that affects our lives.

Politicians’ love affair with AI (undefined) seems to be as ardent as under the previous government. The State appears to have chosen to further commercialise children’s lives in data, having announced towards the end of the school summer holidays that the DfE and DSIT will give pupils’ assessment data to companies for AI product development. I get angry about this because the data is badly misunderstood: it is not a product to pass around, but the stories of children’s lives in data, and that belongs to them to control.

Are we asking the right questions today about AI and education? In a 2016 post for Nesta, Sam Smith foresaw the algorithmic fiasco that would happen in the summer of 2020, pointing out that exam-marking algorithms, like any other decisions, have unevenly distributed consequences. What prevents that happening daily, but behind closed doors and in closed systems? The answer is: nothing.

Both the adoption of AI in education and education about AI are unevenly distributed. Driven largely by commercial interests, some are co-opting teaching unions for access to the sector; others, more cautious, have focused on the challenges of bias, discrimination and plagiarism. As I recently wrote in Schools Week, the influence of corporate donors and their interests in shaping public sector procurement, such as the Tony Blair Institute’s backing by Oracle founder Larry Ellison, therefore demands scrutiny.

Should society allow its public sector systems and laws to be shaped primarily to suit companies? The users of the systems are shaped by how those companies work, so who keeps the balance in check?

In a 2021 reflection here on World Children’s Day, I asked the question, Man or Machine, who shapes my child? Three years later, I am still concerned about the failure to recognize and address the redistribution of not only pupils’ agency but teachers’ authority: from individuals to companies (pupils and the teacher don’t decide what it is ‘right’ to do next, the ‘computer’ does); from public interest institutions to companies (company X determines the curriculum content of what the computer does and how, not the school); and from State to companies (accountability for outcomes falls through the gap in outsourcing activity to the AI company).

Why it matters is that these choices influence not only how we are teaching and learning, but how children feel about it and how they develop.

The human response to surveillance (and that is what much of AI relies on: massive data-veillance and dashboards) is a result of the chilling effect of being ‘watched‘ by known or unknown persons behind the monitoring. We modify our behaviours to comply with their expectations. We try not to stand out from the norm, or we act to protect ourselves from the resulting effects.

The second reason we modify our behaviours is to be compliant with the machine itself. Thanks to the lack of a responsible human in the interaction mediated by the AI tool, we are forced to change what we do to comply with what the machine can manage. How AI is changing human behaviour is not confined to where we walk, meet, play and are overseen in out- or indoor spaces. It is in how we respond to it, and ultimately, how we think.

In the simplest examples, using voice assistants shapes how children speak, and in prompting generative AI applications, we can see how we are forced to adapt how we think to put the questions best suited to getting the output we want. We are changing how we behave to suit machines. How we change behaviour is therefore determined by the design of the company behind the product.

There is as yet limited public debate on the effects of this on education, on how children act, interact, and think using machines, and no consensus in the UK education sector on whether it is desirable to introduce these companies and their steering, which bring changes to teaching and learning and, as a result, to the future of society.

And since 2021, I would go further. The neo-liberal approach to education, with its emphasis on the efficiency of human capital and productivity, on individualism and personalisation, all about producing ‘labour market value’ and measurable outcomes, is commonly at the core of AI in teaching and learning platforms.

Many tools dehumanise children into data dashboards, rank and spank their behaviours and achievements, punish outliers and praise norms, and expect nothing but strict adherence to rules (sometimes incorrect ones, like mistakes in maths apps). As some companies have expressly said, the purpose of this is to normalise such behaviours ready to be employees of the future, and the reason their tools are free is to normalise their adoption for life.

AI, through the normalisation of values built into tools by design, is even seen by some as encouraging fascistic solutions to social problems.

But the purpose of education is not only about individual skills and producing human capital to exploit.  Education is a vital gateway to rights and the protection of a democratic society. Education must not only be about skills as an economic driver when talking about AI and learners in terms of human capital, but include rights, championing the development of a child’s personality to their fullest potential, and intercultural understanding, digital citizenship on dis-/misinformation, discrimination and the promotion and protection of democracy and the natural world. “It shall promote understanding, tolerance and friendship among nations, racial or religious groups, and shall further the activities of the United Nations for the maintenance of peace.”

Peter Kyle, the UK DSIT’s Secretary of State said last week, that, “more than anything else, it is growth that will shape those young people’s future.” But what will be used to power all this growth in AI, at what environmental and social costs, and will we get a say?

Don’t forget, in this project announcement the Minister said, “This is the first of many projects that will transform how we see and use public sector data.” That’s our data, about us. And when it comes to schools, that’s not only the millions of learners who’ve left already but those who are school children today. Are we really going to accept turning them into data fodder for AI without a fight? As Michael Rosen summed up so perfectly in 2018, “First they said they needed data about the children to find out what they’re learning… then the children became data.” If this is to become the new normal, where is the mechanism for us to object? And why this, now, in such a hurry?

Purpose limitation should also prevent retrospective reuse of learners’ records and data, but so far it has not, either for the distribution of general identifying and sensitive data from the NPD at national level, or from edTech in schools. The project details, scant as they are, suggest parents were asked for consent in this particular pilot, but the Faculty AI notice seems legally weak for schools, and when it comes to using pupil data for building into AI products the question is whether consent can ever be valid — since it cannot be withdrawn once given, and the nature of being ‘freely given’ is affected by the power imbalance.

So far there is no field to record an opt-out in any schools’ Information Management Systems, though many discussions suggest it would be relatively straightforward to make that happen. However, it’s important to note that DSIT’s own public engagement work on that project says opt-in is what those parents told the government they would expect. And there is a decade of UK public engagement on data telling government that opt-in is what we want.

The regulator has been silent so far on the DSIT/DfE announcement despite lack of fair processing and failures on Articles 12, 13 and 14 of the GDPR being one of the key findings in its 2020 DfE audit. I can use a website to find children’s school photos, scraped without our permission. What about our school records?

Will the government consult before commercialising children’s lives in data to feed AI companies and ‘the economy’ or any of the other “many projects that will transform how we see and use public sector data“? How is it different from the existing ONS, ADR, or SAIL databank access points and processes? Will the government evaluate the impact on child development, behaviour or mental health of increasing surveillance in schools? Will MPs get an opt-in, or even an opt-out, of the commercialisation of their own school records?

I don’t know about ‘Britain belongs to us‘, but my own data should.


See also Part 1: The New Normal is Not Inevitable.

AI in the public sector today is the RAAC of the future

Reinforced Autoclaved Aerated Concrete (RAAC) used in the school environment is giving our Education Minister a headache. Having been the first to address the problem most publicly, she’s coming under fire as responsible for failure: for Ministerial failure to act on it in thirteen years of Conservative government since 2010, and for the failure of the fabric of educational settings itself.

Decades after buildings’ infrastructure started using RAAC, there is now a parallel digital infrastructure in educational settings. It’s worth thinking about what’s caused the RAAC problem and how it was identified. Could we avoid the same things in the digital environment and in the design, procurement and use of edTech products, and in particular, Artificial Intelligence?

Where has it been used?

In the procurement of school infrastructure, RAAC has been integrated into some parts of the everyday school system, especially in large flat roofs built around the 1960s-80s. It is now hard to detect, and to remedy or remove, without significant effort. There was short-term thinking, short-term spending, and no strategy for its full life cycle or end-of-life expectations. It is going to be expensive, slow, and difficult to find and fix.

Where is the risk and what was the risk assessment?

Both of the most well-known recent cases, the 2016 Edinburgh school masonry collapse and the 2018 roof collapse, happened in the early morning when no pupils were present, but, according to the 2019 safety alert by SCOSS, “in either case, the consequences could have been more severe, possibly resulting in injuries or fatalities. There is therefore a risk, although its extent is uncertain.”

That risk has been known for a long time, as today’s education minister Gillian Keegan rightly explained in that interview before airing her frustration. Perhaps it was not seen as a pressing priority because it was not seen as a new problem. In fact, locally it often isn’t seen much at all, as it is either hidden behind front-end facades or built into hard-to-see places, like roofs. But already ‘in the 1990s structural deficiencies became apparent’ (discussed in papers by the Building Research Establishment (BRE) in the 1990s and again in 2002).

What has changed, according to expert reports, is that the problems are no longer behaving as expected, with visible warning in advance giving time for mitigation, as in what had previously been one-off catastrophic incidents. What was only affecting a few could now affect the many, at scale and without warning. The most recent failures show there is no longer a reliable margin to act before parts of the mainstream state education infrastructure pose children a threat to life.

Where is the similarity in the digital environment?

AI is the RAAC of another Minister’s future. It is often similarly sold today as cost-saving, quick and easy to put in place, and needing fewer people than the available alternatives.

AI is being widely introduced at speed into children’s private and family life in England, through its procurement and application in the infrastructure of public services: in education, children’s services, policing and welfare. Some companies claim to be able to identify mood or autism, or to profile and influence mental health. Children rarely have any choice or agency to control its often untested effects or outcomes on them, in non-consensual settings.

If you’re working in AI “safety” right now, consider this a parable.

  • There are plenty of people pointing out risk in the current adoption of AI into UK public sector infrastructure; in schools, in health, in welfare, and in prisons and the justice system;
  • There are plenty of cases where harm is very real, but first seen by those in power as affecting the marginalised and minority;
  • There are no consistent published standards or obligations on transparency or accountability to which AI sellers must hold their products before procurement and their effect on people;
  • And there are no easily accessible records of where, and what type of, AI is being procured and built into which public infrastructure, making tracing and remedy even harder in the case of a product recall.

The objectives of any company, State, service users, the public and investors may not be aligned. Do investors have a duty to ensure that artificial intelligence is developed in an ethical and responsible way? Prioritising short term economic gain and convenience, ahead of human impact or the long term public interest, has resulted in parts of schools’ infrastructure collapsing. And some AI is already going the same way.

The Cardiff Data Justice Lab together with Carnegie Trust have published numerous examples of cancelled systems across public services. “Pressure on public finances means that governments are trying to do more with less. Increasingly, policymakers are turning to technology to cut costs. But what if this technology doesn’t work as it should?” they asked.

In places where similar technology has been in place longer, we already see the impact and harm to people. In 2022, the Chicago Sun Times published an article noting that, “Illinois wisely stopped using algorithms in child welfare cases, but at least 26 states and Washington, D.C., have considered using them, and at least 11 have deployed them. A recent investigation found they are often unreliable and perpetuate racial disparities.” And the author wrote, “Government agencies that oversee child welfare should be prohibited from using algorithms.”

Where are the parallels in the problem and its fixes?

It’s also worth considering how AI can be “removed” or stopped from working in a system. Often not through removal at all, but simply by throttling or shutting off that functionality. The problematic parts of the infrastructure remain in situ, but can’t easily be taken out after being designed in. Whole products may also be difficult to remove.

The 2022 Institution of Structural Engineers’ report summarises the challenge now of how to fix the current RAAC problems. Think about what it would mean to do the equivalent to fix a failure of digital infrastructure:

  • Positive remedial supports and Emergency propping, to mitigate against known deficiencies or unknown/unproven conditions
  • Passive, fail safe supports, to mitigate catastrophic failure of the panels if a panel was to fail
  • Removal of individual panels and replacement with an alternative solution
  • Entire roof replacement to remove the ongoing liabilities
  • Periodic monitoring of the panels for their remaining service life

RAAC has not become a risk to life; it already was one from its design. While still recognised as a ‘good construction material for many purposes’, it has been widely used in unsafe ways in the wrong places.

RAAC planks made fifty years ago did not have the same level of quality control as we would demand today, and yet the material was procured and put in place for decades after it was known to be unsafe for some uses, with risk assessments saying so.

RAAC was given an exemption from the commonly used codes of practice of reinforced concrete design (RC).

RAAC is scattered among non-RAAC infrastructure, making finding and fixing it, or removing it, very much harder than if it had been recorded in a register that made it easily traceable.

RAAC developers and sellers may no longer exist or have gone out of business without any accountability.

Current AI discourse should be asking not only for retrospective accountability, or even life-cycle accountability, but also what accountable AI looks like by design, and how you guarantee it:

  • How do we prevent the risk of harm to people from the poor quality of systems designed to support them, and what will protect people from being affected by unsafe products in those settings in the first place?
  • Are the incentives in procurement correct, to enable adequate risk assessment to be carried out by those who choose to use it?
  • Rather than accepting risk and retroactively expecting remedial action across all manner of public services in future—ignoring a growing number of ticking time bombs—what should public policy makers be doing to avoid putting them in place?
  • How will we know where unsafe products were built in, if they are permitted and then later found to be a threat to life?
  • How is safety or accountability upheld for the lifecycle of the product if companies stop making it, or go out of business?
  • How does anyone working with systems applied to people, assess their ongoing use and ensure it promotes human flourishing?

In the digital environment we still have margin to act, to ensure the safety of everyday parts of institutional digital infrastructure in mainstream state education and prevent harm to children, whether that’s from parts of a product’s code, from use in the wrong way, or from entire products. AI is already used in the infrastructure of schools’ curriculum planning and curriculum content, and in steering children’s self-beliefs and behaviours, and the values of the adult society these pupils will become. Some products have been oversold as AI when they weren’t, overhyped, overused and under-explained, their design hidden away and kept from sight and independent scrutiny, some with real risks and harms. Right now, some companies and policy makers are making familiar errors and ‘safety-washing’ AI harms, ignoring criticism and pushing it off as someone else’s future problem.

In education, they could learn lessons from RAAC.


Background references

BBC Newsnight Timeline: reports from as far back as 1961 about aerated concrete concerns. 01/09/2023

BBC Radio 4 The World At One: Was RAAC mis-sold? 04/09/2023

CROSS (2020), Failure of RAAC planks in schools: pre-1980 RAAC roof planks are now past their expected service life.

A 2019 safety alert by SCOSS, “Failure of Reinforced Autoclaved Aerated Concrete (RAAC) Planks” following the sudden collapse of a school flat roof in 2018.

The Local Government Association (LGA) and the Department for Education (DfE) then contacted all school building owners and warned of ‘risk of sudden structural failure.’

In February 2022, the Institution of Structural Engineers published a report, Reinforced Autoclaved Aerated Concrete (RAAC) Panels Investigation and Assessment with follow up in April 2023, including a proposed approach to the classification of these risk factors and how these may impact on the proposed remediation and management of RAAC. (p.11)

image credit: DALL·E 2 OpenAI generated using the prompt “a model of Artificial Intelligence made from concrete slabs”.

 

Man or machine: who shapes my child? #WorldChildrensDay 2021

A reflection for World Children’s Day 2021. In ten years’ time my three children will be in their twenties. What will they and the world around them have become? What will shape them in the years in between?


Today when people talk about AI, we hear fears of consciousness in AI. We see I, Robot. The reality of any AI that will touch their lives in the next ten years is very different. The definition may be contested, but artificial intelligence in schools already involves automated decision-making at speed and scale, without compassion or conscience, but with outcomes that affect children’s lives for a long time.

The guidance of today—in policy documents, and well-intentioned toolkits and guidelines and, oh yes, yet another ‘ethics’ framework—is all fairly same-y in terms of the issues identified.

Bias in training data. Discrimination in outcomes. Inequitable access or treatment. Lack of understandability or transparency of decision-making. Lack of routes for redress. More rarely thoughts on exclusion, disability and accessible design, and the digital divide. In seeking to fill it, the call can conclude with a cry to ensure ‘AI for all’.

Most of these issues fail to address the key questions in my mind, with regards to AI in education.

Who gets to shape a child’s life and the environment they grow up in? The special case of children is often used for special pleading in government tech issues. Despite this, in policy discussion and documents, government fails over and over again to address children as human beings.

Children are still developing. Physically, emotionally, their sense of fairness and justice, of humor, of politics and who they are.

AI is shaping children in ways that schools and parents cannot see. And the issues go beyond limited agency and autonomy: beyond UNCRC Articles 8 and 18, the role of the parent and the lost boundaries between school and home, and Articles 23 and 29. (See the detail at the end.)

Published concerns about accessibility and AI are often about the individual and inclusion, in terms of design that enables participation. But once children can participate, where is the independent measurement and evaluation of the impact on their educational progress, or their physical and mental development? What is the effect of these tools?

From the overhyped, like Edgenuity, to the oversold, like ClassCharts (which didn’t actually have any AI in it, but still won Bett Show Awards), frameworks often mention, but still have no meaningful solutions for, the products that don’t work and fail.

But what about the harms from products that work as intended? These can fail human dignity or create a chilling effect, like exam proctoring tech. The safety tech that infers things and causes staff to intervene, even if the child was only chatting about ‘a terraced house.’ Punitive systems that keep profiles of behaviour points long after a teacher would have let it go. What about those that shape the developing child’s emotions and state of mind by design, and claim to operate within data protection law? Those that measure and track mental health, or make predictions for interventions by school staff?

Neurosignals transferred from brain headbands aren’t biometric data in data protection terms if they are not used to, or able to, uniquely identify a child.

“Wellbeing” apps are not being regulated as medical devices, and yet they are designed to profile and influence mental health and mood, and schools adopt them at scale.

If AI is being used to deliver a child’s education, but only in the English language, what risk does this tech-colonialism create in evangelising children in non-native English-speaking families through AI, not only in access to teaching, but in reshaping culture and identity?

At the institutional level, concerns are only addressed after the fact. But how should they be assessed as part of procurement, when many AI products are marketed as never stopping “learning about your child”? Tech needs full life-cycle oversight, but what companies claim their products do is often only assessed to pass accreditation at a single point in time.

But the biggest gap in governance is not going to be fixed by audits or accreditation of algorithmic fairness. It is the failure to recognize the redistribution of not only agency but authority: from individuals to companies (the teacher doesn’t decide what you do next, the computer does); from public interest institutions to companies (company X determines the curriculum content, not the school); and from State to companies (accountability for outcomes has fallen through the gap in outsourcing activity to the AI company). We are automating authority, and with it the shirking of responsibility and the liability for the machine’s flaws, and accepting that it is the only way, thanks to our automation bias. Accountability must be human, but whose?

Around the world, the rush to regulate AI, or related tech in Online Harms, Digital Services, or biometrics law, is going to embed, not redistribute, power through regulatory capitalism.

We have regulatory capture including on government boards and bodies that shape the agenda; unrealistic expectations of competition shaping the market; and we’re ignoring transnational colonialisation of whole schools or even regions and countries shaping the delivery of education at scale.

We’re not regulating the questions: Who does the AI serve and how do we deal with conflicts of interest between child’s rights, family, school staff, the institution or State, and the company’s wants? Where do we draw the line between public interest, private interests, and who decides what are the best interests of each child?

We’re not managing the implications of the datafied child being mined and analysed in order to train companies’ AI. Is it ethical or desirable to use children’s behaviour as a source of business intelligence, to donate free labour performed in school systems for companies to profit from, without any choice (see UNCRC Art 32)?

As parents, we’re barely aware if a company will decide how a child is tested in a certain way, asked certain questions about their mental health, or given nudges to ‘improve’ their performance or mood. It’s not a question of ‘is it in the best interests of a child’, but rather: who designs it, and can schools assess compatibility with a child’s fundamental rights and freedoms to develop free from interference?

It’s not about protection of ‘the data’, although data protection should be about the protection of the person, not only about enabling data flows for business.

It’s about protection from strangers engineering a child’s development in closed systems.

It is about protecting children from an unknown and unlimited number of persons interfering with who they will become.

Today’s laws and debate are too often about regulating someone else’s opinion; how it should be done, not if it should be done at all.

It is rare we read any challenge of the ‘inevitability’ of AI [in education] narrative.

Who do I ask my top two questions on AI in education:
(a) who gets and grants permission to shape my developing child, and
(b) what happens to the duty of care in loco parentis as schools outsource authority to an algorithm?


UNCRC

Article 8

1. States Parties undertake to respect the right of the child to preserve his or her identity, including nationality, name and family relations as recognised by law without unlawful interference.

Article 18

1. States Parties shall use their best efforts to ensure recognition of the principle that both parents have common responsibilities for the upbringing and development of the child. Parents or, as the case may be, legal guardians, have the primary responsibility for the upbringing and development of the child. The best interests of the child will be their basic concern.

Article 29

1. States Parties agree that the education of the child shall be directed to:

(a) The development of the child’s personality, talents and mental and physical abilities to their fullest potential;

(c) The development of respect for the child’s parents, his or her own cultural identity, language and values, for the national values of the country in which the child is living, the country from which he or she may originate, and for civilizations different from his or her own;

Article 30

In those States in which ethnic, religious or linguistic minorities or persons of indigenous origin exist, a child belonging to such a minority or who is indigenous shall not be denied the right, in community with other members of his or her group, to enjoy his or her own culture

 

Data-Driven Responses to COVID-19: Lessons Learned OMDDAC event

A slightly longer version of a talk I gave at the launch event of the OMDDAC Data-Driven Responses to COVID-19: Lessons Learned report on October 13, 2021. I was asked to respond to the findings presented on Young People, Covid-19 and Data-Driven Decision-Making by Dr Claire Bessant at Northumbria Law School.

[ ] indicates text I omitted for reasons of time, on the day.

Their final report is now available to download from the website.

You can also watch the full event here via YouTube. The part on young people presented by Claire and that I follow, is at the start.

—————————————————–

I’m really pleased to congratulate Claire and her colleagues today at OMDDAC and hope that policy makers will recognise the value of this work and it will influence change.

I will reiterate three things they found or included in their work.

  1. Young people want to be heard.
  2. Young people’s views on data and trust include concerns about conflated data purposes, and
  3. The concept of being “data driven under COVID conditions”.

This OMDDAC work, together with Investing in Children, is very timely as a rapid response, but I think it is also important to set it in context, and to recognize that some of its significance is that it reflects a continuum of similar findings over time, largely unaffected by the pandemic.

Claire’s work comprehensively backs up the consistent findings of over ten years of public engagement, including with young people.

The 2010 study with young people conducted by The Royal Academy of Engineering, supported by three Research Councils and Wellcome, discussed attitudes towards the use of medical records and concluded that these questions and concerns must be addressed by policy makers, regulators, developers and engineers before progressing with the design and implementation of record-keeping systems and the linking of any databases.

In 2014, the House of Commons Science and Technology Committee, in their report Responsible Use of Data, said that the Government has a clear responsibility to explain to the public how personal data is being used.

The same Committee's Big Data Dilemma 2015-16 report (p9) concluded that "data (some collected many years before and no longer with a clear consent trail) […] is unsatisfactory left unaddressed by Government and without a clear public-policy position."

Or see the 2014 Royal Statistical Society and Ipsos MORI work on the data trust deficit, with lessons for policymakers; DotEveryone's 2019 work on Public Attitudes; or the ICO's 2020 Annual Track survey results.

There is also a growing body of literature demonstrating the implications of being a 'data driven' society for the datafied child, as described by Deborah Lupton and Ben Williamson in their own research in 2017.

[This year our own work with young people, published in our report on data metaphors “the words we use in data policy”, found that young people want institutions to stop treating data about them as a commodity and start respecting data as extracts from the stories of their lives.]

The UK government and policy makers are simply ignoring the inconvenient truth that legislation and governance frameworks that exist today, such as the UN General Comment No. 25 on children's rights in relation to the digital environment, demand that people know what is done with data about them, and must be applied to address children's right to be heard and to enable them to exercise their data rights.

The public perceptions study within this new OMDDAC work, shows that it’s not only the views of children and young people that are being ignored, but adults too.

And perhaps it is worth reflecting here that people often don't think about all this in terms of data rights and data protection, but rather in terms of human rights, and protections for the human being from uses of data that give other people power over our lives.

This project found young people's trust in the use of their confidential personal data was affected by understanding who would use the data and why, and how people would be protected from prejudice and discrimination.

We could build easy-reporting mechanisms at public points of contact with state institutions (in education, in social care, in welfare and policing) to produce, on demand, reports of the information held about me, and to enable corrections. It would benefit institutions by giving them more accurate data, and make them more trustworthy if people can see: here's what you hold on me, and here's what you did with it.

Instead, we’re going in the opposite direction. New government proposals suggest making that process harder, by charging for Subject Access Requests.

This research shows that current policy is not what young people want. People want the ability to choose between granular levels of control over the data being shared. They value having autonomy and control: knowing who will have access, how records accuracy will be maintained, how people will be kept informed of changes, who will maintain and regulate the database, data security and anonymisation, and having their views listened to.

Young people also fear the power of data to speak for them: that the data about them are taken at face value, and listened to by those in authority more than the child in their own voice.

What do these findings mean for public policy? Without respect for what people want; for the fundamental human rights and freedoms for all, there is no social license for data policies.

Whether it’s confidential GP records or the school census expansion in 2016, when public trust collapses so does your data collection.

Yet the government stubbornly refuses to learn and seems to believe it’s all a communications issue, a bit like the ‘Yes Minister’ English approach to foreigners when they don’t understand: just shout louder.

No, this research shows data policy failures are not fixed by, “communicate the benefits”.

Nor is it fixed by changing Data Protection law. As a comment in the report says, UK data protection law offers a “how-to” not a “don’t-do”.

Data protection law is designed to be enabling of data flows. But that can mean that where state data processing, often rightly, avoids relying on consent as its lawful basis in data protection terms, the data use is not consensual.

[For the sake of time, I didn't include this thought in the next two paragraphs in the talk, but I think it is important to mention that in our own work we find that this contradiction is not lost on young people. Against the backdrop of the efforts after the MeToo movement, and all that was said by Ministers in Education and at the DCMS about the Everyone's Invited work earlier this year to champion consent in the relationships, sex and health education (RSHE) curriculum, adults in authority keep saying consent matters, but don't demonstrate it, and when it comes to data, use people's data in ways they do not want.

The report picks up that young people, and disproportionately those communities that experience harm from authorities, mistrust data sharing with the police. This is now set against the backdrop of not only the recent Wayne Couzens case, but a series of very public misuses of police power, including COVID powers.]

The data powers used "under COVID conditions" are now being used as cover for an attack on data protections in the future. The DCMS consultation on changing UK data protection law, open until November 19th, suggests that the similarly reduced protections on data distribution applied in the emergency should become the norm. While DP law is written expressly to permit things that are out of the ordinary in extraordinary circumstances, those permissions are limited in time. The government is proposing that some things that were found convenient to do under COVID now become commonplace.

The proposals include things such as removing Article 22 from the UK GDPR, with its protections for people in processes involving automated decision-making.

Young people were those who felt first-hand the risks and harms of those processes in the summer of 2020, and the "mutant algorithm" is something this Observatory Report work also addressed in its research. Again, it found young people felt left out of decisions about them, despite being the group that would feel their negative effects.

[Data protection law may be enabling increased lawful data distribution across the public sector, but it is not offering people, including young people, the protections they expect of their human right to privacy. We are on a dangerous trajectory for public interest research and for society, if the “new direction” this government goes in, for data and digital policy and practice, goes against prevailing public attitudes and undermines fundamental human rights and freedoms.]

The risks and benefits of the power obtained from the use of admin data are felt disproportionately across different communities, including children, who are not a one-size-fits-all, homogeneous group.

[While views across groups will differ — and we must be careful to understand any popular context at any point in time on a single issue and unconscious bias in and between groups — policy must recognise where there are consistent findings across this research with that which has gone before it. There are red lines about data re-uses, especially on conflated purposes using the same data once collected by different people, like commercial re-use or sharing (health) data with police.]

The golden thread that runs through time and across different sectors' data use is the set of legal frameworks, underpinned by democratic mandates, that uphold our human rights.

I hope the powers-that-be in the DCMS consultation, and wider policy makers in data and digital policy, take this work seriously and not only listen, but act on its recommendations.


2024 updates: opening paragraph edited to add current links.
A chapter written by Rachel Allsopp and Claire Bessant discussing OMDDAC's research with children will also be published on 21st May 2024 in Governance, democracy and ethics in crisis-decision-making: The pandemic and beyond (Manchester University Press), as part of its Pandemic and Beyond series (https://manchesteruniversitypress.co.uk/9781526180049/), and an article discussing the research in the open access European Journal of Law and Technology is available here: https://www.ejlt.org/index.php/ejlt/article/view/872.

Views on a National AI strategy

Today was the APPG AI Evidence Meeting – The National AI Strategy: How should it look? Here are some of my personal views and takeaways.

"Have the Regulators the skills and competency to hold organisations to account for what they are doing?" asked Roger Taylor, the former Chair of Ofqual, the exams regulator, as he began the panel discussion, chaired by Lord Clement-Jones.

A good question was followed by another.

"What are we trying to do with AI?" asked Andrew Strait, Associate Director of Research Partnerships at the Ada Lovelace Institute and formerly of DeepMind and Google. The goal of a strategy should not be to have more AI for the sake of having more AI, he said, but an articulation of values and goals. (I'd suggest the government may in fact be in favour of exactly that: more AI for its own sake, where its application is seen as a growth market.) And interestingly, he suggested that the Scottish strategy has a more values-based model, such as fairness. [I had, it seems, wrongly assumed that a *national* AI strategy to come would include all of the UK.]

The arguments on fairness are well worn in AI discussion and getting old. And yet they still too often fail to ask whether these tools are accurate or even work at all. Look at the education sector and one company's product, ClassCharts, which claimed AI as its USP for years, until the ICO found in 2020 that the company didn't actually use any AI at all. If company claims are not honest, or not accurate, then they're not fair to anyone, never mind fair across everyone.

Fairness is still too often thought of in terms of explainability of a computer algorithm, not the entire process it operates in. As I wrote back in 2019, “yes we need fairness accountability and transparency. But we need those human qualities to reach across thinking beyond computer code. We need to restore humanity to automated systems and it has to be re-instated across whole processes.”

Strait went on to say that safe and effective AI would be something people can trust. And he asked the important question: who gets to define what a harm is? Rightly identifying that the harm identified by a developer of a tool, may be very different from those people affected by it. (No one on the panel attempted to define or limit what AI is, in these discussions.) He suggested that the carbon footprint from AI may counteract the benefit it would have to apply AI in the pursuit of climate-change goals. “The world we want to create with AI” was a very interesting position and I’d have liked to hear him address what he meant by that, who is “we”, and any assumptions within it.

Lord Clement-Jones asked him about some of the work that Ada Lovelace had done on harms such as facial recognition, and also asked whether some sector technologies are so high-risk that they must be regulated. Strait suggested that we lack adequate understanding of what harms are. I'd suggest academia and civil society have done plenty of work on identifying those harms; they've just too often been ignored until after the harm is done and there are legal challenges. Strait also suggested he thought the Online Harms agenda was 'a fantastic example' of both horizontal and vertical regulation. [Hmm, let's see. Many people would contest that, and we'll see what the Queen's Speech brings.]

Maria Axente then went on to talk about children and AI. Her focus was on big platforms, but she also mentioned a range of other application areas. She spoke of the data governance work going on at UNICEF. She included the need to drive awareness of the risks of AI for children, and digital literacy; the potential for limitations on child development, the exacerbation of the digital divide, and risks in public spaces, but also hoped-for opportunities. She suggested that the AI strategy may therefore be the place for including children.

This of course was something I would want to discuss at more length, but in summary, the last decade of Westminster policy affecting children, even the Children's Commissioner's most recent Big Ask survey, has bypassed the question of children's *rights* completely. If the national AI strategy by contrast would address rights [the foundation upon which data laws are built], and create the mechanisms in public sector interactions with children that would enable them to be told if and how their data is being used (in AI systems or otherwise) and to exercise the choices that public engagement time and time again says is what people want, then that would be a *huge* and positive step forward for effective data practice across the public sector and for use of AI. Otherwise I see a risk that a strategy on AI and children will ignore children as rights holders across a full range of rights in the digital environment, and focus only on the role of AI in child protection, a key DCMS export aim, while ignoring the invasive nature of safety tech tools and their harms.

Next, Dr Jim Weatherall from AstraZeneca tied together leveraging "the UK unique strengths of the NHS" and "data collected there", wanting a close knitting together of the national AI strategy and the national data strategy, so that the healthcare, life sciences and biomedical sector can become "an international renowned asset." He'd like to see students doing data science modules in their studies, and international access to talent to work for AZ.

Lord Clement-Jones then asked him how to engender public trust in data use. Weatherall said a number of false starts in the past are hindering progress, but that he saw the way forward was data trusts and citizen juries.

His answer ignores the most obvious solution: respect existing law and human rights, using data only in ways that people want and have given their permission for. Then show them that you did that, and nothing more. In short, what medConfidential first proposed in 2014: the creation of data usage reports.

The infrastructure for managing personal data controls in the public sector, as well as among its private partners, must be the basic building block for any national AI strategy. Views from public engagement work, polls, and outreach have not changed significantly since those done in 2013-14, but ask for the same things over and over again: respect for 'red lines', and to have control and choice. Won't government please make it happen?

If the government fails to put in place those foundations, whatever strategy it builds will fall in the same ways they have done to date, like care.data did, by assuming it was acceptable to use data in the way that the government wanted, without a social licence, in the name of "innovation". Aims that were championed by companies such as Dr Foster, that profited from reusing personal data from the public sector, in a "hole and corner deal" as described by the chairman of the House of Commons Committee of Public Accounts in 2006. Such deals put industry and "innovation" ahead of what the public want in terms of 'red lines' for acceptable re-uses of their own personal data, and for data re-used in the public interest versus for commercial profit. And "The Department of Health failed in its duty to be open to parliament and the taxpayer." That openness and accountability are still missing nearly ten years on, in the scope creep of national datasets and commercial reuse, and in expanding data policies and research programmes.

I disagree with the suggestion made that Data Trusts will somehow be more empowering to everyone than the mechanisms we have today for data management. I believe Data Trusts will further stratify those who are included and those excluded, benefit those who have the capacity to participate, and disadvantage those who cannot choose. They are also a fig leaf of acceptability that doesn't solve the core challenge. Citizen juries cannot do more than give a straw poll. Every person whose data is used has entitlement to rights in law, and the views of a jury or Trust cannot speak for everyone or override those rights protected in law.

Tabitha Goldstaub spoke next and outlined some of what the AI Council Roadmap had published. She suggested looking at removing barriers to best support the AI start-up community.

As I wrote when the roadmap report was published, there are basics missing in government's own practice that could be solved. It had an ambition to "Lead the development of data governance options and its uses. The UK should lead in developing appropriate standards to frame the future governance of data," but the Roadmap largely ignored the governance infrastructures that already exist. One can only read into that a desire to change and redesign what those standards are.

I believe that there should be no need to change the governance of data, but instead to make today's rights able to be exercised, and to deliver enforcement that makes existing governance actionable. Any genuine "barriers" to data use in data protection law are designed as protections for people; the people the public sector, its staff and these arms-length bodies are supposed to serve.

Blaming AI and algorithms, blaming lack of clarity in the law, blaming “barriers” is often avoidance of one thing. Human accountability. Accountability for ignorance of the law or lack of consistent application. Accountability for bad policy, bad data and bad applications of tools is a human responsibility. Systems you choose to apply to human lives affect people, sometimes forever and in the most harmful ways, so those human decisions must be accountable.

I believe that some simple changes in practice when it comes to public administrative data could bring huge steps forward there:

  1. An audit of existing public admin data held, by national and local government, and consistent published registers of the databases and algorithms / AI / ML currently in use (a rough sketch of one possible register entry follows this list).
  2. Identify the lawful basis for each set of data processes, their earliest record dates and content.
  3. Publish the resulting ROPA and storage limitations.
  4. Assign accountable owners to databases, tools and the registers.
  5. Sort out how you will communicate with people whose data you process unlawfully, in order to meet the law, or stop processing it.
  6. And above all, publish a timeline for data quality processes, and show that you understand how the degradation of data accuracy and quality affects the rights and responsibilities in law that change over time as a result.
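Purely as an illustration of what items 1 to 4 could add up to, here is a minimal sketch of one published register entry. It assumes nothing about any real government schema; every field name and value is hypothetical.

    # Hypothetical sketch of a single entry in a published register of databases
    # and algorithms / AI / ML in use. Field names are illustrative, not an existing schema.
    from dataclasses import dataclass
    from typing import List

    @dataclass
    class RegisterEntry:
        system_name: str            # the database, algorithm or ML tool in use (item 1)
        controller: str             # the national or local government body holding the data
        accountable_owner: str      # named owner of the database, tool and register entry (item 4)
        lawful_basis: str           # lawful basis identified for the processing (item 2)
        purposes: List[str]         # what the data is processed for
        earliest_record_date: str   # earliest records held (item 2)
        retention_limit: str        # published storage limitation (item 3)
        data_quality_review: str    # timeline for data quality processes (item 6)

    example = RegisterEntry(
        system_name="School attendance analytics",
        controller="Example Local Authority",
        accountable_owner="Head of Data, Children's Services",
        lawful_basis="public task",
        purposes=["attendance monitoring"],
        earliest_record_date="2015-09-01",
        retention_limit="end of compulsory education plus one year",
        data_quality_review="annual",
    )
    print(example)

Whatever the fields are eventually called, the point is that each database or tool has a named owner, a stated lawful basis, and published limits that people can check.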

Goldstaub went on to say, on ethics and inclusion, that if it's not diverse, it's not ethical. Perhaps the next panel itself and similar events could take a lesson learned from that, as such APPG panel events are not as diverse as they could or should be themselves. Some of the biggest harms in the use of AI are, after all, for those in communities least represented, and panels like this tend to ignore lived reality.

The Rt Rev Croft then wrapped up the introductory talks on that more human note, and by exploding some myths. He importantly talked about the consequences he expects of the increasing use of AI and its deployment in 'the future of work', for example, and its effects on our humanity. He proposed five topics for inclusion in the strategy and suggested it is essential to engage a wide cross-section of society. And most importantly, to ask: what is this doing to us as people?

There were then some of the usual audience questions asked on AI, transparency, garbage-in garbage-out, challenges of high risk assessment, and agreements or opposition to the EU AI regulation.

What frustrates me most in these discussions is that the technology is an assumed given, and the bias that gives to the discussion is itself ignored. A holistic national AI strategy should be looking at if and why AI at all. What are the consequences of this focus on AI, and what policy-making oxygen and capacity does it take away from other areas of what government could or should be doing? The questioner who asks how adaptive learning could use AI for better learning in education fails to ask what good learning looks like, and if and how adaptive tools, analogue or digital, fit into that at all.

I would have liked to ask panellists whether they agree that proposals for public engagement and digital literacy distract from the lack of human accountability for bad policy decisions that use machine-made support. Taking examples from 2020 alone, there were three applications of algorithms and data in the public sector challenged by civil society because of their harms: the Home Office dropping its racist visa algorithm, the DWP court case finding 'irrational and unlawful' decision-making in Universal Credit, and the "mutant algorithm" of the summer 2020 exams. Digital literacy does nothing to help people in those situations. What AI has done is to increase the speed and scale of the harms caused by harmful policy, such as the 'Hostile Environment', which is harmful by design.

Any Roadmap, AI Council recommendations, and any national strategy that is serious about what good looks like, must answer how those harms would be prevented in the public sector *before* systems are applied. It's not about the tech, AI or not, but misuse of power. If the strategy or a Roadmap or ethics code fails to state how it would prevent such harms, then it isn't serious about ethics in AI, but is ethics-washing its aims under the guise of saying the right thing.

One unspoken problem right now is the focus on the strategy solely for the delivery of a pre-determined tool (AI). Who cares what the tool is? Public sector data comes from the relationship between people and the provision of public services by government at various levels, and its AI strategy seems to have lost sight of that.

What good would look like in five years would be the end of siloed AI discussion, as if it were a desirable silver bullet with mythical numbers of 'economic growth' as a result; instead, AI would be treated as any other tech, and its role in end-to-end processes or service delivery would be discussed proportionately. Panellists would stop suggesting that the GDPR is hard to understand or that people cannot apply it. Almost all of the same principles in UK data laws have applied for over twenty years. And regardless of the GDPR, Convention 108 applies to the UK post-Brexit unchanged, including the associated Council of Europe Guidelines on AI, data protection, privacy and profiling.

Data laws. AI regulation. Profiling. Codes of Practice on children, online safety, or biometrics and emotion or gait recognition. There *are* gaps in data protection law when it comes to biometric data not used for unique identification purposes. But much of this is already rolled into other law and regulation for the purposes of upholding human rights and the rule of law. The challenge in the UK is often not the lack of law, but the lack of its enforcement. There are concerns in civil society that the DCMS is seeking to weaken core ICO duties even further. Recent government, council and think tank roadmaps talk of the UK leading on new data governance, but in reality they simply want to see established laws rewritten to be less favourable to rights. To be less favourable towards people.

Data laws are *human* rights-based laws. We will never get a workable UK national data strategy or national AI strategy if government continues to ignore the very fabric of what they are to be built on. Policy failures will be repeated over and over until a strategy supports people to exercise their rights and have them respected.

Imagine if the next APPG on AI asked what human rights-respecting practice and policy would look like, and what infrastructure the government would need to fund or build to make it happen. In public-private sector areas (like edTech). Or in the justice system, health, welfare, children's social care. What could that Roadmap look like, and how could we make it happen, over what timeframe? Strategies that could win public trust *and* get the sectoral wins the government and industry are looking for. Then we might actually move forwards on getting a functional strategy that would work for delivering public services, and where both AI and data fit into that.

The consent model fails school children. Let’s fix it.

The Joint Committee on Human Rights report, The Right to Privacy (Article 8) and the Digital Revolution,  calls for robust regulation to govern how personal data is used and stringent enforcement of the rules.

“The consent model is broken” was among its key conclusions.

Similarly, this summer, the Swedish DPA found, in accordance with the GDPR, that consent was not a valid legal basis for a school pilot using facial recognition to keep track of students' attendance, given the clear imbalance between the data subject and the controller.

This power imbalance is at the heart of the failure of consent as a lawful basis under Art. 6, for data processing from schools.

Schools, children and their families across England and Wales currently have no mechanisms to understand which companies and third parties will process their personal data in the course of a child’s compulsory education.

Children have rights to privacy and to data protection that are currently disregarded.

  1. Fair processing is a joke.
  2. Unclear boundaries between the processing in-school and by third parties are the norm.
  3. Companies and third parties reach far beyond the boundaries of processor, necessity and proportionality, when they determine the nature of the processing: extensive data analytics, product enhancements and development going beyond what is necessary for the existing relationship, or product trials.
  4. Data retention rules are as unrespected as the boundaries of lawful processing, and 'we make the data pseudonymous / anonymous and then archive / process / keep forever' is common.
  5. Rights are as yet almost completely unheard of for schools to explain, offer and respect, except for Subject Access. Portability, for example, a requirement where consent is relied on, simply does not exist.

In paragraph 8 of its general comment No. 1, on the aims of education, the UN Committee on the Rights of the Child stated in 2001:

“Children do not lose their human rights by virtue of passing through the school gates. Thus, for example, education must be provided in a way that respects the inherent dignity of the child and enables the child to express his or her views freely in accordance with article 12, para (1), and to participate in school life.”

Those rights currently unfairly compete with commercial interests. And that power imbalance in education is as enormous as the data mining in the sector. The then CEO of Knewton, Jose Ferreira, said in 2012,

“the human race is about to enter a totally data mined existence…education happens to be today, the world’s most data mineable industry– by far.”

At the moment, these competing interests and the enormous power imbalance between companies and schools, and between schools and families, mean children's rights are last on the list and often ignored.

In addition, there are serious implications for the State, schools and families due to the routine dependence on key systems at scale:

  • Infrastructure dependence, i.e. Google Education
  • Hidden risks [tangible and intangible] of freeware
  • Data distribution at scale, and dependence on third-party intermediaries
  • and not least, the implications for families' mental health and stress, thanks to the shift of the burden of school back-office admin from schools to the family.

It’s not a contract between children and companies either

Contract, GDPR Article 6(1)(b), does not work either as a basis of processing between the company and the data subject, because again, it is the school that determines the need for and nature of the processing in education, and it doesn't work for children.

The European Data Protection Board published Guidelines 2/2019 on the processing of personal data under Article 6(1)(b) GDPR in the context of the provision of online services to data subjects, on October 16, 2019.

Controllers must, inter alia, take into account the impact on data subjects’ rights when identifying the appropriate lawful basis in order to respect the principle of fairness.

They also concluded, on the capacity of children to enter into contracts (footnote 10, page 6), that:

“A contractual term that has not been individually negotiated is unfair under the Unfair Contract Terms Directive “if, contrary to the requirement of good faith, it causes a significant imbalance in the parties’ rights and obligations arising under the contract, to the detriment of the consumer”.

Like the transparency obligation in the GDPR, the Unfair Contract Terms Directive mandates the use of plain, intelligible language.

Processing of personal data that is based on what is deemed to be an unfair term under the Unfair Contract Terms Directive, will generally not be consistent with the requirement under Article 5(1)(a) GDPR that processing is lawful and fair."

In relation to the processing of special categories of personal data, in the guidelines on consent, WP29 has also observed that Article 9(2) does not recognize ‘necessary for the performance of a contract’ as an exception to the general prohibition to process special categories of data.

They too found:

it is completely inappropriate to use consent when processing children’s data: children aged 13 and older are, under the current legal framework, considered old enough to consent to their data being used, even though many adults struggle to understand what they are consenting to.

Can we fix it?

Consent models fail school children. Contracts can’t be between children and companies. So what do we do instead?

Schools' statutory tasks rely on having a legal basis under data protection law: the public task lawful basis, Article 6(1)(e) under GDPR, which implies accompanying lawful obligations and responsibilities of schools towards children. They cannot rely on 6(1)(f), legitimate interests. This 6(1)(e) basis does not extend directly to third parties.

Third parties should operate on the basis of contract with the school, as processors, but nothing more. That means third parties do not become data controllers. Schools stay the data controller.

Where that would differ from current practice is that most processors today stray beyond necessary tasks and become de facto controllers. Sometimes because of their everyday processing, having too much of a determining role in the definition of purposes, or not allowing changes to terms and conditions; sometimes because of using data to develop their own or new products, for extensive data analytics, or because of the location of processing and data transfers; and very often because of excessive retention.

Although the freedom of the mish-mash of procurement models across UK schools (on an individual basis, through learning grids, MATs, Local Authorities) and the no-one-size-fits-all model may often be a good thing, the lack of consistency today means your child's privacy and data protection are subject to a postcode lottery. Instead we need:

  • a radical rethink of the use of consent models, and of home-school agreements used to obtain manufactured 'I agree' consent,
  • to radically articulate and regulate what good looks like, for interactions between children and companies facilitated by schools, and
  • to radically redesign a contract model which enables only that processing which is within the limitations of a processor's remit and therefore does not need to rely on consent.

It would mean radical changes in retention as well. Processors can process for only as long as the legal basis extends from the school. That should generally be only the time for which a child is in school and using that product in the course of their education. And certainly data must not stay with an indefinite number of companies and their partners once the child has left that class or year, or has left school and stopped using the tool. Schools will need to be able to bring part of the data they outsource to third parties for learning back into the educational record, *if* they need it as evidence or as part of the learning record.
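As a minimal sketch of that retention principle, assuming entirely hypothetical names (neither Pupil nor may_still_process comes from any real system), a processor-side check might look like this:

    # Illustrative only: a processor checks whether the legal basis that extends
    # from the school still applies, before continuing to process a pupil's data.
    from dataclasses import dataclass

    @dataclass
    class Pupil:
        on_roll_at_contracting_school: bool   # still in that class, year or school
        uses_product_for_learning: bool       # still using this tool in their education

    def may_still_process(pupil: Pupil) -> bool:
        # The basis lapses once the child has left or stopped using the tool;
        # at that point the data should be deleted or returned to the school,
        # not retained indefinitely by the company and its partners.
        return pupil.on_roll_at_contracting_school and pupil.uses_product_for_learning

    print(may_still_process(Pupil(False, False)))  # False: no basis remains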

Where schools close (or the legal entity shuts down and no one thinks of the school records [yes, it happens], changes name, and reopens within the same walls, as under academisation), there must be a designated controller communicated before the change occurs.

The school fence is then something that protects the purposes of the child's data for education, for life, and is the go-to for questions. The child has a visible and manageable digital footprint. Industry can be confident that they do indeed have a lawful basis for processing.

Schools need to be within a circle of competence

This would need an independent infrastructure we do not have today, but need to draw on.

  • Due diligence,
  • communication to families and children of agreed processors on an annual basis,
  • an opt out mechanism that works,
  • alternative lesson content on offer, to meet a similar level of offering for those who do opt out,
  • and end-of-school-life data usage reports.

The due diligence in procurement, the data protection impact assessment, and accountability need to be done up front, removed from the responsibility of the classroom teacher, who is in an impossible position, having had no basic teacher training in privacy law or data protection rights, and the documents need to be published, in consultation with governors and parents, before processing begins.

However, it would need to have a baseline of good standards that simply does not exist today.

That would also offer a public safeguard for processing at scale, where a company is not notifying the DPA due to small numbers of children at each school, but where overall group processing of special category (sensitive) data could be for millions of children.

Where some procurement structures might exist today, in left over learning grids, their independence is compromised by corporate partnerships and excessive freedoms.

While pre-approval of apps and platforms can fail where the onus is on the controller to accept a product at a point in time, the power shift would occur where products would not be permitted to continue processing without notifying schools of significant changes in agreed activities, ownership, storage of data abroad, and so on.

We shift the power balance back to schools, where they can trust a procurement approval route, and children and families can trust schools to only be working with suppliers that are not overstepping the boundaries of lawful processing.

What might school standards look like?

The first principles of necessity, proportionality and data minimisation would need to be demonstrable — just as has been required under data protection law for many years, and as is more explicit under GDPR's accountability principle. The scope of the school's authority must be limited to data processing for defined educational purposes under law, and only these purposes can be carried over to the processor. It would need legislation and a Code of Practice, and ongoing independent oversight. Violations could mean losing the permission to be a provider in the UK school system. Data processing failures would be referred to the ICO.

  1. Purposes: A duty on the purposes of processing to be necessary for strictly defined educational purposes.
  2. Service Improvement: Processing personal information collected from children to improve the product would be very narrow and constrained to the existing product and relationship with data subjects — i.e. security, not secondary product development.
  3. Deletion: Families and children must still be able to request deletion of personal information collected by vendors which does not form part of the permanent educational record. And a 'clean slate' approach for anything beyond the necessary educational record, which would in any event be school controlled.
  4. Fairness: Whilst the child is at school, the school has responsibility for communicating to the child and family how their personal data are processed.
  5. Post-school accountability as the data resides with the school: On leaving school, the default for most companies should be deletion of all personal data provided by the data subject, provided by the school, and inferred from processing. For remaining data, the school should become the data controller and the data transferred to the school. For any remaining company processing, the company must be accountable as controller on demand to both the school and the individual, and at minimum communicate data usage on an annual basis to the school.
  6. Ongoing relationships: Loss of communication channels should be assumed to be a withdrawal of the relationship, and data transferred to the school, if not deleted.
  7. Data reuse and repurposing for marketing explicitly forbidden: Vendors must be prohibited from using information for secondary [onward or indirect] reuse, for example in product or external marketing to pupils or parents.
  8. Families must still be able to object to processing, on an ad hoc basis, but at no detriment to the child, and an alternative method of achieving the same aims must be offered.
  9. Data usage reports would become the norm, to close the loop on an annual basis (a rough sketch follows this list): "Here's what we said we'd do at the start of the year. Here's where your data actually went, and why."
  10. In addition, minimum acceptable ethical standards could be framed around, for example, accessibility, and restrictions on in-product advertising.
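To make item 9 concrete, here is a rough sketch of what an annual data usage report might pair together. The structure and field names are invented for illustration, not drawn from any existing standard.

    # Hypothetical end-of-year data usage report, closing the loop as item 9 suggests.
    declared_at_start_of_year = {
        "processor": "ExampleEdTech Ltd",
        "purposes": ["homework setting", "progress tracking"],
        "data_items": ["name", "class", "homework scores"],
        "retention": "deleted at end of school year",
    }

    reported_at_end_of_year = {
        "processor": "ExampleEdTech Ltd",
        "purposes_used": ["homework setting", "progress tracking"],
        "onward_sharing": [],             # any sharing beyond the school must be listed
        "retention_honoured": True,       # was the declared retention met?
        "changes_notified": ["sub-processor change for hosting, notified to the school"],
    }

    # "Here's what we said we'd do at the start of the year.
    #  Here's where your data actually went, and why."
    for key, value in reported_at_end_of_year.items():
        print(f"{key}: {value}")

However such a report is formatted, the value is in the comparison: the declaration made at the start of the year sits alongside what actually happened.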

There must be no alternative back route to just enough processing

What we should not do, is introduce workarounds by the back door.

Schools are not to carry on as they do today, manufacturing ‘consent’ which is in fact unlawful. It’s why Google, despite the objection when I set this out some time ago, is processing unlawfully. They rely on consent that simply cannot and does not exist.

The U.S. schools model wording would similarly fail GDPR tests, in that schools cannot 'consent' on behalf of children or families. I believe that in practice the US has weakened what should be strong protections for school children, by having the too-expansive "school official exception" found in the Family Educational Rights and Privacy Act ("FERPA"), and as described in Protecting Student Privacy While Using Online Educational Services: Requirements and Best Practices.

Companies can also work around their procurement pathways.

In parallel timing, the US Federal Trade Commission has a consultation open until December 9th on the Implementation of the Children's Online Privacy Protection Rule, the COPPA consultation.

The COPPA Rule “does not preclude schools from acting as intermediaries between operators and schools in the notice and consent process, or from serving as the parents’ agent in the process.”

'There has been a significant expansion of education technology used in classrooms', the FTC mused, before asking whether the Commission should consider a specific exception to parental consent for the use of education technology in schools.

In a backwards approach to agency and the development of a rights respecting digital environment for the child, the consultation in effect suggests that we mould our rights mechanisms to fit the needs of business.

That must change. The ecosystem needs a massive shift to acknowledge that if it is to be GDPR compliant, which is a rights respecting regulation, then practice must become rights respecting.

That means meeting children's and families' reasonable expectations. If I send my daughter to school, and we are required to use a product that processes our personal data, it must be strictly for the *necessary* purposes of the task that the school asks of the company, and that the child/family expects, and not a jot more.

Borrowing from Ben Green's smart enough city concept, or Rachel Coldicutt's just enough Internet, UK school edTech suppliers should be doing just enough processing.

How it is done in the U.S., governed by FERPA law, is imperfect and still results in too many privacy invasions, but it offers a regional model of expertise for schools to rely on, and strong contractual agreements about what is permitted.

That, we could build on. It could be just enough, to get it right.

Swedish Data Protection Authority decision published on facial recognition (English version)

In August 2019, the Swedish DPA fined Skellefteå Municipality, Secondary Education Board 200 000 SEK (approximately 20 000 euros) pursuant to the General Data Protection Regulation (EU) 2016/679 for using facial recognition technology to monitor the attendance of school children.

The Authority has now made a 14-page translation of the decision available in English on its site, which can be downloaded.

This facial recognition technology trial compared images from camera surveillance with pre-registered images of the face of each child, and processed first and last names.

In the preamble, the decision recognised that the General Data Protection Regulation does not contain any derogations for pilot or trial activities.

In summary, the Authority concluded that by using facial recognition via camera to monitor school children's attendance, the Secondary Education Board (Gymnasienämnden) in the municipality of Skellefteå (Skellefteå kommun) processed personal data in a manner that was unnecessary, excessively invasive, and unlawful, with regard to:

  • Article 5 of the General Data Protection Regulation, by processing personal data in a manner that is more intrusive than necessary and encompasses more personal data than is necessary for the specified purpose (monitoring of attendance),
  • Article 9, by processing special category personal data (biometric data) without having a valid derogation from the prohibition on the processing of special categories of personal data,

and

  • Articles 35 and 36 by failing to fulfil the requirements for an impact assessment and failing to carry out prior consultation with the Swedish Data Protection Authority.

Consent

Perhaps the most significant part of the decision is the first officially documented recognition, in education data processing under GDPR, that consent fails, even though explicit guardians' consent was requested and it was possible to opt out. It recognised that this was about processing the personal data of children in a disempowered relationship and environment.

It makes the assessment that consent was not freely given. It is widely recognised that consent cannot be a tick-box exercise, and that any choice must be informed. However, little attention has yet been given in GDPR circles to the power imbalance of relationships, especially for children.

The decision recognised that the relationship that exists between the data subject and the controller, namely the balance of power, is significant in assessing whether a genuine choice exists, and whether or not it can be freely given without detriment. The scope for voluntary consent within the public sphere is limited:

“As regards the school sector, it is clear that the students are in a position of dependence with respect to the school …”

The Education Board had said that consent was the basis for the processing of the facial recognition in attendance monitoring.

With the Data Protection Authority’s assessment that the consent was invalid, the lawful basis for processing fell away.

The importance of necessity

The basis for processing was consent, 6(1)(a), not 6(1)(e), 'necessary for the performance of a task carried out in the public interest or in the exercise of official authority vested in the controller', so as to process special category [sensitive] personal data.

However the same test of necessity, was also important in this case. Recital 39 of GDPR requires that personal data should be processed only if the purpose of the processing could not reasonably be fulfilled by other means.

The Swedish Data Protection Authority recognised and noted that, while there is a legal basis for administering student attendance at school, there is no explicit legal basis for performing the task through the processing of special categories of personal data or in any other manner which entails a greater invasion of privacy — put simply, taking the register via facial recognition did not meet the data protection test of being necessary and proportionate. There are less privacy-invasive alternatives available, and on balance, the rights of the individual outweigh the interests of the data controller.

While some additional considerations were made for local Swedish data protection law (the Data Protection Act, prop. 2017/18:105 Ny dataskyddslag), even those exceptional provisions were not intended to be applied routinely to everyday tasks.

Considering rights by design

The decision refers to the document provided by the school board, Skellefteå kommun – Framtidens klassrum (Skellefteå municipality – The classroom of the future). In its appendix (p. 5), it noted one advantage of facial recognition is that it is easy to register a large group such as a class in bulk. The disadvantages mentioned include that it is a technically advanced solution which requires a relatively large number of images of each individual, that the camera must have a free line of sight to all students who are present, and that any headdress/shawls may cause the identification process to fail.

The Board did not submit a prior consultation for data protection impact assessment to the Authority under Article 36. The Authority considered that a number of factors indicated that the processing operations posed a high risk to the rights and freedoms of the individuals concerned but that these were inadequately addressed, and failed to assess the proportionality of the processing in relation to its purposes.

For example, the processing operations involved:
a) the use of new technology,
b) special categories of personal data,
c) children, and
d) a power imbalance between the parties.

As the risk assessment submitted by the Board did not demonstrate an assessment of relevant risks to the rights and freedoms of the data subjects [and its mitigations], the decision noted that the high risks pursuant to Article 36 had not been reduced.

What’s next for the UK

The Swedish Data Protection Authority identifies some important points in perhaps the first significant GDPR ruling in the education sector so far, and much of it will apply to school data processing in the UK.

What may surprise some is that this decision was not about the distribution of the data, since the data was stored on a local computer without any internet connection. It was not about security, since the computer was kept in a locked cupboard. It was about the fundamentals of basic data protection and rights to privacy for children in the school environment, under the law.

Processing must meet the tests of necessity. Necessary is not defined by a lay test of convenience.

Processing must be lawful. Consent is rarely going to offer a lawful basis for routine processing in schools, and especially when it comes to the risks to the rights and freedoms of the child when processing biometric data, consent fails to offer satisfactory and adequate lawful grounds for processing, due to the power imbalance.

Data should be accurate, be only the minimum necessary and proportionate, and, not least, respect the fundamental rights of the child.

The Swedish DPA fined Skellefteå Municipality, Secondary Education Board 200 000 SEK (approximately 20 000 euros). According to Article 83 (1) of the General Data Protection Regulation, supervisory authorities must ensure that the imposition of administrative fines is effective, proportionate and dissuasive, and in this case, is designed to end the processing infringements.

The GDPR, as preceding data protection law did, offers a route for data controllers and processors to understand what is lawful, and it demands their accountability: being able to demonstrate that what they do is lawful.

Whether children in the UK will find that it affords them their due protections now depends on enforcement, as in this case.