There is no such thing as “the Digital Age of Consent”. The national consultation: kids online and UK government powers (1).

The government has launched its consultation, including on “raising the digital age of consent”, alongside examining social-media bans for under-16s, curfews, enforced breaks, age verification and VPN misuse. A further “digital childhood inquiry” was announced in January to examine these issues more broadly. The government further proposes to amend the Online Safety Act 2023, through the Crime and Policing Bill, granting itself extremely wide powers.

Unwittingly or not, the most substantial shock this week is the Department for Education seeking to grant itself sweeping powers (pp4-6), without scrutiny or evidence, to “restrict access by children of or under a specified age to specified internet services which they provide, or to specified features or functionalities of such services,” and to substantively revise data protection law on the “Age of consent in relation to processing of a child’s personal data: information society services.” These powers include (in 214A(3)) screen time limits and the times of day at which children may access the service or a specified feature or functionality of the service, i.e. curfews or shutdown laws of the kind that demonstrably did not work in other countries.

These powers are in effect a placeholder for what the government intends to do after the consultation closes, and appears already to have decided.

Key to much of the discourse is the idea of raising the “digital age of consent”. The phrase is misleading shorthand: it obscures both its aims and what raising the age threshold in Article 8 of data protection law would, and would not, achieve.

Much of this discussion rests on a fundamental misunderstanding of what consent actually is under data-protection law, and whether age is legally relevant to it at all. It is fundamentally the wrong vehicle to drive restrictions on children’s use of social media, not least because Article 8 of data protection law applies to *all* processing of children’s personal data on the basis of consent by an information society service, which is almost everything commercially driven online, not only (as yet undefined) social media.

The DfE amendment granting powers to arbitrarily change Article 8 of data protection law, as a proxy vehicle for social-media restrictions without consultation, evidence or scrutiny, is fatally flawed: it conflates bundled service obligations and required data processing (verified ID/age checks) with a consent process (p.5).

Meanwhile, the consultation is written with closed answers that lead to fixed outcomes. I suggest these need to be worked around via email submission to propose an alternative in any responses — and watch out, as the print-ready version and the html version number the same questions differently.

Disappointingly, since the changes to the law are already written, it suggests the consultation is itself a tick-box exercise.

The issues in summary

Ultimately, there is no such thing as a “digital age of consent”, and raising the age in Article 8 of data protection law does not work as a proxy for restricting children’s access to social media:

  • Consent is not determined by age but by capacity, and by its freely given, informed, power-respecting, consensual characteristics;
  • Consent [e.g. to use age verification] is invalid if it is compulsory or obligatory, coercive, or bundled into the provision of a service;
  • Consent does not provide a valid legal ground for the processing of personal data where there is a clear power imbalance between the data subject and the controller (Recital 43);
  • Article 8 requires data minimisation and Recital 57 that no additional personal data [from parent or child] be processed solely to satisfy the requirements of the Act;
  • Raising the age in Article 8, up to which a parent’s personal data must be processed in addition to the child’s to authorise “consent”, would mean the personal data of more parents being collected, from more services, for more years, until the child reaches the higher threshold. Article 8 applies to *all* data processing on the basis of consent by an ISS* “offered directly to a child”, not only social media.

Policy makers must not misuse data protection law for this.

Age as a Gatekeeper is no small change

Article 8 was flawed from the outset, and little public attention has been paid to how age came to be treated as a proxy for capacity in the GDPR at all. It expands data collection (by requiring parental data at all, and in this scenario for longer), and adults cannot truly consent “on behalf of” a child under pressure when age verification is an obligation under a do-it-or-lose-it model, for example where the service is an edTech tool, or even a social-media group a school uses for communications.

Arguably the use of Article 8 to get tick-box permission from parents is not valid consent today, never mind bringing more parents into scope: “such processing shall be lawful only if and to the extent that consent is given or authorised by the holder of parental responsibility over the child.”

Furthermore, it does nothing to empower children or to enable them to exercise their own rights, or to respect and promote their personhood, agency and dignity as independent data subjects. We could indeed redraft Article 8, but not like this. We do need to fix how and where parents’ permission is a poor proxy for informed, freely given consent.

In Australia’s approach, the age at which a designated set of social-media platforms may or may not provide accounts to children is an obligation on the platforms, not on the child or parents, and this should be kept separate from thinking on data protection law.

Europe has always relied on capacity, not age, for the valid exercise of children’s rights and in its frameworks that uphold them under the law. In the era of Epstein, it is more important than ever that we do not normalise the notion that consent is nothing more than a tick box exercise.

Using age as a bouncer for access to online spaces is problematic: in order to separate children from adults, websites must check the IDs of everyone to know who is not a child, in order to treat children differently.

The future of age as a gatekeeper in online safety is a global political agenda that affects not only children but raises questions about its wider purposes in the state control of identity, security, and informational power in the digital age with far-reaching effects on democratic participation and how anyone uses the Internet. What has been a relatively permissionless activity for anyone with the ability to be anonymous or to choose to have multiple identities managed by the corporate provider, suddenly becomes a state-ID-controlled activity. “Robust” or “highly effective” age checks require an official state-authorised ID against which an age credential is verified or assured, whether the access point is provided by a commercial third-party provider or not. Not everyone has one in the UK. Everyone will be obliged to hold a national ID of some kind in future, if “robust” age verification or assurance becomes mandatory to use social media, or go online.

The consultation says (p.40), “One way to achieve the strongest possible approach to any new age-linked restrictions would be to require every existing UK social media user to verify their age online, for example if we were to enact a ban on children from all social media.” 

That means online companies no longer using their own know-your-customer log-in model, which lets us choose how to present our identity to them, but a requirement for a robust know-your-citizen model, with obligations to be able to prove digitally who you are with some sort of “passport-level verification of your legal identity” (26:56). When it comes to the national ID for all, the Westminster government is already planning cross-government uses for the Home Office such as fraud detection (01:02:00) and immigration enforcement, “much like online banking“. The enormity and reality of this requirement to have a digital ID available for age verification should not be seen through the rose-tinted lens of protecting children. With the requirement to hold a state-accepted ID controlled via the database state, it is the end of access to the Internet as we know it for everyone. It’s as if we’re back in 2009, with a consultation to be published next week to “explore the benefits” to people of having a national digital ID by 2029. That is the reality of “robust” age verification. How it will work, and whether the ID-age-restricted access obligation will apply based on the location of the hosted content or the location of the user, is as yet unclear.

While the EU is working out its collective approach, Ireland has already announced its own plans for a state age-verification system, and assumes the EU presidency in July.

Meanwhile, 418 expert academics, technologists and scientists from 30 countries warned this week that social-media bans and age checks can backfire. In a statement, they called for a moratorium until evidence is clearer, citing easy circumvention, migration to risky fringe sites, and years-long infrastructure hurdles.

Rather than consider expanding state age-verification systems that, without due attention, will be able to attach not only your IP address and tracking cookies but your legal identity to every search you make and every place you visit online, we need to spend more effort on how to avoid the identification of children (by companies or others) becoming the norm.

The consultation on this topic: raising the “digital age of consent”

There are five narrow questions on minimum age restrictions and the effectiveness of age-verification and age-assurance technologies, and five on VPNs, each of which I will address as a separate topic later.

This blogpost is only about changing the so-called and misdescribed “digital age of consent”. The proposals suggest raising the age below which parental personal details are required in order to process a child’s data where processing is on the basis of consent.

The answers to the consultation in that section are too closed to propose a new approach, but the approach needs to be reframed entirely.

Replace questions 8-11 on the age of “digital consent” with a new approach. Instead of using Article 8 of data protection law to raise the age up to which parents’ data is processed alongside the child’s, whether from 13 to 16 or anything else, government should review the Age Appropriate Design Code (“the Code”) and address this through enforcement of other parts of existing data protection law.

We must avoid the paradox of:

“We must process more personal data (age verification, and connected parental ID data) to protect children from data processing.”



The background detail of why

1. Consent in Data Protection law

Under the UK GDPR, consent is only one of six lawful bases on which personal data may be processed. Consent is not required in all circumstances. Where consent is relied upon, it must meet strict legal standards:

  • Consent must be freely given, specific, unambiguous, and expressed by a clear, affirmative act by the data subject;
  • Consent must be informed, which means individuals must know who is processing their data, for what purposes, what type of data is involved, and that they can withdraw consent at any time without disadvantage;
  • Consent cannot be bundled, coerced, or made a condition of accessing a service (as with ‘pay or consent’ models; there must be real choice).

This matters because the media framing of “digital consent” treats age as if it were a standalone gateway to online access. It is not.

Consent cannot be freely given where there is compulsion, imbalance of power, or a lack of genuine choice. If processing personal data — for example for age verification or assurance — is made a mandatory condition of accessing a service, that requirement itself invalidates the ‘freely given’ nature of consent. This cannot produce valid consent under data-protection law. Recitals 42 and 43 are very explicit about this.

It is these qualities of consent, and not age, that determine whether the data processing basis of consent, for either the parent or the child, is valid and therefore lawful, or not. It’s not enough to tick “agree”.

2. Article 8 is narrow in scope and not about social media as such

The misunderstanding of Article 8 of the UK GDPR is not primarily about age. It is about scope.

Article 8 applies only where three specific conditions are met:

  • personal data is processed,
  • by an information society service (ISS)*, and
  • that data is processed on the basis of consent.

If a website or service processes personal data on another lawful basis — such as legitimate interests, performance of a contract, or compliance with a legal obligation — then the consent rules, and the age rules attached to them in Article 8, do not apply.

If any of these conditions is absent, Article 8 is irrelevant. It does not govern children’s data processing in general, and it does not create a general age rule for online services or social-media access.
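
To make the narrowness of that scope concrete, here is a minimal illustrative sketch (not legal advice; the predicates and example values are my own simplifications, not terms from the Regulation):

```python
def article_8_engaged(processes_personal_data: bool,
                      is_information_society_service: bool,
                      lawful_basis: str) -> bool:
    """Article 8 is only in play when all three conditions hold:
    personal data is processed, by an ISS, on the basis of consent."""
    return (processes_personal_data
            and is_information_society_service
            and lawful_basis == "consent")

# A service relying on another lawful basis never engages Article 8,
# whatever the user's age:
print(article_8_engaged(True, True, "contract"))  # False
print(article_8_engaged(True, True, "consent"))   # True
```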

3. Article 8 does not create a “digital age of consent”

What Article 8 says is that below the national age threshold set in domestic law (between 13 and 16, depending on the country), parental authorisation is required to process a younger child’s personal data. In practice this means collecting personal details from the parent as well as personal data from the child, in order to connect the relationship. Again, this applies only where consent is the lawful basis being relied upon.

Crucially, Article 8 does not impose a positive obligation on services to obtain consent from a child at a particular age. It sets the point from which a parent’s permission as authorisation, a kind of “pseudo” consent (and the extra parental personal data that entails), is no longer required for processing, and only for processing by an ISS*.

Let’s assume the same parents who agree to children’s social-media use today, and help them sign up (in the UK, under 13) by providing parental permission, will do so in future. For them, this change would mean their data being processed for longer, for more years. Instead of the parents’ personal data being required alongside the child’s only up to age 13, it would continue to be processed up to the newly raised age limit, in addition to the child’s own data. Assume the number of parents who agree at each year up to the new limit rises: companies would then no longer get only the child’s personal data after age 13, but parents’ data too. And this applies to far more companies, as ISS, than social media. Just what we don’t want.

4. Proposals to “raise the digital age of consent” in order to restrict children’s access to social media are therefore conceptually flawed

Against this background, proposals to restrict more children’s access to social media by “raising the digital age of consent” are conceptually flawed. They attempt to use data-protection law, which regulates the obligations of data controllers (service providers), as a proxy for regulating users’ permission to access content. There is in fact no such thing as a digital age of consent, because what it describes is an obligation to hand over the parents’ data bundled with the child’s as a condition of the service, which is not freely given.

The GDPR does not restrict who may access online services. It regulates the conditions under which their personal data may be processed. Provided the child meets any age set in the provider’s Terms and Conditions, they can use the service at any age if it does not process personal data on the basis of consent.

There is also a basic logical contradiction at the heart of many such proposals. A service cannot know whether a user is a child or an adult without processing personal data that reveals age, date of birth, or an age credential. Prohibiting or restricting the processing of children’s data at younger ages, while simultaneously requiring services to identify children in order to exclude them, is incoherent.

It is also incompatible with the duty towards data minimisation and recital 38 to protect children from excessive data processing. Recital 57 further explains why this attempt to misuse data protection law is flawed. It would require more data to be collected, to ascertain age, than may otherwise be necessary to process.

“If the personal data processed by a controller do not permit the controller to identify a natural person, the data controller should not be obliged to acquire additional information in order to identify the data subject for the sole purpose of complying with any provision of this Regulation.”  


*Definition: What is an ISS? (an information society service)

The basic definition of an ISS in the GDPR is based on Article 1(1)(b) of Directive (EU) 2015/1535:

“any service normally provided for remuneration, at a distance, by electronic means and at the individual request of a recipient of services.

For the purposes of this definition:

(i) ‘at a distance’ means that the service is provided without the parties being simultaneously present;

(ii) ‘by electronic means’ means that the service is sent initially and received at its destination by means of electronic equipment for the processing (including digital compression) and storage of data, and entirely transmitted, conveyed and received by wire, by radio, by optical means or by other electromagnetic means;

(iii) ‘at the individual request of a recipient of services’ means that the service is provided through the transmission of data on individual request.”

Leading AI literacy to further the common good

The UK Department for Science, Innovation and Technology has been criticised online for publishing a list of links to commercial AI resources packaged as practical AI skills for work.

There are two major problems if you enable AI “literacy” and policy to be led this way. The first is the framing as something prioritised for employment; it is notable that many of these providers are employers, often the very same companies seeking to increase their profits through cost reduction from increased efficiency, or by having fewer humans in their workforce, a position the UK government has accepted as caused by AI and as an inevitability.

The second is that the subject, and what society understands about its salience and meaning, is steered by the same hands of Big Tech it plays into, through the consolidation of power.

To present ‘teaching about AI’ as being about skills for the workforce (and a narrow range of workplaces at that) is misguided not only because it narrows learning to technical skills, but because it misdirects us all to look away from what “AI” is being used for more broadly, how, why, and by whom.

The critique therefore matters not just for the quality of the courses, but for the narrowing of AI literacy itself.

AI Literacy is in fact, vital democratic infrastructure.

Problem 1: AI Literacy as Workforce Optimisation

The recommendation of the AI Skills for Life and Work: Rapid Evidence Review, published on January 28th, to involve professional organisations such as the British Computer Society (BCS) and the Royal Academy of Engineering (RAE) in defining and policing the standards that training courses should meet, seems not to have been taken here. These expert organisations are notably absent from the list of new and founding partners.

Though the announcements claimed these courses were checked against Skills England’s AI foundation skills for work benchmark, also published on January 28th, something seems to have gone badly wrong in basic due diligence: even checking that the links all worked. That should have included checking claims that free courses were actually at zero cost to users, before the public was steered towards those providers in media coverage.

If Skills England wants to restore both its own credibility and public trust in the providers, it could publish its criteria and findings on how the courses for the AI Skills Boost programme were chosen, how they were assessed against Skills England’s new AI foundation skills for work benchmark, and how that benchmark was designed.

The second challenge is that the Westminster government is focussed only on skills for some work, while ‘the rest’ of life is vague at best.

Problem 2: Narrative Capture by Big Tech distorts the big picture

Evidence from organisations that have scrutinised real-world AI in UK practice (one recent synthesis, by the Data Justice Lab for example, covers cancelled systems in the public sector) may not fit the narrow scope of AI skills for some types of work, but it offers valuable lessons for other areas, in particular how AI affects public-sector services, which in turn affect so many of us on a daily basis.

The government has repeatedly disagreed on AI policy with recommendations from peers, from experts, and with what the public is saying. In stark contrast with other European countries’ approaches, the UK refuses to legislate on unacceptable risk levels.

The public is already paying the price for this. Prioritising a move-fast-and-break-things “route to impact” has so far come at a cost to citizens and broken everyday lives in welfare systems. Loss of agency and everyday friction are making life harder, less efficient and more stressful in many ways, the opposite of what many felt was the promise of technology and the early Internet.

AI is already shaping the justice system through police surveillance, legal research, and citizens-advice bots, and the government is making AI the cornerstone of its approach, while the courts’ basic IT tools are totally dysfunctional and those in charge won’t listen and won’t invest in the infrastructure to fix them.

[Notable aside, don’t let this put you off having your say and speaking out. There are a few days left to have your say in a consultation on the Wild West of facial recognition used for law enforcement.]

The youth backlash to AI slop has become incessant, and the average older person in the street is fed up that they need a multitude of apps and a smartphone to perform everyday tasks that used to be simpler to get done. (40% of drivers said that paying for parking with cash was their preferred choice in a 2025 poll of 13,755 drivers for The AA.)

Thousands of workers are run ragged by the algorithmic slave drivers of gig-economy apps, in precarious jobs, and less protected than their European counterparts with weaker workers’ rights post-Brexit, as so tragically dramatised in Ken Loach’s film, Sorry We Missed You.

The question is not, do we need literacy to live in a world of AI vs human? It is, how do we live everyday life well, under powerful, undemocratic, often unaccountable, corporate control that is being accelerated and intensified by tech tools we have no say over?

Any AI literacy approach that fails to address this, fails full stop.

Why we must prioritise AI Literacy as democratic infrastructure

“How do you democratize a technology that itself, in the form we’re seeing it now, is a product of concentrated power?”

The AI media narrative will, given time, not be driven by what government says about AI, but by how it makes us feel. Increasingly, that is, more vulnerable under uncertainty over income; fear of losing our jobs; increased surveillance; and loss of freedom; indeed a loss of power over our everyday life and need to “take back control“. We saw where that led in 2016. The government will pay the price for those feelings again, if it does not act now to address them.

We now have choices about whose version of AI literacy we follow in the UK. I have the privilege of contributing to work at the Council of Europe, an approach that I hope will be adopted by the UK later this year and that we could lead on, instead of following ‘what tech says’.

It is an alternative, comprehensive framework that addresses all the dimensions of AI literacy, particularly the human dimension: not only training technologically skilled citizens to design or use AI more holistically, but preparing everyone for living with AI, with a focus on the values of democracy, human rights, and the rule of law.

Being AI literate means understanding how technology and companies affect fundamental economic, human, social and political rights and how we can protect ourselves, so that we can act in ways we choose.

Our parliamentary sovereignty and democratic processes depend on the power to control our own national narratives and parliamentary proceedings, including the outcome of elections.

The media’s and the public’s ability to be informed, in an election and beyond, depends on the ability to identify and challenge misinformation, to use independent critical thought, and to question power; and that in turn depends on an informed and critical citizenry empowered with our own social agency.

We cannot centre these things if the government’s direction of travel is steered by U.S.-led OpenAI, Accenture, Google, IBM and Microsoft. Narrow media messaging is conflicted, both saying ‘use AI for furthering economic growth’ and at the same time excusing those same companies for making job cuts, as if they really can’t help it and it is in fact they who have no choice thanks to AI. ‘Blame the AI, don’t blame us (but please forget we chose to build / buy / use it)’.

Education and the role of AI and literacy in the Public Interest

The public interest depends on the state offering education free from commercial influence and gain, and on objectively understanding the implications of AI, not as products that may become obsolete from one day to the next, but through a human-centric, technology-neutral approach that looks for outcomes rather than product skills.

We also need a UK government that is committed to doing what it says it will do on AI, not one that simply tells others how to do it.

Whitehall departments are not adequately transparent about the ways they use AI and algorithms, and use of the (perhaps overly complex) AI register is low, despite it being “a requirement for all government departments”.

As AI systems become increasingly embedded in social, economic, and political systems, we must ensure everyone has the necessary level of awareness and critical understanding, to navigate an AI-transformed world in everyday life. Not only to use AI effectively, but to ensure that those responsible for AI development and deployment can respect and enhance human dignity, rights, and democratic values.

We need to protect people who are excluded in life, or over-policed, and denied the freedom that being fully human requires, especially those who are marginalised, “the outliers” in society, often excluded from the biometric training data from which AI is built, by race, language, gender, age, health or disability.

We need to protect our biometric data, our faces and voices, to be able to show up and speak up when it matters.

As the Pope summed up in his recent World Communications Day message, AI literacy must prioritise understanding “how algorithms shape our perception of reality, how AI biases work, what mechanisms determine the presence of certain content in our feeds, what the economic principles and models of the AI economy are and how they might change.”

The future of freedom in society in the UK, our humanity, our democracy, our trust, depend not on a handful of companies who strive for a brave new world, nor on AI infrastructure they are selling us well-packaged in hype. Our collective future depends on one digital Minister having the courage to take a new direction.

Confusion over GenAI in the classroom

Apparently, there’s been online confusion recently around what Google does and does not use from Gmail to train Gemini. But it’s not really clear what the clarification means. “No Gmail users’ emails are used to train Google’s Gemini AI” is very specific wording and merits closer attention. I was certainly told in person by Google execs in a group meeting around two years ago that school pupil data was used to train and develop its products. (When I mentioned schools’ use of Forms to transfer pupils’ passport and health data, they also said, “oh I wouldn’t do that”.) That appears to still be true for some product lines, but it’s less clear for others.

Misunderstandings about pupils using GenAI in schools abound too: mistaken claims that teachers can “consent on behalf of children” as the data protection lawful basis for using AI products; omissions and inaccurate information on IP rights; and inaccurate definitions of closed and open AI systems, with blanket claims that pupils can use the former more safely than the latter.

Broadly speaking, UK guidance on using AI in the classroom has focussed on Generative AI. The same is true of many “How To” guides published as OpEds or articles, or even books by popular ex-RE teachers turned AI experts. But these often fail to state that it is highly likely many of these off-the-shelf GenAI tools cannot be used lawfully in a classroom by asking children to set up individual accounts and use them directly. Just as importantly, other edTech tools that integrate them into their front end may depend on the same GenAI company policies, which needs a thorough understanding of how both products work together. It is misleading for guidance to suggest that pupils can use these tools if schools just ensure they do so thoughtfully or under supervision.

So let’s take a look at the companies’ own published policies on how user data is used by the company and their publicly offered GenAI. Between Google Gemini, Anthropic Claude, and OpenAI ChatGPT, some are more complex or opaque than others. Interestingly, no company states why any service is not permitted for children.

Google Gemini and Education

Google’s policies around Gemini in education are extraordinarily complex and interlinked. Even after extensive reading, it’s still unclear how they’re meant to work—let alone how they could be understood by pupils. Google’s “responsible AI” training guidance cannot easily be reconciled with the many products and sub-products through which Gemini can be used, including Workspace for Education.

Using the tools requires understanding:

  • which version of Google products or Workspace you have,

  • the distinction between “core” and “additional” services,

  • how Gemini features layer on top of those, and

  • different age-gating rules, defaults, and admin-controlled settings.

The Gemini app is a standalone AI assistant. Google Workspace with Gemini, on the other hand, integrates AI directly into Google Workspace applications like Gmail, Docs, Sheets, Slides, and Meet. Since June, Gemini has been included in the Workspace for Education edition free of charge by default, as an admin-managed core Workspace service.

Only since June this year have Google Workspace for Education users had “added data protection” in the Gemini app, meaning their chats with Gemini are not human reviewed or used to train AI models. Qualifying [my added stress] Google Workspace for Education editions, including Education Standard and Education Plus, have the same privacy assurances.

What those data protection standards were prior to June, why it changed, and what they are for “non-qualifying” products, remain unclear.

1. Does Google not understand European data protection law?

However, before we even get into “the AI part”, Google’s own guidance for UK schools claims that data processing is lawful if schools collect consent from parents for minors’ use of “Additional Services” such as YouTube or Maps:

“Admins must provide or obtain consent for the use of the services by their minor users.”

“Additional Services (like YouTube, Google Maps, and Applied Digital Skills) are designed for consumer users and can optionally be used with Google Workspace for Education accounts if allowed for educational purposes by a school’s domain administrator. “ 

It is not explained why, but it might be because Google or its sub-processors use the data in these Additional Services to “provide, maintain, protect and improve” services and “to develop new ones”.

Source: https://support.google.com/a/answer/6356441?sjid=7831273918566805521-EU

However:

  • Consent in schools is rarely valid because it cannot be “freely given”: parents and pupils face a clear power imbalance, and opting out may disadvantage the child. Routine educational processing cannot rely on consent;
  • Developing “new” services is new product development, which requires valid consent under the EU/UK GDPR and therefore means current practice is without a lawful basis;
  • Google’s approach therefore sets up schools as well as itself, for unlawful practice under European data-protection law.

2. Unclear and overlapping terms

Google’s T&Cs vary between its education tiers and versions: Core and Additional, Free and Paid, Fundamentals, Standard and Plus, across Google Workspace for Education and Gemini products, and its generic Workspace for Education terms. For staff, parents (or the school child themselves) it is very difficult to determine:

  • what data is processed where,

  • how it connects to Gemini and AI features (e.g., voice transcription or agentic AI), or

  • what changes at age 18.

For example, the Gemini Apps Privacy Hub states that for users aged 18+, call and text history may be imported into Gemini activity. It is unclear whether this includes data generated before turning 18, or whether it affects children who become 18 while using Workspace for Education.

3. Age controls depend on the administrator

Google relies heavily on institutions to understand, and even configure, users’ and organisational age settings. For example:

“Workspace for Education users designated as under 18 will not be able to use Gemini in Classroom…” (Source: Google support answers.)

This appears to conflict with the latest June 2025 product announcements above, but it’s hard to be sure.

Unlike in primary and secondary education, higher-education users not actively designated as under the age of 18 have no additional restrictions for Google services. Admins must ensure any under-18s are placed in an organisational unit with the correct age settings. This shifts responsibility—and risk—onto administrators who may not fully understand the implications.
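
By way of illustration of what that administrator responsibility looks like in practice, here is a minimal sketch using the Admin SDK Directory API to move a user into an organisational unit. The OU path, account names and credentials file are my own hypothetical examples; a real deployment would need appropriate delegated credentials and Google’s current documentation, not this sketch.

```python
from google.oauth2 import service_account
from googleapiclient.discovery import build

# Hypothetical service-account credentials with domain-wide delegation.
creds = service_account.Credentials.from_service_account_file(
    "admin-sa.json",
    scopes=["https://www.googleapis.com/auth/admin.directory.user"],
    subject="admin@example-school.sch.uk",  # hypothetical admin account
)
directory = build("admin", "directory_v1", credentials=creds)

# Move a pupil into the organisational unit where the under-18 age
# setting applies; the OU path and user are illustrative only.
directory.users().update(
    userKey="pupil@example-school.sch.uk",
    body={"orgUnitPath": "/Pupils/Under18"},
).execute()
```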

In summary, it is unclear how the different education product offerings act together with the various Gemini offerings. And Google seems to want to push accountability down to the institutional admin. The simplistic answer appears to be: if you don’t use a paid version of Google tools in education, Google reuses the activity of users of any age as training data for the company to provide, maintain, protect and improve additional services, and to develop new ones.

Given Google’s world-class legal and communications resources, it is striking how opaque both the legal basis and the explanations remain. Clearer, simpler company guidance, clarification and simplification are urgently needed and would be welcome.

Claude is not intended for use by children under age 18.

“Our Services are not directed towards, and we do not knowingly collect, use, disclose, sell, or share any information from children under the age of 18.” [Source https://www.anthropic.com/legal/privacy]

Any guidance seen elsewhere for educational settings may also be misleading where it suggests that if a school user does not directly “put” personal data “into” the LLM, the tool will not be processing personal data.

Claude’s policy, for example, contradicts that, because other usage data collected indirectly but not “put in” by the user is still personal data, such as IP and other identifiers:

“Consistent with your device or browser permissions, your device or browser automatically sends us information about when and how you install, access, or use our Services. This includes information such as your device type, operating system information, browser information and web page referrers, mobile network, connection information, mobile operator or internet service provider (ISP), time zone setting, IP address (including information about the location of the device derived from your IP address), identifiers (including device or advertising identifiers, probabilistic identifiers, and other unique personal or online identifiers), and device location.” [source: https://www.anthropic.com/legal/privacy]
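
To see why “not putting data in” does not mean no personal data is processed, here is a minimal illustrative server-side sketch (my own example, not any vendor’s code) showing the metadata that arrives with every request regardless of what the user types into a prompt:

```python
from flask import Flask, jsonify, request

app = Flask(__name__)

@app.route("/chat", methods=["POST"])
def chat():
    # Even if the prompt itself contains no personal details, the request
    # arrives with identifying metadata that the service can log.
    metadata = {
        "ip_address": request.remote_addr,
        "user_agent": request.headers.get("User-Agent"),
        "referrer": request.headers.get("Referer"),
        "language": request.headers.get("Accept-Language"),
    }
    app.logger.info("request metadata: %s", metadata)
    return jsonify({"received": True})

if __name__ == "__main__":
    app.run()
```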

OpenAI ChatGPT and children

The OpenAI terms of use require users to be at least 13 years old, and those under 18 must have parental or guardian permission.

“Our Service is not directed to children under the age of 13. OpenAI does not knowingly collect Personal Information from children under the age of 13. […] If you are 13 or older, but under 18, you must have permission from your parent or guardian to use our Services.”

Like Google’s Additional Services, this means it is unsuitable and unlawful to use in schools. If parents are told their child should, or is required to, use the LLM and the school asks for a tick-box response, that may be an acknowledgement of use, but it is not consent.

The company website ‘help’ goes on to add, “We advise caution with exposure to kids, even those who meet our age requirements, and if you are using ChatGPT in the education context for children under 13, the actual interaction with ChatGPT must be conducted by an adult.”

Overall

To sum up, wording from the guidance for schools in Wales says: “The age ratings of generative AI tools must be considered before using them. Age ratings can vary, and some tools are only designed for use by over-18s. Many generative AI tools are not designed for education.”

Which raises the question: why does so much effort and guidance for school children’s AI use in the classroom focus on how to use Generative AI at all?

I am hopeful that instead we can soon include better guidance and knowledge as part of “digital literacy” or “citizenship” skills in the curriculum, starting with teacher training about, not with, “AI”.

The Trouble with the Data Bill and Children’s Data

Part 1: The Trouble with the Data Bill and Children’s Data

Will the Data Use and Access Bill fall at the final hurdle? The popular consensus on ping-pong is no, but the government’s intransigence on “AI-companies-trump-creators without protections for today’s status quo in copyright law” versus the Lords’ defence of transparency duties on sources of training data has been the last stand in democratic checks and balances on an executive giddy with its Commons majority.

This year the government scrapped the Privacy and Consumer Advisory Group (PCAG), which advised the government on how to provide users with a simple, trusted and secure means of accessing public services. It then went on to scrap the body it had been merged into, the One Login Inclusion and Privacy Advisory Group (OLIPAG), which advised the Government Digital Service’s GOV.UK One Login programme on inclusion, privacy, data usage, equality and digital identity.

Despite wide-ranging concerns from civil society in data and technology, the government treated engagement on the Bill as a mere tick-box exercise, showing no meaningful willingness to revise the draft inherited from the previous administration.

Its parts that concern children, in particular but not explicitly, include (a) the changes to purpose limitation and the extension of ‘consent’ for research grounds that explicitly include commercial use and are broadly drawn; (b) the removal of balancing tests as a protection explicitly for vulnerable people (undefined, but one could assume children, the elderly or the minoritised) under the weak lawful basis of legitimate interests, which the Bill elevates into its own condition for processing; and (c) when and how to bypass fair processing and informing people if the controller in effect thinks there are too many people to inform (and there’s no objective test for that).

Procedurally, the fact that the new law’s powers will apply to all data already held at the commencement date undermines fair processing done in the past and, combined with these three changes, means personal data may now be used in ways that were not made clear or allowed at the time of collection.

This is significant. And that’s not even counting what the government chose to leave out: addressing adequacy properly; suitable safeguards in automated decision-making missed in the 2018 UK drafting; protections against new and emerging misuses of the law and of personal data in targeted advertising and technology undermining freedom of thought; and clarifying increased uses of bodily data that are not used for ‘singling out’, which companies claim are not biometrics but are, and which normalise very intrusive tech, even inferring emotion, that may be covered in the EU AI Act but remains a free-for-all here outside it.

Other divergences may begin if the Bill does pass with some of its late additions. Clause 81 Data protection by design and default: children’s higher protection matters (p100 of the Bill) is one.

This in effect elevates a bit of Recital 38 onto the face of the Bill, introducing an explicit acknowledgement in impact assessments that data is a child’s data, and obligations to the child of data protection by design and default.

However, it has two challenges. The first is a somewhat puzzling caveat that excludes preventive or counselling services, yet it is precisely those services, which often process health and other sensitive data, that should require the highest standards of data protection by design and default. (Not forgetting that the children’s data controller the NSPCC was one of 11 major charities fined by the ICO for unlawful practices in 2017.)

Second, the Bill as now drafted starts to bring with it a new problem for UK data protection law with expanded expectations to treat ‘children’s data’ differently from adults.

There’s no definition of data from children, and that’s a problem. Is it a quality of the data or of the person it comes from? If personal data was collected when a child didn’t know about it or understand it, does the duty of extra consideration wear off if you wait long enough to use the data?

Do these protections apply only at the time of collection, because the person the data is about is then aged under 18, or do they persist as a characteristic of the data even after that person ages into adulthood? How does this interface with the rights of parents who perhaps made a consent choice, or were informed, “on behalf of” their child, now that the child is an adult?

With the volume of data now collected about children that fails to respect data minimisation and persists into adulthood, this is a newish set of problems that we need to address clearly if it is to guide practice or be used in court.

Furthermore, for data controllers to know who is and who is not a child, and to process data accordingly, means knowing who everyone else is as well. It may be problematic if these Data Use and Access Bill changes come into UK data protection law without defining these additional-consideration duties towards “data from children”, and without stating that no additional personal data should be required in order to meet the duty (Recital 57 should have been put on the face of the Bill); they will no doubt become a blueprint for others beyond the UK.

This Data Use and Access Bill brings nothing that enhances UK data protection law. It aims only to create something, somewhere, that someone could label a Brexit dividend, driven by people who see data protection law as other than it was designed to be. It was designed to protect people from intrusion on their lives by others in secret, and to uphold the fundamental rights and freedoms that others can restrict because of the power that information through data gives them, all while enabling the free flow of data through a consolidated framework for its operationalisation. The GDPR has been successfully painted as red tape to be circumvented. But we remain signatories to the first legally binding international instrument in the data protection field. I for one would be glad to see this Bill fall and for us to keep the data and IP laws we already have today.

Better law is both necessary and possible, but it must start with proper routes to drafting, consultation, and non-partisan collaboration. Expert groups outside the political process for prior consultation need to be reinstated. And pre-legislative scrutiny, with expertise in data and technology, must happen, with evidence taken only after a bill has had its final drafting but before being laid before Parliament, with a window for change. It is too late for adequate data and tech scrutiny to make amendments by the time it comes to asking two chambers of largely non-specialists to put lipstick on a pig.

========================

Part 2: The Trouble with the Data Bill and Screen Time debate

Perhaps separately, as it did not make it into the Bill but still stole a lot of oxygen from other matters that merited more attention: I cannot share the populist support for the amendment to raise the age in Article 8 of the UK GDPR. Introduced late again, after failing to gain traction as the private member’s Safer Phones Bill, and after the social-media ban failed to get discussed under the Sunak government, it has not made it into the Bill, but government murmurings continue in the media on ideas to limit screen time, like the Cinderella law that was tried and failed in South Korea.

I have already discussed the reasons why with the drafters of the original PMB proposals at UsForThem, and I can be quite open. Wearing a non-partisan hat, I find it an ill-thought-out, rather authoritarian, ideology-based approach that restricts children’s rights rather than enhancing them (including the right of access to information and the right to play), with unintended consequences (i) for the most vulnerable children, who would just be bought ‘adult phones’ by unscrupulous or abusive adults, and (ii) in the age-gating of everyone, without any objective evidence that the proposals to change Article 8 of the GDPR, or connected changes, will make anything better for children on the issues they seek to address.

I fully recognise the validity of concerns about children’s use of social media — but the expert evidence here, from those who have studied children and media for decades such as Professor Sonia Livingstone or Candice Odgers, in my view does not support raising the age for data processing by information society services (ISS) if the goal is only to restrict access to services. Why? Parts of the various proposed changes as they were drafted, which included banning the processing of children’s data, were also illogical on two grounds.

(a) A company cannot know that a data subject is a child without processing personal data that reveals their age or date of birth, or at the very least a credential that says ‘I am under X age’. How can the company not process the data of a child, yet be required to process their data to prevent their access?

(b) Such services can continue to be used and accessed without hindrance by children as long as those ISS process no personal data: under current law or the new clauses, Article 8 is not engaged unless personal data is processed, and then the arguments made about screen time and harmful content are all irrelevant to the clause. This demonstrates again that the aim of these proposals is not children’s rights, or really children’s data processing, but only to restrict access to content — the role of the Online Safety Act, not data protection law. Here too, online safety law needs to consider defining “children’s” data and content, and the changing nature of that characteristic over time.

There is also the rarely discussed issue that age-assurance and AV tools don’t achieve their aims. The French Data Protection Authority, the CNIL, found on age checks that “all the solutions proposed can easily be circumvented.”

But such changes would have substantial unintended consequences.

I remember sitting in the 2015 Brussels CPDP audience, at a panel event run by Google, in which the age of 13 was being debated as the right one for the GDPR to adopt for children’s data processing by information society services without parental consent.  I remember thinking, ‘but that means they’ll need to know everyone else is not a child in order to identify those who are. That’s not good’. Age verification and age assurance are not “child safety” tools. They are measures that must be applied to every user of a service to treat them differently by identifying or inferring who is and who is not a child.

GDPR differed from past regulation about children’s rights in Europe in that it was age-based, not capacity-based. As such, it remains out of sync with the protection, participation and empowerment rights for children that are embodied in many countries’ domestic law, based on the UN Convention on the Rights of the Child.

Ignoring the role of time in Data Governance

The words we use to define data

In the 2021 Defend Digital Me report, The Words We Use in Data Policy: Putting People Back in the Picture, we examined why public conversations about personal data often fail. We highlighted the need for systemic changes in how we talk about data to better account for children’s data within the UK’s national data strategy. A central issue is how we think about data—often seen and framed through misleading metaphors. Metaphors like ‘flows,’ ‘footprints,’ or ‘traces’ influence public opinions and policy but oversimplify governance challenges. These framings profoundly affect views on what should be done with data. This matters as the Data Use and Access Bill in Parliament seeks to rewrite UK data protection law, threatening to undermine public trust in administrative data just as AI companies and others lobby for increased access.

Data as language, not a commodity

But imagine instead that data is not a fixed entity or commodity; it is more akin to language telling the story of your life. Data, turned into information, conveys meaning, which varies by source, user, context, and time. Misinterpreting or ignoring these dimensions leads to poor governance and flawed decisions. Data’s characteristics and value are ephemeral and interpersonal. Like Dr Louise Banks in Arrival, policymakers must recognise that UK data governance requires a multidimensional approach to understanding what data is—not just substance, but traceability, context, and meaning across the data life cycle. We need to talk more about the dimension of time in data governance laws.

The Time Dimension in Data Governance

Time reshapes data governance, affecting data’s accuracy, its personal nature, and the relationships around its use. Personal data may shift between personal and non-personal depending on context, use, and linkage over time.

  1. Personal Data Over Time
    Data can simultaneously be personal and non-personal depending on who holds it and what it is combined with. What identifies an individual in Dataset A may not do so without access to Dataset B: while I hold only A and you hold both A and B, it is personal data only in your hands (see the sketch after this list). Over time, data’s ‘personal’ characteristic may shift to include me, depending on its use, linkage, breadth of access, leaks and more.
  2. Accuracy and Completeness
    Data degrades over time. For instance, a “current address” loses accuracy when someone moves house. But changing systems—such as updated postcode formats to give a new one to the same property or new categorisations (e.g., introducing “White Northern Irish” into a population that may have previously selected “White British” in a census)—can undermine past data’s comparability and completeness. More importantly, how would you know and how will AI systems know if we have no context, no life-cycle ROPA, and give up enforcing the importance of this?
  3. Children’s Data and Vulnerabilities
    Special protections for what is labelled “children’s data” in law raise questions: do these protections apply only at the time of collection, because the person the data is about is aged under 18, or do they persist as a characteristic of the data even after that person ages into adulthood? The concept of a “clean slate”, as proposed by the High-Level Expert Group on AI (HLEG), goes some way to solving this issue. However, current practices fail to provide the safeguards that the original GDPR deemed necessary, failures demonstrated over time in the National Pupil Database as the prime case study.
  4. Evolving Definitions and Legal Changes
    Policy shifts, such as the UK’s Data Use and Access Bill, can change how data is categorised and handled over time by recategorising it as of the law’s commencement date. Such changes affect its characteristics and governance.
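
As an illustration of how the ‘personal’ quality of data depends on what else the holder can link it to, here is a minimal sketch; the datasets, field names and values are invented purely for the example:

```python
import pandas as pd

# Dataset A: looks de-identified on its own (pseudonym plus attributes).
dataset_a = pd.DataFrame({
    "pupil_ref": ["p01", "p02"],
    "year_group": [9, 10],
    "attendance_pct": [92.0, 78.5],
})

# Dataset B: the key linking pseudonyms back to named individuals.
dataset_b = pd.DataFrame({
    "pupil_ref": ["p01", "p02"],
    "name": ["Alex Example", "Sam Sample"],
})

# Whoever holds only A sees pseudonymous records; whoever holds A and B
# can re-identify them with a single join, so the same rows are personal
# data in their hands.
print(dataset_a.merge(dataset_b, on="pupil_ref"))
```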

Why lifecycle governance matters

Data governance is a constraint on the imbalance of power, beyond the lifetime of the data itself and the relationship between the data subject and the data user. European data protection laws, rooted in human rights principles, emphasise lifecycle governance. Concepts like data minimisation, retention limitation, and respect for data subject rights ensure that the relationship between individuals and data users remains dynamic and accountable.

The point of data collection is not to produce the KPI, or the report, or benchmark, or even to follow the money in the delivery of a public service. The point is the delivery of a public service. Public administrative data collected on the side is a by-product of the process, and often opinion-based. Statistical data may follow standards and a review process; much of the rest of public admin data may not. A return might suggest 100% completion, but that is no measure of accuracy. When public policy deifies “the product” of data as AI, we focus on the wrong end of the process. Data about public administrative services is a set of contextualised inputs, a dynamic and interpretive representation of public-service delivery and the person’s life it involves, not fixed outputs with fixed characteristics or quality. The person must be kept in the picture in a continuous governance process. Engagement in public service delivery must not end when someone walks out the door, if their data continues to be processed.

We must ensure that any public policy or AI creating inferences of meaning is built only on data that is correct, and used within the context in which the meaning intended at source remains valid over time.

This is a critical period in which AI companies and others are lobbying hard for more access. Ignoring the role of time in data governance avoids accountability for the problems of data quality and contextual collapse, but will mean datasets that are not fit for purpose will become the foundations for public policy, or for building AI to use or to export. Carnegie UK’s research offers a sobering reminder: poorly designed systems can waste taxpayer money, erode public trust, and fail to deliver promised benefits.

Let’s talk more about the exercise of traceability, context, and meaning across the personal data life cycle. We need to talk more about the dimension of time in data governance laws.

Safety not surveillance

The Youth Endowment Fund (YEF) was established in March 2019 by children’s charity Impetus, with a £200m endowment and a ten-year mandate from the Home Office.

The YEF has just published a report as part of a series about the prevalence of relationship violence among teenagers and what schools are doing to promote healthy relationships. A total of 10,387 children aged 13-17 participated in the survey. While it rightly points out its limitations of size and sampling, its key findings include:

“Of the 10,000 young people surveyed in our report 27% have been in a romantic relationship. 49% of those said they have experienced violent or controlling behaviours from their partner.”

“Controlling behaviours are the most common, reported by 46% of those in relationships, and include behaviours such as having their partner check who they’ve been talking to on their phone or social media accounts (30%). They also include being afraid to disagree with their partner (27%) or being afraid to break up with them (26%)”, and “feeling watched or monitored (23%).”

(Source ref. pages 7 and 21).

The report effectively outlines the extent of these problems and focuses on the ‘what’ rather than the ‘why.’ But further discussion of the underlying causes is also critical before making recommendations of what needs to be done. In the media, this was taken up as a suggestion that schools should teach children better about relationships. But if you have the wrong reasons for why a complex social problem has come about, you may reach for the wrong solutions, addressing symptoms not causes.

Control Normalised in Surveillance

Most debate about teenagers online is about harm from content, contact, or conduct. And often the answer that comes is more monitoring of what children do online, who they speak to on their phone or social media accounts, and more controlling of their activity. But research suggests that these very solutions should be analysed as part of the problem.

An omission in the report—and in broader discussions about control and violence in relationships—is the normalisation of the routine use of behavioural controls by ‘loved ones’,  imposed through apps and platforms, perpetuated by parents, teachers, and children’s peers.

The growing normalisation of controlling behaviours in relationships identified in the new report—framed as care or love, such as knowing where someone is, what they’re doing, and with whom—mirrors practices in parental and school surveillance tech, widely sold as safeguarding tools for a decade. These products often operate without consent, justified as being, “in the child’s best interests,” “because we care,” or “because I love you.”

Teacher training on consent and coercive control is unlikely to succeed if staff model contradictory behaviours. “Do as I say, not as I do” tackles the wrong end of the problem.

The ‘privacy’ vs ‘protection’ debate is often polarised. This YEF report should underscore their interdependence: without privacy, children are made more vulnerable, not safer.

The Psychological Costs of Surveillance

Dr. Tonya Rooney, an academic based in Australia, has extensively studied how technology shapes childhood. She argues that,

“the effects of near-constant surveillance in schools, public spaces, and now increasingly the home environment may have far-reaching consequences for children growing up under this watchful gaze.”(Minut, 2019).

“Children become reactive agents, contributing to a cycle of suspicion and anxiety, robbing childhood of valuable opportunities to trust and be trusted.”

In the UK, while the mental health and behavioural impacts of surveillance on children—whether as the observer or the observed—remain under-researched, there is clear international and UK-based evidence that parental control apps, school “safeguarding” systems, and encryption workarounds that breach confidentiality are harming children’s interests.

  • Constant monitoring creates a pervasive sense of scrutiny and undermines trust in a relationship. These apps and platforms not only undermine children’s trusted relationships with those in authority, whether families or teachers, but are detrimental to children developing trust in themselves and in others.
  • Child surveillance can have negative effects on mental health, creating a cycle of fear, anxiety and helplessness in which the child depends on someone else being in control to solve things for them.
  • Child surveillance has a chilling effect: not only behavioural control of where you go, with whom, and doing what, but control of thought and of freedom of speech, and a fear of making mistakes when there is no space for errors to go unnoticed or unrecorded. People who know they are being monitored limit their self-expression and worry about what others think, which can be especially problematic for children in an educational setting, or in the pursuit of curiosity and self-discovery.

Research by the U.S.-based Center for Democracy and Technology (2022) highlights the disproportionate harm and discriminatory effects of monitoring pupils’ activity. Black, Hispanic, and LGBTQ+ children report experiencing higher levels of harm.

“LGBTQ+ students are even experiencing ‘non-consensual disclosure of sexual orientation and/or gender identity (i.e., outing)’, due to student activity monitoring.”

Children need safe spaces that are truly safe, which means trusted. The June 2024 Tipping the Balance report from the Australian eSafety Commissioner shows that LGBTIQ+ teens, for instance, rely on encrypted spaces to discuss deeply personal matters—45% of them shared private things they wouldn’t talk about face-to-face. And just over four in 10 LGBTIQ+ teens (42%) searched for mental health information at least once a week (compared with the national average of 20%).

Surveillance of Children Secures Future Markets

School “SafetyTech” practices normalise surveillance as if it were an inevitable part of life, undermining privacy as a fundamental right and as a principle to be expected and respected. Some companies even use this as a marketing feature, not a bug.

One company selling safeguarding tech to schools has framed its products as preparation for workplace device monitoring, teaching students “skills and expectations” for inevitable employment surveillance. In a 2020 EdTech UK presentation entitled ‘Protecting student wellness with real time monitoring’, Netsweeper representatives described their tools as what employers want, fostering productivity by ensuring students are “engaged, dialled in, and productive workers now and in the future.”

Many of the leading companies sell in both the child and adult sectors. It therefore worries me a lot that the DUA Bill will in effect give these kinds of companies a ‘get-out-of-jail-free card’ for processing ‘vulnerable’ people’s data under the blanket purpose of ‘safeguarding’: able to claim the lawful ground of legitimate interests without needing to do any risk assessment or balancing test of the harms to people’s rights.

Parental Control and Perception of Harms

Parents and children perceive these tools differently when it comes to the personal, on-mobile-device, commercial markets.

Work done in the U.S. by academics at the Stevens Institute of Technology found that while parents often praise these tools for enhancing safety (e.g., “I can monitor everything my son does”), parents’ negative findings were largely about technical failures, such as unstable systems that crashed. Their research also found that teens experienced failures as harms, primarily to trust and to the power dynamics in their relationships. Students in the study described parental control apps as a form of “parental stalking,” and said that they “may negatively impact parent-teen relationships.”

Research done in the UK also found children had a more nuanced understanding of privacy as a collective harm, because “parents’ access to their messages would compromise their friends’ privacy as well: they can eves drop on your convos and stuff that you dont want them to hear […] not only is it a violation of my privacy that i didnt permit, but it is of friends too that parents dont know about” (quoted as in the original).

These researchers concluded that increasing evidence suggests such apps may bring with them new kinds of harms associated with excessive restrictions and privacy invasion.

A Call for Change

Academic evidence increasingly shows the harm these apps cause in intra-familial relationships, and between schools and pupils, but research seems to be missing on their impact on children’s emotional and cognitive development and, in turn, on any effects in children’s own romantic relationships.

I believe surveillance tools undermine their understanding of healthy relationships with each other. If some adults model controlling behaviours as ‘love and caring’ in their relationships, even inadvertently, it would come as no surprise that some young people replicate similar controlling attitudes in their own behaviour.

This is our responsibility to fix. Surveillance is not safety. If we take the emerging evidence seriously, a precautionary approach might suggest:

  • Parents and teachers must change their own behaviours to prioritise trust, respect, and autonomy, giving children agency and the ability to act, without tech-solutionist monitoring.
  • Regulatory action is urgently needed to address the use of surveillance technologies in schools and commercial markets.
  • Policy makers should be rigorous in scrutinising who is making these markets, who is accountable for their actions, and what standards exist for their health and safety, efficacy and error rates, since they are already rolled out at scale across the public sector.

The “best interests of the child”, cherry-picked from part of Article 3 of the UN Convention on the Rights of the Child, seems to have become a lazy shorthand for all children’s rights in discussion of the digital environment, with participation, privacy and provision rights trumped by protection. Freedoms seem forgotten. The Convention’s preamble is worth a careful read in full if you have not done so for some time. And as set out in General comment No. 25 (2021):

“Any digital surveillance of children, together with any associated automated processing of personal data, should respect the child’s right to privacy and should not be conducted routinely, indiscriminately or without the child’s knowledge.”

If the DfE is “reviewing the content of RSHE and putting children’s wellbeing at the heart of guidance for schools”, it must also review the lack of safety and quality standards, the error rates, and the monitoring of the outcomes and effects of KCSiE digital surveillance obligations for schools.

Children need both privacy and protection, not only for their safety, but to freely develop and flourish into adulthood.


References

Alelyani, T. et al. (2019) ‘Examining Parent Versus Child Reviews of Parental Control Apps on Google Play’, in, pp. 3–21. Available at: https://doi.org/10.1007/978-3-030-21905-5_1. (Accessed: 4 December 2024).

‘Report – Hidden Harms: The Misleading Promise of Monitoring Students Online’ (2022) Center for Democracy and Technology, 3 August. Available at: https://cdt.org/insights/report-hidden-harms-the-misleading-promise-of-monitoring-students-online/ (Accessed: 4 December 2024).

‘The Chilling Effect of Student Monitoring: Disproportionate Impacts and Mental Health Risks’ (2022) Center for Democracy and Technology, 5 May. Available at: https://cdt.org/insights/the-chilling-effect-of-student-monitoring-disproportionate-impacts-and-mental-health-risks/ (Accessed: 4 December 2024).

Growing Up in the Age of Surveillance | Minut (2019). Available at: https://www.minut.com/blog/growing-up-in-the-age-of-surveillance (Accessed: 4 December 2024).

Malik, A.S., Acharya, S. and Humane, S. (2024) ‘Exploring the Impact of Security Technologies on Mental Health: A Comprehensive Review’, Cureus, 16(2), p. e53664. Available at: https://doi.org/10.7759/cureus.53664 (Accessed: 4 December 2024).

Privacy and Protection: A children’s rights approach to encryption (2023) CRIN and Defend Digital Me. Available at: https://home.crin.org/readlistenwatch/stories/privacy-and-protection (Accessed: 4 December 2024).

Boyd, D. and Marwick, A.E. (2011) ‘Social Privacy in Networked Publics: Teens’ Attitudes, Practices, and Strategies’, A Decade in Internet Time: Symposium on the Dynamics of the Internet and Society, September 2011. Available at SSRN: https://ssrn.com/abstract=1925128

Wang, G., Zhao, J., Van Kleek, M. and Shadbolt, N. (2021) ‘Protection or punishment? Relating the design space of parental control apps and perceptions about them to support parenting for online safety’, Proceedings of the ACM on Human-Computer Interaction, 5(CSCW2). Available at: https://ora.ox.ac.uk/objects/uuid:da71019d-157c-47de-a310-7e0340599e22

The contest and clash of child rights and parent power

What does the U.S. election outcome mean for education here? One aspect is that, while the ‘Christian right’ in the UK may not be as powerful as its US counterpart, it still exerts influence on public policy. While far from new, it has become more prominent in parliament since the 2019 election. But even in 2008, Channel 4 Dispatches broadcast an investigation into the growth of Christian fundamentalism in the UK. The programme, “In God’s Name”, highlighted the political lobbying by pro-life groups behind changes to tighten abortion law in the Human Fertilisation and Embryology Bill, including work between their then key lobbyist and the MP Nadine Dorries.

The programme highlighted the fears of some of their members, based on the “great replacement” conspiracy theory of the rising power of Islam from the East replacing Christianity in the West. It also showed how the ADF from the U.S. was funding UK strategic litigation to challenge and change UK law, including in McClintock v Department of Constitutional Affairs [2008].

The work of Sian Norris today highlights why this U.S. election result is likely to mean more of all of that over here. As the rights environment moves towards an ever greater focus on protection and protectionism, I make the case for why this is all relevant for the education sector in England, and why we must far better train and support school staff in practice to manage competing sources of authority, interests and rights.


Child rights supported by parent power

Over the last ten years, since I began working in this field, there has been a noticeable shift in public discourse around child rights in the UK parliament and media, shaping public policy. It is visible in the established print, radio and TV media. In social media. It is in the language used, the funding available, and the parliamentary space and time taken up by new stakeholder groups and individuals, crowding out more moderate or established voices. On the one hand, this is greater pluralism and democracy in action. On the other, where its organisation is orchestrated, are the aims and drivers transparent and in the public interest?

When it comes to parents, those behind many seemingly grassroots, small-p “parent power” groups are opaque, often backed by large, well-funded, frequently U.S.-based organisations.

The challenge for established academics and think tanks in this closed and crowded policy advisory space is that these new arrivals, astroturfed ‘grassroots’ groups and offshoots of existing ones, bring with them loud voices who co-opt the language of child rights and are adept in policy and media spaces previously given to expert, evidence-based child rights academics.

Emerging voices are given authority by a very narrow group of parliamentarians, and are lent support by institutional capture, through an increasing number of individuals embedded from industry, or with conservative religious views, hired into positions of authority. As a result, there is a shift in the weight given to views and opinions compared with facts and research, and to cherry-picked evidence informing institutional positions and consultations.

The new players bring no history of being interested in children’s rights. In fact, many act in opposition to equality rights or access to information, and appear more interested in control of children than in universal human rights and fundamental freedoms. The shift in the balance of discussion from child rights to child protection above all else is happening not only in the UK but in mainland Europe, the U.S. and Australia, the latest to plan a ban on under-16s’ access to social media.

Whose interests do these people really serve, packaged as they are in the language of child rights?

Taking back parent and teacher control

Parallel arguments in the public sphere have grown. The first is that authority must be taken away from parents and teachers and handed to the State, over fears of losing parental control of children’s access to information and children’s ‘safety’, including calls for state-imposed bans on mobile phones for children or enforced parental surveillance and control tools. At the same time, parents want fewer state interventions. Arguments include that, “over the last few years the State has been assuming ever greater control, usurping the rights of parents over their children.”

The political football of the day seems to move regularly from ‘ban mobile phones in schools’, or altogether, to the content of classroom materials and ‘give parents a right to withdraw children from sex education and relationships teaching’ (RSE, not biology). But perhaps more important than the substance is that the essence of what the Brexit vote tapped into, a sense that Big Tech and the State, ‘others’, interfere with everyday life in ways from which people want to ‘take back control’, is not going away.

Opening up classroom content opens a can of worms

The challenge for teachers can be in their schools every day. Parents have a right to request that their child is withdrawn from sex education, but not from relationships education. In 2023, the DfE published refreshed guidance saying, “parents should be able to see what their children are being taught in RSHE lessons. Schools must share teaching materials with parents.”

I often argue that there is too little transparency and parental control over what is taught and how, and that parents should be able to see what is being taught and its sources: not with regard to RSE, but when it comes to edTech. We need a more open classroom when it comes to content from companies of all kinds.

But this also means addressing how far the rights of parents and the rights of the child complement or compete with one another, when it comes to Article 26(3) of the UDHR on education: “Parents have a prior right to choose the kind of education that shall be given to their children.” And how does this affect teachers’ agency and authority?

These clashes are starting to overlap, from a troubling lack of ethical oversight of intrusive national pupil data gathering exercises in England and in Scotland, both of which have left parents furious, to the data grab planned from GPs in Wales. Complaints will without a doubt become louder and more widespread, and public trust will be lost.

When interests are contested and not aligned, who decides what is in a child’s best interests for their protection in a classroom?

When does the public interest kick in, alongside individuals’ interests, in the public good that comes from children attending school and presenting to health services, and how are collective losses taken into account?

In the law today, responsibility for fulfilling a child’s right to education rests with parents, not schools. So what happens when decisions by schools interfere with parents’ views? When I think about children in the context of AI, automated decisions and design in edTech shaping child development, I think about child protection from strangers engineering a child’s development in closed systems.  It matters to protect a child from an unknown and unlimited number of persons interfering with who they will become.

But even the most basic child protections are missing in the edtech environment today without any public sector standards or oversight. I might object to the school about a product. My child might have a right to object in data protection law. But in practice, objection is impossible to exercise.

The friction this creates is going to grow and there is no good way to deal with it right now. Because the education sector is being opened up to a wider range of commercial, outside parties, it is also being opened up to the risks and challenges that brings. It can no longer be something put in the box marked ‘too difficult’ but needs attention.

The backlash will only grow if the sense of ‘overreach’ continues.

Built-in political and cultural values

The values coming over here from the U.S. are not only arriving through parents’ grassroots groups, the religious right, or anti-LGBTQ voices in media of all kinds, but are coming directly into the classroom, embedded in edTech products. The values underpinning AI or any other technology used in the classroom are opaque, because the people behind the product are usually hidden. We cannot therefore separate the products from their designers’ politics. If those products are primarily U.S.-made, it is unsurprising if the values of their education and political systems are those embedded in their pedagogy. Many of these seem less about the UNCRC Article 29 aims of education, and far more about purposes of education centred on creating human capital via “an emphasis on the knowledge economy that can reduce both persons and education to economic actors and be detrimental to wider social and ethical goals.”

This is nothing new.

In 2013, Michael Gove gave a keynote speech in the U.S. to the National Summit for Education Reform, set up by Governor Jeb Bush. He talked about edTech too, and the knowledge economy of education and needing “every pair of hands” to “rebuild our economies”. Aside from his normalisation of the acceptance of ‘badging’ children in the classroom with failure (32:15) (“rankings of the students in the test were posted with the students name with colour codes… and some of the lower performers would wear a sticker on a ribbon with the colour code of their performance“), he also shared his view, with echoes of the “great replacement theory”, that, “the 20th century may be the last American Century we face the fact that the West and the values that we associate with it, liberalism, openness, decency, democracy, the rule of law, risks being eclipsed by a Rising Sun from the East.” We could well ask: whose flavour of ‘liberalism’ is that?

The fight for or against a progressive future

Today, anti-foreign, anti-abortion, pro-natalist and conservative Christian values all meet in a Venn diagram of organisations pushing to undermine classically liberal aspects of teaching in England’s education system. And before this sounds a bit extreme, consider how these conspiracy theories and polarised views have been normalised. Listen (25:00) to the end of the discussion on “the nation state” at the 2023 NatCon UK Conference, co-badged with the Edmund Burke Foundation. Becoming a parent is followed by discussion of housing pressure *from migrants*, as well as a more-than-slightly eugenic-themed discussion of longevity, and then, in passing, AI. At the same event, the MP Miriam Cates claimed the UK’s low birthrate is the most pressing policy issue of the generation and is caused in part by “cultural Marxism”, as reported by the Guardian. Orbán in Hungary in 2022 claimed he was fighting against “the great European population exchange … a suicidal attempt to replace the lack of European, Christian children with adults from other civilisations – migrants”.

These debates are inextricably linked in a fight for or against a progressive future. We have a Westminster Opposition now fighting for its own future, and the ‘culture wars’ have been a routine part of its frontbenchers’ media discussions for some time. Much of it is likely to continue to be played out in the education system, starting with the challenge to the Higher Education (Freedom of Speech) Act 2023, which always seemed to me more about the control of content on campus than its freedoms.

In today’s information society, Castells’ arguments that cultural battles for power are primarily fought in the media, where identity plays a critical role in influencing public policy and societal norms, where politics becomes theatre and “citizens around the world react defensively, voting to prevent harm from the state in place of entrusting it with their will,” seem timely (End of Millennium, p.383). Companies and vested interests have actual power, and elected leaders are left only with influence. This undermines the spirit of a democratic society.

The future of authority and competing interests

After the U.S. election result, that influence coming from across the Pond into UK public policy will not only find itself more motivated and more empowered, but likely, better funded.

Why all this matters for schools is that we are likely to see more of these polarised value sets imported from the U.S., and there is no handbook for school governors or staff of any background to manage parents and the strong feelings it can all create. Nor does the sector understand the legal framework it needs to withstand it.

Now that classrooms have been opened up to outside interests in their content, some families are pulling children out of school because of fundamental disagreements with those values and the vehicles for their delivery: from the contents of teaching, to intrusive data surveys, to concerns over commercialisation and the screen time of tech-based tools without proven beneficial outcomes. Whose best interests does the system serve, and who decides whose interests come first when they are in conflict? How are these to be assessed and explained to parents and children, together with their rights?

How do teachers remain in authority where they are perceived as overstepping what parents reasonably expect, or where AI manages curriculum content and teachers cannot explain its assessment scoring or benchmarking profile of a pupil? What should the boundaries be, especially as edTech blurs them between school and home, teachers and parents? We need to far better train and support educational staff in practice, to be prepared to manage competing sources of authority and the emerging fight over interests and rights.

Pirates and their stochastic parrots

It’s a privilege to have a letter published in the FT as I do today, and thanks to the editors for all their work in doing so.

I’m a bit sorry that it lost the punchline, which was supposed to bring a touch of AI humour about pirates and their stochastic parrots. And a rather key point was cut:

“Nothing in current European laws, including Convention 108 for the UK, prevents companies developing AI lawfully.”

So for the record, and since it’s behind a paywall (£), my agreed edited version was:

“The multi-signatory open letter advertisement, paid for by Meta, entitled “Europe needs regulatory certainty on AI” (September 19) was fittingly published on International Talk Like a Pirate Day.

It seems the signatories believe they cannot do business in Europe without “pillaging” more of our data and are calling for new law.

Since many companies lobbied against the General Data Protection Regulation or for the EU AI Act to be weaker, or that the Council of Europe’s AI regulation should not apply to them, perhaps what they really want is approval to turn our data into their products without our permission.

Nothing in current European laws, including Convention 108 for the UK, prevents companies developing AI lawfully. If companies want more consistent enforcement action, I suggest Data Protection Authorities comply and act urgently to protect us from any pirates out there, and their greedy stochastic parrots.”

Prior to print they asked to cut out a middle paragraph too.

“In the same week, LinkedIn sneakily switched on a ‘use me for AI development’ feature for UK users without telling us (paused the next day); Larry Ellison suggested at Oracle’s Financial Analyst Meeting  that more AI should usher in an era of mass citizen surveillance, and our Department for Education has announced it will allow third parties to exploit school children’s assessment data for AI product building, and can’t rule out it will include personal data.”

It is in fact the cumulative effect of the recent flurry of AI activity by various parties, state and commercial, that deserves greater attention, rather than this Meta-led complaint alone. Who is grabbing what data and what infrastructure contracts, creating what state dependencies and strengths, and to what end game? While some present the “AI race” as China or India versus the EU or the US to become AI “super powers”, is what “Silicon Valley” offers, that their way is the only way, really a better offer?

It’s not in fact “Big Tech” I’m concerned about, but the arrogance of so many companies that, in the middle of regulatory scrutiny, would align themselves with one that would rather put out PR omitting the fact that it is under such scrutiny, calling only for the law to be changed, and frankly misleading the public by suggesting it is all for our own good rather than talking about how this serves their own interests.

Who do they think they are to dictate what new laws must look like when they seem simply unwilling to stick to those we have?

Perhaps this open letter serves as a useful starting point to direct DPAs to the companies whose data practices are most in need of scrutiny. They seem to be saying they want weaker laws or more enforcement. Some are already well known for challenging both. Who could forget Meta’s (then Facebook’s) secret emotional contagion study involving children, in which friends’ postings were moved to influence moods, or the case of letting third parties, including Cambridge Analytica, access users’ data? Then there are the data security issues, the fine over international transfers, and the anti-trust issues. And there are the legal problems with their cookies. And all of this built from humble beginnings by the same founder of Facemash, “a prank website” to rate women as hot or not.

As Congressman Long reportedly told Zuckerberg in 2018, “You’re the guy to fix this. We’re not. You need to save your ship.”

The Meta-led ad called for “harmonisation enshrined in regulatory frameworks like the GDPR” and I absolutely agree. The DPAs need to stand tall and stand up to OpenAI and friends (ever dwindling in number so it seems) and reassert the basic, fundamental principles of data protection laws from the GDPR to Convention 108 to protect fundamental human rights. Our laws should do so whether companies like them or not. After all, it is often abuse of data rights by companies, and states, that populations need protection from.

Data protection ‘by design and by default’ is not optional under European data laws established for decades. It is not enough to argue that processing is necessary because you have chosen to operate your business in a particular way, nor a necessary part of your chosen methods.

The Netherlands DPA is right to say scraping is almost always unlawful. A legitimate interest cannot be simply plucked from thin air by anyone who is neither an existing data controller nor processor and has no prior relationship to the data subjects who have no reasonable expectation of their re-use of data online that was not posted for the purposes that the scraper has grabbed it and without any informed processing and offer of an opt out. Instead the only possible basis for this kind of brand new controller should be consent. Having to break the law, hardly screams ‘innovation’.

Regulators do not exist to pander to wheedling, but to independently uphold the law in a democratic society in order to protect people, not prioritise the creation of products:

  • Lawfulness, fairness and transparency.
  • Purpose limitation.
  • Data minimisation.
  • Accuracy.
  • Storage limitation.
  • Integrity and confidentiality (security), and
  • Accountability.

In my view, it is the lack of dissuasive enforcement as part of the checks and balances on big power like this, regardless of where it resides, that poses one of the biggest data-related threats to humanity.

Not AI, nor being “left out” of being used to build it for their profit.

The “new normal” is not inevitable. (1/2)

Today Keir Starmer talked about us having more control in our lives. He said, “markets don’t give you control – that is almost literally their point.”

This week we’ve seen it embodied in a speech given by Oracle co-founder Larry Ellison at the company’s Financial Analyst Meeting 2024. He said that AI is on the verge of ushering in a new era of mass behavioural surveillance, of police and citizens alike. Oracle, he suggested, would be the technological backbone for such applications, keeping everyone “on their best behaviour” through constant, real-time, machine-learning-powered monitoring (LE FAQs 1:09:00).

Ellison’s sense of unquestionable entitlement to decide that *his* company should control how all citizens (and police) behave by design, and his omission of any consideration of a democratic mandate for that, should shock us. Not least because he is wrong in some of his claims. (There is no evidence that a digital dystopia makes this difference to school safety, in particular given the numbers of people already known to the school.)

How does society come to trust this direction of travel in how our behaviour is shaped and how corporations impose their choices? How can a government promise society more control in our lives and yet enable a digital environment, which plays a large part in our everyday lives, over which we seem to have ever less control?

The new government sounds keen on public infrastructure investment as a route to structural transformation. But the risk is that cost constraints mean they seek the results expected from the same plays as an industrial development strategy of old, but now using new technology and tools. It’s a big mistake, huge. And nothing less than national democracy is at stake, because individuals cannot meaningfully hold corporations to account. The economic and political context in which an industrial strategy is defined is now behind paywalls and without parliamentary consensus, oversight or societal legitimacy in a formal democratic environment. The constraints on businesses’ power were once more tangible and localised, and their effects easier to see in one place. Power has moved considerably from government to corporations in the time Labour was out of office. We are now dependent on being users of multiple private techno-solutions to everyday things, often ones we hate using, from paying for the car park, to laundry, to job hunting. All with indirect effects on national security and on service provision at scale, as well as direct everyday effects for citizens and, increasingly, our disempowerment and lack of agency in our own lives.

LinkedIn this week first chose not to ask users at all, in what has become the OpenAI modus operandi of take first and ask forgiveness later, before grabbing our personal data from the platform to train “content creation AI models.” (And then it did a U-turn.)

Data Protection law is supposed to offer people protection from such misuse, but without enforcement, ever more companies are starting to copy each other on rinse and repeat.

Convention 108 requires respect for rights and fundamental freedoms, in particular the right to privacy, and provides that special categories of data may not be processed automatically unless domestic law provides appropriate safeguards. Data must be obtained fairly. Many emerging [generative] AI companies have disregarded the fundamentals of European data protection laws: purpose limitation breached by incompatible uses, no relationship between the data subject and the company, a lack of accuracy, and no active offer of a right to object when in fact the processing should rest on a consent basis. The result is unfair and unlawful processing. If we are to accept that anyone at all can take any personal data online and use it for an entirely different purpose, to turn into commercial products, ignoring all of this, then frankly the Data Protection Authorities may as well close. Are these commercial interests simply so large that they believe they can get away with steam-rollering over democratic voice and human rights, as well as the rule of (data protection) law? Unless there is consistent ‘cease and desist’ type enforcement, it seems to be rapidly becoming the new normal.

If regulators instead brought meaningful enforcement that is dissuasive, as the law is supposed to be, what would change? What will shift practice on facial recognition in the world as foreseen by Larry Ellison, and shift public policy towards sourcing responsibly? How is democracy to be saved from technocratic authoritarianism going global? If the majority of people living in democracies are never asked for their views, and have changes imposed on their lives that they do not want, how do we raise a right to object and take control?

While institutions such as Oracle, whose financial interests lie in ever more, and ever larger, AI models, grow their influence in our political systems, the interests of the communities affected are not represented in state decisions at national or global levels.

While tools are being built to resist content scraping from artists, what is there for faces and facts, or even errors, about our lives?

Ted Chiang asked in the New Yorker in 2023, whether an alternative is possible to the  current direction of travel. “Some might say that it’s not the job of A.I. to oppose capitalism. That may be true, but it’s not the job of A.I. to strengthen capitalism, either. Yet that is what it currently does.” The greatest current risk of AI is not what we imagine from I, Robot he suggested, but “A.I.-supercharged corporations destroying the environment and the working class in their pursuit of shareholder value.”

I remember giving a debate talk as a teen thirty years ago about the risk of rising sea levels in Vanuatu. It is a reality that causes harm. Satellite data indicates the sea level there has risen by about 6 mm per year since 1993. Someone told me in an interesting recent Twitter exchange that, when it comes to climate impacts and AI, they are “not a proponent of ‘reducing consumption’”. The reality of climate change in the UK today only hints at the long-term consequences for the world from the effects of migration and extreme weather events. Only by restraining consumption might we change those knock-on effects of climate change in anything like the necessary timeframe, or the next thirty years will see them worsen more quickly.

But instead of hearing meaningful assessment of what we need in public policy, we hear politicians talk about growth as the goal, as if bigger can only be better. More AI. More data centres. What about more humanity in this machine-led environment?

While some argue for an alternative, responsible, rights-respecting path as the only sustainable way forward, like Meeri Haataja, Chief Executive and Founder of Saidot, Helsinki, Finland, in her August letter to the FT, some of the largest companies appear to be suggesting this week, publishing ads in various European press, that since they struggle to follow the law (like Meta), Europe needs a new one. And to paraphrase, they suggest it’s for our own good.

And that’s whether we like it, or not. You might not choose to use any of Meta’s products, but you might still be being used by them, your activity online used to create advertising market data that can be sold. Researchers at the University of Oxford analysed a million smartphone apps and found that “the average one contains third‑party code from 10 different companies that facilitates this kind of tracking. Nine out of 10 of them sent data to Google. Four out of 10 of them sent data to Facebook. In the case of Facebook, many of them sent data automatically without the individual having the opportunity to say no to it.” Reuse for training AI is far more explicit, and wrong, and “it is for Meta [or any other company] to ensure and demonstrate ongoing compliance.” We have rights, and the rule of law, and at stake are our democratic processes.

“The balance of the interests of working people” must include respect for their fundamental rights and freedoms in the digital environment as well as supporting the interests of economic growth; the two are mutually achievable, not mutually exclusive.

But in the UK we put the regulator on a leash in 2015, constrained by a duty towards “economic growth”. It’s a constraint that should not apply to industry and market regulators, like the ICO.

While some companies plead to be let off for their bad behaviour, others expect to profit from encouraging the state to increase the monitoring of ours, or ask that the law be written ever more in their favour. Regulators need to stand tall and stand up to it, and the government needs to remove their leash.

Even in 1959, Labour MPs included housewives concerned about being misled by advertisers and manufacturers (06:40). Many of the electoral issues have stayed the same over 65 years. But the power grab going on in this information age is unprecedented.

We need not accept this techno-authoritarianism as a “new normal” and inevitable. If indeed, as Starmer concluded, “Britain belongs to you”, then it needs MPs to act like it and defend fundamental rights and freedoms, to uphold values like the rule of law, even with companies who believe it does not apply to them. With a growing swell of nationalism, and plenty who may not believe Britain belongs to all of us but rather that ‘Tomorrow belongs to Me’, it is indeed a time when “great forces demand a decisive government prepared to face the future.”


See also Part 2: Farming out our Children. AI AI Oh.

The video referenced above is of Dr. Sasha Luccioni, the research scientist and climate lead at HuggingFace, an open-source community and machine-learning platform for AI developers, and is part 1 of the TED Radio Hour episode, Our tech has a climate problem.

Farming out our children. AI AI Oh. (2/2)

Today Keir Starmer talked about us having more control in our lives. “Taking back control is a Labour argument”, he said. So let’s see it in education tech policy, where in 2018 fewer than half of parents told us they felt they had sufficient control of their child’s digital footprint.

Not only has the UK lost control of which companies control large parts of the state education infrastructure and its delivery, the state is *literally* giving away control of our children’s lives recorded in identifiable data at national level, and since 2012 that has included giving it to journalists, think tanks, and companies.

Why it matters is less about the data per se, but what is done with it without our permission and how that affects our lives.

Politicians’ love affair with AI (undefined) seems to be as ardent as under the previous government. The State appears to have chosen to further commercialise children’s lives in data, having announced towards the end of the school summer holidays that the DfE and DSIT will give pupils’ assessment data to companies for AI product development. I get angry about this because the data is badly misunderstood: it is not a product to pass around, but the stories of children’s lives in data, and that belongs to them to control.

Are we asking the right questions today about AI and education? In 2016, in a post for Nesta, Sam Smith foresaw the algorithmic fiasco that would happen in the summer of 2020, pointing out that exam-marking algorithms, like any other decisions, have unevenly distributed consequences. What prevents that happening daily, but behind closed doors and in closed systems? The answer is: nothing.

Both the adoption of AI in education and education about AI are unevenly distributed. Driven largely by commercial interests, some are co-opting teaching unions for access to the sector; others, more cautious, have focused on the challenges of bias, discrimination and plagiarism. As I recently wrote in Schools Week, the influence of corporate donors and their interests in shaping public sector procurement, such as the Tony Blair Institute’s backing by Oracle owner Larry Ellison, therefore demands scrutiny.

Should society allow its public sector systems and laws to be shaped primarily to suit companies? The users of the systems are shaped by how those companies work, so who keeps the balance in check?

In a 2021 reflection here on World Children’s Day, I asked the question: man or machine, who shapes my child? Three years later, I am still concerned about the failure to recognise and address the redistribution not only of pupils’ agency but of teachers’ authority: from individuals to companies (pupils and the teacher don’t decide what it is ‘right’ to do next, the ‘computer’ does); from public interest institutions to companies (company X determines the curriculum content of what the computer does and how, not the school); and from State to companies (accountability for outcomes falls through the gap in outsourcing activity to the AI company).

Why it matters is that these choices influence not only how we teach and learn, but how children feel about it and how they develop.

The human response to surveillance (and that is what much of AI relies on: massive data-veillance and dashboards) is a result of the chilling effect of being ‘watched’ by known or unknown persons behind the monitoring. We modify our behaviours to comply with their expectations. We try not to stand out from the norm, to protect ourselves from the resulting effects.

The second reason we modify our behaviours is to comply with the machine itself. Thanks to the lack of a responsible human in the interaction mediated by the AI tool, we are forced to change what we do to fit what the machine can manage. How AI is changing human behaviour is not confined to where we walk, meet, play and are overseen in outdoor or indoor spaces. It is in how we respond to it and, ultimately, how we think.

In the simplest examples, using voice assistants shapes how children speak, and in prompting generative AI applications we can see how we are forced to adapt how we think, phrasing questions to best suit getting the output we want. We are changing how we behave to suit machines. How we change behaviour is therefore determined by the design choices of the company behind the product.

There is as yet limited public debate on the effects of this for education, on how children act, interact, and think using machines, and no consensus in the UK education sector on whether it is desirable to introduce these companies and their steering, and the changes they bring to teaching and learning and, as a result, to the future of society.

Since then, in 2021, I would go further. The neo-liberal approach to education, with its emphasis on the efficiency of human capital and productivity, on individualism and personalisation, all about producing ‘labour market value’ and measurable outcomes, is commonly at the core of AI teaching and learning platforms.

Many tools dehumanise children into data dashboards, rank and spank their behaviours and achievements, punish outliers and praise norms, and expect nothing but strict adherence to rules (sometimes incorrect ones, like mistakes in maths apps). As some companies have expressly said, the purpose of this is to normalise such behaviours ready for children to be employees of the future, and the reason their tools are free is to normalise their adoption for life.

Through the normalisation of values built into tools by design, AI is even seen by some as encouraging fascistic solutions to social problems.

But the purpose of education is not only about individual skills and producing human capital to exploit.  Education is a vital gateway to rights and the protection of a democratic society. Education must not only be about skills as an economic driver when talking about AI and learners in terms of human capital, but include rights, championing the development of a child’s personality to their fullest potential, and intercultural understanding, digital citizenship on dis-/misinformation, discrimination and the promotion and protection of democracy and the natural world. “It shall promote understanding, tolerance and friendship among nations, racial or religious groups, and shall further the activities of the United Nations for the maintenance of peace.”

Peter Kyle, the UK DSIT’s Secretary of State said last week, that, “more than anything else, it is growth that will shape those young people’s future.” But what will be used to power all this growth in AI, at what environmental and social costs, and will we get a say?

Don’t forget, in this project announcement the Minister said, “This is the first of many projects that will transform how we see and use public sector data.” That’s our data, about us. And when it comes to schools, that’s not only the millions of learners who’ve left already but who are school children today. Are we really going to accept turning them into data fodder for AI without a fight? As Michael Rosen summed up so perfectly in 2018, “First they said they needed data about the children  to find out what they’re learning… then the children became data.”  If this is to become the new normal, where is the mechanism for us to object? And why this, now, in such a hurry?

Purpose limitation should also prevent retrospective reuse of learners’ records and data, but so far it has not, whether for general identifying and sensitive data distribution from the NPD at national level or from edTech in schools. The project details, scant as they are, suggest parents were asked for consent in this particular pilot, but the Faculty AI notice seems legally weak for schools, and when it comes to using pupil data for building into AI products the question is whether consent can ever be valid, since it cannot be withdrawn once given, and the quality of being ‘freely given’ is affected by the power imbalance.

So far there is no field in which to record an opt-out in any schools’ Information Management System, though many discussions suggest it would be relatively straightforward to make that happen (a hypothetical sketch of what such a field could look like follows below). However, it’s important to note that DSIT’s own public engagement work on that project says opt-in is what those parents told the government they would expect. And there is a decade of UK public engagement on data telling government that opt-in is what we want.
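To illustrate how small a change this could be, here is a minimal, hypothetical sketch in Python of a per-purpose sharing preference attached to a pupil record, defaulting to no sharing unless an explicit opt-in has been recorded. The class and field names are invented for illustration only; they are not the schema of any real Information Management System, nor of the DfE/DSIT project.

```python
# Hypothetical sketch only: a per-purpose data-sharing preference attached to a
# pupil record. Names are illustrative, not any real MIS or DfE schema.
from dataclasses import dataclass, field
from datetime import date
from typing import Optional


@dataclass
class DataSharingPreference:
    purpose: str                       # e.g. "AI product development"
    consent_given: bool = False        # default is no sharing, i.e. opt-in required
    recorded_on: Optional[date] = None
    recorded_by: Optional[str] = None  # parent/guardian or pupil, depending on age and capacity
    withdrawn_on: Optional[date] = None


@dataclass
class PupilRecord:
    pupil_id: str
    preferences: list[DataSharingPreference] = field(default_factory=list)

    def may_share_for(self, purpose: str) -> bool:
        """Share only where an explicit, un-withdrawn opt-in exists for this purpose."""
        return any(
            p.purpose == purpose and p.consent_given and p.withdrawn_on is None
            for p in self.preferences
        )


# Example: record an explicit opt-in for one named purpose only.
record = PupilRecord(pupil_id="example-pupil")
record.preferences.append(
    DataSharingPreference(
        purpose="AI product development",
        consent_given=True,
        recorded_on=date(2024, 9, 1),
        recorded_by="parent/guardian",
    )
)
assert record.may_share_for("AI product development")
assert not record.may_share_for("commercial research")
```

The design choice worth noting is the default: absent an explicit, un-withdrawn opt-in for the named purpose, nothing is shared, which matches the opt-in expectation that the public engagement work describes.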

The regulator has been silent so far on the DSIT/DfE announcement, despite the lack of fair processing and failures on Articles 12, 13 and 14 of the GDPR being among the key findings in its 2020 DfE audit. I can use a website to find children’s school photos, scraped without our permission. What about our school records?

Will the government consult before commercialising children’s lives in data to feed AI companies and ‘the economy’ or any of the other “many projects that will transform how we see and use public sector data“?  How is it different from the existing ONS, ADR, or SAIL databank access points and processes? Will the government evaluate the impact on child development, behaviour or mental health of increasing surveillance in schools? Will MPs get an opt-in or even -out, of the commercialisation of their own school records?

I don’t know about ‘Britain belongs to us‘, but my own data should.


See also Part 1: The New Normal is Not Inevitable.

Thinking to some purpose