Confusion over GenAI in the classroom

Apparently there has been confusion online recently about what Google does and does not use from Gmail to train Gemini, and it is not really clear what the clarification means. “No Gmail users’ emails are used to train Google’s Gemini AI” is very specific wording and merits closer attention. I was certainly told in person by Google execs, in a group meeting around two years ago, that school pupil data was used to train and develop its products. (When I mentioned schools’ use of Forms to transfer pupils’ passport and health data, they also said, “oh, I wouldn’t do that”.) That appears to still be true for some product lines, but it is less clear for others.

Misunderstandings about pupils using GenAI in schools abound too: mistaken claims that teachers can “consent on behalf of children”, waving that through as the data protection lawful basis for using AI products; omissions and inaccurate information on IP rights; and inaccurate definitions of closed and open AI systems, with blanket claims that pupils can more safely use the former over the latter.

Broadly speaking, UK guidance on using AI in the classroom has focussed on Generative AI. The same is true of many “How To” guides published as OpEds or articles, or even books by popular ex-RE teachers turned AI experts. But these often fail to state that it is highly likely many of these off-the-shelf GenAI tools cannot lawfully be used in a classroom by asking children to set up individual accounts and use them directly. Just as importantly, other edTech tools that integrate them into their front end may depend on the same GenAI company policies. This needs a thorough understanding of how both products work together. It is misleading for guidance to suggest that pupils can use these tools if schools just ensure they do so thoughtfully or under supervision.

So let’s take a look at the companies’ own published policies on how user data is used by the company and their publicly offered GenAI. Between Google Gemini, Anthropic Claude, and OpenAI ChatGPT, some are more complex or opaque than others. Interestingly, no company states why any service is not permitted for children.

Google Gemini and Education

Google’s policies around Gemini in education are extraordinarily complex and interlinked. Even after extensive reading, it’s still unclear how they’re meant to work—let alone how they could be understood by pupils. Google’s “responsible AI” training guidance cannot easily be reconciled with the many products and sub-products through which Gemini can be used, including Workspace for Education.

Using the tools requires understanding:

  • which version of Google products or Workspace you have,

  • the distinction between “core” and “additional” services,

  • how Gemini features layer on top of those, and

  • different age-gating rules, defaults, and admin-controlled settings.

The Gemini app is a standalone AI assistant. Google Workspace with Gemini, on the other hand, integrates AI directly into Google Workspace applications like Gmail, Docs, Sheets, Slides, and Meet. Since June, Gemini has been included in the Workspace for Education edition free of charge by default, as an admin-managed core Workspace service.

Only since June this year have Google Workspace for Education users had “added data protection” in the Gemini app, meaning their chats with Gemini are not human reviewed or used to train AI models. Qualifying [my added stress] Google Workspace for Education editions, including Education Standard and Education Plus, have the same privacy assurances.

What those data protection standards were prior to June, why it changed, and what they are for “non-qualifying” products, remain unclear.

1. Does Google not understand European data protection law?

However, before we even get into “the AI part”, Google’s own guidance for UK schools claims that data processing is lawful if schools collect consent from parents for minors’ use of “Additional Services” such as YouTube or Maps:

“Admins must provide or obtain consent for the use of the services by their minor users.”

“Additional Services (like YouTube, Google Maps, and Applied Digital Skills) are designed for consumer users and can optionally be used with Google Workspace for Education accounts if allowed for educational purposes by a school’s domain administrator.”

It is not explained why, but it might be because Google or its sub-processors use the data in these Additional Services to “provide, maintain, protect and improve” services and “to develop new ones”.

Source: https://support.google.com/a/answer/6356441?sjid=7831273918566805521-EU

However:

  • Consent in schools is rarely valid because it cannot be “freely given”: parents and pupils face a clear power imbalance, and opting out may disadvantage the child. Routine educational processing cannot rely on consent;
  • Developing “new” services is new product development, which requires valid consent under the EU/UK GDPR and therefore means current practice is without a lawful basis;
  • Google’s approach therefore sets up schools, as well as itself, for unlawful practice under European data-protection law.

2. Unclear and overlapping terms

Google’s T&Cs vary between its education tiers and versions (Core and Additional services; Free and Paid; Fundamentals, Standard, and Plus) across Google Workspace for Education and Gemini products, as well as its generic Workspace for Education terms. For staff, parents, or the school child themselves, it is very difficult to determine:

  • what data is processed where,

  • how it connects to Gemini and AI features (e.g., voice transcription or agentic AI), or

  • what changes at age 18.

For example, the Gemini Apps Privacy Hub states that for users aged 18+, call and text history may be imported into Gemini activity. It is unclear whether this includes data generated before turning 18, or whether it affects children who become 18 while using Workspace for Education.

3. Age controls depend on the administrator

Google relies heavily on institutions to understand, and even configure, users’ and organisational age settings. For example:

“Workspace for Education users designated as under 18 will not be able to use Gemini in Classroom…” (Source: Google support answers.)

This appears to conflict with the latest June 2025 product announcements above, but it’s hard to be sure.

Unlike in primary and secondary education, higher-education users not actively designated as under the age of 18 have no additional restrictions for Google services. Admins must ensure any under-18s are placed in an organisational unit with the correct age settings. This shifts responsibility—and risk—onto administrators who may not fully understand the implications.

In summary, it is unclear how the different education product offerings act together with the various Gemini offerings, and Google seems to want to push accountability down to the institutional admin. The simplistic answer appears to be that if you don’t use a paid version of Google tools in education, Google reuses the activity of users of any age as training data, to “provide, maintain, protect and improve” services and “to develop new ones”.

Given Google’s world-class legal and communications resources, it is striking how opaque both the legal basis and the explanations remain. Clearer, simpler company guidance is urgently needed, and any clarification and simplification from the company would be welcome.

Anthropic Claude and children

Claude is not intended for use by children under the age of 18.

“Our Services are not directed towards, and we do not knowingly collect, use, disclose, sell, or share any information from children under the age of 18.” [Source: https://www.anthropic.com/legal/privacy]

Any guidance seen elsewhere for educational settings may also be misleading where it suggests that if a school user does not directly “put” personal data “into” the LLM, the tool will not be processing personal data.

Claude’s policy, for example, contradicts that, because other usage data collected indirectly but not “put in” by the user is still personal data, such as IP and other identifiers:

“Consistent with your device or browser permissions, your device or browser automatically sends us information about when and how you install, access, or use our Services. This includes information such as your device type, operating system information, browser information and web page referrers, mobile network, connection information, mobile operator or internet service provider (ISP), time zone setting, IP address (including information about the location of the device derived from your IP address), identifiers (including device or advertising identifiers, probabilistic identifiers, and other unique personal or online identifiers), and device location.” [source: https://www.anthropic.com/legal/privacy]

OpenAI ChatGPT and children

The OpenAI terms of use require users to be at least 13 years old, and those under 18 must have parental or guardian permission.

“Our Service is not directed to children under the age of 13. OpenAI does not knowingly collect Personal Information from children under the age of 13. […] If you are 13 or older, but under 18, you must have permission from your parent or guardian to use our Services.”

Like Google’s Additional Services, this means it is unsuitable, and unlawful, to use in schools. If parents are told their child should or must use the LLM and the school merely asks them to tick a box, that may be an acknowledgement of use, but it is not consent.

The company website ‘help’ goes on to add, “We advise caution with exposure to kids, even those who meet our age requirements, and if you are using ChatGPT in the education context for children under 13, the actual interaction with ChatGPT must be conducted by an adult.”

Overall

To sum up, the guidance for schools in Wales says: “The age ratings of generative AI tools must be considered before using them. Age ratings can vary, and some tools are only designed for use by over-18s. Many generative AI tools are not designed for education.”

This raises the question: why does so much effort and guidance for school children’s AI use in the classroom focus on how to use Generative AI at all?

I am hopeful that we can instead soon include better guidance and knowledge as part of “digital literacy” or “citizenship” skills in the curriculum, starting with teacher training that is about “AI”, not delivered with it.