
There is no such thing as “the Digital Age of Consent”. The national consultation: kids online and UK government powers (1).

The government has launched its consultation, including on “raising the digital age of consent”, alongside examining social-media bans for under-16s, curfews, enforced breaks, age verification and VPN misuse. A further “digital childhood inquiry” was announced in January to examine these issues more broadly. The government further proposes to amend the Online Safety Act 2023, through the Crime and Policing Bill, granting itself extremely wide powers.

Unwittingly or not, the most substantial shock this week is the Department for Education seeking to grant itself sweeping powers (pp. 4-6), without scrutiny or evidence, to "restrict access by children of or under a specified age to specified internet services which they provide, or to specified features or functionalities of such services," and to substantively revise data protection law on the "Age of consent in relation to processing of a child's personal data: information society services." These powers include (in 214A(3)) screen-time limits and restrictions on the times of day at which children may access a service or a specified feature or functionality of it, i.e. curfews or shutdown laws, which demonstrably did not work in other countries.

These powers are in effect a placeholder for what the government intends to do after the consultation closes, which appears already decided.

Key to much of the discourse is the idea of raising the "digital age of consent". That phrase is misleading shorthand: it obscures both the aims of the proposal and what raising the age threshold of Article 8 of data protection law would, and would not, achieve.

Much of this discussion rests on a fundamental misunderstanding of what consent actually is under data-protection law, and whether age is legally relevant to it at all. It is fundamentally the wrong vehicle for driving restrictions on children's use of social media, not least because Article 8 of data protection law applies to *all* processing of children's personal data on the basis of consent by an information society service, which is almost everything commercially driven online, not only (as yet undefined) social media.

The DfE amendment granting powers to arbitrarily change Article 8 of data protection law, as a proxy vehicle for social media restrictions without consultation, evidence or scrutiny, is fatally flawed: it conflates bundled service obligations and required data processing (verified ID/age checks) with a consent process (p. 5).

Meanwhile, the consultation is written with closed answers that lead to fixed outcomes. I suggest these need to be worked around via email submission, in order to propose an alternative in any response. And watch out: the print-ready version and the HTML version have different numbering for the same questions.

Disappointingly, since the changes to the law are already written, this suggests the consultation is itself a tick-box exercise.

The issues in summary

Ultimately, there is no such thing as a "digital age of consent", and raising the age in Article 8 of data protection law does not work as a proxy for restricting children's access to social media:

  • Consent is not determined by age but by capacity, and by its freely given, informed, power-respecting characteristics;
  • Consent [e.g. to use age verification] is invalid if it is compulsory or obligatory, coercive, or bundled into the provision of a service;
  • Consent does not provide a valid legal ground for the processing of personal data where there is a clear power imbalance between the data subject and the controller (Recital 43);
  • Article 8 requires data minimisation, and Recital 57 makes clear that no additional personal data [from parent or child] should be processed solely to satisfy the requirements of the law;
  • Raising the age in Article 8 up to which a parent's personal data must be processed, in addition to the child's, to authorise "consent" would mean the personal data of more parents being collected, for more years (until the child reaches the higher threshold), and by more services. Article 8 applies to *all* data processing on the basis of consent by an ISS* "offered directly to a child", not only social media.

Policy makers must not misuse data protection law for this.

Age as a Gatekeeper is no small change

Article 8 was flawed from the outset, and little public attention has been paid to how age came to be treated as a proxy for capacity in the GDPR at all. It expands data collection (by requiring parental data at all, and in this scenario for longer), and adults cannot truly consent "on behalf of" a child under pressure, when age verification is an obligation under a do-it-or-lose-it model and the service might be, for example, an edTech tool, or even a social media group used by a school for communications.

Arguably, the use of Article 8 to get tick-box permission from parents is not valid consent even today, never mind bringing more parents into scope: "such processing shall be lawful only if and to the extent that consent is given or authorised by the holder of parental responsibility over the child."

Furthermore, it does nothing to empower children or to enable them to exercise their own rights, nor to respect or promote their personhood, agency, and dignity as independent data subjects. We could indeed redraft Article 8, but not like this. We do need to fix how and where parental permission is a poor proxy for informed, freely given consent.

As Australia has done, the age at which a designated set of social media platforms may or may not provide accounts to children can be made an obligation on the platforms, not on the child or the parents, and this should be kept separate from thinking on data protection law.

Europe has always relied on capacity, not age, for the valid exercise of children's rights, and in the frameworks that uphold them under the law. In the era of Epstein, it is more important than ever that we do not normalise the notion that consent is nothing more than a tick-box exercise.

Using age as a bouncer for access to online spaces is problematic if, in order to separate children from adults and treat children differently, websites must check everyone's ID to know who is not a child.

The future of age as a gatekeeper in online safety is a global political agenda that affects not only children; it raises questions about wider purposes in the state control of identity, security, and informational power in the digital age, with far-reaching effects on democratic participation and on how anyone uses the Internet. What has been a relatively permissionless activity, for anyone with the ability to be anonymous or to choose multiple identities managed by the corporate provider, suddenly becomes a state-ID-controlled activity. "Robust" or "highly effective" age checks require an official, state-authorised ID against which an age credential is verified or assured, whether or not the access point is provided by a commercial third-party provider. Not everyone in the UK has one. Everyone will be obliged to hold a national ID of some kind in future if "robust" age verification or assurance becomes mandatory to use social media, or to go online.

The consultation says (p.40), “One way to achieve the strongest possible approach to any new age-linked restrictions would be to require every existing UK social media user to verify their age online, for example if we were to enact a ban on children from all social media.” 

That means online companies no longer using their own know-your-customer log-in model, which enables us to choose how to present our identity to them, but instead a requirement for a robust know-your-citizen model, with obligations to be able to prove digitally who you are with some sort of "passport-level verification of your legal identity" (26:56). When it comes to the national ID for all, the Westminster government is already planning cross-government uses for the Home Office, such as fraud detection (01:02:00) and immigration enforcement, "much like online banking". The enormity and reality of this requirement to have a digital ID available to use for age verification should not be seen through the rose-tinted lens of protecting children. With the imposition of a state-accepted ID controlled via the database state, it's the end of access to the Internet as we know it, for everyone. It's as if we're back in 2009, with a consultation to be published next week to "explore the benefits" to people of having a national digital ID by 2029. That is the reality of "robust" age verification. How it will work, and whether the ID-age-restricted access obligation will apply based on the location of the hosted content or the location of the user, is as yet unclear.

While the EU is working out its collective approach, Ireland has already announced its own plans to bring in a state age-verification system, and assumes the EU presidency in July.

Meanwhile, 418 expert academics, technologists and scientists from 30 countries warned this week that social-media bans and age checks can backfire. In a statement, they called for a moratorium until evidence is clearer, citing easy circumvention, migration to risky fringe sites, and years-long infrastructure hurdles.

Rather than consider expanding state age-verification systems that, without due attention, will be able to attach not only your IP address and tracking cookies but your legal identity to every search you make and every place you visit online, we need to spend more effort on how to avoid the identification of children (by companies or others) becoming the norm.

The consultation on this topic: raising the “digital age of consent”

There are five narrow questions on minimum age restrictions, with the effectiveness of age-verification and age-assurance technologies bundled into a few of them, and five more on VPNs; I will address each of these as a separate topic later.

This blogpost is only about changing the so-called, and misdescribed, "digital age of consent". The proposal suggests raising the age at which parental personal details are no longer required in order to process a child's data, where processing is on the basis of consent.

The answers offered in that section of the consultation are too closed to allow a new approach to be proposed, but the approach needs to be reframed entirely.

Replace questions 8-11 on the age of "digital consent" with a new approach. Instead of using Article 8 of data protection law to raise the age, from 13 to 16 or anything else, up to which data about parents as well as the child is processed, government should review the Age Appropriate Design Code ("the Code") and address this via enforcement of other parts of existing data protection law.

We must avoid the paradox of:

“We must process more personal data (age verification, and connected parental ID data) to protect children from data processing.”



The background detail of why

1. Consent in Data Protection law

Under the UK GDPR, consent is only one of six lawful bases on which personal data may be processed. Consent is not required in all circumstances. Where consent is relied upon, it must meet strict legal standards:

  • Consent must be freely given, specific, unambiguous, and expressed by a clear, affirmative act by the data subject;
  • Consent must be informed, which means individuals must know who is processing their data, for what purposes, what type of data is involved, and that they can withdraw consent at any time without disadvantage;
  • Consent cannot be bundled, coerced, or made a condition of accessing a service (as in 'pay or consent' models; there must be real choice).

This matters because the media framing of “digital consent” treats age as if it were a standalone gateway to online access. It is not.

Consent cannot be freely given where there is compulsion, imbalance of power, or a lack of genuine choice. If processing personal data — for example for age verification or assurance — is made a mandatory condition of accessing a service, that requirement itself invalidates the ‘freely given’ nature of consent. This cannot produce valid consent under data-protection law. Recitals 42 and 43 are very explicit about this.

It is these qualities of consent, and not age, that determine whether the lawful basis of consent is valid for either the parent or the child, and therefore whether the processing is lawful or not. It's not enough to tick "agree".
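To make the cumulative nature of these conditions concrete, here is a minimal sketch in Python. It is an illustration only, not legal advice: the flag names and the `ConsentClaim` structure are my own hypothetical simplification of the legal tests. The point it demonstrates is that failing any single condition invalidates the whole lawful basis.

```python
from dataclasses import dataclass

@dataclass
class ConsentClaim:
    # Hypothetical simplification of the UK GDPR consent tests, for illustration only
    freely_given: bool          # real choice, no compulsion (Recital 42)
    informed: bool              # who processes, what data, for what purpose, right to withdraw
    specific: bool              # tied to a stated purpose, not blanket
    affirmative_act: bool       # clear opt-in, not a pre-ticked box
    bundled_with_service: bool  # made a condition of access, e.g. "pay or consent"
    power_imbalance: bool       # e.g. public authority vs citizen (Recital 43)

def consent_is_valid(c: ConsentClaim) -> bool:
    # Every positive quality must hold AND every disqualifier must be absent;
    # failing any one test invalidates the lawful basis as a whole.
    return (c.freely_given and c.informed and c.specific and c.affirmative_act
            and not c.bundled_with_service and not c.power_imbalance)

# Mandatory age verification bundled into access to the service fails,
# however enthusiastically the "agree" box is ticked:
av = ConsentClaim(freely_given=False, informed=True, specific=True,
                  affirmative_act=True, bundled_with_service=True,
                  power_imbalance=False)
assert consent_is_valid(av) is False
```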

2. Article 8 is narrow in scope and not about social media as such

The misunderstanding of Article 8 of the UK GDPR is not primarily about age. It is about scope.

Article 8 applies only where three specific conditions are met:

(a) personal data is processed; (b) the processing is by an information society service (ISS)*; and (c) the processing is on the basis of consent. If a website or service processes personal data on another lawful basis under data protection law, such as legitimate interests, performance of a contract, or compliance with a legal obligation, then the consent rules, and the age rules attached to them in Article 8, do not apply.

If any of these conditions is absent, Article 8 is irrelevant. It does not govern children’s data processing in general, and it does not create a general age rule for online services or social-media access.
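The three-condition gate can be sketched as a short function, again purely as illustration (the names are mine, not anything defined in the law):

```python
from enum import Enum

class LawfulBasis(Enum):
    CONSENT = "consent"
    LEGITIMATE_INTERESTS = "legitimate interests"
    CONTRACT = "performance of a contract"
    LEGAL_OBLIGATION = "compliance with a legal obligation"

def article_8_engaged(processes_personal_data: bool,
                      is_iss: bool,
                      basis: LawfulBasis) -> bool:
    # Article 8 is only engaged when all three conditions hold together
    return processes_personal_data and is_iss and basis is LawfulBasis.CONSENT

# The same service, relying on legitimate interests rather than consent,
# never engages Article 8, whatever the user's age:
assert article_8_engaged(True, True, LawfulBasis.LEGITIMATE_INTERESTS) is False
```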

3. Article 8 does not create a “digital age of consent”

What Article 8 says is that, below the national age threshold set in domestic law (between 13 and 16, depending on the country), parental authorisation is required to process a younger child's personal data. In practice, this means collecting personal details from the parent, as well as personal data from the child, to connect the relationship. Again, this applies only where consent is the lawful basis being relied upon.

Crucially, Article 8 does not impose a positive obligation on services to obtain consent from a child at a particular age. It instead sets the point from which a parent's permission, as authorisation in place of (or a "pseudo") consent, and the extra parental personal data that comes with it, is no longer required for processing, and only for processing by an ISS*.

Let's assume the same parents who agree to children's social media use today, and help them sign up by providing parental permission (in the UK, under 13), will do so in future. What this change would mean for them is that those parents' data would be processed for longer, for more years. Instead of parents' personal data being required alongside the child's only up to age 13, the parents' data would continue to be processed, in addition to the child's own data, up to the newly raised age limit. Assume, too, that the number of parents agreeing at each year up to the new limit grows: companies would no longer get only the child's personal data after age 13, but the parents' data as well. And this applies to far more companies, as ISS, than social media. Just what we don't want.
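A rough back-of-the-envelope sketch of that effect (the sign-up ages here are illustrative placeholders of mine, not figures from the consultation):

```python
def years_of_parental_data(signup_age: int, threshold: int) -> int:
    # Years for which a parent's personal data accompanies the child's,
    # from sign-up until the child reaches the Article 8 age threshold
    return max(0, threshold - signup_age)

# A child signing up at 10: three years of parental data today, six if raised to 16
print(years_of_parental_data(10, threshold=13))  # 3
print(years_of_parental_data(10, threshold=16))  # 6

# A 14-year-old, who today needs no parental authorisation at all,
# would newly bring a parent's data into scope for two years:
print(years_of_parental_data(14, threshold=13))  # 0
print(years_of_parental_data(14, threshold=16))  # 2
```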

4. Proposals to “raise the digital age of consent” in order to restrict children’s access to social media are therefore conceptually flawed

Against this background, proposals to restrict more children's access to social media by "raising the digital age of consent" are conceptually flawed. They attempt to use data-protection law, which regulates the obligations of data controllers (service providers), as a proxy for regulating users' permission to access content. There is in fact no such thing as a digital age of consent, because what is created is an obligation to hand over the parent's data, bundled with the child's as a condition of the service, and therefore not freely given.

The GDPR does not restrict who may access online services. It regulates the conditions under which their personal data may be processed. Provided the child meets any minimum age in the provider's Terms and Conditions, a child of any age can use a service that does not process personal data on the basis of consent.

There is also a basic logical contradiction at the heart of many such proposals. A service cannot know whether a user is a child or an adult without processing personal data that reveals age, date of birth, or an age credential. Prohibiting or restricting the processing of children’s data at younger ages, while simultaneously requiring services to identify children in order to exclude them, is incoherent.

It is also incompatible with the duty of data minimisation, and with Recital 38's protection of children from excessive data processing. Recital 57 further explains why this attempt to misuse data protection law is flawed: it would require more data to be collected, to ascertain age, than would otherwise be necessary to process.

“If the personal data processed by a controller do not permit the controller to identify a natural person, the data controller should not be obliged to acquire additional information in order to identify the data subject for the sole purpose of complying with any provision of this Regulation.”  


*Definition: What is an ISS? (an information society service)

The basic definition of an ISS in the GDPR is based on Article 1(1)(b) of Directive (EU) 2015/1535:

“any service normally provided for remuneration, at a distance, by electronic means and at the individual request of a recipient of services.

For the purposes of this definition:

(i) ‘at a distance’ means that the service is provided without the parties being simultaneously present;

(ii) ‘by electronic means’ means that the service is sent initially and received at its destination by means of electronic equipment for the processing (including digital compression) and storage of data, and entirely transmitted, conveyed and received by wire, by radio, by optical means or by other electromagnetic means;

(iii) ‘at the individual request of a recipient of services’ means that the service is provided through the transmission of data on individual request.”
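Read as a checklist, the definition is cumulative: all four limbs must be present. A minimal sketch (the function and parameter names are my own illustration):

```python
def is_information_society_service(normally_for_remuneration: bool,
                                   at_a_distance: bool,
                                   by_electronic_means: bool,
                                   at_individual_request: bool) -> bool:
    # All four limbs of the Directive (EU) 2015/1535 definition must hold
    return (normally_for_remuneration and at_a_distance
            and by_electronic_means and at_individual_request)

# An ad-funded social network typically meets all four limbs: "remuneration"
# does not have to come from the user, so "free" commercial services are in scope.
assert is_information_society_service(True, True, True, True)
```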

Safety not surveillance

The Youth Endowment Fund (YEF) was established in March 2019 by children’s charity Impetus, with a £200m endowment and a ten-year mandate from the Home Office.

The YEF has just published a report as part of a series about the prevalence of relationship violence among teenagers and what schools are doing to promote healthy relationships. A total of 10,387 children aged 13-17 participated in the survey. While it rightly points out its limitations of size and sampling, its key findings include:

“Of the 10,000 young people surveyed in our report 27% have been in a romantic relationship. 49% of those said they have experienced violent or controlling behaviours from their partner.”

Controlling behaviours are the most common, reported by 46% of those in relationships, and include behaviours such as having their partner check who they've been talking to on their phone or social media accounts (30%). They also include being afraid to disagree with their partner (27%), being afraid to break up with them (26%), and "feeling watched or monitored" (23%).

(Source ref. pages 7 and 21).
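To put those percentages into approximate absolute numbers (my own arithmetic on the published figures, rounded):

```python
surveyed = 10_387                            # children aged 13-17 in the YEF survey
in_relationship = round(surveyed * 0.27)     # 27% had been in a romantic relationship
experienced = round(in_relationship * 0.49)  # 49% of those reported violent or controlling behaviour

print(in_relationship)  # ~2,804 children
print(experienced)      # ~1,374 children, i.e. roughly 13% of the whole sample
```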

The report effectively outlines the extent of these problems but focuses on the 'what' rather than the 'why'. Discussing the underlying causes is also critical before making recommendations about what needs to be done. In the media, the coverage went on to suggest that schools should better teach children about relationships. But if you have the wrong reasons for why a complex social problem has come about, you may reach for the wrong solutions, addressing symptoms not causes.

Control Normalised in Surveillance

Most debate about teenagers online is about harm from content, contact, or conduct. And often the answer that comes is more monitoring of what children do online and who they speak to on their phone or social media accounts, and more control of their activity. But research suggests that these very solutions should be analysed as part of the problem.

An omission in the report, and in broader discussions about control and violence in relationships, is the normalisation of the routine use of behavioural controls by 'loved ones', imposed through apps and platforms, and perpetuated by parents, teachers, and children's peers.

The growing normalisation of controlling behaviours in relationships identified in the new report—framed as care or love, such as knowing where someone is, what they’re doing, and with whom—mirrors practices in parental and school surveillance tech, widely sold as safeguarding tools for a decade. These products often operate without consent, justified as being, “in the child’s best interests,” “because we care,” or “because I love you.”

Teacher training on consent and coercive control is unlikely to succeed if staff model contradictory behaviours. “Do as I say, not as I do” tackles the wrong end of the problem.

The ‘privacy’ vs ‘protection’ debate is often polarised. This YEF report should underscore their interdependence: without privacy, children are made more vulnerable, not safer.

The Psychological Costs of Surveillance

Dr. Tonya Rooney, an academic based in Australia, has extensively studied how technology shapes childhood. She argues that,

"the effects of near-constant surveillance in schools, public spaces, and now increasingly the home environment may have far-reaching consequences for children growing up under this watchful gaze." (Minut, 2019).

“Children become reactive agents, contributing to a cycle of suspicion and anxiety, robbing childhood of valuable opportunities to trust and be trusted.”

In the UK, while the mental health and behavioural impacts of surveillance on children, whether as the observer or the observed, remain under-researched, there is clear international and UK-based evidence that parental control apps, school "safeguarding" systems, and encryption workarounds that breach confidentiality are harming children's interests.

  • Constant monitoring creates a pervasive sense of scrutiny and undermines trust in a relationship. These apps and platforms not only undermine trusted relationships with authority today, whether families or teachers, but are detrimental to children developing trust in themselves and in others.
  • Child surveillance can have negative effects on mental health, creating a cycle of fear, anxiety and helplessness, dependent on someone else being in control to solve things for them.
  • Child surveillance has a chilling effect, not only through behavioural control of where you go, with whom, and doing what, but on thought and freedom of speech, and through fear of making mistakes with no space for errors to go unnoticed or unrecorded. People who are aware they are being monitored limit their self-expression and worry about what others think, which can be especially problematic for children in an educational setting, or in the pursuit of curiosity and self-discovery.

Research by the U.S.-based Center for Democracy and Technology (2022) highlights the disproportionate harm and discriminatory effects of monitoring pupils' activity. Black, Hispanic, and LGBTQ+ children report experiencing higher levels of harm.

"LGBTQ+ students are even experiencing non-consensual disclosure of sexual orientation and/or gender identity (i.e., 'outing'), due to student activity monitoring."

Children need safe spaces that are truly safe, which means trusted. The June 2024 Tipping the Balance report from the Australian eSafety Commissioner shows that LGBTIQ+ teens, for instance, rely on encrypted spaces to discuss deeply personal matters—45% of them shared private things they wouldn’t talk about face-to-face. And just over four in 10 LGBTIQ+ teens (42%) searched for mental health information at least once a week (compared with the national average of 20%).

Surveillance of Children Secures Future Markets

School "SafetyTech" practices normalise surveillance as if it were an inevitable part of life, undermining privacy as a fundamental right and as a principle to be expected and respected. Some companies even use this as a marketing feature, not a bug.

One company selling safeguarding tech to schools has framed its products as preparation for workplace device monitoring, teaching students "skills and expectations" for inevitable employment surveillance. In a 2020 EdTech UK presentation entitled 'Protecting student wellness with real time monitoring', Netsweeper representatives described their tools as what employers want, fostering productivity by ensuring students are "engaged, dialled in, and productive workers now and in the future."

Many of the leading companies sell in both the child and adult sectors. It therefore worries me a lot that the DUA Bill will, in effect, give these kinds of companies a 'get-out-of-jail-free card' for processing 'vulnerable' people's data under the blanket purpose of 'safeguarding': able to claim the lawful ground of legitimate interests without needing to do any risk assessment or balancing test of harms to people's rights.

Parental Control and Perception of Harms

Parents and children perceive these tools differently when it comes to the personal, on-mobile-device, commercial markets.

Work done in the U.S. by academics at the Stevens Institute of Technology found that while parents often praise these apps for enhancing safety (e.g., "I can monitor everything my son does"), their negative findings were largely technical failures, such as unstable systems that crashed. The research also found that teens experienced the failures as harms, primarily to trust and to the power dynamics in their relationships. Students in the study described parental control apps as a form of "parental stalking," and said that they "may negatively impact parent-teen relationships."

Research done in the UK also found that children have a more nuanced understanding of privacy invasion as a collective harm, because "parents' access to their messages would compromise their friends' privacy as well: they can eves drop on your convos and stuff that you dont want them to hear […] not only is it a violation of my privacy that i didnt permit, but it is of friends too that parents dont know about" (quoted as in original).

These researchers concluded that increasing evidence suggests such apps may be bringing with them new kinds of harms, associated with excessive restriction and privacy invasion.

A Call for Change

Academic evidence increasingly shows the harm caused by these apps within family relationships, and between schools and pupils, but research seems to be missing on the impact on children's emotional and cognitive development and, in turn, any effects in their own romantic relationships.

I believe surveillance tools undermine children's understanding of healthy relationships with each other. If some adults model controlling behaviours as 'love and caring' in their relationships, even inadvertently, it should come as no surprise that some young people replicate similar controlling attitudes in their own behaviour.

This is our responsibility to fix. Surveillance is not safety. If we take the emerging evidence seriously, a precautionary approach might suggest:

  • Parents and teachers must change their own behaviours to prioritise trust, respect, and autonomy, giving children agency and the ability to act, without tech-solutionist monitoring.
  • Regulatory action is urgently needed to address the use of surveillance technologies in schools and commercial markets.
  • Policy makers should be rigorous about who is making these markets, who is accountable for their actions, and what their health and safety, efficacy and error-rate standards are, since these tools are already rolled out at scale across the public sector.

The "best interests of the child", cherry-picked from part of Article 3 of the UN Convention on the Rights of the Child, seems to have become a lazy shorthand for all children's rights in discussion of the digital environment, with participation, privacy and provision rights trumped by protection. Freedoms seem forgotten. The Convention's preamble is worth a careful read in full if you have not done so for some time. And as set out in General comment No. 25 (2021):

“Any digital surveillance of children, together with any associated automated processing of personal data, should respect the child’s right to privacy and should not be conducted routinely, indiscriminately or without the child’s knowledge.”

If the DfE is "reviewing the content of RSHE and putting children's wellbeing at the heart of guidance for schools", it must also review the lack of safety and quality standards, the error rates, and the monitoring of outcomes and effects of KCSiE digital surveillance obligations for schools.

Children need both privacy and protection, not only for their safety, but to freely develop and flourish into adulthood.


References

Alelyani, T. et al. (2019) ‘Examining Parent Versus Child Reviews of Parental Control Apps on Google Play’, in, pp. 3–21. Available at: https://doi.org/10.1007/978-3-030-21905-5_1. (Accessed: 4 December 2024).

‘CDT Report – Hidden Harms: The Misleading Promise of Monitoring Students Online’ (2022) Center for Democracy and Technology, 3 August. Available at: https://cdt.org/insights/report-hidden-harms-the-misleading-promise-of-monitoring-students-online/ (Accessed: 4 December 2024).

‘The Chilling Effect of Student Monitoring: Disproportionate Impacts and Mental Health Risks’ (2022) Center for Democracy and Technology, 5 May. Available at: https://cdt.org/insights/the-chilling-effect-of-student-monitoring-disproportionate-impacts-and-mental-health-risks/ (Accessed: 4 December 2024).

Growing Up in the Age of Surveillance | Minut (2019). Available at: https://www.minut.com/blog/growing-up-in-the-age-of-surveillance (Accessed: 4 December 2024).

Malik, A.S., Acharya, S. and Humane, S. (2024) ‘Exploring the Impact of Security Technologies on Mental Health: A Comprehensive Review’, Cureus, 16(2), p. e53664. Available at: https://doi.org/10.7759/cureus.53664 (Accessed: 4 December 2024).

Privacy and Protection: A children’s rights approach to encryption (2023) CRIN and Defend Digital Me. Available at: https://home.crin.org/readlistenwatch/stories/privacy-and-protection (Accessed: 4 December 2024).

Teen privacy: Boyd, Danah and Marwick, Alice E., Social Privacy in Networked Publics: Teens’ Attitudes, Practices, and Strategies (September 22, 2011). A Decade in Internet Time: Symposium on the Dynamics of the Internet and Society, September 2011, Available at SSRN: https://ssrn.com/abstract=1925128

Wang, G., Zhao, J., Van Kleek, M., & Shadbolt, N. (2021). Protection or punishment? Relating the design space of parental control apps and perceptions about them to support parenting for online safety. Proceedings of the Conference on Computer Supported Cooperative Work Conference, 5(CSCW2). https://ora.ox.ac.uk/objects/uuid:da71019d-157c-47de-a310-7e0340599e22