
Safety not surveillance

The Youth Endowment Fund (YEF) was established in March 2019 by children’s charity Impetus, with a £200m endowment and a ten-year mandate from the Home Office.

The YEF has just published a report as part of a series about the prevalence of relationship violence among teenagers and what schools are doing to promote healthy relationships. A total of 10,387 children aged 13-17 participated in the survey. While it rightly points out its limitations of size and sampling, its key findings include:

“Of the 10,000 young people surveyed in our report 27% have been in a romantic relationship. 49% of those said they have experienced violent or controlling behaviours from their partner.”

Controlling behaviours are the most common, reported by 46% of those in relationships, and include behaviours such as having their partner check who they’ve been talking to on their phone or social media accounts (30%). They also include “being afraid to disagree with their partner” (27%), “being afraid to break up with them” (26%), and “feeling watched or monitored” (23%).

(Source ref. pages 7 and 21).

The report effectively outlines the extent of these problems and focuses on the ‘what’ rather than the ‘why.’ But discussing the underlying causes further is also critical before making recommendations about what needs to be done. Media coverage went on to suggest that schools should teach children better about relationships. But if you misunderstand why a complex social problem has come about, you may reach for the wrong solutions, addressing symptoms rather than causes.

Control Normalised in Surveillance

Most debate about teenagers online is about harm from content, contact, or conduct. And often the answer given is more monitoring of what children do online, who they speak to on their phone or social media accounts, and more control of their activity. But research suggests that these very solutions should be analysed as part of the problem.

An omission in the report—and in broader discussions about control and violence in relationships—is the normalisation of the routine use of behavioural controls by ‘loved ones’, imposed through apps and platforms, perpetuated by parents, teachers, and children’s peers.

The growing normalisation of controlling behaviours in relationships identified in the new report—framed as care or love, such as knowing where someone is, what they’re doing, and with whom—mirrors practices in parental and school surveillance tech, widely sold as safeguarding tools for a decade. These products often operate without consent, justified as being, “in the child’s best interests,” “because we care,” or “because I love you.”

Teacher training on consent and coercive control is unlikely to succeed if staff model contradictory behaviours. “Do as I say, not as I do” tackles the wrong end of the problem.

The ‘privacy’ vs ‘protection’ debate is often polarised. This YEF report should underscore their interdependence: without privacy, children are made more vulnerable, not safer.

The Psychological Costs of Surveillance

Dr. Tonya Rooney, an academic based in Australia, has extensively studied how technology shapes childhood. She argues that,

“the effects of near-constant surveillance in schools, public spaces, and now increasingly the home environment may have far-reaching consequences for children growing up under this watchful gaze.” (Minut, 2019).

“Children become reactive agents, contributing to a cycle of suspicion and anxiety, robbing childhood of valuable opportunities to trust and be trusted.”

In the UK, while the mental health and behavioural impacts of surveillance on children—whether as the observer or the observed—remain under-researched, there is clear international and UK-based evidence that parental control apps, school “safeguarding” systems, and encryption workarounds that breach confidentiality are harming children’s interests.

  • Constant monitoring creates a pervasive sense of scrutiny and undermines trust in a relationship. These apps and platforms not only undermine children’s trusted relationships with those in authority today, whether families or teachers, but are detrimental to children developing trust in themselves and in others.
  • Child surveillance can harm mental health by creating a cycle of fear, anxiety, and helplessness, in which the child depends on someone else being in control to solve things for them.
  • Child surveillance has a chilling effect, not only through behavioural control of where children go, with whom, and doing what, but on thought and freedom of speech, and through fear of making mistakes when no error goes unnoticed or unrecorded. People who know they are being monitored limit their self-expression and worry about what others think, which can be especially problematic for children in an educational setting, or in the pursuit of curiosity and self-discovery.

Research by the U.S.-based Center for Democracy and Technology (2022) highlights the disproportionate harm and discriminatory effects of pupil activity monitoring. Black, Hispanic, and LGBTQ+ children report experiencing higher levels of harm.

“LGBTQ+ students are even experiencing non-consensual disclosure of sexual orientation and/or gender identity (i.e., ‘outing’), due to student activity monitoring.”

Children need safe spaces that are truly safe, which means trusted. The June 2024 Tipping the Balance report from the Australian eSafety Commissioner shows that LGBTIQ+ teens, for instance, rely on encrypted spaces to discuss deeply personal matters—45% of them shared private things they wouldn’t talk about face-to-face. And just over four in 10 LGBTIQ+ teens (42%) searched for mental health information at least once a week (compared with the national average of 20%).

Surveillance of Children Secures Future Markets

School “SafetyTech” practices normalise surveillance as if it were an inevitable part of life, undermining privacy as a fundamental right and a principle to be expected and respected. Some companies even use this as a marketing feature, not a bug.

One company selling safeguarding tech to schools has framed its products as preparation for workplace device monitoring, teaching students “skills and expectations” for inevitable employment surveillance. In a 2020 EdTech UK presentation entitled ‘Protecting student wellness with real time monitoring’, Netsweeper representatives described their tools as what employers want, fostering productivity by ensuring students are “engaged, dialled in, and productive workers now and in the future.”

Many of the leading companies sell in both the child and adult sectors. It therefore worries me a lot that the DUA Bill will in effect give these kinds of companies a ‘get-out-of-jail-free card’ for processing ‘vulnerable’ people’s data under the blanket purpose of ‘safeguarding’: they will be able to claim the lawful ground of legitimate interests without needing to do any risk assessment or balancing test of the harms to people’s rights.

Parental Control and Perception of Harms

Parents and children perceive these tools differently when it comes to the personal, on-mobile-device, commercial markets.

Work done in the U.S. by academics at the Stevens Institute of Technology found that while parents often praise these apps for enhancing safety (e.g., “I can monitor everything my son does”), parents’ negative reviews were largely about technical failures, such as unstable systems that crashed. Their research also found that teens experienced failures as harms, primarily to trust and to the power dynamics in their relationships. Students in the study described parental control apps as a form of “parental stalking,” and said that they “may negatively impact parent-teen relationships.”

Research done in the UK also found children’s more nuanced understanding of privacy as a collective harm, because “parents’ access to their messages would compromise their friends’ privacy as well: they can eves drop on your convos and stuff that you dont want them to hear […] not only is it a violation of my privacy that i didnt permit, but it is of friends too that parents dont know about” (quoted as in the original).

These researchers concluded that increasing evidence suggests such apps may be bringing with them new kinds of harms, associated with excessive restrictions and privacy invasion.

A Call for Change

Academic evidence increasingly shows the harm caused by these apps in intra-familial relationships, and between schools and pupils, but research seems to be missing on the impact on children’s emotional and cognitive development and, in turn, any effects on their own romantic relationships.

I believe surveillance tools undermine children’s understanding of healthy relationships with each other. If some adults model controlling behaviours as ‘love and caring’ in their own relationships, even inadvertently, it should come as no surprise that some young people replicate similar controlling attitudes in their own behaviour.

This is our responsibility to fix. Surveillance is not safety. If we take the emerging evidence seriously, a precautionary approach might suggest:

  • Parents and teachers must change their own behaviours to prioritise trust, respect, and autonomy, giving children agency and the ability to act, without tech-solutionist monitoring.
  • Regulatory action is urgently needed to address the use of surveillance technologies in schools and commercial markets.
  • Policy makers should be rigorous in scrutinising who is making these markets, who is accountable for their actions, and what standards apply for health and safety, efficacy, and error rates, since these tools are already rolled out at scale across the public sector.

The “best interests of the child”, cherry-picked from part of Article 3 of the UN Convention on the Rights of the Child, seems to have become a lazy shorthand for all children’s rights in discussions of the digital environment, with participation, privacy and provision rights trumped by protection. Freedoms seem forgotten. The Convention’s preamble is worth a careful read in full if you have not done so for some time. And as set out in General comment No. 25 (2021):

“Any digital surveillance of children, together with any associated automated processing of personal data, should respect the child’s right to privacy and should not be conducted routinely, indiscriminately or without the child’s knowledge.”

If the DfE is “reviewing the content of RSHE and putting children’s wellbeing at the heart of guidance for schools”, it must also review the lack of safety and quality standards, the error rates, and the monitoring of outcomes of the KCSiE digital surveillance obligations for schools.

Children need both privacy and protection — not only for their safety, but to freely develop and flourish into adulthood.


References

Alelyani, T. et al. (2019) ‘Examining Parent Versus Child Reviews of Parental Control Apps on Google Play’, pp. 3–21. Available at: https://doi.org/10.1007/978-3-030-21905-5_1 (Accessed: 4 December 2024).

‘CDT Report – Hidden Harms: The Misleading Promise of Monitoring Students Online’ (2022) Center for Democracy and Technology, 3 August. Available at: https://cdt.org/insights/report-hidden-harms-the-misleading-promise-of-monitoring-students-online/ (Accessed: 4 December 2024).

‘The Chilling Effect of Student Monitoring: Disproportionate Impacts and Mental Health Risks’ (2022) Center for Democracy and Technology, 5 May. Available at: https://cdt.org/insights/the-chilling-effect-of-student-monitoring-disproportionate-impacts-and-mental-health-risks/ (Accessed: 4 December 2024).

Growing Up in the Age of Surveillance | Minut (2019). Available at: https://www.minut.com/blog/growing-up-in-the-age-of-surveillance (Accessed: 4 December 2024).

Malik, A.S., Acharya, S. and Humane, S. (2024) ‘Exploring the Impact of Security Technologies on Mental Health: A Comprehensive Review’, Cureus, 16(2), p. e53664. Available at: https://doi.org/10.7759/cureus.53664 (Accessed: 4 December 2024).

Privacy and Protection: A children’s rights approach to encryption (2023) CRIN and Defend Digital Me. Available at: https://home.crin.org/readlistenwatch/stories/privacy-and-protection (Accessed: 4 December 2024).

Boyd, D. and Marwick, A.E. (2011) ‘Social Privacy in Networked Publics: Teens’ Attitudes, Practices, and Strategies’, A Decade in Internet Time: Symposium on the Dynamics of the Internet and Society, September 2011. Available at SSRN: https://ssrn.com/abstract=1925128

Wang, G., Zhao, J., Van Kleek, M. and Shadbolt, N. (2021) ‘Protection or punishment? Relating the design space of parental control apps and perceptions about them to support parenting for online safety’, Proceedings of the Conference on Computer Supported Cooperative Work, 5(CSCW2). Available at: https://ora.ox.ac.uk/objects/uuid:da71019d-157c-47de-a310-7e0340599e22

Views on a National AI strategy

Today was the APPG AI Evidence Meeting – The National AI Strategy: How should it look? Here are some of my personal views and takeaways.

Do the regulators have the skills and competency to hold organisations to account for what they are doing? asked Roger Taylor, the former Chair of Ofqual, the exams regulator, as he began the panel discussion, chaired by Lord Clement-Jones.

A good question was followed by another.

What are we trying to do with AI? asked Andrew Strait, Associate Director of Research Partnerships at the Ada Lovelace Institute and formerly of DeepMind and Google. The goal of a strategy should not be to have more AI for the sake of having more AI, he said, but an articulation of values and goals. (I’d suggest the government may in fact be in favour of exactly that: more AI for its own sake, wherever its application is seen as a growth market.) Interestingly, he suggested that the Scottish strategy has a more values-based model, built around ideas such as fairness. [I had, it seems, wrongly assumed that a *national* AI strategy to come would include all of the UK.]

The arguments on fairness are well worn in AI discussions and getting old. And yet they still too often fail to ask whether these tools are accurate or even work at all. Look at the education sector and one company’s product, ClassCharts, which claimed AI as its USP for years, until the ICO found in 2020 that the company didn’t actually use any AI at all. If company claims are not honest, or not accurate, then they’re not fair to anyone, never mind fair across everyone.

Fairness is still too often thought of in terms of explainability of a computer algorithm, not the entire process it operates in. As I wrote back in 2019, “yes we need fairness accountability and transparency. But we need those human qualities to reach across thinking beyond computer code. We need to restore humanity to automated systems and it has to be re-instated across whole processes.”

Strait went on to say that safe and effective AI would be something people can trust. And he asked the important question: who gets to define what a harm is? He rightly recognised that the harm identified by the developer of a tool may be very different from the harms experienced by the people affected by it. (No one on the panel attempted to define or limit what AI is, in these discussions.) He suggested that the carbon footprint from AI may counteract the benefit of applying AI in the pursuit of climate-change goals. “The world we want to create with AI” was a very interesting position, and I’d have liked to hear him address what he meant by that, who “we” is, and the assumptions within it.

Lord Clement-Jones asked him about some of the work that Ada Lovelace had done on harms such as facial recognition, and also asked whether some sector technologies are so high risk that they must be regulated. Strait suggested that we lack adequate understanding of what harms are — I’d suggest academia and civil society have done plenty of work on identifying them; they have just too often been ignored until after the harm is done and there are legal challenges. Strait also suggested he thought the Online Harms agenda was ‘a fantastic example’ of both horizontal and vertical regulation. [Hmm, let’s see. Many people would contest that, and we’ll see what the Queen’s Speech brings.]

Maria Axente then went on to talk about children and AI. Her focus was on big platforms, but she also mentioned a range of other application areas, and spoke of the data governance work going on at UNICEF. She covered the need to drive awareness of the risks of AI for children, and digital literacy; the potential limitations on child development, the exacerbation of the digital divide, and risks in public spaces; but also hoped-for opportunities. She suggested that the AI strategy may therefore be the place for including children.

This of course was something I would want to discuss at more length, but in summary, the last decade of Westminster policy affecting children, even the Children’s Commissioner’s most recent Big Ask survey, bypasses the question of children’s *rights* completely. If the national AI strategy were, by contrast, to address rights [the foundation upon which data laws are built], and to create the mechanisms in public sector interactions with children that would enable them to be told if and how their data is being used (in AI systems or otherwise), and to exercise the choices that public engagement says, time and time again, people want, then that would be a *huge* and positive step forward for effective data practice across the public sector and for the use of AI. Otherwise I see a risk that a strategy on AI and children will ignore children as rights holders across the full range of rights in the digital environment, focus only on the role of AI in child protection, a key DCMS export aim, and ignore the invasive nature of safety tech tools and their harms.

Next, Dr Jim Weatherall from AstraZeneca tied together leveraging “the UK unique strengths of the NHS” and “data collected there”, wanting a close knitting together of the national AI strategy and the national data strategy, so that the healthcare, life sciences and biomedical sector can become “an international renowned asset.” He’d like to see students doing data science modules in their studies, and international access to talent to work for AZ.

Lord Clement-Jones then asked him how to engender public trust in data use. Weatherall said a number of false starts in the past are hindering progress, but that he saw the way forward in data trusts and citizen juries.

His answer ignores the most obvious solution: respect existing law and human rights, using data only in ways that people want and have given their permission for. Then show them that you did that, and nothing more. In short, what medConfidential first proposed in 2014: the creation of data usage reports.

The infrastructure for managing personal data controls in the public sector, as well as among its private partners, must be the basic building block for any national AI strategy. Views from public engagement work, polls, and outreach have not changed significantly since those done in 2013-14, but ask for the same things over and over again: respect for ‘red lines’, and to have control and choice. Won’t government please make it happen?
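
To make that concrete, here is a minimal sketch of what one entry in such a data usage report might contain. The field names and values are my own illustrative assumptions, not medConfidential’s specification or any official schema.

```python
from dataclasses import dataclass
from datetime import date

@dataclass
class DataUsageRecord:
    """One entry in a hypothetical per-person data usage report.
    Field names are illustrative assumptions, not an official schema."""
    dataset: str             # the collection the person's data sits in
    recipient: str           # organisation the data was disclosed to
    purpose: str             # stated purpose of the use
    lawful_basis: str        # legal basis relied upon
    date_of_use: date        # when the use or disclosure happened
    choices_respected: bool  # whether the person's stated opt-outs were honoured

# The report shown back to a person is then simply the list of such records:
# what was done with their data, by whom, and why, and nothing more.
example_report = [
    DataUsageRecord(
        dataset="hospital episode records",
        recipient="university research team",
        purpose="approved public health research",
        lawful_basis="public task",
        date_of_use=date(2021, 3, 1),
        choices_respected=True,
    ),
]
```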

If the government fails to put in place those foundations, whatever strategy it builds will fall in the same ways previous ones have to date, as care.data did by assuming it was acceptable to use data in the way that the government wanted, without a social licence, in the name of “innovation”. Those aims were championed by companies such as Dr Foster, which profited from reusing personal data from the public sector in a “hole and corner deal”, as described by the chairman of the House of Commons committee of public accounts in 2006. Such deals put industry and “innovation” ahead of what the public want in terms of ‘red lines’ for acceptable re-uses of their own personal data, and for data re-used in the public interest versus for commercial profit. And “The Department of Health failed in its duty to be open to parliament and the taxpayer.” That openness and accountability are still missing nearly ten years on, in the scope creep of national datasets and commercial reuse, and in expanding data policies and research programmes.

I disagree with the suggestion made that Data Trusts will somehow be more empowering for everyone than the mechanisms we have today for data management. I believe Data Trusts will further stratify who is included and who is excluded, benefit those who have the capacity to participate, and disadvantage those who cannot choose. They are also a fig leaf of acceptability that does not solve the core challenge. Citizen juries cannot do more than give a straw poll. Every person whose data is used has rights in law, and the views of a jury or Trust cannot speak for everyone or override those rights protected in law.

Tabitha Goldstaub spoke next and outlined some of what AI Council Roadmap had published. She suggested looking at removing barriers to best support the AI start-up community.

As I wrote when the roadmap report was published, there are basics missing in government’s own practice that could be solved. It had an ambition to, “Lead the development of data governance options and its uses. The UK should lead in developing appropriate standards to frame the future governance of data,” but the Roadmap largely ignored the governance infrastructures that already exist. One can only read into that a desire to change and redesign what those standards are.

I believe that there should be no need to change the governance of data, but instead to make today’s rights exercisable and to deliver enforcement that makes existing governance actionable. Any genuine “barriers” to data use in data protection law are designed as protections for people; the people whom the public sector, its staff and these arm’s length bodies are supposed to serve.

Blaming AI and algorithms, blaming lack of clarity in the law, blaming “barriers” is often avoidance of one thing. Human accountability. Accountability for ignorance of the law or lack of consistent application. Accountability for bad policy, bad data and bad applications of tools is a human responsibility. Systems you choose to apply to human lives affect people, sometimes forever and in the most harmful ways, so those human decisions must be accountable.

I believe that some simple changes in practice when it comes to public administrative data could bring huge steps forward there (a rough sketch of what one published register entry might contain follows the list):

  1. An audit of existing public admin data held by national and local government, and consistent published registers of the databases and algorithms / AI / ML currently in use.
  2. Identify the lawful basis for each set of data processes, their earliest record dates and content.
  3. Publish the resulting ROPA (record of processing activities) and storage limitations.
  4. Assign accountable owners to databases, tools and the registers.
  5. Sort out how you will communicate with people whose data you process unlawfully, in order to meet the law, or stop processing it.
  6. And above all, publish a timeline for data quality processes and show that you understand how the degradation of data accuracy and quality affects the rights and responsibilities in law that change over time as a result.
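
As a purely illustrative sketch, with entirely hypothetical field names and values rather than any official register format, one published entry covering points 1 to 4 above might look like this:

```python
import json

# A minimal, hypothetical register entry covering points 1-4 above: what is
# held, on what lawful basis, since when, with what storage limit, and who is
# accountable. Field names and values are illustrative only.
register = [
    {
        "database": "pupil attendance records",
        "held_by": "local authority education department",
        "lawful_basis": "public task (statutory education duty)",
        "earliest_record": "2010-09-01",
        "retention_limit_years": 6,
        "automated_tools_in_use": ["attendance risk scoring (machine learning)"],
        "accountable_owner": "named senior information risk owner",
    },
]

# Publishing the register is then just a matter of making this record public
# in a machine-readable form, so anyone can see what is held, by whom, and why.
print(json.dumps(register, indent=2))
```
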

Goldstaub went on to say, on ethics and inclusion, that if it’s not diverse, it’s not ethical. Perhaps the next panel and similar events could take a lesson from that, as such APPG panel events are not as diverse as they could or should be themselves. Some of the biggest harms in the use of AI are, after all, felt by those in the communities least represented, and panels like this tend to ignore lived reality.

The Rt Rev Croft then wrapped up the introductory talks on that more human note, and by exploding some myths. He importantly talked about the consequences he expects of the increasing use of AI, its deployment in ‘the future of work’ for example, and its effects on our humanity. He proposed five topics for inclusion in the strategy and suggested it is essential to engage a wide cross-section of society. And most importantly, to ask: what is this doing to us as people?

There were then some of the usual audience questions asked on AI, transparency, garbage-in garbage-out, challenges of high risk assessment, and agreements or opposition to the EU AI regulation.

What frustrates me most in these discussions is that the technology is an assumed given, and the bias that this gives to the discussion is itself ignored. A holistic national AI strategy should be asking whether AI at all, and why. What are the consequences of this focus on AI, and what policy-making oxygen and capacity does it take away from other areas of what government could or should be doing? The questioner who asks how adaptive learning could use AI for better learning in education fails to ask what good learning looks like, and if and how adaptive tools, analogue or digital, fit into that at all.

I would have liked to ask panelists whether they agree that proposals for public engagement and digital literacy distract from the lack of human accountability for bad policy decisions that use machine-made support. Taking examples from 2020 alone, there were three applications of algorithms and data in the public sector challenged by civil society because of their harms: the Home Office dropping its racist visa algorithm, the court case finding the DWP ‘irrational and unlawful’ in Universal Credit decisions, and the “mutant algorithm” of the summer 2020 exams. Digital literacy does nothing to help people in those situations. What AI has done is to increase the speed and scale of the harms caused by harmful policy, such as the ‘Hostile Environment’, which is harmful by design.

Any Roadmap, AI Council recommendations, and any national strategy, if serious about what good looks like, must answer how those harms would be prevented in the public sector *before* such tools are applied. It’s not about the tech, AI or not, but about misuse of power. If the strategy or a Roadmap or ethics code fails to state how it would prevent such harms, then it isn’t serious about ethics in AI, but is ethics-washing its aims under the guise of saying the right thing.

One unspoken problem right now is the focus on the strategy solely for the delivery of a pre-determined tool (AI). Who cares what the tool is? Public sector data comes from the relationship between people and the provision of public services by government at various levels, and its AI strategy seems to have lost sight of that.

What good would look like in five years is the end of siloed AI discussion that treats it as a desirable silver bullet delivering mythical numbers of ‘economic growth’; instead, AI would be treated as any other tech is, and its role in end-to-end processes or service delivery would be discussed proportionately. Panelists would stop suggesting that the GDPR is hard to understand or that people cannot apply it. Almost all of the same principles in UK data laws have applied for over twenty years. And regardless of the GDPR, Convention 108 applies to the UK post-Brexit unchanged, including the associated Council of Europe Guidelines on AI, data protection, privacy and profiling.

Data laws. AI regulation. Profiling. Codes of Practice on children, online safety, or biometrics and emotional or gait recognition. There *are* gaps in data protection law when it comes to biometric data not used for unique identification purposes. But much of this is already rolled into other law and regulation for the purposes of upholding human rights and the rule of law. The challenge in the UK is often not having the law, but its lack of enforcement. There are concerns in civil society that the DCMS is seeking to weaken core ICO duties even further. Recent government, council and think tank roadmaps talk of the UK leading on new data governance, but in reality simply want to see established laws rewritten to be less favourable to rights. To be less favourable towards people.

Data laws are *human* rights-based laws. We will never get a workable UK national data strategy or national AI strategy if government continues to ignore the very fabric of what they are to be built on. Policy failures will be repeated over and over until a strategy supports people to exercise their rights and have them respected.

Imagine if the next APPG on AI asked what human rights-respecting practice and policy would look like, and what infrastructure the government would need to fund or build to make it happen. In public-private sector areas (like edTech). Or in the justice system, health, welfare, children’s social care. What could that Roadmap look like, and how could we make it happen, over what timeframe? Strategies that could win public trust *and* get the sectoral wins the government and industry are looking for. Then we might actually move forwards on getting a functional strategy that would work for delivering public services, and on where both AI and data fit into that.

Damage that may last a generation.

Hosted by the Mental Health Foundation, it’s Mental Health Awareness Week until 24th May, 2020. The theme for 2020 is ‘kindness’.

So let’s not comment on the former Education Ministers and MPs, the great-and-the-good and the recently-resigned, involved in the Mail’s continued hatchet job on teachers. They probably believe that they are standing up for vulnerable children when they talk about the “damage that may last a generation”. Yet the evidence of much of their voting, and policy design to date, suggests it’s much more about getting people back to work.

Of course there are massive implications for children in families unable to work or living with the stress of financial insecurity on top of limited home schooling. But policy makers should be honest about the return to school as an economic lever, not use children’s vulnerability to pressure professionals to return to full-school early, or make up statistics to up the stakes.

The rush to get back to full-school for the youngest of primary age pupils has been met with understandable resistance, and too few practical facts. Going back to school under COVID-19 measures will, for very young children, take tonnes of adjustment: to the virus, to seeing friends they cannot properly play with, to grief and stress.

When it comes to COVID-19 risk, many countries with similar population density to the UK locked down earlier and tighter, and now have lower rates of community transmission than we do. Or compare somewhere that didn’t, Sweden, which has a population density of 24 people per square kilometre; the United Kingdom’s is 274 people per square kilometre. In Italy, with 201 inhabitants per square kilometre, you needed a permission slip to leave home.

And that’s leaving aside the unknowns on COVID-19 immunity, or identifying it, or the lack of a testing offer for over a million children under five, the very group expected to return first to full-school.

Children have rights to education, and to life, survival and development. But the blanket target groups and target date don’t appear to take the best interests of the child, for each child, into account at all. ‘Won’t someone think of the children?’ may never have been more apt.

Parenting while poor is highly political

What’s the messaging in the debate, even leaving media extremes aside?

The sweeping assumption by many commentators that ‘the poorest children will have learned nothing’ (BBC Newsnight, May 19) is unfair. But its blind acceptance as fact, a politicisation of parenting while poor conflated with poor parenting, enables the claimed concern for these children’s vulnerability to pass without question.

Many of these most vulnerable children were not receiving full-time education *before* the pandemic, but look at how the story is told.

It would be more honest, in discussion or in publishing ‘statistics’ about the gap expected to grow while children are out of school, to consider what the ‘excess’ gap will be and why (just as we measure excess deaths, not only the deaths of people who had been tested for COVID-19). Thousands of vulnerable children were out of school already, due to budget decisions that had left local authorities unable to fulfil their legal obligation to provide education.

Pupil Referral Units were labelled “a scandal” in 2012, and only last year the constant “gangs at the gates” narrative was highly political.

“The St Giles Trust research provided more soundbites. Pupils involved in ‘county lines’ are in pupil referral units (PRUs), often doing only an hour each day, and rarely returning into mainstream education.” (Steve Howell, Schools Week)

Nearly ten years on, there is still lack of adequate support for children in Alternative Provision and a destructive narrative of “us versus them”.


The value of being in school

Schools have remained open for children of key workers and for more than half a million pupils labelled as ‘vulnerable’, which includes those classified as “children in need” as well as 270,000 children with an education, health and care (EHC) plan for special educational needs. Not all of those are ‘at risk’ of domestic violence, abuse or neglect. The reasons for the low turnout tend to be conflated.

Assumptions abound about the importance of formal education, and about whether school is the best place for very young children in the Early Years (age 2-5) to be at all, despite UK evidence that is conflicting and thin on the ground. Research for the NFER [the same organisation running the upcoming Baseline Test of four-year-olds, still due to begin this year] (Sharp, 2002) found:

“there would appear to be no compelling educational rationale for a statutory school age of five or for the practice of admitting four-year-olds to school reception classes.” And “a late start appears to have no adverse effect on children’s progress.”

Later research from the IoE, Research Report No. DCSF-RR061 (Sylva et al., 2008), commissioned before the then ‘new’ UK Government took office in 2010, suggested better outcomes for children in excellent Early Years provision, but also pointed out that the most vulnerable are more often not in the best provision.

“quality appears to be especially important for disadvantaged groups.”

What will provision quality be like, under Coronavirus measures? How much stress-free space and time for learning will be left at all?

The questions we should be asking are: a) what has been learned for a second wave? And b) assuming nothing changes by May 2021, what would ideal schooling look like, and how do we get there?

Attainment is not the only gap

While in England it is not compulsory to be in any form of education, including home education, until your fifth birthday, most children start school at age four and turn five in the course of the year. It is one of the youngest starts in Europe. Many hundreds of thousands of children in the UK start formal education even younger, from age two or three. Yet is it truly better for children? We are way down the PISA attainment scores and comparable regional measures. There has been little change in those outcomes in 13 years, except to find that our children are measured as being progressively less happy.

“As Education Datalab points out, the PISA 2018 cohort started school around 2008, so their period at school not only lines up with the age of austerity and government cuts, but with the “significant reforms” to GCSEs introduced by Michael Gove while he was Education Secretary.”  [source: Schools Week, 2019]

There’s no doubt that some of the harmful economic effects of Brexit will be attributed to the effects of the pandemic. Similarly, many of the outcomes of ten years of policy that have increased children’s vulnerability and the attainment gap, pre-COVID-19, will no doubt be conflated with harms from this crisis in the next few years.

The risk of accepting this misattribution of the gap in outcomes is a willingness to adopt misguided solutions, and to deny accountability.

Children’s vulnerability

Many experts in children’s needs have been in their jobs much longer than most MPs, and have told them for years about the harm their policies are doing to the very children those voices now claim to want to protect. Will the MPs look at that evidence and act on it?

More than a third of babies are living below the poverty line in the UK. The common thread in many [UK] families’ lives, as Helen Barnard, deputy director for policy and partnerships at the Joseph Rowntree Foundation, described in 2019, is “a rising tide of work poverty sweeping across the country.” Now the Coronavirus is hitting those families harder too. The ONS found that in England the death rate in the most deprived areas is 118% higher than in the least deprived.

Charities speaking out this week said that in the decade since 2010, local authority spending on early intervention services dropped by 46%, while spending on late intervention rose from 58% to 78% of spending on children and young people’s services over the same period.

If those advocating for a return to school, for a month before the summer, really want to reduce children’s vulnerability, they might sort out CAMHS to support the return to school, and address those areas in which government must first do no harm. Fix the things that increase the “damage that may last a generation”.


Case studies in damage that may last

Adoption and Children (Coronavirus) (Amendment) Regulations 2020

Source: Children’s Commissioner (April 2020)

“These regulations make significant temporary changes to the protections given in law to some of the most vulnerable children in the country – those living in care.” “I would like to see all the regulations revoked, as I do not believe that there is sufficient justification to introduce them. This crisis must not remove protections from extremely vulnerable children, particularly as they are even more vulnerable at this time. As an urgent priority it is essential that the most concerning changes detailed above are reversed.”

CAMHS: Mental health support

Source: Local Government Association CAMHS Facts and Figures

“Specialist services are turning away one in four of the children referred to them by their GPs or teachers for treatment. More than 338,000 children were referred to CAMHS in 2017, but less than a third received treatment within the year. Around 75 per cent of young people experiencing a mental health problem are forced to wait so long their condition gets worse or are unable to access any treatment at all.”

“Only 6.7 per cent of mental health spending goes to children and adolescent mental health services (CAMHS). Government funding for the Early Intervention Grant has been cut by almost £500 million since 2013. It is projected to drop by a further £183 million by 2020.”

“Public health funding, which funds school nurses and public mental health services, has been reduced by £600 million from 2015/16 to 2019/20.”

Child benefit two-child limit

Source: May 5, Child Poverty Action Group
“You could not design a policy better to increase child poverty than this one.” source: HC51 House of Commons Work and Pensions Committee
The two-child limit Third Report of Session 2019 (PDF, 1 MB)

“Around sixty thousand families forced to claim universal credit since mid-March because of COVID-19 will discover that they will not get the support their family needs because of the controversial ‘two-child policy’.”

Housing benefit

Source: the Poverty and Social Exclusion in the United Kingdom research project funded by the Economic and Social Research Council.

“The cuts [introduced from 2010 to the 2012 budget] in housing benefit will adversely affect some of the most disadvantaged groups in society and are likely to lead to an increase in homelessness, warns the homeless charity Crisis.”

Legal Aid for all children

Source: The Children’s Society, Cut Off From Justice, 2017

“The enactment of the Legal Aid, Punishment and Sentencing of Offenders Act 2012 (LASPO) has had widespread consequences for the provision of legal aid in the UK. One key feature of the new scheme, of particular importance to The Children’s Society, were the changes made to the eligibility criteria around legal aid for immigration cases. These changes saw unaccompanied and separated children removed from scope for legal aid unless their claim is for asylum, or if they have been identified as victims of child trafficking.”

“To fulfill its obligations under the UNCRC, the Government should reinstate legal aid for all unaccompanied and separated migrant children in matters of immigration by bringing it back within ‘scope’ under the Legal Aid, Sentencing and Punishment of Offenders Act 2012. Separated and unaccompanied children are super-vulnerable.”

Library services

Source: CIPFA’s annual library survey 2018

“the number of public libraries and paid staff fall every year since 2010, with spending reduced by 12% in Britain in the last four years.” “We can view libraries as a bit of a canary in the coal mine for what is happening across the local government sector…” “There really needs to be some honest conversations about the direction of travel of our councils and what their role is, as the funding gap will continue to exacerbate these issues.”

No recourse to public funds: FSM and more

source: NRPF Network
“No recourse to public funds (NRPF) is a condition imposed on someone due to their immigration status. Section 115 Immigration and Asylum Act 1999 states that a person will have ‘no recourse to public funds’ if they are ‘subject to immigration control’.”

“children only get the opportunity to apply for free school meals if their parents already receive certain benefits. This means that families who cannot access these benefits– because they have what is known as “no recourse to public funds” as a part of their immigration status– are left out from free school meal provision in England.”

Sure Start

Source: Institute for Fiscal Studies (2019)

“the reduction in hospitalisations at ages 5–11 saves the NHS approximately £5 million, about 0.4% of average annual spending on Sure Start. But the types of hospitalisations avoided – especially those for injuries – also have big lifetime costs both for the individual and the public purse”.

Youth Services

Source: Barnardo’s (2019) New research draws link between youth service cuts and rising knife crime.

“Figures obtained by the All-Party Parliamentary Group (APPG) on Knife Crime show the average council has cut real-terms spending on youth services by 40% over the past three years. Some local authorities have reduced their spending – which funds services such as youth clubs and youth workers – by 91%.”

Barnardo’s Chief Executive Javed Khan said:

“These figures are alarming but sadly unsurprising. Taking away youth workers and safe spaces in the community contributes to a ‘poverty of hope’ among young people who see little or no chance of a positive future.”

Thoughts from the YEIP Event: Preventing trust.

Here are some thoughts about the Prevent programme, after the half day I spent at the event this week, Youth Empowerment and Addressing Violent Youth Radicalisation in Europe.

It was hosted by the Youth Empowerment and Innovation Project at the University of East London, to mark the launch of the European study on violent youth radicalisation from YEIP.

Firstly, I appreciated the dynamic and interesting youth panel: young people, themselves involved in youth work, or early-career researchers on a range of topics. Panelists shared their thoughts on:

  • Removal of gang databases and systemic racial targeting
  • Questions over online content takedown with the general assumption that “someone’s got to do it.”
  • The purposes of Religious Education, and lack of religious understanding as a cause of prejudice, discrimination, and fear.

From these connections comes trust.

Next, Simon Chambers, from the British Council, UK National Youth Agency, and Erasmus UK, talked about the programme of Erasmus Plus, under the striking sub theme, from these connections comes trust.

  • 42% of the world’s population are under 25
  • Young people understand that there are wider, underlying complex factors in this area and are disproportionately affected by conflict, economic change and environmental disaster.
  • Many young people struggle to access education and decent work.
  • Young people everywhere can feel unheard and excluded from decision-making — their experience leads to disaffection and grievance, and sometimes to conflict.

We then heard a senior Home Office presenter speak about Radicalisation: the threat, drivers and Prevent programme.

On Contest 2018 Prevent / Pursue / Protect and Prepare

What was perhaps most surprising was his statement that the programme believes there is no checklist [though in reality there are checklists], no single profile, and no conveyor belt towards radicalisation.

“This shouldn’t be seen as some sort of predictive model,” he said. “It is not accurate to say that somehow we can predict who is going to become a terrorist, because they’ve got poor education levels, or because [they] necessarily have a deprived background.”

But he then went on to again highlight the list of identified vulnerabilities in Thomas Mair‘s life, which suggests that these characteristics are indeed seen as indicators.

When I look at the ‘safeguarding-in-school’ software that is using vulnerabilities as signals for exactly that kind of prediction of intent, the gap between theory and practice here is deeply problematic.

One slide covered Internet content takedowns, and suggested 300,000 pieces of illegal terrorist material have been removed since February 2010. He later suggested that this number counts contacts with the CTIRU, rather than content removals of a defined form (for example, it isn’t clear whether a ‘piece’ is a picture, a page, or a whole site). This remains somewhat unclear, and important questions stay open, given the focus on takedowns in the online harms policy and discussion.

The big gap that was not discussed, and that I believe matters, is how much autonomy teachers have, for example, to make a referral. He suggested “some teachers may feel confident” to do what is needed on their own, but others “may need help” and therefore make a referral. Statistics on those decision processes are missing, and I believe it is very likely that over-referral is in part a result of fear that non-referral, once a computer has tagged issues as Prevent-related, would be seen as negligent, or as not meeting the statutory Prevent duty as it applies to schools.

On the Prevent Review, he suggested that the current timeline of August 2020 still stands, even though there is currently no Reviewer. It is for Ministers to decide who will replace Lord Carlile.

Safeguarding children and young people from radicalisation

Mark Chalmers of Westminster City Council then spoke about ‘safeguarding children and young people from radicalisation.’

He started off with a profile of the local authority’s demographics: poverty and wealth, migrant turnover, and the proportion of non-English speaking households. This in itself may seem indicative of deliberate or unconscious bias.

He suggested that Prevent is not a security response, and expects that the policing role in Prevent will be reduced over time, as more is taken over by Local Authority staff and the public services. [Note: this seems inevitable after the changes in the 2019 Counter Terrorism Act, which enable local authorities, as well as the police, to refer persons at risk of being drawn into terrorism to local Channel panels. Whether this should have happened at all was not consulted on, as far as I know.] The claim that Prevent is not a security response appears different in practice, when Local Authorities refuse FOI questions on the basis of security exemptions in the FOI Act, Section 24(1).

Both speakers declined to accept my suggestion that Prevent and Channel are not consensual. Participation in the programme, they were adamant, is voluntary and confidential. The reality is that children do not feel they can make a freely given, informed choice, in the face of an authority and the severity of the referral. They also do not understand where their records go, how confidential they really are, and how long they are kept, or why.

The recently concluded legal case, and the lengths one individual had to go to, to remove their personal record from the Prevent national database, show just how problematic the authorities’ mistaken perception of a consensual programme is.

I knew nothing of the Prevent programme at all in 2015. I only began to hear about it once I started mapping the data flows into, across and out of the state education sector, and teachers started coming to me with stories from their schools.

I found it fascinating to hear those so embedded in the programme speak at the conference. They seem unable to see it objectively, or to accept others’ critical points of view as truth. It stems perhaps from the luxury of the privilege of believing that you yourself will be unaffected by its consequences.

“Yes,” said O’Brien, “we can turn it off. We have that privilege” (1984)

No ground was given at all for accepting that there are deep flaws in practice; that in fact ‘Prevent is having the opposite of its intended effect: by dividing, stigmatising and alienating segments of the population, Prevent could end up promoting extremism, rather than countering it’, as concluded in the 2016 report Preventing Education: Human Rights and Countering Terrorism in UK Schools by Rights Watch UK.

Mark Chalmers’ conclusion was to suggest that perhaps Prevent will not always take its current form of a bolt-on ‘big programme’, and would instead become just like any other form of child protection, like FGM. That would mean every public sector worker becomes an extended arm of Home Office policy, expected to act in counter-terrorism efforts.

But the training, the nuance, the level of application of autonomy that the speakers believe exists in staff and in children is imagined. The trust between authorities and people who need shelter, safety, medical care or schooling must be upheld for the public good.

No one asked if and how children should be seen through the lens of terrorism, extremism and radicalisation at all. No one asked if and how every child should be able to be surveilled online by school-imposed software, with covert photos taken through the webcam, in the name of children’s safeguarding. Or labelled in school, associated with ‘terrorist.’ What happens when that prevents trust, and who measures its harm?

[Image: Smoothwall monitor dashboard with terrorist labels on a child’s profile]

Far too little is known about who makes decisions about the lives of others and how, about the criteria for defining inappropriate activity or referrals, or about the opaque decisions made on online content.

What effects will the Prevent programme have on our current and future society, where everyone is expected to surveil and inform upon each other, and where failure to do so, to uphold the Prevent duty, becomes civic failure? How are curiosity and intent separated? How do we safeguard children from risk (which is not the same as harm) and protect their childhood experiences, their free and full development of self?

No one wants children to be caught up in activities or radicalisation into terror groups. But is this the correct way to solve it?

This comprehensive new research by the YEIP suggests otherwise. The fact that the Home Office disengaged with the project in the last year, speaks volumes.

“The research provides new evidence that by attempting to profile and predict violent youth radicalisation, we may in fact be breeding the very reasons that lead those at risk to violent acts.” (Professor Theo Gavrielides).

Current case studies of lived experience, and history, also say it is mistaken. Prevent, when it comes to children and schools, needs massive reform at the very least, but those most in favour of how it works today cannot be the ones to reshape it.

“Who denounced you?” said Winston.

“It was my little daughter,” said Parsons with a sort of doleful pride. “She listened at the keyhole. Heard what I was saying, and nipped off to the patrols the very next day. Pretty smart for a nipper of seven, eh? I don’t bear her any grudge for it. In fact I’m proud of her. It shows I brought her up in the right spirit, anyway.” (1984).

 



The event was the launch of the European study on violent youth radicalisation from YEIP: the project investigated the attitudes and knowledge of young Europeans, youth workers and other practitioners, while testing tools for addressing the phenomenon through positive psychology and the application of the Good Lives Model.

Its findings include that young people at risk of violent radicalisation are “managed” by the existing justice system as “risks”. This creates further alienation and division, while recidivism rates continue to spiral.

Policy shapers, product makers, and profit takers (1)

In 2018, ethics became the new fashion in UK data circles.

The launch of the Women Leading in AI principles of responsible AI has prompted me to try and finish and post these thoughts, which have been on my mind for some time. If two parts of 1K words is tl;dr for you, then in summary, we need more action on:

  • Ethics as a route to regulatory avoidance.
  • Framing AI and data debates as a cost to the Economy.
  • Reframing the debate around imbalance of risk.
  • Challenging the unaccountable and the ‘inevitable’.

And in the next post on:

  • Corporate Capture.
  • Corporate Accountability, and
  • Creating Authentic Accountability.

Ethics as a route to regulatory avoidance

In 2019, the calls to push aside old wisdoms for new, and for everyone to focus on the value-laden words of ‘innovation’ and ‘ethics’, appear an ever louder attempt to reframe regulation and law as barriers to business, and to ask that they be cast aside.

On Wednesday evening, at the launch of the Women Leading in AI principles of responsible AI, the chair of the CDEI said in closing, he was keen to hear from companies where, “they were attempting to use AI effectively and encountering difficulties due to regulatory structures.”

In IBM’s own words to government recently,

“A rush to further regulation can have the effect of chilling innovation and missing out on the societal and economic benefits that AI can bring.”

The vague threat is very clear: if you regulate, you’ll lose. But the societal and economic benefits are just as vague.

So far, many talking about ethics are trying to find a route to regulatory avoidance. ‘We’ll do better,’ they promise.

In Ben Wagner’s recent paper, Ethics as an Escape from Regulation: From ethics-washing to ethics-shopping, he asks how to ensure this does not become the default engagement with ethical frameworks or rights-based design. He sums up: “In this world, ‘ethics’ is the new ‘industry self-regulation’.”

Perhaps it’s ingenious PR to make sure that what is in effect self-regulation, right across the business model, looks as if it is imposed by others, by the very bodies set up to fix it.

But as I consider in part 2, is this healthy for UK public policy and for the future, not of an industry sector, but of a whole technology, when it comes to AI?

Framing AI and data debates as a cost to the Economy

Companies, organisations and individuals arguing against regulation are framing the debate as if regulation would come at a great cost to society and the economy. But we rarely hear what effect they expect on their own company, or what cost/benefit they expect for themselves. It’s disingenuous to have only part of that conversation. In fact the AI debate would be richer were that included. If companies think their innovation or profits are at risk from non-use, or from regulated use, and there is risk to the national good associated with these products, we should be talking about all of that.

And in addition, we can talk about use and non-use in society. Too often, the whole debate is intangible. Show me real costs, real benefits. Real risk assessments. Real explanations that speak human. Industry should show society what’s in it for them.

You don’t want it to ‘turn out like GM crops’? Then learn their lessons on transparency and trustworthiness, and avoid the hype. And understand that sometimes there is simply tech people do not want.

Reframing the debate around imbalance of risk

And while we often hear about the imbalance of power associated with using AI, we also need to talk about the imbalance of risk.

While a small false positive rate for a company product may be a great success for them, or for a Local Authority buying the service, it might at the same time mean lives forever changed, children removed from families, and individual reputations ruined.

And where company owners may see no risk from the product they assure us is safe, there are intangible risks that need to be factored in, for example in education, where a child’s learning pathway is determined by patterns of behaviour, and where tools shape individualised learning as well as the model of education.

Companies may change business models or ownership, and move on to other sectors after failure. But with the levels of unfairness already felt in the relationship between the citizen and the State — in programmes like Troubled Families, Universal Credit, Policing, and Prevent — where use of algorithms and ever larger datasets is increasing, long-term harm from unaccountable failure will grow.

Society needs a rebalance of the system urgently to promote transparent fairness in interactions, including but not only those with new applications of technology.

We must find ways to reframe how this imbalance of risk is assessed, and is distributed between companies and the individual, or between companies and state and society, and enable access to meaningful redress when risks turn into harm.

If we are to do that, we need first to separate truth from hype, public good from self-interest and have a real discussion of risk across the full range from individual, to state, to society at large.

That’s not easy against a non-neutral backdrop, with scant sources of unbiased evidence, and amid corporate capture.

Challenging the unaccountable and the ‘inevitable’.

In 2017 the Care Quality Commission reported on online services in the NHS, and found serious concerns about unsafe and ineffective care. It has a cross-regulatory working group.

By contrast, no one appears to oversee that risk and the embedded use of automated tools involved in decision-making or decision support in children’s services or education: areas where AI, cognitive behavioural science and neuroscience are already in use, without ethical approval, without parental knowledge, and without any transparency.

Meanwhile, as all this goes on, many academics are busy debating how to fix algorithmic bias, accountability and transparency.

Few are challenging the narrative of the ‘inevitability’ of AI.

Julia Powles and Helen Nissenbaum recently wrote that many of these current debates are an academic distraction, removed from reality. It is underappreciated how deeply these tools are already embedded in UK public policy. “Trying to “fix” A.I. distracts from the more urgent questions about the technology. It also denies us the possibility of asking: Should we be building these systems at all?” [1]

Challenging the unaccountable and the ‘inevitable’ is the title of the conclusion of the Women Leading in AI report on principles, and it makes me hopeful.

“There is nothing inevitable about how we choose to use this disruptive technology. […] And there is no excuse for failing to set clear rules so that it remains accountable, fosters our civic values and allows humanity to be stronger and better.”

[1] Powles, Nissenbaum, 2018, The Seductive Diversion of ‘Solving’ Bias in Artificial Intelligence, Medium

Next: Part 2 – Policy shapers, product makers, and profit takers, on:

  • Corporate Capture.
  • Corporate Accountability, and
  • Creating Authentic Accountability.

Policy shapers, product makers, and profit takers (2)

Corporate capture

Companies are increasingly in control of the tech narrative in the press. They are funding neutral third-sector orgs’ and think tanks’ research. Supporting organisations advising on online education. Closely involved in politics. And they sit, increasingly, within the organisations set up to lead the technology vision, advising government on policy and UK data analytics, or on social media, AI and ethics.

It is all subject to corporate capture.

But is this healthy for UK public policy and the future not of an industry sector, but a whole technology, when it comes to AI?

If a company’s vital business interests seem unfazed by the risk and harm it causes to individuals — from people who no longer trust the confidentiality of the system, to measurable harms — why should those companies sit on public policy boards set up to shape the ethics they claim we need, to solve the problems, and to address the loss of trust that these very same companies are causing?

We laud people in these companies as co-founders and forward thinkers on new data ethics institutes. They are invited to sit on our national boards, or create new ones.

What does that say about the entire board’s respect for the law which the company breached? It is hard not to see it as signalling acceptance of the company’s excuses, or of its lack of accountability.

Corporate accountability

The same companies whose work has breached data protection law in multiple ways, seemingly ‘by accident’, on national data extractions, are the companies that cross the t’s and dot the i’s on even the simplest conference call, and demand everything is said in strictest confidence. Meanwhile their everyday business practices ignore millions of people’s lawful rights to confidentiality.

The extent of commercial companies’ influence on these boards is opaque. To allow this ethics bandwagon to be driven by the corporate giants surely undermines genuine rights-based values, and the long-term integrity of the bodies they appear to serve.

I am told that these global orgs must be in the room and at the table, to use the opportunity to make the world a better place.

These companies already have *all* the opportunity. Not only monopoly positions on their own technology, but the datasets at scale which underpin it, excluding new entrants to the market. Their pick of new hires from universities. The sponsorship of events. The political lobbying. Access to the media. The lawyers. Bottomless pockets to pay for it all. And seats at board tables set up to shape UK policy responses.

It’s a struggle for power, and a stake in our collective future. The status quo is not good enough for many parts of society, and to enable Big Tech or big government to maintain that simply through the latest tools, is a missed chance to reshape for good.

You can see it in many tech boards’ make-up, and their pervasive white male bias. We hear it echoed in London think tank conferences, even in independent tech design agencies, or set out in some Big Tech reports. All seemingly unconnected, but often funded by the same driving sources.

These companies are often the ones that made it worse to start with, and the very ethics issues the boards have been set up to deal with are at the core of their business models and of their own making.

The deliberate infiltration of influence on online safety policy for children, or global privacy efforts is very real, explicitly set out in the #FacebookEmails, for example.

We will not resolve these fundamental questions as long as the companies whose businesses depend on them steer national policy. The odds will be ever in their favour.

At the same time, some of these individuals are brilliant. In all senses.

So what’s the answer? If they are around the table, what should the UK public expect of their involvement, and how do we ensure whose best interests it serves? How do we achieve authentic accountability?

Whether it be social media, data analytics, or AI in public policy, can companies be safely permitted to be policy shapers if they wear all the hats: product maker, profit taker, *and* process or product auditor?

Creating Authentic Accountability

At minimum we must demand responsibility for their own actions from board members who represent or are funded by companies.

  1. They must deliver on their own product problems first before being allowed to suggest solutions to societal problems.
  2. There should be credible separation between informing policy makers, and shaping policy.
  3. There must be total transparency of funding sources across any public sector boards, of members, and those lobbying them.
  4. Board members must be meaningfully held accountable for continued company transgressions on rights and freedoms, not only harms.
  5. Oversight of board decision making must be decentralised, transparent and available to scrutiny and meaningful challenge.

While these new bodies may propose solutions that include public engagement strategies, transparency, and standards, few propose meaningful oversight. The real test is not what companies say in their ethical frameworks, but in what they continue to do.

If they fail to meet legal or regulatory frameworks, minimum accountability should mean no more access to public data sets and losing positions of policy influence.

Their behaviour needs to go above and beyond meeting the letter of the law, not scraping by or working around rights-based protections. They need to put people ahead of profit and self-interest. That’s what ethics should mean: not a PR route to avoid regulation.

As long as companies think the consequences of their platforms and actions are tolerable and a minimal disruption to their business model, society will be expected to live with their transgressions, and our most vulnerable will continue to pay the cost.


This is part 2 of thoughts on Policy shapers, product makers, and profit takers — data and AI. Part 1 is here.

The power of imagination in public policy

“A new, a vast, and a powerful language is developed for the future use of analysis, in which to wield its truths so that these may become of more speedy and accurate practical application for the purposes of mankind than the means hitherto in our possession have rendered possible.” [on Ada Lovelace, The First Tech Visionary, New Yorker, 2013]

What would Ada Lovelace have argued for in today’s AI debates? I think she may have used her voice not only to call for the good use of data analysis, but for her second strength: the power of her imagination.

James Ball recently wrote in The European [1]:

“It is becoming increasingly clear that the modern political war isn’t one against poverty, or against crime, or drugs, or even the tech giants – our modern political era is dominated by a war against reality.”

My overriding take away from three days spent at the Conservative Party Conference this week, was similar. It reaffirmed the title of a school debate I lost at age 15, ‘We only believe what we want to believe.’

James writes that it is “easy to deny something that’s a few years in the future”, and that Conservatives, “especially pro-Brexit Conservatives – are sticking to that tried-and-tested formula: denying the facts, telling a story of the world as you’d like it to be, and waiting for the votes and applause to roll in.”

These positions are not confined to one party’s politics, or speeches of future hopes, but define perception of current reality.

I spent a lot of time listening to MPs. To Ministers, to Councillors, and to party members. At fringe events, in coffee queues, on the exhibition floor. I had conversations pressed against corridor walls as small press-illuminated swarms of people passed by with Queen Johnson or Rees-Mogg at their centre.

In one panel I heard a primary school teacher deny that child poverty really exists, or affects learning in the classroom.

In another, in passing, a digital Minister suggested that Pupil Referral Units (PRU) are where most of society’s ills start, but as a Birmingham head wrote this week, “They’ll blame the housing crisis on PRUs soon!” and “for the record, there aren’t gang recruiters outside our gates.”

This is no tirade on the failings of public policymakers, however. While it is easy to suspect malicious intent when you are at, or feel, the sharp end of policies which do harm, success is subjective.

It is clear that an overwhelming sense of self-belief exists in those responsible, in the intent of any given policy to do good.

Where policies include technology, this is underpinned by a self-reaffirming belief in its power. Power waiting to be harnessed by government and the public sector. It is even more appealing where it is sold as a cost-saving tool to cash-strapped councils. Many that have cut away human staff are now trying to use machine power to make decisions. Some of the unintended consequences of taking humans out of the process are catastrophic for human rights.

Sweeping human assumptions behind such thinking on social issues and their causes are becoming hard-coded into algorithmic solutions that involve identifying young people who are in danger of becoming involved in crime using “risk factors” such as truancy, school exclusion, domestic violence and gang membership.

The disconnect between the perception of risk, the reality of risk, and real harm, whether perceived or felt from these applied policies in real life, is not so much ‘easy to deny something that’s a few years in the future’, as Ball writes, but a denial of the reality now.

Concerningly, there is a lack of imagination about what real harms look like. There is no discussion of where these predictive policies sometimes have no positive effect, or even a negative one, and make things worse.

I’m deeply concerned that there is an unwillingness to recognise any failures in current data processing in the public sector, particularly at scale, and where it regards the well-known poor quality of administrative data, or to be accountable for those failures.

Harms, existing harms to individuals, are perceived as outliers. Any broad sweep of harms across a policy like Universal Credit seems to be perceived as political criticism, which makes the measurable failures less meaningful, less real, and less necessary to change.

There is a worrying and growing trend of finger-pointing exclusively at others’ tech failures instead. In particular, at social media companies.

Imagination and mistaken ideas are reinforced where the idea is plausible, and shared. An oft-heard and self-affirming belief was repeated in many fora between policymakers, media and NGOs regarding children’s online safety: “There is no regulation online”. In fact, much that applies offline applies online. The Crown Prosecution Service Social Media Guidelines are a good place to start. [2] But no one discusses where children’s lives may be put at risk, or made less safe, through the use of state information about them.

Policymakers want data to give us certainty. But many uses of big data and new tools appear to do little more than quantify moral fears, and yet still guide real-life interventions in real lives.

Child abuse prediction, and school exclusion interventions should not be test-beds for technology the public cannot scrutinise or understand.

In one trial attempting to predict exclusion, a recent UK research project (2013-16) linked the school records of 800 children in 40 London schools with Metropolitan Police arrest records for all of the participants. It found the interventions created no benefit, and may have caused harm. [3]

“Anecdotal evidence from the EiE-L core workers indicated that in some instances schools informed students that they were enrolled on the intervention because they were the “worst kids”.”

“Keeping students in education, by providing them with an inclusive school environment, which would facilitate school bonds in the context of supportive student–teacher relationships, should be seen as a key goal for educators and policy makers in this area,” researchers suggested.

But policy makers seem intent on using systems that tick boxes, and create triggers to single people out, with quantifiable impact.

Some of these systems are known to be poor, or harmful.

When it comes to predicting and preventing child abuse, there is concern about the harms seen in US programmes ahead of us, such as those in Pittsburgh, and in Chicago, which has scrapped its programme.

The Illinois Department of Children and Family Services ended a high-profile program that used computer data mining to identify children at risk for serious injury or death after the agency’s top official called the technology unreliable, and children still died.

“We are not doing the predictive analytics because it didn’t seem to be predicting much,” DCFS Director Beverly “B.J.” Walker told the Tribune.

Many professionals in the UK share these concerns. How long will they be ignored, and children be guinea pigs, without transparent error rates or recognition of the potentially harmful effects?

Helen Margetts, Director of the Oxford Internet Institute and Programme Director for Public Policy at the Alan Turing Institute, suggested at the IGF event this week that stopping the use of this AI in the public sector is impossible. We could not decide that “we’re not doing this until we’ve decided how it’s going to be. It can’t work like that.” [45:30]

Why on earth not? At least for these high-risk projects.

How long should children be the test subjects of machine learning tools at scale, without transparent error rates, audit, or scrutiny of their systems and understanding of unintended consequences?

Is harm to any child a price you’re willing to pay to keep using these systems to perhaps identify others, while we don’t know whether they work?

Is there an acceptable positive versus negative outcome rate?

The evidence so far of AI in child abuse prediction is not clearly showing that more children are helped than harmed.

Surely it’s time to stop thinking, and demand action on this.

It doesn’t take much imagination to see the harms. Safe technology, and the safe use of data, do not prevent imagination or innovation employed for good.

If we continue to ignore views from Patrick Brown, Ruth Gilbert, Rachel Pearson and Gene Feder, Charmaine Fletcher, Mike Stein, Tina Shaw and John Simmonds, I want to know why.

Where you are willing to sacrifice certainty of human safety for the machine decision, I want someone to be accountable for why.

 


References

[1] James Ball, The European, Those waging war against reality are doomed to failure, October 4, 2018.

[2] Thanks to Graham Smith for the link: “Social Media – Guidelines on prosecuting cases involving communications sent via social media,” The Crown Prosecution Service (CPS), August 2018.

[3] Obsuth, I., Sutherland, A., Cope, A., et al., “London Education and Inclusion Project (LEIP): Results from a Cluster-Randomized Controlled Trial of an Intervention to Reduce School Exclusion and Antisocial Behavior” (March 2016), Journal of Youth and Adolescence (2017) 46: 538. https://doi.org/10.1007/s10964-016-0468-4

The Trouble with Boards at the Ministry of Magic

Peter Riddell, the Commissioner for Public Appointments, has completed his investigation into the recent appointments to the Board of the Office for Students and published his report.

From the “Number 10 Googlers”, to the NUS affiliation (an interest in student union representation was seen as undesirable), to “undermining the policy goals” and what the SpAds supported, the whole report is worth a read.

Perception of the process

The concern that the Commissioner raises over the harm done to the public’s perception of the public appointments process means more needs to be done to fix these problems, both before and after appointments.

This process reinforces what people think already: jobs for the [white Oxford] boys, and yes-men. And so what? Why should I get involved anyway, and what can we hope to change?

Possibilities for improvement

What should the Department for Education (DfE) now offer and what should be required after the appointments process, for the OfS and other bodies, boards and groups et al?

  • Every board at the Department for Education, its name, aim, and members — internal and external — should be published.
  • Every board at the Department for Education should be required to publish its Terms of Appointment, and Terms of Reference.
  • Every board at the Department for Education should be required to publish agendas before meetings and meaningful meeting minutes promptly.

Why? Because there are all sorts of boards around and their transparency is frankly non-existent. I know, because I sit on one. Foolishly, I did not make it a requirement to publish minutes before I agreed to join. But in a year it has only met twice, so you’ve not missed much. Who else sits where, on what policy, and why?

On another that I used to sit on, I got increasingly frustrated that the minutes did not reflect the substance of the discussion. This does the public a disservice twice over. The purpose of the boards looks insipid, and the evidence for what challenge they are intended to offer, their very reason for being, is washed away. Show the public what’s hard: that there’s debate, that risks are analysed and balanced, and then decisions taken. Be open to scrutiny.

The public has a right to know

When scrutiny really matters, it is wrong — just as the Commissioner’s report reads — for any Department or body to try to hide the truth.

The purpose of transparency must be to hold to account and ensure checks-and-balances are upheld in a democratic system.

The DfE withdrew from a legal hearing scheduled at the First-tier Information Rights Tribunal last year, a couple of weeks beforehand, and finally accepted an ICO decision notice in my favour. I had gone through a year of the Freedom of Information appeal process to get hold of the meeting minutes of the Department for Education Star Chamber Scrutiny Board from November 2015.

It was the meeting in which I had been told members approved the collection of nationality and country of birth in the school census.

“The Star Chamber Scrutiny Board”. Not out of Harry Potter and the Ministry of Magic, but appointed by the DfE.

It’s a board that mentions actively seeking members of certain teaching unions but omits others. It publishes no meeting minutes. Its terms of reference are 38 words long, and it was not told the whole truth before one of the most important and most widely criticised decisions it ever made, a decision that affected the lives of millions of children across England and caused harm and division in the classroom.

Its annual report doesn’t mention the controversy at all.

After sixteen months, the DfE finally admitted it had kept the Star Chamber Scrutiny Board in the dark on at least one of the purposes of expanding the school census, and on its pre-existing, active and related data policy of passing pupil data over to the Home Office.

The minutes revealed the Board did not know anything about the data sharing agreement already in place between the DfE and the Home Office, or that “(once collected) nationality data” [para 15.2.6] was intended to be shared with the Border Force Casework Removals Team.

This truth, which the DfE was forced to reveal, only came out two years after the meeting, and a full year after the change in law.

If truth, transparency and diversity of political opinion on boards are allowed to die, so does democracy

I spoke to Board members in 2016. They were shocked to find out what the MOU purposes were for the new data, and that regular data transfers had already begun without their knowledge when they were asked to sign off the nationality data collection.

The fact that they raised no concerns was given in written evidence to the House of Lords Secondary Legislation Scrutiny Committee as proof that the change had been properly reviewed.

How trustworthy is anything that the Star Chamber now “approves”, and our law-making process to expand school data? How trustworthy is the Statutory Instrument scrutiny process?

“there was no need for DfE to discuss with SCSB the sharing of data with Home Office as: a.) none of the data being considered by the SCSB as part of the proposal supporting this SI has been, or will be, shared with any third-party (including other government departments);

[omits it “was planned to be”]

and b.) even if the data was to be shared externally, those decisions are outside the SCSB terms of reference.”

Outside terms of reference that are 38 words long, and that are supposed to scrutinise (but not too closely), or to reject on the basis of what, exactly?

Not only is the public not being told the full truth about how these boards are created, and what their purpose is, it seems board members are not always told the full truth they deserve either.

Who is invited to the meeting, and who is left out? What reports are generated with what recommendations? What facts or opinion cannot be listened to, scrutinised and countered, that could be so damaging as to not even allow people to bring the truth to the table?

If publishing the meeting minutes would be so controversial and damaging to making public policy, then who the heck are these unelected people making such significant decisions, and how? Are they qualified, are they independent, and are they accountable?

If, alternatively, what should be ‘independent’ boards, panels, or meetings set up to offer scrutiny and challenge are in fact being manipulated to manoeuvre policy and the ready-made political opinions of the day, it is a disaster for public engagement and democracy.

It should end with this ex-OfS hiring process at the DfE, today.

The appointments process and the ongoing work by boards must have full transparency, if they are ever to be seen as trustworthy.

Is Hancock’s App Age Appropriate?

What can Matt Hancock learn from his app privacy flaws?

Note: since I started writing this post, the privacy policy has been changed from what was live at 4.30, and the “last changed date” has been backdated on the version that is now live at 21.00. It illustrates the challenge I point out in point 5:

It’s hard to trust privacy policy terms and conditions that are not strong and stable. 


The Data Protection Bill about to pass through the House of Commons requires the Information Commissioner to prepare and issue codes of practice — which must be approved by the Secretary of State — before they can become statutory and enforced.

One of those new codes (clause 124) is about age-appropriate data protection design. Any provider of an Information Society Service — as outlined in GDPR Article 8, where a child’s data are collected on the legal basis of consent — must have regard to the code if they target the service at a child.

For 13-18 year olds, what these changes might mean compared with current practices can be demonstrated by the new app from the Minister for Digital, Culture, Media and Sport, launched today.

This app is designed to be used by children aged 13+. Even though the terms say the app requires parental approval for 13-18 year olds [more aligned with US COPPA law than with GDPR], it still needs to work for the child.

Apps could and should be used to open up what politics is about to children. Younger users are more likely to use an app than read a paper for example. But it must not cost them their freedoms. As others have written, this app has privacy flaws by design.

Children merit specific protection with regard to their personal data, as they may be less aware of the risks, consequences and safeguards concerned and their rights in relation to the processing of personal data. (GDPR Recital 38).

The flaw in the intent to protect by age, in the app, GDPR and the UK Bill overall, is that the understanding needed for consent depends not on age, but on capacity. The age-based model to protect the virtual child is fundamentally flawed. It’s shortsighted, if well intentioned, bad by design, and does little to really protect children’s rights.

Future age verification, for example, if it is to be helpful rather than a harm or a nuisance like a new cookie law, must be “a narrow form of ‘identity assurance’ – where only one attribute (age) need be defined.” It must also respect Recital 57, and not mean a lazy data grab like GiffGaff’s.

On these five counts the app fails to be age appropriate:

  1. Age appropriate participation, privacy, and consent design.
  2. Excessive personal data collection and permissions. (Article 25)
  3. The purposes for which each item of data is collected must be specified and explicit, and the data must not be further processed for something incompatible with them. (Principle 2).
  4. The privacy policy terms and conditions must be easily understood by a child, and be accurate. (Recital 58)
  5. It’s hard to trust privacy policy terms and conditions that are not strong and stable. Among the things that can change are terms on a free trial, which should require active and affirmative action rather than continuing the account forever, and which may compel future costs. Any future changes should themselves be age-appropriate, including in the way that consent is re-managed.

How much profiling does the app enable, and what is it used for? The Article 29 WP recommends: “Because children represent a more vulnerable group of society, organisations should, in general, refrain from profiling them for marketing purposes.” What will this mean for any software that profiles children’s metadata to share with third parties, or for commercial apps with in-app purchases or “bait and switch” style models, all of which this app’s privacy policy refers to?

The Council of Europe 2016-21 Strategy on the Rights of the Child recognises, on provision for children in the digital environment, that “ICT and digital media have added a new dimension to children’s right to education”, exposing them to new risks and “privacy and data protection issues”, and that “parents and teachers struggle to keep up with technological developments.” [6. Growing up in a Digital World, Para 21]

Data protection by design really matters to get right for children and young people.

This is a commercially produced app, and it will only be used on an optional, consent basis.

This app shows how hard it can be for people buying tech from developers to understand and to trust what’s legal and appropriate.

Developers facing changing laws and standards need clarity and support to get it right. Parents and teachers will need confidence to buy safe, quality technology and let children use it.

Without relevant and trustworthy guidance, it’s nigh on impossible.

For any Minister in charge of the data protection rights of children, we need the technology they approve and put out for use by children to be age-appropriate, and of the highest standards.

This app could and should be changed to meet them.

For children across the UK, using apps more often offers them no choice over whether or not to use them. Many are required by schools, which can make similar demands for their data and infringe their privacy rights for life. How much harder, then, to protect their data security and rights, and to keep track of their digital footprint and where their data go.

If the Data Protection Bill could include an ICO code of practice for children that goes beyond consent-based data collection, to put clarity, consistency and confidence at the heart of good edTech for children, parents and schools, it would be warmly welcomed.


Here are detailed examples of what the Minister might change to bring his app into line with GDPR, and make it age-appropriate for younger users.

1. Is the app age appropriate by design?

Unless otherwise specified in the App details on the applicable App Store, to use the App you must be 18 or older (or be 13 or older and have your parent or guardian’s consent).

Children over 13 can use the app, but the app requires parental consent. That’s different from GDPR – consent over and above the new laws that will apply in the UK from May. That age threshold will vary across the EU. Inconsistent age policies are going to be hard to navigate.

Many of the things that matter to privacy, have not been included in the privacy policy (detailed below), but in the terms and conditions.

What else needs changing?

2. Personal data protection by design and default

Excessive personal data collection cannot be justified through a “consent” process, by agreeing to use the app. There must be data protection by design and default using the available technology. That includes data minimisation, and limited retention. (Article 25)

The app’s permissions are vast and it collects far more personal data than it needs, even getting permission, if you use it, to listen to your mic. That is not data protection by design and default, which must implement data protection principles such as data minimisation.

If, as has been suggested, in the newest versions of Android each permission is asked for at the point of use rather than on first install, that could be a serious challenge for parents who think they have reviewed and approved permissions pre-install (a failing that goes beyond the scope of this app). An app only requires consent to install, and can change the permissions behind the scenes at any time. That makes privacy and data protection by design even more important.
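To illustrate the point, here is a minimal and entirely hypothetical Kotlin sketch (the class and helper names are invented; this is not the app’s own code) of how a runtime permission works on newer versions of Android: the system prompt for something like the microphone appears at the moment of use, long after any install-time review a parent may have done.

    import android.Manifest
    import androidx.activity.result.contract.ActivityResultContracts
    import androidx.appcompat.app.AppCompatActivity

    // Hypothetical sketch: on Android 6.0+ a "dangerous" permission such as the
    // microphone is granted via a system prompt at the point of use, not at install.
    class RecordingActivity : AppCompatActivity() {

        // Registers a launcher whose callback receives the user's answer.
        private val requestMic =
            registerForActivityResult(ActivityResultContracts.RequestPermission()) { granted ->
                if (granted) startRecording() else explainWhyAudioIsOptional()
            }

        fun onRecordButtonPressed() {
            // A parent who approved the install never sees this prompt.
            requestMic.launch(Manifest.permission.RECORD_AUDIO)
        }

        private fun startRecording() { /* begin capture only after an explicit grant */ }
        private fun explainWhyAudioIsOptional() { /* degrade gracefully without audio */ }
    }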

Here’s a copy of what the Android Google Play page says it can do, once you click into “permissions” and scroll. This is excessive: “Matt Hancock” is designed to prevent your phone from sleeping, to read and modify the contents of storage, and to access your microphone.

Version 2.27 can access:
 
Location
  • approximate location (network-based)
Phone
  • read phone status and identity
Photos / Media / Files
  • read the contents of your USB storage
  • modify or delete the contents of your USB storage
Storage
  • read the contents of your USB storage
  • modify or delete the contents of your USB storage
Camera
  • take pictures and videos
Microphone
  • record audio
Wi-Fi connection information
  • view Wi-Fi connections
Device ID & call information
  • read phone status and identity
Other
  • control vibration
  • manage document storage
  • receive data from Internet
  • view network connections
  • full network access
  • change your audio settings
  • control vibration
  • prevent device from sleeping

“Matt Hancock” knows where you live

The app makers – and Matt Hancock – have no need to know where your phone is at all times, where it is regularly, or whose phones you are near, unless you switch location services off. That is excessive.

It’s not the same as saying “I’m a constituent”. It’s 24/7 surveillance.

The Ts&Cs say more.

It places the onus on the user to switch off location services — which you might expect for other apps, such as tracking your Strava run — rather than the developers taking responsibility for your privacy by design. [Full source policy].

[Update: since writing this post on February 1, the policy has been greatly added to.]

It also collects ill-defined “technical information”. How should a 13 year old – or a parent, for that matter – know what this information is? These data are the metadata: the address and sender tags, and so on.

By using the App, you consent to us collecting and using technical information about your device and related information for the purpose of helping us to improve the App and provide any services to you.

As NSA General Counsel Stewart Baker has said, “metadata absolutely tells you everything about somebody’s life.” General Michael Hayden, former director of the NSA and the CIA, has famously said, “We kill people based on metadata.”

If you use this app and “approve” the use, do you really know what the location services are tracking, and how those data are used? For a young person, it is impossible to know or see where their digital footprint has gone, or how knowledge about them has been used.

3. Specified, explicit, and necessary purposes

As a general principle, personal data must only be collected for specified, explicit and legitimate purposes, and not further processed in a manner that is incompatible with those purposes. The purposes of this very broad data collection are not clearly defined. They must be more specifically explained, especially given that the data are so broad and will include sensitive data. (Principle 2).

While the Minister has told the BBC that you maintain complete editorial control, the terms and conditions are quite different.

The app can use user photos, files, audio and location data, and once content is shared, “a perpetual, irrevocable” permission is granted to use and edit it. This is not age-appropriate design for children, who might accidentally click yes, not appreciate what that may permit, or later wish they could get that photo back. But by then the photo is on social media, potentially worldwide — “Facebook, Twitter, Pinterest, YouTube, Instagram and on the Publisher’s own websites” — and the child’s rights to privacy and consent are lost forever.

That’s not age appropriate, and it is not in line with GDPR rights to withdraw consent, to object, or to restrict processing. In fact the terms conflict with the app privacy policy, which states those rights [see 4. App User Data Rights]. Just writing “there may be valid reasons why we may be unable to do this” is poor practice and a CYA card.

4. Any privacy policy and app must do what it says

A privacy policy and terms and conditions must be easily understood by a child, [indeed any user] and be accurate.

Journalists testing the app point out that even if the user clicks “don’t allow”, when prompted to permit access to the photo library, the user is allowed to post the photo anyway.
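Respecting that answer is not hard to do. Here is a minimal, hypothetical sketch (written in Android/Kotlin terms purely for illustration, with invented helper names) of the behaviour a user would reasonably expect: check the result of the prompt, and never touch the photo library when access has been denied.

    import android.Manifest
    import android.app.Activity
    import android.content.pm.PackageManager
    import androidx.core.content.ContextCompat

    // Hypothetical sketch: only open the photo library if permission was actually granted.
    fun onAttachPhotoPressed(activity: Activity) {
        val granted = ContextCompat.checkSelfPermission(
            activity, Manifest.permission.READ_MEDIA_IMAGES
        ) == PackageManager.PERMISSION_GRANTED

        if (granted) {
            openPhotoPicker(activity)       // proceed only with explicit permission
        } else {
            showTextOnlyComposer(activity)  // "don't allow" must mean no photo access
        }
    }

    private fun openPhotoPicker(activity: Activity) { /* launch an image picker here */ }
    private fun showTextOnlyComposer(activity: Activity) { /* fall back without photos */ }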


What does consent mean if you don’t know what you are consenting to? You’re not really consenting at all. GDPR requires that privacy policies are written in a way whose meaning can be understood by a child user (not only their parent). They need to be jargon-free and meaningful, in “clear and plain language that the child can easily understand.” (Recital 58)

This privacy policy is not child-appropriate. It’s not even clear for adults.

5. What would age appropriate permissions for charging and other future changes look like?

It should be clear to users if there may be up front or future costs, and there should be no assumption that agreeing once to pay for an app, means granting permission forever, without affirmative action.

Couching Bait-and-Switch, Hidden Costs

This is one of the flaws that the Matt Hancock app terms and conditions share with many free education apps used in schools. At first, they’re free. You register, and you don’t even know, when your child starts using the app, that it’s a free trial. But after a while, as determined by the developer, the app might not be free any more.

That’s not to say this is what the Matt Hancock app will do, in fact it would be very odd if it did. But odd then, that its privacy policy terms and conditions state it could.

The folly of boiler plate policy, or perhaps simply wanting to keep your options open?

Either way, it’s bad design for children – indeed any user – to agree to something that is, in fact, meaningless because it could change at any time. Automatic renewals are convenient, but who has not found they paid for an extra month of a newspaper or something else they intended to use only for a limited time? And to avoid any charges, you must cancel before the end of the free trial – but if you don’t know it’s a free trial, that’s hard to do. More so for children.

From time to time we may offer a free trial period when you first register to use the App before you pay for the subscription.[…] To avoid any charges, you must cancel before the end of the free trial.

(And as for “For more details, please see the product details in the App Store before you download the App”: there aren’t any, in case you’re wondering.)

What would age appropriate future changes be?

It should be clear to parents what they consent to on behalf of a child, or what a child consents to, at the time of install. What that means must empower them towards better digital understanding and to stay in control, not allow the company to change the agreement without the user’s clear and affirmative action.

One of the biggest flaws for parents in children using apps is that what they think they have reviewed, thought appropriate, and permitted, can change at any time, at the whim of the developer and as often as they like.

Notification “by updating the Effective Date listed above” is not any notification at all. And PS: they changed the policy and backdated it today, from February 1, 2018, to July 2017. By 8 months. That’s odd.

The statements in this “changes” section contradict one another. It’s a future-dated get-out-of-jail-free card for the developer, and a transparency and oversight nightmare for parents. “Your continued use” is not clear, affirmative, and freely given consent, as demanded by GDPR.

Perhaps the kindest thing to say about this policy, and its poor approach to privacy rights and responsibilities, is that maybe the Minister did not read it. Which highlights the basic flaw in privacy policies in the first place. Data usage reports, showing how your personal data have actually been used versus what was promised, are of much greater value and meaning. That’s what children need in schools.


Statutory Instruments, the #DPBill and the growth of the Database State

First they came for the lists of lecturers. Did you speak out?

Last week Chris Heaton-Harris MP wrote to vice-chancellors to ask for a list of lecturers’ names and course content, “With particular reference to Brexit”. Academics on social media spoke out in protest. There has been little reaction, however, to a range of new laws that permit the incremental expansion of the database state, on paper and in practice.

The government is building ever more sensitive lists of names and addresses, without oversight. They will have access to information about our bank accounts. They are using our admin data to create distress-by-design in a ‘hostile environment.’ They are writing laws that give away young people’s confidential data, ignoring new EU law that says children’s data merits special protections.

Earlier this year, Part 5 of the new Digital Economy Act reduced the data protection infrastructure between different government departments. This week, in discussion on the Codes of Practice, some local government data users were already asking whether safeguards can be further relaxed to permit increased access to civil registration data and to use our identity data for more purposes.

Now, in the Data Protection Bill, the government has included clauses in Schedule 2 to reduce our rights to question how our data are used, and that will remove a right to redress where things go wrong. Clause 15 designs in open-ended possibilities of Statutory Instruments for future change.

The House of Lords Select Committee on the Constitution points out in its report on the Bill that the number and breadth of the delegated powers are “an increasingly common feature of legislation which, as we have repeatedly stated, causes considerable concern.”

Concern needs to translate into debate, better wording, and safeguards to ensure Parliament maintains its role of scrutiny and, where necessary, constrains executive powers.

Take as case studies three new Statutory Instruments on personal data from pupils, students, and staff. They all permit more data to be extracted from individuals and to be sent to national level:

  • SI 807/2017 The Education (Information About Children in Alternative Provision) (England) (Amendment) Regulations 2017
  • SI No. 886 The Education (Student Information) (Wales) Regulations 2017 (W. 214) and
  • SL(5)128 – The Education (Supply of Information about the School Workforce) (Wales) Regulations 2017

The SIs typically state that an “impact assessment has not been prepared for this Order as no impact on businesses or civil society organisations is foreseen. The impact on the public sector is minimal.” Privacy Impact Assessments are either not done, not published, or refused when requested via FOI.

Ever expanding national databases of names

Our data are not always used for the purposes we expect in practice, or what Ministers tell us they will be used for.

Last year the government added nationality to the school census in England, and snuck the change in law through Parliament in the summer holidays (SI 808/2016). Although the Department for Education conceded after public pressure that “These data will not be passed to the Home Office,” the intention to hand over “Nationality (once collected)” for immigration purposes was very real. The Department still hands over children’s names and addresses every month.

That SI should have been a warning, not a process model to repeat.

From January, thanks to yet another rushed law without debate (SI 807/2017), teen pregnancy, young offender and mental health labels will be added to children’s records for life in England’s National Pupil Database. These are on a named basis, and highly sensitive. Data from the National Pupil Database, including special educational needs data (SEN), are passed on for a broad range of purposes to third parties, and are also used across government in Troubled Families, shared with the National Citizen Service, and stored forever; on a named basis, all without pupils’ consent or parents’ knowledge. Without a change in policy, the young offender and pregnancy labels will be handed out too.

Our children’s privacy has been outsourced to third parties since 2012. Not anonymised data, but identifiable and confidential pupil-level data are handed out to commercial companies, charities and the press, hundreds of times a year, without consent.

Near-identical wording to that used in 2012 to change the law in England reappears in the new SI for student data in Wales.

The Wales government introduced regulations for a new student database of names, date of birth and ethnicity, home address including postcode, plus exam results. The third parties listed who will get given access to the data without asking for students’ consent, include the Student Loans Company and “persons who, for the purpose of promoting the education or well-being of students in Wales, require the information for that purpose”, in SI No. 886, the Education (Student Information) (Wales) Regulations 2017 (W. 214).

The consultation was conflated with destinations data, and while it all sounds like it is for the right reasons, the SI is broad on purposes and prescribed persons. It received 10 responses.

Separately, a 2017 consultation on the staff data collection received 34 responses about building a national database of teachers, including names, date of birth, National Insurance numbers, ethnicity, disability, their level of Welsh language skills, training, salary and more. Unions and the Information Commissioner’s Office both asked basic questions in the consultation that remain unanswered, including who will have access. It’s now law, thanks to SL(5)128 – The Education (Supply of Information about the School Workforce) (Wales) Regulations 2017. Those questions are still open.

While I have been assured this weekend in writing that these data will not be used for commercial purposes or immigration enforcement, any meaningful safeguards are missing.

More failings on fairness

Where are the communications to staff, students and parents? What oversight will there be? Will a register of uses be published? And why does government get to decide, without debate, that our fundamental right to privacy can be overwritten by a few lines of law? What protections will pupils, students and staff have in future over how these data will be used, and over their uses being expanded for other things?

Scope creep is an ever present threat. In 2002 MPs were assured on the changes to the “Central Pupil Database”, that the Department for Education had no interest in the identity of individual pupils.

But come 2017 and the Department for Education has become the Department for Deportation.

Children’s names are used to match records in an agreement with the Home Office, handing over up to 1,500 school pupils’ details a month. The plan was that parliament and the public should never know.

This is not what people expect or find reasonable. In 2015 UCAS had 37,000 students respond to an Applicant Data Survey. 62% of applicants think sharing their personal data for research is a good thing, and 64% see personal benefits in data sharing. But over 90% of applicants say they should be asked first, regardless of whether their data are to be used for research or for other things. This SI takes away their right to control their data and their digital identity.

It’s not in young people’s best interests to be made more digitally disempowered and lose control over their digital identity. The GDPR requires data privacy by design. This approach should be binned.

Meanwhile, the Digital Economy Act codes of practice talk about fair and lawful processing as if it is a real process that actually happens.

That gap between words on paper and reality is a care.data-style catastrophe waiting to happen across every sector of public data and government. When will the public be told how data are used?

Better data must be fairer and safer in the future

The new UK Data Protection Bill is in Parliament right now, and its wording will matter. Safe data, transparent use, and independent oversight are not empty slogans to sling into the debate.

They must shape practical safeguards to ensure there is a course of redress if you are slung into a Border Force van at dawn, your bank account is frozen, or you get a 30-day notice-to-leave letter, all by mistake.

To ensure our public [personal] data are used well, we need to trust why they’re collected and to see how they are used. But instead the government has drafted its own get-out-of-jail-free card to remove our data protection rights to know, in the name of immigration investigation and enforcement, and of other open-ended public interest exemptions.

The pursuit of individuals and their rights under an anti-immigration rhetoric, without evidence of narrow case need and in addition to all the immigration law we have, is not the public interest, but ideology.

If these exemptions become law, every one of us loses the right to ask where our data came from and why they were used for that purpose, and loses any course of redress.

The Digital Economy Act removed some of the infrastructure protections between Departments for data sharing. These clauses will remove our rights to know where and why that data has been passed around between them.

These lines are not just words on a page. They will have real effects on real people’s lives. These new databases are lists of names, and addresses, or attach labels to our identity that last a lifetime.

Even the advocates in favour of the Database State know that if we want to have good public services, their data use must be secure and trustworthy, and we have to be able to trust staff with our data.

As the Committee sits this week to review the bill line by line, the Lords must make sure common sense sees off the scattering of substantial public interest and immigration exemptions in the Data Protection Bill. Excessive exemptions need to be removed, not our rights.

Otherwise we can kiss goodbye to the UK as a world leader in tech that uses our personal data, or in research that uses public data. Because if the safeguards are weak, then when commercial players get it wrong in trials of selling patient data, or try to skip around the regulatory landscape asking to be treated better than everyone else and fail to comply with data protection law, or when government is driven to chasing children out of education, it doesn’t just damage their reputation or the potential of innovation for all; it damages public trust from everyone, and harms all data users.

Clause 15 leaves any future change open-ended by Statutory Instrument. We can already see how SIs like these are used to create new national databases that can pop up at any time, without clear evidence of necessity, and without the chance of proper scrutiny. We already see how data can be used beyond reasonable expectations.

If we don’t speak out for our data privacy, the next time they want a list of names, they won’t need to ask. They’ll already know.


“First they came …” is a reference to the poem written by German Lutheran pastor Martin Niemöller (1892–1984).