
Leading AI literacy to further the common good

The UK Department for Science, Innovation and Technology has been criticised online for publishing a list of links to commercial AI resources, packaged as practical AI skills for work.

There are two major problems if AI “literacy” and policy are allowed to be led this way. The first is the framing of AI literacy as something prioritised for employment. It is notable that many of these providers are themselves employers, often the very same companies seeking to increase their profits through cost reductions from increased efficiency, or from having fewer humans in their workforce; a position the UK government has accepted as caused by AI, and as an inevitability.

The second is that the subject, and what society understands about its salience and meaning, is steered by the same hands of Big Tech, reinforcing the consolidation of power it plays into.

To present ‘teaching about AI’ as being about skills for the workforce (and a narrow range of workplaces at that) is misguided, not only because it narrows learning to technical skills, but because it misdirects us all to look away from what “AI” is being used for more broadly, how, why, and by whom.

The critique is therefore important to understand, not just as being about the quality of the courses, but about the narrowing of AI literacy itself.

AI literacy is, in fact, vital democratic infrastructure.

Problem 1: AI Literacy as Workforce Optimisation

The AI Skills for Life and Work: Rapid Evidence Review, published on January 28th, recommended involving professional organisations, such as the British Computer Society (BCS) and the Royal Academy of Engineering (RAE), in defining and policing the standards that training courses should meet. That recommendation seems not to have been taken here: these expert organisations are notably absent from the list of new and founding partners.

Though the announcements claimed these courses were checked against Skills England’s AI foundation skills for work benchmark, also published on January 28th, something seems to have gone badly wrong in basic due diligence: it was not even checked that all the links worked. Checks should also have verified claims that ‘free’ courses were actually at zero cost to users, before the public was steered towards those providers in media coverage.

If Skills England wants to restore both its own credibility and public trust in the providers, it could publish its criteria for selecting the courses in the AI Skills Boost programme, its findings from assessing them against Skills England’s new AI foundation skills for work benchmark, and how that benchmark was designed.

The second challenge is that the Westminster government is focused only on skills for some work, while ‘the rest’ of life remains vague at best.

Problem 2: Narrative Capture by Big Tech distorts the big picture

Evidence from organisations that have scrutinised real-world AI in practice in the UK (one recent synthesis, by the Data Justice Lab for example, covers cancelled systems in the public sector) may not fit the narrow scope of AI skills for some types of work, but it offers valuable lessons for other areas, in particular how AI affects public sector services, which in turn affect so many of us daily.

The government has repeatedly disagreed on AI policy with recommendations from peers, from experts, and with what the public is saying. In stark contrast with other European countries’ approaches, the UK refuses to legislate on unacceptable risk levels.

The public are already paying the price for this. The prioritisation of a move-fast-and-break-things ‘route to impact’ has so far come at a cost to citizens, and broken everyday lives in welfare systems. Loss of agency and everyday friction are making life harder, less efficient, and more stressful in many ways: the opposite of what many felt was the promise of technology and the early Internet.

AI is already shaping the justice system, through police surveillance, legal research, and citizens’ advice bots, and is being made the cornerstone of its approach, while the courts’ basic IT tools are totally dysfunctional and those in charge won’t listen and won’t invest in the infrastructure to fix them.

[Notable aside: don’t let this put you off speaking out. There are a few days left to have your say in a consultation on the Wild West of facial recognition used for law enforcement.]

The youth backlash against AI slop has become incessant, and the average older person in the street is fed up that they need a multitude of apps and a smartphone to perform everyday tasks that used to be simpler to get done. (40% of drivers said that paying for parking with cash was their preferred choice in a 2025 poll of 13,755 drivers for The AA.)

Thousands of workers are run ragged by the algorithmic slave-drivers of gig-economy apps, in precarious jobs, less protected than their European counterparts thanks to weaker workers’ rights post-Brexit, as so tragically dramatised in Ken Loach’s film, Sorry We Missed You.

The question is not: do we need literacy to live in a world of AI versus humans? It is: how do we live everyday life well under powerful, undemocratic, often unaccountable corporate control that is being accelerated and intensified by tech tools we have no say over?

Any AI literacy approach that fails to address this, fails full stop.

Why we must prioritise AI Literacy as democratic infrastructure

“How do you democratize a technology that itself, in the form we’re seeing it now, is a product of concentrated power?”

The AI media narrative will, given time, be driven not by what the government says about AI, but by how it makes us feel. Increasingly, that is: more vulnerable under uncertainty over income; afraid of losing our jobs; more surveilled; less free; indeed, feeling a loss of power over our everyday lives and a need to “take back control“. We saw where that led in 2016. The government will pay the price for those feelings again if it does not act now to address them.

We now have choices about whose version of AI literacy we follow in the UK. I have the privilege of contributing to work at the Council of Europe, on an approach that I hope the UK will adopt later this year, and that we could lead on, instead of following ‘what tech says’.

It is an alternative, comprehensive framework that addresses all the dimensions of AI literacy, particularly the human dimension: not only training technologically skilled citizens to design or use AI, but more holistically preparing everyone for living with AI, with a focus on the values of democracy, human rights, and the rule of law.

Being AI literate means understanding how technology and companies affect fundamental economic, human, social and political rights and how we can protect ourselves, so that we can act in ways we choose.

Our parliamentary sovereignty and democratic processes depend on the power to control our own national narratives, including the outcomes of elections.

The media’s and the public’s ability to be informed, in an election and beyond, depends on the ability to identify and challenge misinformation, to use independent critical thought, and to question power; and that depends on an informed and critical citizenry empowered with social agency.

We cannot centre these things if the government’s direction of travel is steered by the U.S.-led OpenAI, Accenture, Google, IBM and Microsoft. Narrow media messaging is conflicted: it says ‘use AI to further economic growth’, while at the same time excusing those same companies for making job cuts, as if they really can’t help it and it is in fact they who have no choice, thanks to AI. ‘Blame the AI, don’t blame us (but please forget we chose to build / buy / use it).’

Education and the role of AI and literacy in the Public Interest

The public interest depends on the state offering education free from commercial influence and gain, enabling people to objectively understand the implications of AI: not as products that may become obsolete from one day to the next, but with a human-centric, technology-neutral approach that looks to outcomes rather than product skills.

We also need a UK government that is committed to doing what it says it will do on AI, not one that simply tells others how to do it.

Whitehall departments are not adequately transparent about the ways they use AI and algorithms, and use of the (perhaps overly complex) AI register is low, despite it being “a requirement for all government departments”.

As AI systems become increasingly embedded in social, economic, and political systems, we must ensure everyone has the necessary level of awareness and critical understanding to navigate an AI-transformed world in everyday life. Not only to use AI effectively, but to ensure that those responsible for AI development and deployment respect and enhance human dignity, rights, and democratic values.

We need to protect people who are excluded in life, or over-policed, without the freedom that being fully human requires; especially those who are marginalised, “the outliers” in society, often excluded by race, language, gender, age, health or disability from the biometric training data from which AI systems are built.

We need to protect our biometric data, our faces and voices, to be able to show up and speak up when it matters.

As the Pope summed up in his recent World Communications Day message, AI literacy must prioritise understanding “how algorithms shape our perception of reality, how AI biases work, what mechanisms determine the presence of certain content in our feeds, what the economic principles and models of the AI economy are and how they might change.”

The future of freedom in society in the UK, our humanity, our democracy, our trust, depend not on a handful of companies who strive for a brave new world, nor on AI infrastructure they are selling us well-packaged in hype. Our collective future depends on one digital Minister having the courage to take a new direction.

Policing thoughts, proactive technology, and the Online Safety Bill

“Former counter-terrorism police chief attacks Rishi Sunak’s Prevent plans”, reads a headline in today’s Guardian. Former counter-terrorism chief Sir Peter Fahy [...] said: “The widening of Prevent could damage its credibility and reputation. It makes it more about people’s thoughts and opinions.” Fahy said: “The danger is the perception it creates that teachers and health workers are involved in state surveillance.”

This article leaves out that today’s reality is already far ahead of the proposals, or the perception. School children and staff are already surveilled in these ways. Not only are the things people type, read or search for monitored, online and offline across the digital environment, but copies may be collected and retained by companies, and interventions made.

The products don’t only permit monitoring of trends in aggregated overviews of student activity, but also of the behaviours of individual students. And these can be deeply intrusive and sensitive when you are talking about self-harm, abuse, and terrorism.

(For more on the safety tech sector, often using AI in proactive monitoring, see my previous post (May 2021) The Rise of Safety Tech.)

Intrusion through inference and interventions

From 1 July 2015, all schools have been subject to the Prevent duty under section 26 of the Counter-Terrorism and Security Act 2015: in the exercise of their functions, to have “due regard to the need to prevent people from being drawn into terrorism”. While these products monitor far more than the remit of Prevent, many companies actively market online filtering, blocking and monitoring safety products as a way of meeting that duty in the digital environment. For example: “Lightspeed Filter™ helps you meet all of the Prevent Duty’s online regulations…”

Despite there being no obligation to date to fulfil this duty through technology, some companies’ way of selling such tools could be interpreted as a threat to schools that don’t use them. Like this example:

“Failure to comply with the requirements may result in intervention from the Prevent Oversight Board, prompt an Ofsted inspection or incur loss of funding.”

Such products may create and send real-time alerts to company or school staff when children attempt to reach sites, or type “flagged words” related to radicalisation or extremism, on any online platform.

Under the auspices of safeguarding in schools, through the data sharing and web monitoring of the Prevent programme, children may be labelled with terrorism or extremism labels, data which may be passed on to others or stored outside the UK without their knowledge. The drift in what is considered significant has been from terrorism into the now vaguer and broader terms of extremism and radicalisation; away from an assessment of intent and capability to act, towards interception and interventions over potentially insignificant vulnerabilities and inferred dispositions towards such ideas. This is not, as Fahy suggested of Sunak’s plans, potentially going to police thoughts. It is already doing so. Policing thoughts in the developing child, and holding them accountable in ways that are unforeseeable, is inappropriate and requires thorough investigation into its effects on children, including on their mental health.

But it’s important to understand that these libraries of thousands of words, ever-changing and in multiple languages, and what the systems look for and flag, often claiming to do so using Artificial Intelligence, go far beyond Prevent. ‘Legal but harmful’ is their bread and butter: self-harm, harm to others or from others.

While companies have no obligation to publish how the monitoring or flagging operates, what the words, phrases or blocked websites are, their error rates (false positives and false negatives), or the effects on children or school staff and their behaviour as a result, these companies have a great deal of influence over what gets inferred from what children do online, and over who decides what to act on.

Why does it matter?

Schools have normalised the premise that the systems they introduce should monitor activity outside the school network and outside school hours; and that strangers, or their private companies’ automated systems, should be involved in inferring or deciding what children are ‘up to’ before the school staff who know the children in front of them.

In a defenddigitalme report, The State of Data 2020, we included a case study on one company that has since been bought out, and bought again. As of August 2018, eSafe was monitoring approximately one million school children, plus staff, across the UK. A case study they used in their public marketing raised all sorts of questions about professional confidentiality and school boundaries, personal privacy, ethics, companies’ roles and technical capabilities, and the lack of any safety tech accountability.

“A female student had been writing an emotionally charged letter to her Mum using Microsoft Word, in which she revealed she’d been raped. Despite the device used being offline, eSafe picked this up and alerted John and his care team who were able to quickly intervene.”

Their then CEO told the House of Lords Communications Committee’s 2016 inquiry on Children and the Internet how the products do not only monitor children in school or during school hours:

“Bearing in mind we are doing this throughout the year, the behaviours we detect are not confined to the school bell starting in the morning and ringing in the afternoon, clearly; it is 24/7 and it is every day of the year. Lots of our incidents are escalated through activity on evenings, weekends and school holidays.”

Similar products offer a feature for capturing photos of users (pupils, while using the device being monitored), described as “common across most solutions in the sector” by this company:

When a critical safeguarding keyword is copied, typed or searched for across the school network, schools can turn on NetSupport DNA’s webcam capture feature (turned off by default) to capture an image (not a recording) of the user who triggered the keyword.

How many webcam photos of children have been taken by school staff or others through these systems, for what purposes, and kept by whom? In the U.S. in 2010, Lower Merion School District, Philadelphia, settled a lawsuit over using laptop webcams to take photos of students. Thousands of photos had been taken, even at home, out of hours, without students’ knowledge.

Who decides what does and does not trigger interventions across different products? In December 2017 alone, eSafe claims it added 2,254 words to its threat libraries.

Famously, Impero’s system even included the word “biscuit”, which they say is a term used to mean a gun. Their system was used by more than “half a million students and staff in the UK” in 2018. And students had better not talk about “taking a wonderful bath”. Currently there is no understanding or oversight of the accuracy of this kind of software, and its black-box decision-making is often trusted without openness to human question or correction.

Aside from how this range of different tools works, there are very basic questions about whether such policies and tools help or harm children at all. The UN Special Rapporteur’s 2014 report on children’s rights and freedom of expression stated:

“The result of vague and broad definitions of harmful information, for example in determining how to set Internet filters, can prevent children from gaining access to information that can support them to make informed choices, including honest, objective and age-appropriate information about issues such as sex education and drug use. This may exacerbate rather than diminish children’s vulnerability to risk.” (2014)

U.S. safety tech creates harms

Today in the U.S., the CDT published a report on school monitoring systems there, many of which are also used over here. The report revealed that 13 percent of students knew someone who had been outed as a result of student-monitoring software. Another conclusion the CDT draws is that monitoring is used for discipline more often than for student safety.

We don’t have that same research for the UK, but we’ve seen IT staff openly admit to using the webcam feature to take photos of young boys who are “mucking about” on the school library computer.

The Online Safety Bill scales up problems like this

The Online Safety Bill seeks to expand how such ‘behaviour identification technology’ can be used outside schools.

“Proactive technology include content moderation technology, user profiling technology or behaviour identification technology which utilises artificial intelligence or machine learning.” (p151 Online Safety Bill, August 3, 2022)

The “proactive technology requirement” is as yet rather open-ended, left to Ofcom in Codes of Practice, but the scope creep of such AI-based tools has become ever more intrusive in education. ‘Legal but harmful’ is decided by companies, the IWF, and any number of opaque third parties whose processes and decision-making we know little about. It’s important not to conflate filtering and blocking lists of ‘unsuitable’ websites that can be accessed in schools, with monitoring and tracking of individual behaviours.

‘Technological developments that have the capacity to interfere with our freedom of thought fall clearly within the scope of “morally unacceptable harm”’, according to Alegre (2017), and yet this individual interference is at the very core of school safeguarding tech and policy, by design.

In 2018, the ‘lawful but harmful’ list of activities in the Online Harms White Paper was nearly identical to the terms used by school safety tech companies. The Bill now appears to be trying to create a new legitimate basis for these practices, more about underpinning a developing market than supporting children’s safety or rights.

Chilling speech is itself controlling content

While much of the debate about the Bill has concerned the free speech impacts of content removal, there has been less about what is unwritten: how it will operate to prevent speech and participation in the digital environment for children. The chilling effect of surveillance on access and participation online is well documented; younger people and women are more likely to be negatively affected (Penney, 2017). The chilling effect on thought and opinion is worsened by tools of this type, which trigger an alert even when what is typed is quickly deleted, or is never sent or shared. Thoughts are no longer private.

End-to-end encryption on private messaging platforms is simply worked around by these kinds of tools, trading security for claims of children’s safety. Anything on screen may be read in the clear by some systems, even capturing passwords and bank details.

Graham Smith has written, “It may seem like overwrought hyperbole to suggest that the [Online Harms] Bill lays waste to several hundred years of fundamental procedural protections for speech. But consider that the presumption against prior restraint appeared in Blackstone’s Commentaries (1769). It endures today in human rights law. That presumption is overturned by legal duties that require proactive monitoring and removal before an independent tribunal has made any determination of illegality.”

More than this, there is no determination of illegality in ‘legal but harmful’ activity: it is opinion. The government is prone to argue that “nothing in the Bill says X…”, but you need to understand the context: such proactive behavioural monitoring tools work through threat, and the resultant chilling effect imposes unwritten control. This Bill does not create a safer digital environment; it creates threat models for users and companies, to control how we think and behave.

What do children and parents think?

Young people’s own views that don’t fit the online harms narrative have been ignored by Westminster scrutiny committees. A 2019 survey by the Australian eSafety Commissioner found that over half (57%) of child respondents were uncomfortable with background monitoring processes, and 43% were unsure about these tools’ effectiveness in ensuring online safety.

And what of the role of parents? Article 3(2) of the UNCRC says: “States Parties undertake to ensure the child such protection and care as is necessary for his or her wellbeing, taking into account the rights and duties of his or her parents, legal guardians, or other individuals legally responsible for him or her, and, to this end, shall take all appropriate legislative and administrative measures.” (my emphasis)

In 2018, 84% of 1,004 parents in England whom we polled through Survation agreed that children and guardians should be informed how this monitoring activity works, and wanted to know what the keywords were. (We didn’t ask whether it should happen at all.)

The wide-ranging nature [of general monitoring], rather than targeted and proportionate interference, has previously been judged to be in breach of law and a serious interference with rights. Neither policy makers nor companies should assume parents want safety tech companies to remove autonomy, or to make inferences about our children’s lives. Parents, if asked, reject the secrecy in which it happens today, and demand transparency and accountability. Teachers can feel anxious talking about it at all. There are no clear routes for error correction; in fact corrections are not made, because some claim that staff building up profiles should not delete anything, and should ignore claims of errors, in case a pattern of behaviour is missed. Yet there are no independent assessments available to evidence that these tools work, or are worth the costs. There are no routes for redress, and no responsibility is taken for tech-made mistakes. None of this makes children safer online.

Before broadening out where such monitoring tools are used, their use and effects on school children need to be understood and openly debated. Policy makers may justify turning a blind eye to harms created by one set of technology providers, while claiming that only the other tech providers are the problem, because it suits political agendas or industry aims; but children’s rights and wellbeing should not be sacrificed in doing so. Opaque, unlawful and unsafe practice must stop. A quid pro quo for getting access to millions of children’s intimate behaviour should be transparent access to product workings, and acceptance of standards for universally safe, accountable practice. Families need to know what’s recorded, and to have routes for redress when a daughter researching ‘cliff walks’ gets flagged as a suicide risk, or an environmentally interested teenage son searching for information on ‘black rhinos’ is asked about his potential gang membership. Tools sold as solutions to online harms shouldn’t create more harms like these reported real-life case studies.

Teachers are ‘involved in state surveillance’, as Fahy put it, through Prevent. Sunak was wrong to point away from the threats of the far right in his comments. But the far broader, unspoken surveillance of children’s personal lives, behaviours and thoughts through general monitoring in schools, and what will be imposed more broadly through the Online Safety Bill, should concern us far more than what was said.

“Michal Serzycki” Data Protection Award 2021

It is a privilege to be a joint-recipient in the fourth year of the “Michal Serzycki” Data Protection Award, and I thank the Data Protection Authority in Poland (UODO) for the recognition of work for the benefit of promoting data protection values and the right to privacy.

I appreciate the award in particular as the founder of an NGO, and the indirect acknowledgement of the value of NGOs to be able to contribute to public policy, including openness towards international perspectives, standards, the importance of working together, and our role in holding the actions of state authorities and power to account, under the rule of law.

The award is shared with Mrs Barbara Gradkowska, Director of the Special School and Educational Center in Zamość, whose work in Poland has been central to the initiative, Your Data — Your Concern, an educational Poland-wide programme for schools that is supported and recognized by the UODO. It offers support to teachers in vocational training centres, primary, middle and high schools related to personal data protection and the right to privacy in education.

And it is also shared with Mr Maciej Gawronski, Polish legal advisor and authority in data protection, information technology, cloud computing, cybersecurity, intellectual property and business law.

The UODO has long been a proactive advocate in the schools sector in Poland for the protection of children’s data rights, including recent enforcement after it found the processing of children’s biometric data, using fingerprint readers for access to a school canteen, unlawful, and after ensuring the destruction of pupil data obtained unlawfully.

In the rush to remote learning in 2020, in response to school closures under COVID-19, the UODO warmly received our collective international call for action: a letter in which over thirty organisations worldwide called on policy makers, data protection authorities and technology providers to take action, and encouraged international collaboration to protect children around the world during the rapid adoption of digital educational technologies (“edTech”). The UODO issued statements and a guide on school IT security and data protection.

In September 2020, I worked with their Data Protection Office, at a distance, to deliver a seminar for teachers on remote education.

The award also acknowledges my part in the development of the Guidelines on Children’s Data Protection in an Education Setting adopted in November 2020, working in collaboration with country representatives at the Council of Europe Committee for Convention 108, as well as with observers, and the Committee’s incredible staff.

2020 was a difficult year for people around the world, under COVID-19, to uphold human rights and hold the space to push back on encroachment, especially for NGOs, and in community struggles from the Black Lives Matter movement, to environmental action, to UK students on the streets of London protesting algorithmic unfairness. In Poland, the direction of travel is to reduce women’s rights in particular. Poland’s ruling Law and Justice (PiS) party has been accused of politicising the constitutional tribunal and using it to push through its own agenda on abortion, and the government appears set on undermining the rule of law, creating a ‘chilling effect’ for judges. The women of Poland are again showing the world what it means, and what it can cost, to lose progress made.

In England at defenddigitalme, we are waiting to hear later this month what our national Department for Education will do to better protect millions of children’s rights in the management of national pupil records, after the audit and intervention by our data protection regulator, the ICO. Among other sensitive content, the National Pupil Database holds sexual orientation data on almost 3.2 million students’ named records, and religious belief data on 3.7 million.

defenddigitalme is a call to action to protect children’s rights to privacy across the education sector in England, and beyond. Data protection has a role to play within the broader rule of law to protect and uphold the right to privacy, to prevent state interference in private and family life, and in the protection of the full range of human rights necessary in a democratic society. Fundamental human rights must be universally protected to foster human flourishing, to protect the personal dignity and freedoms of every individual, and to promote social progress and better standards of life in larger freedoms.


The award was announced at the conference, “Real personal data protection in remote reality,” organised by the Personal Data Protection Office (UODO) as part of the celebration of the 15th Data Protection Day on 28th January 2021, with the award ceremony held on its eve in Warsaw.

Thoughts on the Online Harms White Paper (I)

“Whatever the social issue we want to grasp – the answer should always begin with family.”

Not my words, but David Cameron’s. Just five years ago, Conservative policy was all about “putting families at the centre of domestic policy-making.”

Debate on the Online Harms White Paper, thanks in part to media framing of the department’s own making, is almost all about children. But I struggle with a debate that leaves out our role as parents almost entirely, other than as bereft or helpless victims ourselves.

I am conscious, wearing my other hat at defenddigitalme, that not all families are the same, and not all children have families. Yet it seems counter to conservative values, for a party that traditionally places the family at the centre of policy, to leave parents out, or to absolve them of responsibility for their children’s actions and care online.

Parental responsibility cannot be outsourced to tech companies, nor can we simply accept that it’s too hard to police our children’s phones. If we as parents are concerned about harms, it is our responsibility to enable access to what is not harmful, and to be aware of, and educate ourselves and our children about, what is. We are aware of what they read in books. I cast an eye over what they borrow or buy. I play a supervisory role.

Brutal as it may be, the Internet is not responsible for suicide. It’s just not that simple. We cannot bring children back from the dead. We certainly can, as society and policy makers, try to create the conditions in which harms are not normalised and do not become more common, and seek to reduce risk. But few would suggest social media is the single source of children’s mental health issues.

What policy makers are trying to regulate is in essence, not a single source of online harms but 2.1 billion users’ online behaviours.

It follows that to see social media as a single source of attributable fault is equally misplaced. A one-size-fits-all solution is going to be flawed, but everyone seems to have accepted its inevitability.

So how will we make the least bad law?

If we are to have sound law that can be applied around what is lawful,  we must reduce the substance of debate by removing what is already unlawful and has appropriate remedy and enforcement.

Debate must also try to be free from emotive content and language.

I strongly suspect the language around ‘our way of life’ and ‘values’ in the White Paper comes from the Home Office. So while it sounds fair and just, we must remember reality in the background of TOEIC, of Windrush, of children removed from school because their national records are being misused beyond educational purposes. The Home Office is no friend of child rights, and does not foster the societal values that break down discrimination and harm. It instead creates harms of its own making, and division by design.

I’m going to quote Graham Smith, for I cannot word it better.

“Harms to society, feature heavily in the White Paper, for example: content or activity that:

“threatens our way of life in the UK, either by undermining national security, or by reducing trust and undermining our shared rights, responsibilities and opportunities to foster integration.”

Similarly:

“undermine our democratic values and debate”;

“encouraging us to make decisions that could damage our health, undermining our respect and tolerance for each other and confusing our understanding of what is happening in the wider world.”

This kind of prose may befit the soapbox or an election manifesto, but has no place in or near legislation.”

[Cyberleagle, April 18, 2019,Users Behaving Badly – the Online Harms White Paper]

My key concern in this area is that from a feeling of ‘it is all awful’ stems the sense that ‘any regulation will be better than now’, and with it comes a real risk of entrenching current practices that are not better than now, and in fact need fixing.

More monitoring

The first is today’s general monitoring of school children’s Internet content for risks and harms, which creates unintended consequences and very real harms of its own — at the moment, without oversight.

In yesterday’s House of Lords debate, Lord Haskel, said,

“This is the practicality of monitoring the internet. When the duty of care required by the White Paper becomes law, companies and regulators will have to do a lot more of it. ” [April 30, HOL]

The Brennan Center for Justice yesterday published its research on US schools’ spending on social media monitoring software from 2013-18, and highlighted some of the issues:

“Aside from anecdotes promoted by the companies that sell this software, there is no proof that these surveillance tools work [compared with other practices]. But there are plenty of risks. In any context, social media is ripe for misinterpretation and misuse.” [Brennan Center for Justice, April 30, 2019]

That monitoring software focuses on two things —

a) seeing children through the lens of terrorism and extremism, and b) harms caused by them to others, or as victims of harms by others, or self-harm.

It is the near same list of ‘harms’ topics that the White Paper covers. Co-driven by the same department interested in it in schools — the Home Office.

These concerns are set in the context of the direction of travel of law and policy making, and its own loosening of accountability and process.

It was preceded by a House of Commons discussion on Social Media and Health, led by the former Minister for Digital, Culture, Media and Sport, who seems to feel more at home in that sphere than in health.

His unilateral award of funds to the Samaritans for work with Google and Facebook on a duty of care, while the very same is still under public consultation, is surprising to say the least.

But it was his response to this question which points to the slippery slope down which such regulation may lead. Freedom of speech champions should be most concerned not only by what is potentially in any legislation ahead, but by the direction of travel of the debate around it.

“Will he look at whether tech giants such as Amazon can be brought into the remit of the Online Harms White Paper?”

He replied that “Amazon sells physical goods for the most part and surely has a duty of care to those who buy them, in the same way that a shop has a responsibility for what it sells. My hon. Friend makes an important point, which I will follow up.”

Mixed messages

The Center for Democracy and Technology recommended in its 2017 report, Mixed Messages? The Limits of Automated Social Media Content Analysis, that the use of automated content analysis tools to detect or remove illegal content should never be mandated in law.

Debate so far has demonstrated broad gaps between what is wanted, what is known, and what is possible. If behaviours are to be stopped because they are undesirable rather than unlawful, we open up a whole can of worms unless it is done with the greatest attention to detail.

Lord Stevenson and Lord McNally both suggested that pre-legislative scrutiny of the Bill, and more discussion would be positive. Let’s hope it happens.

Here’s my personal first reflections on the Online Harms White Paper discussion so far.

Six suggestions:

Suggestion one: 

The Law Commission Review, mentioned in the House of Lords debate, may provide what I had been thinking of crowdsourcing, and now may not need to: a list of the laws that discussion of the Online Harms White Paper reaches into, so that we can compare what is needed in debate against what is being sucked in. We should aim to curtail emotive discussion of the broad risks and threats that people experience online. This would allow the themes already covered in law to be set aside, and the focus to fall on the gaps. It would make for much tighter and more effective legislation. For example, the Crown Prosecution Service offers Guidelines on prosecuting cases involving communications sent via social media, but a wider list of law is needed.

Suggestion two:
After (1) defining what legislation is lacking, definitions must be very clear, narrow, and consistent across other legislation. Not for the regulator to determine ad-hoc and alone.

Suggestion three:
If children’s rights are to be so central in discussion of this paper, then their wider rights, including privacy and participation, access to information and freedom of speech, must be included in the debate. Academic, research-based evidence of children’s experience online should inform the making of the regulations.

Suggestion four:
Internet surveillance software in schools should be publicly scrutinised. A review should establish the efficacy, boundaries and oversight of policy and practice regarding Internet monitoring for harms, before embedding even more of it. Boundaries should be put into legislation for clarity and consistency.

Suggestion five:
Terrorist activity or child sexual exploitation and abuse (CSEA) online are already unlawful and should not need additional Home Office powers. Great caution must be exercised here.

Suggestion six: 
Legislation could and should encapsulate accountability and oversight for micro-targeting and algorithmic abuse.


More detail behind my thinking, follows below, after the break. [Structure rearranged on May 14, 2019]


Continue reading Thoughts on the Online Harms White Paper (I)

Policy shapers, product makers, and profit takers (1)

In 2018, ethics became the new fashion in UK data circles.

The launch of the Women Leading in AI principles of responsible AI has prompted me to try and finish and post these thoughts, which have been on my mind for some time. If two posts of around 1,000 words each are tl;dr for you, then in summary, we need more action on:

  • Ethics as a route to regulatory avoidance.
  • Framing AI and data debates as a cost to the Economy.
  • Reframing the debate around imbalance of risk.
  • Challenging the unaccountable and the ‘inevitable’.

And in the next post on:

  • Corporate Capture.
  • Corporate Accountability, and
  • Creating Authentic Accountability.

Ethics as a route to regulatory avoidance

In 2019, the calls to push aside old wisdoms for new, for everyone to focus on the value-laden words of ‘innovation’ and ‘ethics’, appear an ever louder attempt to reframe regulation and law as barriers to business, and to ask that they be cast aside.

On Wednesday evening, at the launch of the Women Leading in AI principles of responsible AI, the chair of the CDEI said in closing, he was keen to hear from companies where, “they were attempting to use AI effectively and encountering difficulties due to regulatory structures.”

In IBM’s own words to government recently,

“A rush to further regulation can have the effect of chilling innovation and missing out on the societal and economic benefits that AI can bring.”

The vague threat is very clear: if you regulate, you’ll lose. But the societal and economic benefits are just as vague.

So far, many talking about ethics are trying to find a route to regulatory avoidance. ‘We’ll do better,’ they promise.

In Ben Wagner’s recent paper, Ethics as an Escape from Regulation: From ethics-washing to ethics-shopping, he asks how to ensure this does not become the default engagement with ethical frameworks or rights-based design. He sums up: “In this world, ‘ethics’ is the new ‘industry self-regulation’.”

Perhaps it’s ingenious PR to make sure that what is in effect self-regulation, right across the business model, looks like it comes imposed from others, from the very bodies set up to fix it.

But, as I consider in part 2, is this healthy for UK public policy, and for the future not of an industry sector but of a whole technology, when it comes to AI?

Framing AI and data debates as a cost to the Economy

Companies, organisations and individuals arguing against regulation frame the debate as if regulation would come at a great cost to society and the economy. But we rarely hear what effect they expect on their own company, or what cost/benefit they expect for themselves. It is disingenuous to have only part of that conversation. In fact the AI debate would be richer were it included. If companies think their innovation or profits are at risk from non-use, or from regulated use, and there is risk to the national good associated with these products, we should be talking about all of that.

And in addition, we can talk about use and non-use in society. Too often, the whole debate is intangible. Show me real costs, real benefits. Real risk assessments. Real explanations that speak human. Industry should show society what’s in it for them.

You don’t want it to ‘turn out like GM crops’? Then learn their lessons on transparency, trustworthiness, and avoid the hype. And understand that sometimes there is simply tech people do not want.

Reframing the debate around imbalance of risk

And while we often hear about the imbalance of power associated with using AI, we also need to talk about the imbalance of risk.

While a small false positive rate for a company product may be a great success for them, or for a Local Authority buying the service, it might at the same time, mean lives forever changed, children removed from families, and individual reputations ruined.

And where company owners may see no risk from a product they assure us is safe, there are intangible risks that need to be factored in, for example in education, where a child’s learning pathway is determined by patterns of behaviour, and where tools shape individualised learning as well as the model of education.

Companies may change business model, ownership, and move on to other sectors after failure. But with the levels of unfairness already felt in the relationship between the citizen and State — in programmes like Troubled Families, Universal Credit, Policing, and Prevent — where use of algorithms and ever larger datasets is increasing, long term harm from unaccountable failure will grow.

Society needs a rebalance of the system urgently to promote transparent fairness in interactions, including but not only those with new applications of technology.

We must find ways to reframe how this imbalance of risk is assessed, and is distributed between companies and the individual, or between companies and state and society, and enable access to meaningful redress when risks turn into harm.

If we are to do that, we need first to separate truth from hype, public good from self-interest and have a real discussion of risk across the full range from individual, to state, to society at large.

That’s not easy against a non-neutral backdrop and scant sources of unbiased evidence and corporate capture.

Challenging the unaccountable and the ‘inevitable’.

In 2017 the Care Quality Commission reported on online services in the NHS, and found serious concerns about unsafe and ineffective care. It has a cross-regulatory working group.

By contrast, no one appears to oversee that risk, or the embedded use of automated tools in decision-making or decision support, in children’s services or education. These are areas where AI, cognitive behavioural science and neuroscience are already in use, without ethical approval, without parental knowledge, and without any transparency.

Meanwhile, as all this goes on, many academics are busy debating how to fix algorithmic bias, accountability and transparency.

Few are challenging the narrative of the ‘inevitability’ of AI.

Julia Powles and Helen Nissenbaum recently wrote that many of these current debates are an academic distraction, removed from reality. It is underappreciated how deeply these tools are already embedded in UK public policy. “Trying to “fix” A.I. distracts from the more urgent questions about the technology. It also denies us the possibility of asking: Should we be building these systems at all?”

Challenging the unaccountable and the ‘inevitable’ is the title of the conclusion of the Women Leading in AI report on principles, and makes me hopeful.

“There is nothing inevitable about how we choose to use this disruptive technology. […] And there is no excuse for failing to set clear rules so that it remains accountable, fosters our civic values and allows humanity to be stronger and better.”

[1] Powles, Nissenbaum, 2018, The Seductive Diversion of ‘Solving’ Bias in Artificial Intelligence, Medium

Next: Part 2 – Policy shapers, product makers, and profit takers, on:

  • Corporate Capture.
  • Corporate Accountability, and
  • Creating Authentic Accountability.

The power of imagination in public policy

“A new, a vast, and a powerful language is developed for the future use of analysis, in which to wield its truths so that these may become of more speedy and accurate practical application for the purposes of mankind than the means hitherto in our possession have rendered possible.” [on Ada Lovelace, The First Tech Visionary, New Yorker, 2013]

What would Ada Lovelace have argued for in today’s AI debates? I think she may have used her voice not only to call for the good use of data analysis, but for her second strength: the power of her imagination.

James Ball recently wrote in The European [1]:

“It is becoming increasingly clear that the modern political war isn’t one against poverty, or against crime, or drugs, or even the tech giants – our modern political era is dominated by a war against reality.”

My overriding takeaway from three days spent at the Conservative Party Conference this week was similar. It reaffirmed the title of a school debate I lost at age 15: ‘We only believe what we want to believe.’

James writes that it is, “easy to deny something that’s a few years in the future“, and that Conservatives, “especially pro-Brexit Conservatives – are sticking to that tried-and-tested formula: denying the facts, telling a story of the world as you’d like it to be, and waiting for the votes and applause to roll in.”

These positions are not confined to one party’s politics, or speeches of future hopes, but define perception of current reality.

I spent a lot of time listening to MPs. To Ministers, to Councillors, and to party members. At fringe events, in coffee queues, on the exhibition floor. I had conversations pressed against corridor walls as small press-illuminated swarms of people passed by with Queen Johnson or Rees-Mogg at their centre.

In one panel I heard a primary school teacher deny that child poverty really exists, or affects learning in the classroom.

In another, in passing, a digital Minister suggested that Pupil Referral Units (PRU) are where most of society’s ills start, but as a Birmingham head wrote this week, “They’ll blame the housing crisis on PRUs soon!” and “for the record, there aren’t gang recruiters outside our gates.”

This is no tirade on the failings of public policy makers, however. While it is easy to suspect malicious intent when you are at, or feel, the sharp end of policies which do harm, success is subjective.

It is clear that an overwhelming sense of self-belief exists in those responsible, in the intent of any given policy to do good.

Where policies include technology, this is underpinned by a self-reaffirming belief in its power. Power waiting to be harnessed by government and the public sector. Even more appealing where it is sold as a cost-saving tool in cash-strapped councils. Many that have cut away human staff are now trying to use machine power to make decisions. Some of the unintended consequences of taking humans out of the process are catastrophic for human rights.

Sweeping human assumptions behind such thinking on social issues and their causes, are becoming hard coded into algorithmic solutions that involve identifying young people who are in danger of becoming involved in crime using “risk factors” such as truancy, school exclusion, domestic violence and gang membership.

The disconnect between perception of risk, the reality of risk, and real harm, whether perceived or felt from these applied policies in real-life, is not so much, ‘easy to deny something that’s a few years in the future‘ as Ball writes, but a denial of the reality now.

Concerningly, there is a lack of imagination of what real harms look like. There is no discussion of the fact that sometimes these predictive policies have no positive effect, or even a negative one, and make things worse.

I’m deeply concerned that there is an unwillingness to recognise any failures in current data processing in the public sector, particularly at scale, and where it regards the well-known poor quality of administrative data. Or to be accountable for its failures.

Existing harms to individuals are perceived as outliers. Any broad sweep of harms across a policy like Universal Credit seems perceived as political criticism, which makes the measurable failures less meaningful, less real, and less necessary to change.

There is a worrying growing trend of finger-pointing exclusively at others’ tech failures instead. In particular, social media companies.

Imagination and mistaken ideas are reinforced where the idea is plausible, and shared. An oft-heard and self-affirming belief was repeated in many fora between policy makers, media and NGOs regarding children’s online safety: “There is no regulation online.” In fact, much that applies offline applies online. The Crown Prosecution Service Social Media Guidelines are a good place to start. [2] But no one discusses where children’s lives may be put at risk, or made less safe, through the use of state information about them.

Policy makers want data to give us certainty. But many uses of big data and new tools appear to do little more than quantify moral fears, and yet still guide real-life interventions in real lives.

Child abuse prediction, and school exclusion interventions should not be test-beds for technology the public cannot scrutinise or understand.

In one recent UK trial attempting to predict exclusion, a research project running from 2013-16 linked the school records of 800 children in 40 London schools with Metropolitan Police arrest records for all the participants. It found the interventions created no benefit, and may have caused harm. [3]

“Anecdotal evidence from the EiE-L core workers indicated that in some instances schools informed students that they were enrolled on the intervention because they were the “worst kids”.”

“Keeping students in education, by providing them with an inclusive school environment, which would facilitate school bonds in the context of supportive student–teacher relationships, should be seen as a key goal for educators and policy makers in this area,” researchers suggested.

But policy makers seem intent to use systems that tick boxes, and create triggers to single people out, with quantifiable impact.

Some of these systems are known to be poor, or harmful.

When it comes to predicting and preventing child abuse, there is concern about the harms seen in US programmes ahead of us, such as Pittsburgh’s, and Chicago’s, which has been scrapped.

The Illinois Department of Children and Family Services ended a high-profile program that used computer data mining to identify children at risk for serious injury or death after the agency’s top official called the technology unreliable, and children still died.

“We are not doing the predictive analytics because it didn’t seem to be predicting much,” DCFS Director Beverly “B.J.” Walker told the Tribune.

Many professionals in the UK share these concerns. How long will they be ignored and children be guinea pigs without transparent error rates, or recognition of the potential harmful effects?

Helen Margetts, Director of the Oxford Internet Institute and Programme Director for Public Policy at the Alan Turing Institute, suggested at the IGF event this week that stopping the use of these AI tools in the public sector is impossible. We could not decide that, “we’re not doing this until we’ve decided how it’s going to be. It can’t work like that.” [45:30]

Why on earth not? At least for these high risk projects.

How long should children be the test subjects of machine learning tools at scale, without transparent error rates, audit, or scrutiny of their systems and understanding of unintended consequences?

Is harm to any child a price you’re willing to pay to keep using these systems to perhaps identify others, while we don’t know?

Is there an acceptable positive versus negative outcome rate?

The evidence so far of AI in child abuse prediction is not clearly showing that more children are helped than harmed.

Surely it’s time to stop just thinking, and to demand action on this.

It doesn’t take much imagination to see the harms. Safe technology, and the safe use of data, do not prevent imagination or innovation employed for good.

If we continue to ignore views from Patrick Brown, Ruth Gilbert, Rachel Pearson and Gene Feder, Charmaine Fletcher, Mike Stein, Tina Shaw and John Simmonds, I want to know why.

Where you are willing to sacrifice certainty of human safety for the machine decision, I want someone to be accountable for why.

 


References

[1] James Ball, The European, Those waging war against reality are doomed to failure, October 4, 2018.

[2] Thanks to Graham Smith for the link. “Social Media – Guidelines on prosecuting cases involving communications sent via social media. The Crown Prosecution Service (CPS) , August 2018.”

[3] Obsuth, I., Sutherland, A., Cope, A. et al. (2017) London Education and Inclusion Project (LEIP): Results from a Cluster-Randomized Controlled Trial of an Intervention to Reduce School Exclusion and Antisocial Behavior. Journal of Youth and Adolescence 46: 538. https://doi.org/10.1007/s10964-016-0468-4

Data Protection Bill 2017: summary of source links

The Data Protection Bill [Exemptions from GDPR] was introduced to the House of Lords on 13 September 2017
*current status April 6, 2018* Report Stage House of Commons — dates, to be announced
Debates

Dates for all stages of the passage of the Bill, including links to the debates.

EU GDPR Progress Overviews

Updates of GDPR age of consent mapping: Better Internet for Kids

Bird and Bird GDPR Tracker [Shows how and where GDPR has been supplemented locally, highlighting where Member States have taken the opportunities available in the law for national variation.]

ISiCo Tracker (Site in German language) with links.

UK Data Protection Bill Overview
  • Data Protection Bill Explanatory Notes [PDF], 1.2MB, 112 pages
  • Data Protection Bill Overview Factsheet [PDF], 229KB, 4 pages
  • Data Protection Bill Impact Assessment [PDF], 123KB, 5 pages
The General Data Protection Regulation

The General Data Protection Regulation [PDF] 959KB, 88 pages

Related Factsheets
  • General Processing Factsheet, [PDF], 141KB, 3 pages
  • Law Enforcement Data Processing Factsheet [PDF], 226KB, 3 pages
  • National Security Data Processing Factsheet [PDF], 231KB, 4 pages
These parts of the bill concern the function of the Information Commissioner and her powers of enforcement
  • Information Commissioner and Enforcement Factsheet [PDF] 223KB, 4 pages
  • Data sharing code of practice [PDF]
GDPR possible derogations

Source credit Amberhawk: Chris Pounder

Member State law can allow modifications to Articles 4(7), 4(9),  6(2), 6(3)(b), 6(4),  8(1), 8(3), 9(2)(a), 9(2)(b), 9(2)(g), 9(2)(h), 9(2)(i), 9(2)(j), 9(3), 9(4),  10,  14(5)(b), 14(5)(c), 14(5)(d),  17(1)(e), 17(3)(b), 17(3)(d), 22(2)(b),  23(1)(e),  26(1),  28(3), 28(3)(a), 28(3)(g), 28(3)(h), 28(4),  29,  32(4),  35(10), 36(5),  37(4),  38(5),  49(1)(g), 49(4), 49(5),  53(1), 53(3),  54(1), 54(2),  58(1)(f), 58(2), 58(3), 58(4), 58(5),  59,  61(4)(b),  62(3),  80,  83(5)(d), 83(7), 83(8),  85,  86,  87,  88,  89,  and 90 of the GDPR.

Other relevant significant connected legislation
  • The Police and Crime Directive [web link] 
  • EU Charter of Fundamental Rights – European Commission [link]
  • The proposed Regulation on Privacy and Electronic Communications [web link]
  • Draft modernised convention for the protection of individuals with regard to the processing of personal data (convention 108)
Data Protection Bill Statement of Intent
  • DCMS Statement of Intent [PDF] 229KB, 4 pages
  • Letter to Stakeholders [PDF] 184KB, 2 pages 7 Aug 2017
Other links on derogations and data processing
  • On Adequacy: Data transfers between the EU and UK post Brexit? Andrew D. Murray Article [link]
  • Two Birds [web link]
  • ICO legal basis for processing and children [link]
  • Public authorities under the Freedom of Information Act (ICO) Public authorities under FOIA 120160901 Version: 2.2 [link] 
  • ICO information for education [link]

Blogs on key issues [links in date of post]

  • Amberhawk
    • DP Bill’s new immigration exemption can put EU citizens seeking a right to remain at considerable disadvantage [09.10] re: Schedule 2, paragraph 4, new Immigration exemption.
    • On Adequacy:  Draconian powers in EU Withdrawal Bill can negate new Data Protection law [13.09]
    • Queen’s Speech, and the promised “Data Protection (Exemptions from GDPR) Bill [29.06]
  • defenddigitalme
    • Response to the Data Protection Bill debate and Green Paper on Online Strategy [11.10.2017]
  • Jon Baines
    • Serious DCMS error about consent data protection [11.08]
  • Eoin O’Dell
    • The UK’s Data Protection Bill 2017: repeals and compensation – updated: On DCMS legislating for Art 82 GDPR. [14.09]

Data Protection Bill Consultation: General Data Protection Regulation Call for Views on exemptions
  • New Data Protection Bill: Our planned reforms [PDF] 952KB, 30 pages
  • London Economics: Research and analysis to quantify benefits arising from personal data rights under the GDPR [PDF] 3.76MB 189 pages
  • ICO response to DCMS [link]
  • ESRC joint submissions on EU General Data Protection Regulation in the UK – Wellcome led multi org submission plus submission from British Academy / Erdos [link]
  • defenddigitalme response to the DCMS [link]
Minister for Digital Matt Hancock’s keynote address to the UK Internet Governance Forum, 13 September [link].

“…the Data Protection Bill, which will bring our data protection regime into the twenty first century, giving citizens more sovereignty over their data, and greater penalties for those who break the rules.

“With AI and machine learning, data use is moving fast. Good use of data isn’t just about complying with the regulations, it’s about the ethical use of data too.

“So good governance of data isn’t just about legislation – as important as that is – it’s also about establishing ethical norms and boundaries, as a society.  And this is something our Digital Charter will address too.”

Media links

14.09 BBC UK proposes exemptions to Data Protection Bill


Edits:

11.10.2017 to add links to the Second Reading in the House of Lords

Failing a generation is not what post-Brexit Britain needs

Basically Britain needs Prof. Brian Cox shaping education policy:

“If it were up to me I would increase pay and conditions and levels of responsibility and respect significantly, because it is an investment that would pay itself back many times over in the decades to come.”

Don’t use children as ‘measurement probes’ to test schools

What effect does using school exam results to reform the school system have on children? And what effect does it have on society?

Last autumn Ofqual published a report and their study on consistency of exam marking and metrics.

The report concluded that half of pupils in English Literature, as an example, are not awarded the “correct” grade on a particular exam paper due to marking inconsistencies and the design of the tests.
Given the complexity and sensitivity of the data, Ofqual concluded, it is essential that the metrics stand up to scrutiny and that there is a very clear understanding behind the meaning and application of any quality of marking.  They wrote that, “there are dangers that information from metrics (particularly when related to grade boundaries) could be used out of context.”

Context and accuracy are fundamental to the value of and trust in these tests. And at the moment, trust is not high in the system behind it. There must also be trust in policy behind the system.

This summer, two sets of UK school tests will come under scrutiny: GCSEs and SATs. The goal posts are moving for children and schools across the country. And it’s bad for children and bad for Britain.

Grades A-G will be swapped for numbers 1-9

15-16 year olds sitting GCSEs will see their exams shift to a numerical system, scored from the highest Grade 9 down to Grade 1, with the three top grades replacing the current A and A*. The alphabetical grading system will be fully phased out by 2019.

The plans intended that roughly the same proportion of students as achieved a Grade C would be awarded the new Grade 4, and as Schools Week reported: “There will be two GCSE pass rates in school performance tables.”

One will measure grade 5s or above, and this will be called the ‘strong’ pass rate. And the other will measure grade 4s or above, and this will be the ‘standard’ pass rate.

Laura McInerney summed up, “in some senses, it’s not a bad idea as it will mean it is easier to see if the measures are comparable. We can check if the ‘standard’ rate is better or worse over the next few years. (This is particularly good for the DfE who have been told off by the government watchdog for fiddling about with data so much that no one can tell if anything has worked anymore).”
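For readers who want the two measures side by side: a minimal sketch, in Python, of how the ‘standard’ (grade 4 or above) and ‘strong’ (grade 5 or above) pass rates described above would be computed. The grade list here is made up for illustration, not real data.

```python
def pass_rates(grades):
    """Return (standard, strong) pass rates for a list of numerical GCSE grades 1-9.

    'standard' counts grades 4 and above; 'strong' counts grades 5 and above.
    """
    n = len(grades)
    standard = sum(g >= 4 for g in grades) / n  # the 'standard' pass rate
    strong = sum(g >= 5 for g in grades) / n    # the 'strong' pass rate
    return standard, strong

# Illustrative cohort of ten pupils (invented grades).
example = [9, 7, 5, 5, 4, 4, 3, 2, 1, 1]
standard, strong = pass_rates(example)
print(f"standard: {standard:.0%}, strong: {strong:.0%}")  # prints "standard: 60%, strong: 40%"
```

Publishing both rates in performance tables is what would allow the comparability check McInerney describes: the ‘standard’ rate can be tracked against historic Grade C pass rates over the next few years.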

There’s plenty of confusion among parents about how the numerical grading system will work. The confusion you can gauge in playground conversations is also reflected nationally, in a more measurable way.

Market research in a range of audiences – including businesses, head teachers, universities, colleges, parents and pupils – found that just 31 per cent of secondary school pupils and 30 per cent of parents were clear on the new numerical grading system.

So that’s a change in the GCSE grading structure. But why? If more differentiators are needed, why not add one or two more letters and shift grade boundaries? A policy need for these changes is unclear.

Machine marking is training on ten-year-olds

I wonder if any of the shift to numerical marking, is due in any part to a desire to move GCSEs in future to machine marking?

This year, ten and eleven year olds, children in their last year of primary school, will have their SATs tests computer marked.

That’s everything in maths and English. Not multiple choice papers or one word answers, but full written responses. If their f, b or g doesn’t look like the correct  letter in the correct place in the sentence, then it gains no marks.

Parents are concerned about children whose handwriting is awful but whose knowledge is not. How well can they hope to be assessed? If exams are increasingly machine marked out of sight, with many sent to India, where is our oversight of the marking process and its accuracy?

The concerns I’ve heard simply among local parents and staff, seem reflected in national discussions and the assessor, Oftsed. TES has reported Ofsted’s most senior officials as saying that the inspectorate is just as reluctant to use this year’s writing assessments as it was in 2016. Teachers and parents locally are united in feeling it is not accurate, not fair, and not right.

The content is also to be tougher.

How will we know what is being accurately measured and the accuracy of the metrics with content changes at the same time? How will we know if children didn’t make the mark, or if the marks were simply not awarded?

The accountability of the process is less than transparent to pupils and parents. We have little opportunity for Ofqual’s recommended scrutiny of these metrics, or the data behind the system on our kids.

Causation, correlation and why we should care

The real risk is that no one will be able to tell if there is an error, where it stems from, or whether there is a reason why pass rates are markedly different from what was expected.

After the wide range of changes across pupil attainment, exam content, school progress scores, and their interactions and dependencies, can they all fit together and be comparable with the past at all?

If the SATs are making lots of mistakes simply through being bad at reading ten-year-olds’ handwriting, how will we know?

Or if GCSE scores are lower, will we be able to see if it is because they have genuinely differentiated the results in a wider spread, and stretched out the fail, pass and top passes more strictly than before?

What is likely is that this year’s set of children who were expecting an A or A* at GCSE, but fail to be one of the two children nationally who get the new Grade 9, will be disappointed to feel they are not, after all, as great as they thought they were.

And next year, if you can’t be the one or two to get the top mark, will the best simply stop stretching themselves and rest a bit easier because, whatever, you won’t get those straight grade As anyway?

Even if children would not change their behaviour were they to know, the target-range scoring sent by third-party data processors to schools discourages teachers from stretching those at the top.

Politicians look for positive progress, but policies are changing that will increase the number of schools deemed to have failed. Why?

Our children’s results are being used to reform the school system.

Coasting and failing schools can be compelled to become academies.

Government policy on this forced academisation was rejected by popular revolt. It appears that the government is determined that schools *will* become academies with the same fervour that they *will* re-introduce grammar schools. Both are unevidenced and unwanted. But there is a workaround.  Create evidence. Make the successful scores harder to achieve, and more will be seen to fail.

A total of 282 secondary schools in England were deemed to be failing by the government this January, as they “have not met a new set of national standards”.

It is expected that even more will attain ‘less’ this summer. Tim Leunig, Chief Analyst and Chief Scientific Adviser at the Department for Education, made a personal guess at two reaching the top mark.

The context of this GCSE ‘failure’ is the changes in how schools are measured. Children’s progress across 8 subjects, or “P8”, is being used as an accountability measure of overall school quality.

But it’s really just: “a school’s average Attainment 8 score adjusted for pupils’ Key Stage 2 attainment.” [Dave Thomson, Education Datalab]

Work done by FFT Education Datalab showed that contextualising P8 scores can lead to large changes for some schools (read more here and here). You cannot meaningfully compare schools with different types of intake, but it appears that the government is determined to do so, starting ever younger if new plans go ahead.

Data is being reshaped to tell stories to fit to policy.

Shaping children’s future

What this reshaping doesn’t factor in at all, is the labelling of a generation or more, with personal failure, from age ten and up.

All this tinkering with the data, isn’t just data.

It’s tinkering badly with our kids sense of self, their sense of achievement, aspiration, and with that; the country’s future.

Education reform has become the aim, and it has replaced the aims of education.

Post-Brexit Britain doesn’t need policy that delivers ideology. We don’t need “to use children as ‘measurement probes’ to test schools”.

Just as we shouldn’t use children’s educational path to test their net worth or cost to the economy. Or predict it in future.

Children’s education and human value cannot be measured in data.

Information society services: Children in the GDPR, Digital Economy Bill & Digital Strategy

In preparation for the General Data Protection Regulation (GDPR), there must be an active UK policy decision in the coming months about children and the Internet – the provision of ‘Information Society Services’. From May 25, 2018, the age of consent for online services aimed at children will be 16 by default, unless UK law is made to lower it.

Age verification for online information services under the GDPR will mean capturing parent-child relationships. This could mean a parent’s email or credit card, unless other choices are made. What will that mean for children’s access to services, and for privacy? It is likely to offer companies an opportunity for a data grab, and to mean privacy loss for the public, as more data about family relationships will be created and collected than the content provider would otherwise hold.

Our interactions create a blended identity of online and offline attributes which, as I suggested in a previous post, creates synthesised versions of our selves, and raises questions on data privacy and security.

The goal may be to protect the physical child. The outcome will simultaneously expose children and parents, through increased personal data collection, to risks that we would not otherwise face. Increasing the data collected increases the associated risks of loss, theft, and harm to identity integrity. How will legislation balance these risks and rights to participation?

The UK government has various work in progress before then that could address these questions:

But will they?

As Sonia Livingstone wrote in the post on the LSE media blog about what to expect from the GDPR and its online challenges for children:

“Now the UK, along with other Member States, has until May 2018 to get its house in order”.

What will that order look like?

The Digital Strategy and Ed Tech

The Digital Strategy commits to changes in National Pupil Database (NPD) management: that is, changes in the handling and secondary uses of data collected from pupils in the school census, such as using it for national research and planning.

It also means giving data to commercial companies and the press: companies such as private tutor-pupil matching services and data intermediaries, and journalists at the Times and the Telegraph.

Access to NPD via the ONS VML would mean safe data use, in safe settings, by safe (trained and accredited) users.

Sensitive data — it remains to be seen how DfE intends to interpret ‘sensitive’ and whether that is the DPA1998 term or lay term meaning ‘identifying’ as it should — will no longer be seen by users for secondary uses outside safe settings.

However, a grey area on privacy and security remains in the “Data Exchange” which will enable EdTech products to “talk to each other”.

The aim of changes in data access is to ensure that children’s data integrity and identity are secure.  Let’s hope the intention that “at all times, the need to preserve appropriate privacy and security will remain paramount and will be non-negotiable” applies across all closed pupil data, and not only to that which may be made available via the VML.

This strategy is still far from clear or set in place.

The Digital Strategy and consumer data rights

The Digital Strategy commits, under the heading of “Unlocking the power of data in the UK economy and improving public confidence in its use”, to the implementation of the General Data Protection Regulation by May 2018. The Strategy frames this as a business issue, labelling data as “a global commodity”, and as such its handling is framed solely as a requirement needed to ensure “that our businesses can continue to compete and communicate effectively around the world” and that adoption “will ensure a shared and higher standard of protection for consumers and their data.”

As far as children go, the GDPR is far more about the protection of children as people. It focuses on returning control over children’s own identity, and the ability to revoke control by others, rather than on consumer rights.

That said, there are data rights issues which are also consumer issues, and product safety failures posing real risk of harm.

Neither the Digital Economy Bill nor the Digital Strategy addresses these rights and security issues with any meaningful effect, particularly those posed by the Internet of Things.

In fact, the chapter Internet of Things and Smart Infrastructure [9/19] singularly misses out anything on security and safety:

“We want the UK to remain an international leader in R&D and adoption of IoT. We are funding research and innovation through the three year, £30 million IoT UK Programme.”

There was much more thoughtful detail in the 2014 Blackett Review on the IoT to which I was signposted today after yesterday’s post.

If it’s not scary enough for the public to think that their sex secrets and devices are hackable, perhaps it will kill public trust in connected devices more when they find strangers talking to their children through a baby monitor or toy. [BEUC campaign report on #Toyfail]

“The internet-connected toys ‘My Friend Cayla’ and ‘i-Que’ fail miserably when it comes to safeguarding basic consumer rights, security, and privacy. Both toys are sold widely in the EU.”

Digital skills and training in the strategy don’t touch on any form of change management plans for existing working sectors in which we expect to see machine learning and AI change the job market. This is something the digital and industrial strategies must address hand in glove.

The tactics and training providers listed sound super, but there does not appear to be an aspirational strategy hidden between the lines.

The Digital Economy Bill and citizens’ data rights

While the rest of Europe has recognised in this legislation that a future-thinking digital world without boundaries needs future thinking on data protection, and empowered citizens with better control of identity, the UK government appears intent on taking ours away.

To take only one example for children: in Cabinet Office led meetings, the Digital Economy Bill was explicit about use for identifying and tracking individuals labelled under “Troubled Families”, and interventions with them. It is baffling, and in conflict with both the spirit and letter of the GDPR, that while consent is required to work directly with people, that consent is ignored when accessing their information. Students and applicants will see their personal data sent to the Student Loans Company without their consent or knowledge. This overrides the current consent model in place at UCAS.

It is baffling that the government is relentlessly pursuing the Digital Economy Bill’s data-copying clauses, which remove confidentiality by default, and will release our identities in birth, marriage and death data for third-party use without consent through Chapter 2, the opening of the Civil Registry, without any safeguards in the bill.

Government has not only excluded important aspects of Parliamentary scrutiny in the bill, it is trying to introduce “almost untrammeled powers” (paragraph 21) that will “very significantly broaden the scope for the sharing of information” with “specified persons” – which applies “whether the service provider concerned is in the public sector or is a charity or a commercial organisation” – and non-specific purposes for which the information may be disclosed or used. [Reference: Scrutiny committee comments]

Future changes need future joined up thinking

While it is important to learn from the past, I worry that the effort some social scientists put into looking backwards is not matched by enthusiasm to look ahead and make active recommendations for a better future.

Society appears to have its eyes wide shut to the risks of coercive control and nudge as research among academics and government departments moves in the direction of predictive data analysis.

The use of administrative big data and publicly available social media data in research and statistics, for example, needs new regulation in practice and policy, but instead the Digital Economy Bill looks only at how more data can be got out of departmental silos.

A certain intransigence about data sharing with researchers from government departments is understandable. What’s the incentive for DWP to release data showing its policy may kill people?

Westminster may fear it has more to lose from data releases, and doesn’t seek out the political capital to be had from good news.

The ethics of data science are applied patchily at best in government, and inconsistently in academic expectations.

Some researchers have identified this, but there seems little will to act:

 “It will no longer be possible to assume that secondary data use is ethically unproblematic.”

[Data Horizons: New forms of Data for Social Research, Elliot, M., Purdam, K., Mackey, E., School of Social Sciences, The University Of Manchester, 2013.]

Research and legislation alike seem hell-bent on the low-hanging fruit, but miss out the really hard things. What meaningful benefit will come from spending millions of pounds exploiting these personal data, and opening our identities to risk, just to find out whether course X means people are employed in tax bracket Y five years later, versus course Z, where everyone ends up a self-employed artist? What ethics will be applied to the outcomes of those questions, and why?

And while government is busy joining up children’s education data throughout their lifetimes, from age 2 across school, FE and HE, into their HMRC and DWP interactions, there is no public plan in the Digital Strategy for the employment market of the coming 10 to 20 years, when many believe, as do these authors in Scientific American, “around half of today’s jobs will be threatened by algorithms. 40% of today’s top 500 companies will have vanished in a decade.”

What benefit is there in knowing what was, when the plans around workforce and digital skills list ad hoc tactics, but no strategy?

We must safeguard jobs and societal needs, but just teaching people to code is not a solution to a fundamental gap in what our purpose will be, and in our place as a world-leading tech nation after Brexit. We are going to have fewer talented people from across the world staying on after completing academic studies, because they’re not coming at all.

There may be investment in AI, but where is the investment in good data practices around automation and machine learning in the Digital Economy Bill?

To do this Digital Strategy well, we need joined up thinking.

Improving online safety for children in The Green Paper on Children’s Internet Safety should mean one thing:

Children should be able to use online services without being used and abused by them.

This article arrived on my Twitter timeline via a number of people. Doteveryone CEO Rachel Coldicutt summed up various strands of thought I started to hear hints of last month at #CPDP2017 in Brussels:

“As designers and engineers, we’ve contributed to a post-thought world. In 2017, it’s time to start making people think again.

“We need to find new ways of putting friction and thoughtfulness back into the products we make.” [Glanceable truthiness, 30.1.2017]

Let’s keep the human in discussions about technology, and people first in our products

All too often in technology and even privacy discussions, people have become ‘consumers’ and ‘customers’ instead of people.

The Digital Strategy may seek to unlock “the power of data in the UK economy” but policy and legislation must put equal if not more emphasis on “improving public confidence in its use” if that long term opportunity is to be achieved.

And in technology discussions about AI and algorithms we hear very little about people at all. The discussions I hear seem siloed instead into three camps: the academics; the designers and developers; the politicians and policy makers. And then comes the lowest circle: ‘the public’ and ‘society’.

It is therefore unsurprising that human rights have fallen down the ranking of importance in some areas of technology development.

It’s time to get this house in order.

A vanquished ghost returns as details of distress required in NHS opt out

It seems the ugly ghosts of care.data past were alive and well at NHS Digital this Christmas.

Old-style thinking, the top-down, patriarchal ‘no one who uses a public service should be allowed to opt out of sharing their records. Nor can people rely on their record being anonymised‘, that you thought was vanquished, has returned with a vengeance.

The Secretary of State for Health, Jeremy Hunt, has reportedly done a U-turn on opt out of the transfer of our medical records to third parties without consent.

That backtracks on what he said in Parliament on January 25th, 2014 on opt out of anonymous data transfers, despite the right to object in the NHS constitution [1].

So what’s the solution? If the new opt out methods aren’t working, then back to the old ones and making Section 10 requests? But it seems the Information Centre isn’t keen on making that work either.

All the data the HSCIC holds is sensitive, and as such its release risks significant harm or distress to patients [2], so it shouldn’t be difficult to tell them to cease and desist when it comes to data about you.

But how is NHS Digital responding to people who make the effort to write directly?

Someone who “got a very unhelpful reply” is being made to jump through hoops.

If anyone asks that their hospital data should not be used in any format and passed to third parties, that’s surely for them to decide.

Let’s take the case study of a woman who spoke to me during the whole care.data debacle who had been let down by the records system after rape. Her NHS records subsequently about her mental health care were inaccurate, and had led to her being denied the benefit of private health insurance at a new job.

Would she have to detail why selling her medical records would cause her distress? What level of detail is fair, and who decides? The whole point is, you want to keep the information confidential.

Should you have to state what you fear? “I have future distress at what you might do to me”? Once you lose control of data, it’s gone. Based on past planning secrecy and ideas for the future, like mashing up health data with retail loyalty cards as suggested at Strata in November 2013 [from 16:00] [2], no wonder people are sceptical.

Given the long list of commercial companies, charities, think tanks and others to which passing out our sensitive data puts us at risk, and given the Information Centre’s past record, HSCIC might be grateful it has only opt out requests to deal with, and not millions of medical ethics court summonses. So far.

HSCIC / NHS Digital has extracted our identifiable records and given them away, including for commercial product use, and continues to give them away without informing us. We’ve accepted Ministers’ statements that a solution would be found. Two years on, patience wears thin.

“Without that external trust, we risk losing our public mandate and then cannot offer the vital insights that quality healthcare requires.”

— Sir Nick Partridge, on publication of the audit report of 10% of the 3,059 releases by the HSCIC between 2005 and 2013

— Andy Williams said, “We want people to be certain their choices will be followed.”

Jeremy Hunt said everyone should be able to opt out of having their anonymised data used. David Cameron did too when the plan was announced in 2012.

In 2014 the public was told there should be no more surprises. This latest response is not only a surprise but enormously disrespectful.

When you’re trying to rebuild trust, assuming that we accept that ‘is’ the aim, you can’t say one thing, and do another.  Perhaps the Department for Health doesn’t like the public answer to what the public wants from opt out, but that doesn’t make the DH view right.

Perhaps NHS Digital doesn’t want to deal with lots of individual opt out requests; that doesn’t make their refusal right.

Kingsley Manning recognised in July 2014 that the Information Centre “had made big mistakes over the last 10 years.” And there was “a once-in-a-generation chance to get it right.”

I didn’t think I’d have to move into the next one before they fix it.

The recent round of 2016 public feedback was the same as for care.data 1.0. Respect nuanced opt outs and you will have all the identifiable public interest research data you want. Solutions must be better for other uses, opt out requests must be respected without distressing patients further in the process, and anonymous must mean anonymous.

Pseudonymised data requests that go through the DARS process, so that a Data Sharing Framework Contract and Data Sharing Agreement are in place, are considered compliant with the ICO code of practice – fine, but they are not anonymous. If DARS is still giving my family’s data to Experian, Harvey Walsh, and co, despite opt out, I’ll be furious.

The [Caldicott 2] Review Panel found “that commissioners do not need dispensation from confidentiality, human rights & data protection law.”

Neither do our politicians, their policies or ALBs.


[1] https://www.england.nhs.uk/ourwork/tsd/ig/ig-fair-process/further-info-gps/

“A patient can object to their confidential personal information from being disclosed out of the GP Practice and/or from being shared onwards by the HSCIC for non-direct care purposes (secondary purposes).”

[2] Minimum Mandatory Measures http://www.nationalarchives.gov.uk/documents/information-management/cross-govt-actions.pdf p7