Thoughts on the Online Harms White Paper (I)

“Whatever the social issue we want to grasp – the answer should always begin with family.”

Not my words, but David Cameron’s. Just five years ago, Conservative policy was all about “putting families at the centre of domestic policy-making.”

Debate on the Online Harms White Paper, thanks in part to media framing of its own departmental making, is almost all about children. But I struggle with a debate that leaves out our role as parents almost entirely, other than as bereft or helpless victims ourselves.

Wearing my other hat, at defenddigitalme, I am conscious that not all families are the same, and not all children have families. Yet it seems counter to conservative values, for a party that traditionally places the family at the centre of policy, to leave parents out or absolve them of responsibility for their children’s actions and care online.

Parental responsibility cannot be outsourced to tech companies, nor can we simply accept that it is too hard to police our children’s phones. If we as parents are concerned about harms, it is our responsibility to enable access to what is not harmful, and to educate ourselves and our children about what is. We are aware of what they read in books. I cast an eye over what they borrow or buy. I play a supervisory role.

Brutal as it may be, the Internet is not responsible for suicide. It’s just not that simple. We cannot bring children back from the dead. We certainly can, as a society and as policy makers, try to create the conditions in which harms are not normalised and do not become more common, and seek to reduce risk. But few would suggest social media is a single source of children’s mental health issues.

What policy makers are trying to regulate is in essence, not a single source of online harms but 2.1 billion users’ online behaviours.

It follows that to see social media as a single source of attributable fault is equally misplaced. A one-size-fits-all solution is going to be flawed, but everyone seems to have accepted its inevitability.

So how will we make the least bad law?

If we are to have sound law that can be applied to what is lawful, we must narrow the substance of the debate by setting aside what is already unlawful and has appropriate remedy and enforcement.

Debate must also try to be free from emotive content and language.

I strongly suspect the language around ‘our way of life’ and ‘values’ in the White Paper comes from the Home Office. So while it sounds fair and just, we must remember reality in the background of TOEIC, of Windrush, of children removed from school because their national records are being misused beyond educational purposes. The Home Office is no friend of child rights, and does not foster the societal values that break down discrimination and harm. It instead creates harms of its own making, and division by design.

I’m going to quote Graham Smith, for I cannot word it better.

“Harms to society feature heavily in the White Paper, for example content or activity that:

“threatens our way of life in the UK, either by undermining national security, or by reducing trust and undermining our shared rights, responsibilities and opportunities to foster integration.”

Similarly:

“undermine our democratic values and debate”;

“encouraging us to make decisions that could damage our health, undermining our respect and tolerance for each other and confusing our understanding of what is happening in the wider world.”

This kind of prose may befit the soapbox or an election manifesto, but has no place in or near legislation.”

[Cyberleagle, April 18, 2019, Users Behaving Badly – the Online Harms White Paper]

My key concern in this area is that from a feeling of ‘it is all awful’ stems the sense that ‘any regulation will be better than now’, and with it a real risk of entrenching current practices that are not better than now, and in fact need fixing.

More monitoring

The first is today’s general monitoring of schoolchildren’s Internet content for risk and harms, which creates unintended consequences and very real harms of its own — at the moment, without oversight.

In yesterday’s House of Lords debate, Lord Haskel said,

“This is the practicality of monitoring the internet. When the duty of care required by the White Paper becomes law, companies and regulators will have to do a lot more of it.” [April 30, HOL]

The Brennan Center for Justice yesterday published its research on spending by US schools on social media monitoring software from 2013-18, and highlighted some of the issues:

“Aside from anecdotes promoted by the companies that sell this software, there is no proof that these surveillance tools work [compared with other practices]. But there are plenty of risks. In any context, social media is ripe for misinterpretation and misuse.” [Brennan Center for Justice, April 30, 2019]

That monitoring software focuses on two things —

a) seeing children through the lens of terrorism and extremism, and b) harms caused by children to others, harms done to them by others, or self-harm.

It is nearly the same list of ‘harms’ topics that the White Paper covers, co-driven by the same department interested in it in schools — the Home Office.

These concerns are set in the context of the direction of travel of law and policy making, and its own loosening of accountability and process.

The Lords debate was preceded by a House of Commons discussion on Social Media and Health, led by the former Minister for Digital, Culture, Media and Sport, who seems to feel more at home in that sphere than in health.

His unilateral award of funds to the Samaritans for work with Google and Facebook on a duty of care, while the very same is still under public consultation, is surprising to say the least.

But it was his response to this question which points to the slippery slope down which such regulations may lead. Freedom of speech champions should be most concerned not by what may end up in any legislation ahead, but by the direction of travel and the debate around it.

“Will he look at whether tech giants such as Amazon can be brought into the remit of the Online Harms White Paper?”

He replied that “Amazon sells physical goods for the most part and surely has a duty of care to those who buy them, in the same way that a shop has a responsibility for what it sells. My hon. Friend makes an important point, which I will follow up.”

Mixed messages

The Center for Democracy and Technology recommended in its 2017 report, Mixed Messages? The Limits of Automated Social Media Content Analysis, that the use of automated content analysis tools to detect or remove illegal content should never be mandated in law.

Debate so far has demonstrated broad gaps between what is wanted, what is known, and what is possible. If behaviours are to be stopped because they are undesirable rather than unlawful, we open up a whole can of worms unless it is done with the greatest attention to detail.

Lord Stevenson and Lord McNally both suggested that pre-legislative scrutiny of the Bill and more discussion would be positive. Let’s hope it happens.

Here are my personal first reflections on the Online Harms White Paper discussion so far.

Six suggestions:

Suggestion one: 

The Law Commission Review, mentioned in the House of Lords debate, may provide what I had been thinking of crowdsourcing and now may not need to: a list of the laws that discussion around the Online Harms White Paper reaches into, so that we can compare what is genuinely needed with what is being sucked in. We should aim to curtail emotive discussion of the broad risks and threats that people experience online. This would allow the themes already covered in law to be set aside, and the focus put on the gaps. It would make for much tighter and more effective legislation. For example, the Crown Prosecution Service offers Guidelines on prosecuting cases involving communications sent via social media, but a wider list of applicable law is needed.

Suggestion two:
After (1) defining what legislation is lacking, definitions must be very clear, narrow, and consistent with other legislation, and not for the regulator to determine ad hoc and alone.

Suggestion three:
If children’s rights are to be so central in discussion of this paper, then their wider rights, including privacy and participation, access to information and freedom of speech, must be included in the debate. This should include academic, research-based evidence of children’s experience online when making the regulations.

Suggestion four:
Internet surveillance software in schools should be publicly scrutinised. A review should establish the efficacy, boundaries and oversight of policy and practice as regards Internet monitoring for harms, and such monitoring should not be embedded further without it. Boundaries should be put into legislation for clarity and consistency.

Suggestion five:
Terrorist activity or child sexual exploitation and abuse (CSEA) online are already unlawful and should not need additional Home Office powers. Great caution must be exercised here.

Suggestion six: 
Legislation could and should encapsulate accountability and oversight for micro-targeting and algorithmic abuse.


More detail behind my thinking follows below, after the break. [Structure rearranged on May 14, 2019]


1. The Internet is not an unregulated space.

Suggestion one:
The Law Commission Review may provide what I had been thinking of crowdsourcing and now may not need to: a list of the laws that discussion around the Online Harms White Paper reaches into, so that we can compare what is genuinely needed with what is being sucked in. We should aim to curtail emotive discussion of the broad risks and threats that people experience online. This would allow the themes already covered in law to be set aside, and the focus put on the gaps. It would make for much tighter and more effective legislation. For example, the Crown Prosecution Service offers Guidelines on prosecuting cases involving communications sent via social media, but a wider list of applicable law is needed.

While the idea that the Internet is the Wild West Web is pervasive, thanks in part to lobbying, there is in fact already legislation which applies online.

What there is not, is a clear fit-gap analysis of what the government perceives as needing regulation and where there is a genuine gap in legislation. We should know concretely what needs to be done and what is missing, rather than having a very general debate. Without that, we will end up with vague, excessive or impossible-to-implement regulations, and inconsistent regulatory practice.

According to today’s debate, “we are still waiting for the Law Commission to finalise its review of the current law on abusive and offensive online communications and of what action might be required by Parliament to rectify weaknesses in the current regime.”

2. Conflation of unlawful and undesirable content.

Suggestion two:
After (1) defining what legislation is lacking, definitions must be very tight and consistent across other legislation. Not for the regulator to determine ad hoc and alone.

Child sexual abuse and terrorist content, which is already illegal, will remain illegal and should not be in the remit of this debate.

The harms listed on page 31 of the White Paper are not nearly as clearly defined as the list of “Harms with a clear definition” would suggest. Harms with a “less clear definition” will be drawn in, and there is conflation with what is currently seen as undesirable.

Much of Matt Hancock’s efforts are around what is badly termed ‘fake news’ and misinformation. The White Paper identifies misinformation as a harm, and at the same time says that “We are clear that the regulator will not be responsible for policing truth and accuracy online.” [cyberleagle, April 18]

Even “terrorist activity”, for example, while seeming obvious, is not at all. Remember, it is not what is unlawful that is at debate here; it is what is lawful.

Under the Prevent duty, which seeks to prevent children from being vaguely ‘drawn into terrorism’, children are already wrongly accused of extremism through their online activity, where ‘extremism’ includes a wide range of activism, including environmental activism.

Baroness Grender, among others, summed up that “Regulation and enforcement must be based on clear evidence of well-defined harm”, but that evidence is lacking. “We are still not clear about what the real issues are between harmful and illegal content, particularly the contextual issues raised about questions of harm.”

3. Children’s rights should not be played off against each other.

Suggestion three:
If children’s rights are to be so central in discussion of this paper, then their wider rights, including privacy and participation, access to information and freedom of speech, must be included in the debate.

This is the real risk in the no doubt long debates ahead: that the horrors of child protection — already illegal content — trump any discussion of children’s rights to participation online and everything else, including children’s rights to freedom of speech, to access information, and to privacy.

This is unfortunately politically popular, and a publicly palatable gift right now for print media to beat up their content distributors and competitors, the social media platforms.

I cannot help thinking every time I hear of a child who has taken their own life, or been bullied or hurt through the actions of others connected or transmitted online, of my own children.  Before I had my own children I heard such stories and thought it was really sad. Since I have my own, I feel it is really sad. My empathy levels have shifted in ways I could not imagine pre-parenthood.

I also think that they are being used. Their images and memories should not be part of a media circus promoting political aims.

This White Paper seems less about children’s genuine and rounded needs than about political wants: in part, to be seen to be doing something, while often not tackling any of the causes of the problems these children have. We must remember that when a child takes their own life, it is their personal, often private, unique and complex problems that lead to it — not a one-size-fits-all ‘social media’ fault.

4. “The age verification issue”: identity and over-datafication are a gift to commercial companies for further exploitation.

Suggestion four:
A Prevent review should establish the efficacy, boundaries and oversight of policy and practice, and should not embed more monitoring without it. These should be put into legislation for clarity and consistency.

There is a real risk that the focus on content, and on blocking certain users from certain content, implies the need for identification of both.

a) To identify *what* is the harmful content among *all* the content uploaded to the Internet, and

b) to identify *which* users, of all users, are those that should not be able to access it.

Profiling certain content not as unlawful but as undesirable is a very slippery slope: from identifying undesirable content online to identifying undesirable online users. From a commercial perspective, the datafication of children in order to ‘protect them’ will simply be used by companies to target their ads, develop future customer profiles, and exploit further.

From another perspective, the risks can be even more chilling — for example, to identify not only terrorists posting unlawful content, but anyone the system labels as a potential ‘terrorist’ or ‘extremist’, and that is a slippery slope towards targeting anyone the Home Office simply deems undesirable.

Technical solutions for monitoring children’s harms online are often simply surveillance software, or even spyware, where the children do not know what the tools do, especially outside school grounds.

Research at defenddigitalme suggests at least 70% of schools employ software that scans children’s social media and every other online activity.

School software identifies lawful but undesirable content, and many of these products even track children outside school. Suggestions of ‘gang membership’ and radicalisation, which fall under the Prevent duty’s growing scope, are unclear and inconsistent, and profiles are created simply because a child typed words into a search bar that matched what the system looks for as a risk of harm. This results in wrongful accusations. Figures are unknown, since the commercial companies do not publish them.

We know of children wrongly labelled as gang members or at risk of suicide, having searched for terms such as ‘black rhinos’ and ‘cliffs’, and schools will not delete the profiles in case the school ‘should be accused one day of ignoring the evidence’. Staff are not trained in this area, which is outside teaching. Software companies have a free hand to expand their product scope and develop it in secret.
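To illustrate why such false positives arise, here is a minimal sketch of the kind of naive keyword matching such filtering and monitoring tools are widely understood to rely on. The word lists, categories and function here are hypothetical and for illustration only; real products do not publish their rules.

```python
# Hypothetical sketch of naive keyword-based risk flagging.
# Categories, term lists and names are invented for illustration;
# they are not taken from any real monitoring product.

RISK_TERMS = {
    "self_harm": {"cliff", "cliffs", "overdose"},
    "gangs": {"rhino", "rhinos", "postcode"},
}

def flag_search(query: str) -> list[str]:
    """Return every risk category whose term list matches a word in the query."""
    words = set(query.lower().split())
    return [category for category, terms in RISK_TERMS.items() if words & terms]

# A homework search about wildlife, or about geography, both trigger flags:
print(flag_search("black rhinos endangered species"))  # ['gangs']
print(flag_search("famous chalk cliffs in england"))   # ['self_harm']
```

Because simple keyword matching has no sense of context, a benign search produces the same flag, and potentially the same profile, as a genuinely concerning one.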

Children are now profiled by content filters and monitoring software as ‘at risk’ of mental health issues and ‘vulnerable to radicalisation’, without the child’s or family’s knowledge. Some of these tools monitor a child’s Internet use 24/7. Some monitor the child’s mobile phone. Some enable the webcam built into the child’s laptop to take photos of the user.

“This is the practicality of monitoring the internet. When the duty of care required by the White Paper becomes law, companies and regulators will have to do a lot more of it.” [Lord Haskel, April 30, HOL]

5. It appears to seek to grant untrammelled powers to the Home Office to regulate both content and users’ behaviour on the Internet.

Suggestion five:
Terrorist activity or child sexual exploitation and abuse (CSEA) online are already unlawful and should not need additional Home Office powers. Great caution must be exercised here.

Paragraph 21 of the White Paper suggests that the Minister for the Home Department would become the Regulator-in-Chief, since “the government will have the power to direct the regulator in relation to codes of practice on terrorist activity or child sexual exploitation and abuse (CSEA) online, and these codes must be signed off by the Home Secretary.”

We must consider how such a power would be used, not only by Sajid Javid but by future Home Secretaries, and how it would be perceived by other governments around the world.

Who would oversee this and what would its checks and balances be?

6. “Abusive approaches of algorithms”.

Suggestion six: 
Legislation could and should encapsulate accountability and oversight for micro-targeting and algorithmic abuse.

Calls for new powers for the Information Commissioner and the Electoral Commission, particularly in respect of the use of algorithms, explainability, transparency and micro-targeting are gaining ground.

Lord Stevenson set out most eloquently why it matters to do this well.

“Paragraph 23 refers only to the regulator having powers to require additional information about the impact of algorithms in selecting content for users. The bulk of the argument that has been made today is that we need to know what these algorithms are doing and what they will make happen in relation to people’s data and how the information provided will be used. This issue came up in the discussion on the Statement, when it was quite clear that those who might be at risk from harms created on social media are also receiving increasingly complex and abusive approaches because of algorithms rather than general activity. This issue is important and we will need to come back to it.”


References you may find useful in one place: all Online Harms and Age Appropriate Code of Practice official links, and some other references, at https://jenpersson.com/online-harms-and-children/
The ICO Age Appropriate Code of Practice, under development as part of the ICO’s Data Protection Act 2018 obligations, has some overlap in this area. The Code is open for consultation until May 31.

 

Various UNCRC rights of the child, most affected in the current online harms and age appropriate design debate, are set out from page 51 in the defenddigitalme ICO Code Consultation submission.