Tag Archives: privacy

Pirates and their stochastic parrots

It’s a privilege to have a letter published in the FT as I do today, and thanks to the editors for all their work in doing so.

I’m a bit sorry that it lost the punchline, which was supposed to bring a touch of AI humour about pirates and their stochastic parrots, and that its rather key point was cut:

“Nothing in current European laws, including Convention 108 for the UK, prevents companies developing AI lawfully.”

So for the record, and since it’s behind the FT paywall (£), my agreed edited version was:

“The multi-signatory open letter advertisement, paid for by Meta, entitled “Europe needs regulatory certainty on AI” (September 19) was fittingly published on International Talk Like a Pirate Day.

It seems the signatories believe they cannot do business in Europe without “pillaging” more of our data and are calling for new law.

Since many companies lobbied against the General Data Protection Regulation or for the EU AI Act to be weaker, or that the Council of Europe’s AI regulation should not apply to them, perhaps what they really want is approval to turn our data into their products without our permission.

“Nothing in current European laws, including Convention 108 for the UK, prevents companies developing AI lawfully. If companies want more consistent enforcement action, I suggest Data Protection Authorities comply and act urgently to protect us from any pirates out there, and their greedy stochastic parrots.”

Prior to print they asked to cut out a middle paragraph too.

“In the same week, LinkedIn sneakily switched on a ‘use me for AI development’ feature for UK users without telling us (paused the next day); Larry Ellison suggested at Oracle’s Financial Analyst Meeting  that more AI should usher in an era of mass citizen surveillance, and our Department for Education has announced it will allow third parties to exploit school children’s assessment data for AI product building, and can’t rule out it will include personal data.”

It is in fact the cumulative effect of the recent flurry of AI activities by various parties, state and commercial, that deserves greater attention, rather than only this Meta-led complaint. Who is grabbing what data and what infrastructure contracts, and creating what state dependencies and strengths, to what end game? While some present the “AI race” as China or India versus the EU or the US to become AI “super powers”, is what “Silicon Valley” offers, that their way is the only way, really a better offer?

It’s not in fact “Big Tech” I’m concerned about, but the arrogance of so many companies that, in the middle of regulatory scrutiny, would align themselves with one that would rather put out PR omitting the fact that it is under such scrutiny, calling only for the law to be changed, and frankly misleading the public by suggesting it is all for our own good rather than talking about how this serves their own interests.

Who do they think they are to dictate what new laws must look like when they seem simply unwilling to stick to those we have?

Perhaps this open letter serves as a useful starting point to direct DPAs to the companies most in need of scrutiny around their data practices. They seem to be saying they want weaker laws or more enforcement. Some are already well known for challenging both. Who could forget Meta (Facebook’s) secret emotional contagion study involving children, in which friends’ postings were manipulated to influence moods, or the case of letting third parties, including Cambridge Analytica, access users’ data? Then there are the data security issues, the fine over international transfers, and the anti-trust issues. And there are the legal problems with their cookies. And all of this built from humble beginnings by the same founder of Facemash, “a prank website” to rate women as hot or not.

As Congressman Long reportedly told Zuckerberg in 2018, “You’re the guy to fix this. We’re not. You need to save your ship.”

The Meta-led ad called for “harmonisation enshrined in regulatory frameworks like the GDPR” and I absolutely agree. The DPAs need to stand tall and stand up to OpenAI and friends (ever dwindling in number so it seems) and reassert the basic, fundamental principles of data protection laws from the GDPR to Convention 108 to protect fundamental human rights. Our laws should do so whether companies like them or not. After all, it is often abuse of data rights by companies, and states, that populations need protection from.

Data protection ‘by design and by default’ is not optional under European data laws established for decades. It is not enough to argue that processing is necessary because you have chosen to operate your business in a particular way, nor that it is a necessary part of your chosen methods.

The Netherlands DPA is right to say scraping is almost always unlawful. A legitimate interest cannot simply be plucked from thin air by anyone who is neither an existing data controller nor processor, who has no prior relationship with the data subjects, and whose re-use of data posted online for other purposes those data subjects have no reasonable expectation of, without being informed of the processing or offered an opt-out. Instead, the only possible basis for this kind of brand new controller should be consent. Having to break the law hardly screams ‘innovation’.

Regulators do not exist to pander to wheedling, but to independently uphold the law in a democratic society in order to protect people, not prioritise the creation of products. The principles are long established:

  • Lawfulness, fairness and transparency.
  • Purpose limitation.
  • Data minimisation.
  • Accuracy.
  • Storage limitation.
  • Integrity and confidentiality (security), and
  • Accountability.

In my view, it is the lack of dissuasive enforcement as part of checks-and-balances on big power like this, regardless of where it resides, that poses one of the biggest data-related threats to humanity.

Not AI, nor being “left out” of being used to build it for their profit.

Policing thoughts, proactive technology, and the Online Safety Bill

“Former counter-terrorism police chief attacks Rishi Sunak’s Prevent plans“, reads a headline in today’s Guardian. Former counter-terrorism chief Sir Peter Fahy […] said: “The widening of Prevent could damage its credibility and reputation. It makes it more about people’s thoughts and opinions.” Fahy said: “The danger is the perception it creates that teachers and health workers are involved in state surveillance.”

This article leaves out that today’s reality is already far ahead of these proposals or perceptions. School children and staff are already surveilled in these ways. Not only are the things people think, type, read or search for monitored, online and offline in the digital environment, but copies may be collected and retained by companies, and interventions made.

The products don’t only permit monitoring of trends in aggregated data, in overviews of student activity, but also of the behaviours of individual students. And these can be deeply intrusive and sensitive when you are talking about self harm, abuse, and terrorism.

(For more on the safety tech sector, often using AI in proactive monitoring, see my previous post (May 2021) The Rise of Safety Tech.)

Intrusion through inference and interventions

From 1 July 2015 all schools have been subject to the Prevent duty under section 26 of the Counter-Terrorism and Security Act 2015, in the exercise of their functions, to have “due regard to the need to prevent people from being drawn into terrorism”. While these products monitor far more than the remit of Prevent, many companies actively market online filtering, blocking and monitoring safety products as a way of meeting that duty in the digital environment. Such as: “Lightspeed Filter™ helps you meet all of the Prevent Duty’s online regulations…”

Despite there being no obligation to date to fulfil this duty through technology, the way some companies sell such tools could be interpreted as a threat of what may happen if schools don’t use them. Like this example:

“Failure to comply with the requirements may result in intervention from the Prevent Oversight Board, prompt an Ofsted inspection or incur loss of funding.”

Such products may create and send real-time alerts to company or school staff when children attempt to reach sites or type “flagged words” related to radicalisation or extremism on any online platform.

Under the auspices of safeguarding-in-schools data sharing and web monitoring in the Prevent programme, children may be labelled with terrorism or extremism tags, data which may be passed on to others or stored outside the UK without their knowledge. What is considered significant has drifted from terrorism into the vaguer and broader terms of extremism and radicalisation; away from any assessment of intent and capability to act, and into interception and interventions for potentially insignificant vulnerabilities and inferred assumptions of disposition towards such ideas. This is not something that might come to police thoughts, as Fahy suggested of Sunak’s plans. It is already doing so. Policing the thoughts of the developing child, and holding them accountable in ways that are unforeseeable, is inappropriate and requires thorough investigation into its effects on children, including on mental health.

But it’s important to understand that these libraries of thousands of words, ever changing and in multiple languages, and what the systems look for and flag, often claiming to do so using Artificial Intelligence, go far beyond Prevent. ‘Legal but harmful’ content is their bread and butter: self harm, harm to or from others.

While companies have no obligations to publish how the monitoring or flagging operates, what the words, phrases or blocked websites are, their error rates (false positives and false negatives), or the effects on children or school staff and their behaviour as a result, these companies have a great deal of influence over what gets inferred from what children do online, and over who decides what to act on.
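
As a purely illustrative aside on why unpublished error rates matter at this scale: the short calculation below uses entirely hypothetical numbers (the volume of monitored activity, the prevalence of genuine concerns, and both error rates are my own assumptions, not any vendor’s figures), and it shows how even a seemingly accurate flagging system can produce alerts that are overwhelmingly false positives.

```python
# Purely illustrative arithmetic: every number below is a hypothetical assumption,
# not a figure published by any vendor or school.
monitored_events_per_day = 1_000_000   # assumed volume of typed/searched items across monitored pupils
true_risk_rate = 1 / 100_000           # assumed share of items reflecting a genuine safeguarding concern
true_positive_rate = 0.95              # assumed: 95% of genuinely concerning items are flagged
false_positive_rate = 0.01             # assumed: 1% of harmless items are wrongly flagged

genuine = monitored_events_per_day * true_risk_rate
harmless = monitored_events_per_day - genuine

flags_from_genuine = genuine * true_positive_rate
flags_from_harmless = harmless * false_positive_rate
total_flags = flags_from_genuine + flags_from_harmless

print(f"Alerts per day: {total_flags:,.0f}")
print(f"False positives: {flags_from_harmless:,.0f} ({flags_from_harmless / total_flags:.1%} of all alerts)")
```

Under those invented assumptions, nearly every alert would concern a child who typed nothing of genuine concern, which is exactly why independently verifiable error rates matter.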

Why does it matter?

Schools have normalised the premise that the systems they introduce should monitor activity outside the school network and outside school hours; and that strangers, or private companies’ automated systems, should be involved in inferring or deciding what children are ‘up to’ before the school staff who know the children in front of them.

In a defenddigitalme report, The State of Data 2020, we included a case study on one company that has since been bought out. And bought again. As of August 2018, eSafe was monitoring approximately one million school children plus staff across the UK. A case study the company used in its own public marketing raised all sorts of questions on professional confidentiality and school boundaries, personal privacy, ethics, and companies’ role and technical capability, as well as the lack of any safety tech accountability.

“A female student had been writing an emotionally charged letter to her Mum using Microsoft Word, in which she revealed she’d been raped. Despite the device used being offline, eSafe picked this up and alerted John and his care team who were able to quickly intervene.”

Their then CEO had told the House of Lords Communications Committee’s 2016 inquiry on Children and the Internet how the products do not only monitor children in school or during school hours:

“Bearing in mind we are doing this throughout the year, the behaviours we detect are not confined to the school bell starting in the morning and ringing in the afternoon, clearly; it is 24/7 and it is every day of the year. Lots of our incidents are escalated through activity on evenings, weekends and school holidays.”

Similar products offer a feature that captures photos of users (pupils using the device being monitored), described as “common across most solutions in the sector” by this company:

When a critical safeguarding keyword is copied, typed or searched for across the school network, schools can turn on NetSupport DNA’s webcams capture feature (this feature is turned-off by default) to capture an image of the user (not a recording) who has triggered the keyword.

How many webcam photos have been taken of children by school staff or others through those systems, for what purposes, and kept by whom? In the U.S. in 2010, Lower Merion School District, near Philadelphia, settled a lawsuit over using laptop webcams to take photos of students. Thousands of photos had been taken, even at home, out of hours, without their knowledge.

Who decides what does and does not trigger interventions across different products? In the month of December 2017 alone, eSafe claims they added 2254 words to their threat libraries.

Famously, Impero’s system even included the word “biscuit”, which they say is a slang term for a gun. Their system was used by more than “half a million students and staff in the UK” in 2018. And students had better not talk about “taking a wonderful bath.” Currently there is no understanding or oversight of the accuracy of this kind of software, and black-box decision-making is often trusted without openness to human question or correction.
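
To see how results like these can arise, here is a minimal sketch of naive keyword matching; the watchlist and the matching rule are invented for illustration only, and no vendor publishes how its own matching actually works.

```python
# Hypothetical watchlist and naive matching, for illustration only.
# Real products use far larger, unpublished libraries and may claim AI-based matching.
WATCHLIST = {"biscuit", "bath", "rhino"}   # invented examples of flagged terms

def flagged_terms(text: str) -> set:
    """Return any watchlist terms appearing in the text (case-insensitive substring match)."""
    lowered = text.lower()
    return {term for term in WATCHLIST if term in lowered}

# Entirely innocent sentences still trigger flags under this kind of matching:
print(flagged_terms("Fancy a biscuit with your tea?"))          # {'biscuit'}
print(flagged_terms("I'm taking a wonderful bath tonight"))     # {'bath'}
print(flagged_terms("Researching black rhinos for homework"))   # {'rhino'}
```

However the real matching is implemented, the point stands: without published word lists, error rates or review processes, no one outside the company can test how often harmless text is flagged.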

Aside from how these various tools work, each different from the next, there are very basic questions about whether such policies and tools help or harm children at all. The UN Special Rapporteur’s 2014 report on children’s rights and freedom of expression stated:

“The result of vague and broad definitions of harmful information, for example in determining how to set Internet filters, can prevent children from gaining access to information that can support them to make informed choices, including honest, objective and age-appropriate information about issues such as sex education and drug use. This may exacerbate rather than diminish children’s vulnerability to risk.” (2014)

U.S. safety tech creates harms

Today in the U.S. the CDT published a report on school monitoring systems there, many of which are also used over here. The report revealed that 13 percent of students knew someone who had been outed as a result of student-monitoring software. Another conclusion the CDT draws is that monitoring is used for discipline more often than for student safety.

We don’t have that same research for the UK, but we’ve seen IT staff openly admit to using the webcam feature to take photos of young boys who are “mucking about” on the school library computer.

The Online Safety Bill scales up problems like this

The Online Safety Bill seeks to expand how such ‘behaviour identification technology’ can be used outside schools.

“Proactive technology include content moderation technology, user profiling technology or behaviour identification technology which utilises artificial intelligence or machine learning.” (p151 Online Safety Bill, August 3, 2022)

The “proactive technology requirement” is as yet rather open ended, left to OFCOM in Codes of Practice, but the scope creep of such AI-based tools has become ever more intrusive in education. ‘Legal but harmful’ is decided by companies, the IWF, and any number of opaque third parties whose processes and decision-making we know little about. It’s important not to conflate filtering and blocking lists of ‘unsuitable’ websites that can be accessed in schools, with the monitoring and tracking of individual behaviours.

‘Technological developments that have the capacity to interfere with our freedom of thought fall clearly within the scope of “morally unacceptable harm,”‘ according to Alegre (2017), and yet this individual interference is at the very core of school safeguarding tech and policy by design.

In 2018, the ‘lawful but harmful’ list of activities around the Online Harms White Paper was nearly identical to the terms used by school Safety Tech companies. The Bill now appears to be trying to create a new legitimate basis for these practices, more about underpinning a developing market than supporting children’s safety or rights.

Chilling speech is itself controlling content

While a lot of debate about the Bill has been about the free speech impacts of content removal, there has been less about what is unwritten: how it will operate to prevent speech and participation in the digital environment for children. The chilling effect of surveillance on access and participation online is well documented. Younger people and women are more likely to be negatively affected (Penney, 2017). The chilling effect on thought and opinion is worsened in these types of tools, which trigger an alert even when what is typed is quickly deleted, or remains unsent or unshared. Thoughts are no longer private.

The ability to use end-to-end encryption on private messaging platforms is simply worked around by these kinds of tools, trading security for claims of children’s safety. Anything on screen may be read in the clear by some systems, even capturing passwords and bank details.

Graham Smith has written, “It may seem like overwrought hyperbole to suggest that the [Online Harms] Bill lays waste to several hundred years of fundamental procedural protections for speech. But consider that the presumption against prior restraint appeared in Blackstone’s Commentaries (1769). It endures today in human rights law. That presumption is overturned by legal duties that require proactive monitoring and removal before an independent tribunal has made any determination of illegality.”

More than this, there is no determination of illegality in legal but harmful activity. It’s opinion. The government is prone to argue that “nothing in the Bill says X…”, but you need to understand the context: such proactive behavioural monitoring tools work through threat and the resultant chilling effect, to impose unwritten control. This Bill does not create a safer digital environment; it creates threat models for users and companies, to control how we think and behave.

What do children and parents think?

Young people’s own views that don’t fit the online harms narrative have been ignored by Westminster scrutiny Committees. A 2019 survey by the Australian eSafety Commissioner found that over half (57%) of child respondents were uncomfortable with background monitoring processes, and 43% were unsure about these tools’ effectiveness in ensuring online safety.

And what of the role of parents? Article 3(2) of the UNCRC says: “States Parties undertake to ensure the child such protection and care as is necessary for his or her wellbeing, taking into account the rights and duties of his or her parents, legal guardians, or other individuals  legally responsible for him or her, and, to this end, shall take all appropriate legislative and administrative measures.” (my emphasis)

In 2018, 84% of 1,004 parents in England whom we polled through Survation agreed that children and guardians should be informed how this monitoring activity works, and wanted to know what the keywords were. (We didn’t ask whether it should happen at all.)

The wide-ranging nature [of general monitoring], rather than targeted and proportionate interference, has previously been judged to be in breach of the law and a serious interference with rights. Neither policy makers nor companies should assume parents want safety tech companies to remove autonomy, or to make inferences about our children’s lives. Parents, if asked, reject the secrecy in which it happens today and demand transparency and accountability. Teachers can feel anxious talking about it at all. There are no clear routes for error correction; in fact correction is often not done, because some claim that in building up profiles staff should not delete anything, and should ignore claims of errors, in case a pattern of behaviour is missed. But there are no independent assessments available to evidence that these tools work or are worth the costs. There are no routes for redress or responsibility taken for tech-made mistakes. None of which makes children safer online.

Before broadening out where such monitoring tools are used, their use and effects on school children need to be understood and openly debated. Policy makers may justify turning a blind eye to harms created by one set of technology providers while claiming that only the other tech providers are the problem, because it suits political agendas or industry aims, but children’s rights and their wellbeing should not be sacrificed in doing so. Opaque, unlawful and unsafe practice must stop. A quid pro quo for getting access to millions of children’s intimate behaviour should be transparent access to how the products work, and acceptance of standards for universal, safe, accountable practice. Families need to know what’s recorded, and to have routes for redress when a daughter researching ‘cliff walks’ gets flagged as a suicide risk, or an environmentally interested teenage son searching for information on ‘black rhinos’ is asked about his potential gang membership. The tools sold as solutions to online harms shouldn’t create more harm, as these reported real-life cases did.

Teachers are ‘involved in state surveillance’ as Fahy put it, through Prevent. Sunak was wrong to point away from the threats of the far right in his comments. But the far broader unspoken surveillance of children’s personal lives, behaviours and thoughts through general monitoring in schools, and what will be imposed through the Online Safety Bill more broadly, should concern us far more than was said.

Facebook View and Ray-Ban glasses: here’s looking at your kid

Ray-Ban (EssilorLuxottica) is selling glasses with ‘Facebook View’. The questions have already been asked whether they can be lawful in Europe, including in the UK, in particular as regards enabling the processing of children’s personal data without consent.

The Italian data authority has asked the company to explain via the Irish regulator:

  • the legal basis on which Facebook processes personal data;
  • the measures in place to protect people recorded by the glasses, children in particular;
  • questions of anonymisation of the data collected; and
  • the voice assistant connected to the microphone in the glasses.

While the first questions in Europe may be bound to data protection law and privacy, there are also questions of why Facebook has gone ahead despite the fate of Google Glass, which was withdrawn from general sale. You can see a pair displayed in a surveillance exhibit at the Victoria and Albert Museum (September 2021).

“We can’t wait to see the world from your perspective“, says Ray-Ban Chief Wearables Officer Rocco Basilico in the promotional video together with Mark Zuckerberg. I bet. But not as much as Facebook.

With cameras and microphones built in, up to around 30 videos or 500 photos can be stored on the glasses and shared with the Facebook companion app. While the teensy light on a corner is supposed to indicate that recording is in progress, the glasses look much like any others, indistinguishable from the rest of the Ray-Ban range. You can even buy them as prescription glasses, which intrigues me as to how that recording looks on playback, or when shared via the companion apps.

While the Data Policy doesn’t explicitly mention Facebook View in the wording on how it uses data to “personalise and improve our Products,” and the privacy policy is vague on Facebook View, it seems pretty clear that Facebook will use the video capture to enhance its product development in augmented reality.

“We believe this is an important step on the road to developing the ultimate augmented reality glasses“, says Mark Zuckerberg (05:46).

The company needs a lawful basis to be able to process the data it receives for those purposes. It determines those purposes, and is therefore a data controller for that processing.

In the supplemental policy the company says that “Facebook View is intended solely for users who are 13 or older.” Data protection law does not set age limits on who may use a product, but it does regulate the basis on which a child’s data may be processed, and that child may be the user setting up an account. It is also concerned with the data of the children who are recorded. By recognising the legal limitations on who can be an account owner, the company has a bit of a self-own here on what the law says about children’s data.

Personal privacy may have weak protection in data protection laws that offer the wearer exemptions for domestic** or journalistic purposes, but neither the user nor the company can avoid the fact that processing video and audio recordings may happen without (a) adequately informing the people whose data is processed, or (b) appropriate purpose limitation for any processing that Facebook the company performs across all of its front-end apps and platforms or back-end processes.

I’ve asked Facebook how I would, as a parent or child, be able to get a wearer to destroy a child’s images and video or voice recorded in a public space, to which I did not consent. How would I get to see that content once held by Facebook, or request its processing be restricted by the company, or user, or the data destroyed?

Testing the Facebook ‘contact our DPO’ process as if I were a regular user, fails. It has sent me round the houses via automated forms.

Facebook is clearly wrong here on privacy grounds but if you can afford the best in the world on privacy law, why would you go ahead anyway? Might they believe after nearly twenty years of privacy invasive practice and a booming bottom line, that there is no risk to reputation, no risk to their business model, and no real risk to the company from regulation?

It’s an interesting partnership, since Ray-Ban has no history in understanding privacy, and Facebook has a well-known controversial one. Reputational risk shared will not be reputational risk halved. And EssilorLuxottica has a share price to consider. I wonder if they carried out any due diligence risk assessment for their investors?

If and when enforcement catches up and the product is withdrawn, regulators must act as the FTC did on products developed from “ill-gotten data” (in that case, algorithms). (In the Matter of Everalbum and Paravision, Commission File No. 1923172.)

Destroy the data, destroy the knowledge gained, and remove it from any product development to date. All “Affected Work Product.”

Otherwise any penalty Facebook gets from this debacle will be just the cost of doing business, having bought itself a very nice training dataset for its AR product development.

Ray-Ban, of course, will take all the reputational hit if found enabling strangers to take covert video of our kids. No one expects any better from Facebook. After all, we all know, Facebook takes your privacy, seriously.


Reference:  Rynes: On why your ring video doorbell may make you a controller under GDPR.

https://medium.com/golden-data/rynes-e78f09e34c52 (Golden Data, 2019)

Judgment of the Court (Fourth Chamber), 11 December 2014, František Ryneš v Úřad pro ochranu osobních údajů, Case C‑212/13. Case file.

[Photo: exhibits from the Victoria and Albert Museum, September 2021]

The Rise of Safety Tech

At the CRISP-hosted Rise of Safety Tech event this week, the moderator asked an important question: what is Safety Tech? Very honestly, Graham Francis of the DCMS answered, among other things, “It’s an answer we are still finding a question to.”

From ISP-level controls to individual users, from the limitations of mobile phone battery power to app size compatibility, a variety of aspects of a range of technology were discussed. There is a wide range of technology across this conflated set of products packaged under the same umbrella term. Each can be very different from the others, even within one set of similar applications, such as school Safety Tech.

It worries me greatly that, in the run up to the Online Harms legislation, their promotion appears to have assumed the character of a done deal. Some of these tools are toxic to children’s rights because of the policy that underpins them. Legislation should not be gearing up to make the unlawful lawful, but to fix what is broken.

The current drive is towards the normalisation of the adoption of such products in the UK, and to make them routine. It contrasts with the direction of travel of critical discussion outside the UK.

Some Safety Tech companies have human staff reading flagged content and making decisions on it, while others claim to use only AI. Both might be subject to any future EU AI Regulation for example.

In the U.S. they also come under more critical scrutiny. “None of these things are actually built to increase student safety, they’re theater,” Lindsay Oliver, project manager for the Electronic Frontier Foundation, was quoted as saying in an article just this week.

Here in the U.K. their regulatory oversight is not only startlingly absent, but the government is becoming deeply invested in cultivating the sector’s growth.

The big questions include who watches the watchers, with what scrutiny and safeguards? Is it safe, lawful, ethical, and does it work?

Safety Tech isn’t only an answer we are still finding a question to. It is a world view, with a particular value set. Perhaps the only lens through which its advocates believe the world wide web should be seen, not only by children, but by anyone. And one that the DCMS is determined to promote with “the UK as a world-leader” in a worldwide export market.

As an example, one of the companies the DCMS champions in its May 2020 report, “Safer technology, safer users”, already claims to export globally. eSafe Global now provides a service to about 1 million students, and to schools throughout the UK, UAE, Singapore and Malaysia, and has been used in schools in Australia since 2011.

But does the Department understand what it is promoting? The DCMS Minister responsible, Oliver Dowden, said in Parliament on December 15th 2020: “Clearly, if it was up to individuals within those companies to identify content on private channels, that would not be acceptable—that would be a clear breach of privacy.”

He’s right. It is. And yet he and his Department are promoting it.

So how is this going to play out, if at all, in the Online Harms legislation expected soon, which he owns together with the Home Office? Sadly, the needed level of understanding by the Minister, in the third sector, and in much of the policy debate in the media is not only missing, but actively suppressed by the moral panic whipped up in emotive personal stories around a Duty of Care and social media platforms. Discussion is siloed into identifying CSAM, or grooming, or bullying, or self harm, and actively ignores the joined-up, wider context within which Safety Tech operates.

That context is the world of the Home Office. Of anti-terrorism efforts. Of mass surveillance and efforts to undermine encryption that are nearly as old as the Internet. The efforts to combat CSAM or child grooming online operate in the same space. WePROTECT, for example, sits squarely amid it all, established in 2014 by the UK Government and the then UK Prime Minister, David Cameron. UK breaches of human rights law are well documented in ECHR rulings. Other state members of the alliance, including the UAE, stand accused of buying spyware to breach activists’ encrypted communications. It is disingenuous for any school Safety Tech actors to talk only of child protection without mention of this context. School Safety Tech products, while all different, operate by tagging digital activity with categories of risk, and these tags can include terrorism and extremism.

Once upon a time, school filtering and blocking services meant only denying access to online content that had no place in the classroom. Now it can mean monitoring all the digital activity of individuals, online and offline, on school or personal devices, working around encryption, whenever connected to the school network. And it’s not all about in-school activity. No matter where a child’s account is connected to the school network, or who is actually using it, their activity might be monitored 24/7, 365 days a year. Activity that matches any of the thousands of words or phrases on watchlists and in keyword libraries gets logged, profiles individuals with ‘vulnerable’ behaviour tags, and sometimes creates alerts. Their scope has crept from flagging up content, to flagging up children. Some schools create permanent records, including false positives, because they retain everything in a risk-averse environment, even things a child typed and subsequently deleted; records which may be distributed to and accessible by an indefinite number of school IT staff, and stored in further third parties’ systems like CPOMS or Capita SIMS.
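
To picture what that retention looks like in practice, here is a minimal sketch of how flags might accumulate on a pupil’s record; the field names, categories and behaviour are hypothetical assumptions for illustration, not the actual schema of any product named above.

```python
# Hypothetical illustration of how flags accumulate on a retained pupil profile.
# All field names and categories are invented; no vendor's real schema is shown here.
from dataclasses import dataclass, field
from datetime import datetime
from typing import List

@dataclass
class Flag:
    timestamp: datetime
    matched_term: str
    category: str           # an inferred risk tag, e.g. "extremism" - an inference, not a verified fact
    raw_text: str           # may include text the child typed and then deleted
    reviewed: bool = False  # nothing in this sketch forces review or deletion of false positives

@dataclass
class PupilProfile:
    pupil_id: str
    flags: List[Flag] = field(default_factory=list)

    def add_flag(self, flag: Flag) -> None:
        # In a risk-averse setting flags are appended and retained indefinitely;
        # there is deliberately no code path here for correcting or expiring a false positive.
        self.flags.append(flag)

profile = PupilProfile("pupil-001")
profile.add_flag(Flag(datetime.now(), "rhino", "gangs", "black rhinos homework project"))
print(len(profile.flags), "flag(s) retained on this record")
```

What the sketch highlights is a design choice: the absence of any deletion or correction path, mirroring the retention practice described above.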

A wide range of the rights of the child are breached by mass monitoring in the UK, such as outlined in the UN Committee on the Rights of the Child General Comment No.25 which states that, “Any digital surveillance of children, together with any associated automated processing of personal data, should respect the child’s right to privacy and should not be conducted routinely, indiscriminately or without the child’s knowledge or, in the case of very young children, that of their parent or caregiver; nor should it take place without the right to object to such surveillance, in commercial settings and educational and care settings, and consideration should always be given to the least privacy-intrusive means available to fulfil the desired purpose.” (para 75)

Even the NSPCC, despite its recent public policy that opposes secure messaging using end-to-end encryption, recognises on its own Childline webpage the risk for children from content monitoring of children’s digital spaces, and that such monitoring may make them less safe.

In my work in 2018, one school Safety Tech company accepted our objections from defenddigitalme that this monitoring went too far in its breach of children’s confidentiality and safe spaces, and it agreed to stop monitoring counselling services. But there are roughly fifteen active companies here in the UK, and the data protection regulator, the ICO, despite being publicly so keen to be seen to protect children’s rights, has declined to act to protect children from the breach of their privacy and data protection rights across this field.

There are questions that should be straightforward to ask and answer, and while some CEOs are more willing than others to engage constructively with criticism and ideas for change, there is reluctance to address the key question: what is the lawful basis for monitoring children in school, at home, in or outside school hours?

Another important question, often without an answer, is how these companies train their algorithms, whether in age verification or child safety tech. How accurate are the language inferences for an AI designed to catch out children who are being deceitful, and where are the assumptions, machine- or man-made, wrong or discriminatory? It is overdue that our regulator, the ICO, should do what the FTC did with Paravision, and require companies that develop tools through unlawful data processing to delete the output from it: the trained algorithm, plus the products created from it.

Many of the harms from profiling children were recognised by the ICO in the Met Police gangs matrix: discrimination, conflation of victim and perpetrator, notions of ‘pre-crime’ without independent oversight,  data distributed out of context, and excessive retention.

Harm is after all why profiling of children should be prohibited. And where, in exceptional circumstances, States may lift this restriction, it is conditional that appropriate safeguards are provided for by law.

While I believe any of the Safety Tech generated category profiles could be harmful to a child, through mis-interventions, through being treated differently by staff as a result, or through harm to a trusted relationship, perhaps the most potentially devastating to a child’s prospects are mistakes that could be made under the Prevent duty.

The UK Home Office has pushed its Prevent agenda through schools since 2015, and it has been built into school Safety Tech by design. School Safety Tech products, while all different, operate by tagging digital activity with categories of risk, and these tags can include terrorism and extremism. I know of schools that have terrorism-related flags attached to children’s records, where there has been no Prevent referral. But there is no transparency of these numbers at all. There is no oversight to ensure children do not stay wrongly tagged with those labels. Families may never know.

Perhaps the DCMS needs to ask itself, are the values of the UK Home Office really what the UK should export to children globally from “the UK as a world-leader” without independent legal analysis, without safeguards, and without taking accountability for their effects?

The Home Office’s values are demonstrated in its approach to the life and death of migrants at sea, to children with no recourse to public funds, and to discriminatory stop and search: a Department that doesn’t care enough even to understand or publish the impact of its interventions on children and their families.

The Home Office talk is of safeguarding children, but it is opposed to them having safe spaces online. School Safety Tech tools actively work around children’s digital security, can act as a man-in-the-middle, and can create new risks. There is no evidence I have seen that on balance convinces me that school Safety Tech does in fact make children safer. But plenty of evidence that the Home Office appears to want to create the conditions that make children less secure so that such tools could thrive, by weakening the security of digital activity through its assault on end-to-end encryption. My question is whether Online Harms is to be the excuse to give it a lawful basis.

Today there are zero statutory transparency obligations, testing or safety standards required of school Safety Tech before it can be procured in UK state education at scale.

So what would a safe and lawful framework for operation look like? It would be open to scrutiny and require regulatory action, and law.

There are no published numbers of how many records are created about how many school children each year. There are no safeguards in place to protect children’s rights or protection from harm in terms of false positives, error retention, transfer of records to the U.S. or third party companies, or how many covert photos they have enabled to be taken of children via webcam by school staff.  There is no equivalent of medical device ‘foreseeable misuse risk assessment’  such as ISO 14971 would require, despite systems being used for mental health monitoring with suicide risk flags. Children need to know what is on their record and to be able to seek redress when it is wrong. The law would set boundaries and safeguards and both existing and future law would need to be enforced. And we need independent research on the effects of school surveillance, and its chilling effects on the mental health and behaviour of developing young people.

Companies may argue they are transparent, and seek to prove how accurate their tools are. Perhaps they may become highly accurate.

But no one in the school Safety Tech sector is yet willing to say: these are the thousands of words that, if your child types them, may trigger a flag; or indeed, here is an annual report of all the triggered flags and your own or your child’s saved profile. A school’s interactions with children’s social care already offer a framework for dealing with information that could put a child at risk from family members, so reporting should be doable.

At the end of the event this week, the CRISP event moderator said of their own work, outside schools, that, “we are infiltrating bad actor networks across the globe and we are looking at everything they are saying. […] We have a viewpoint that there are certain lines where privacy doesn’t exist anymore.”

Their company website says their work involves, “uncovering and predicting the actions of bad actor, activist, agenda-driven and interest groups“. That’s a pretty broad conflation right there.  Their case studies include countering social media activism against a luxury apparel brand. And their legal basis of ‘legitimate interests‘ for their data processing might seem flimsy at best, for such a wide ranging surveillance activity where, ‘privacy doesn’t exist anymore’.

I must often remind myself that the people behind Safety Tech may epitomise the very best of what some believe is making the world safer online, as they see it. But it is *as they see it*. And if policy makers or CEOs have convinced themselves that because ‘we are doing it for good, a social impact, or to safeguard children’, breaking the law is OK, then it should be a red flag that these self-appointed ‘good guys’ appear to think themselves above the law.

My takeaway time and time again, is that companies alongside governments, policy makers, and a range of lobbying interests globally, want to redraw the lines around human rights, so that they can overstep them. There are “certain lines” that don’t suit their own business models or agenda. The DCMS may talk about seeing its first safety tech unicorn, but not about the private equity funding, or where they pay their taxes. Children may be the only thing they talk about protecting but they never talk of protecting children’s rights.

In the school Safety Tech sector, there is activity that I believe is unsafe, or unethical, or unlawful. There is no appetite or motivation so far to fix it. If in upcoming Online Harms legislation the government seeks to make lawful what is unlawful today, I wonder who will be held accountable for the unsafe and the unethical that come with the package deal, and will the Minister run that reputational risk?


“Michal Serzycki” Data Protection Award 2021

It is a privilege to be a joint-recipient in the fourth year of the “Michal Serzycki” Data Protection Award, and I thank the Data Protection Authority in Poland (UODO) for the recognition of work for the benefit of promoting data protection values and the right to privacy.

I appreciate the award in particular as the founder of an NGO, and the indirect acknowledgement of the value of NGOs to be able to contribute to public policy, including openness towards international perspectives, standards, the importance of working together, and our role in holding the actions of state authorities and power to account, under the rule of law.

The award is shared with Mrs Barbara Gradkowska, Director of the Special School and Educational Center in Zamość, whose work in Poland has been central to the initiative, Your Data — Your Concern, an educational Poland-wide programme for schools that is supported and recognized by the UODO. It offers support to teachers in vocational training centres, primary, middle and high schools related to personal data protection and the right to privacy in education.

And it is also shared with Mr Maciej Gawronski, Polish legal advisor and authority in data protection, information technology, cloud computing, cybersecurity, intellectual property and business law.

The UODO has long been a proactive advocate in the schools’ sector in Poland for the protection of children’s data rights, including recent enforcement after finding the processing of children’s biometric data, using fingerprint readers for access to a school canteen, unlawful, and ensuring the destruction of pupil data obtained unlawfully.

In the rush to remote learning in 2020 in response to school closures in COVID-19, the UODO warmly received our collective international call for action, a letter in which over thirty organisations worldwide called on policy makers, data protection authorities and technology providers, to take action, and encouraged international collaboration to protect children around the world during the rapid adoption of digital educational technologies (“edTech”). The UODO issued statements and a guide on school IT security and data protection.

In September 2020, I worked with their Data Protection Office at a distance, delivering a seminar for teachers on remote education.

The award also acknowledges my part in the development of the Guidelines on Children’s Data Protection in an Education Setting adopted in November 2020, working in collaboration with country representatives at the Council of Europe Committee for Convention 108, as well as with observers, and the Committee’s incredible staff.

2020 was a difficult year for people around the world under COVID-19 to uphold human rights and hold the space to push back on encroachment, especially for NGOs, and in community struggles from the Black Lives Matter movement, to environmental action, to UK students on the streets of London protesting algorithmic unfairness. In Poland the direction of travel is to reduce women’s rights in particular. Poland’s ruling Law and Justice (PiS) party has been accused of politicising the constitutional tribunal and using it to push through its own agenda on abortion, and the government appears set on undermining the rule of law, creating a ‘chilling effect’ for judges. The women of Poland are again showing the world what it means, and what it can cost, to lose progress made.

In England at defenddigitalme, we are waiting to hear later this month what our national Department for Education will do to better protect millions of children’s rights in the management of national pupil records, after the audit and intervention by our data protection regulator, the ICO. Among other sensitive content, the National Pupil Database holds sexual orientation data on almost 3.2 million students’ named records, and religious belief on 3.7 million.

defenddigitalme is a call to action to protect children’s rights to privacy across the education sector in England, and beyond. Data protection has a role to play within the broader rule of law to protect and uphold the right to privacy, to prevent state interference in private and family life, and in the protection of the full range of human rights necessary in a democratic society. Fundamental human rights must be universally protected to foster human flourishing, to protect the personal dignity and freedoms of every individual, and to promote social progress and better standards of life in larger freedoms.


The award was announced at the conference, “Real personal data protection in remote reality”, organised by the Personal Data Protection Office (UODO) as part of the celebration of the 15th Data Protection Day on 28 January 2021, with an award ceremony held on its eve in Warsaw.

Women Leading in AI — Challenging the unaccountable and the inevitable

Notes [and my thoughts] from the Women Leading in AI launch event of the Ten Principles of Responsible AI report and recommendations, February 6, 2019.

Speakers included Ivana Bartoletti (GemServ), Jo Stevens MP, Professor Joanna J Bryson, Lord Tim Clement-Jones, Roger Taylor (Centre for Data Ethics and Innovation, Chair), Sue Daley (techUK), Reema Patel, Nuffield Foundation and Ada Lovelace Institute.

Challenging the unaccountable and the ‘inevitable’ is the title of the conclusion of the Women Leading in AI report Ten Principles of Responsible AI, launched this week, and this makes me hopeful.

“There is nothing inevitable about how we choose to use this disruptive technology. […] And there is no excuse for failing to set clear rules so that it remains accountable, fosters our civic values and allows humanity to be stronger and better.”

Ivana Bartoletti, co-founder of Women Leading in AI, began the event, hosted at the House of Commons by Jo Stevens, MP for Cardiff Central, and spoke brilliantly of why it matters right now.

Everyone’s talking about ethics, she said, but it has limitations. I agree with that. This was by contrast very much a call to action.

It was nearly impossible not to cheer, as she set out without any of the usual bullshit, the reasons why we need to stop “churning out algorithms which discriminate against women and minorities.”

Professor Joanna J Bryson took up multiple issues, such as:

  • why innovation, ‘flashes in the pan’, is not sustainable and not what we’re looking for in things that work for us [society];
  • the power dynamics of data, noting that Facebook, Google et al are global assets, and also global problems, and flagging the UK consultation on taxation open now;
  • and that it is critical that we do not have another nation with access to all of our data.

She challenged the audience to think about the fact that inequality is higher now than it has been since World War I. That the rich are getting richer and that imbalance of not only wealth, but of the control individuals have in their own lives, is failing us all.

This big picture thinking while zooming in on detailed social, cultural, political and tech issues, fascinated me most that evening. It frustrated the man next to me apparently, who said to me at the end, ‘but they haven’t addressed anything on the technology.’

[I wondered if that summed up neatly, some of why fixing AI cannot be a male dominated debate. Because many of these issues for AI, are not of the technology, but of people and power.] 

Jo Stevens, MP for Cardiff Central, hosted the event and was candid about politicians’ level of knowledge and the need to catch up on some of what matters in the tech sector.

We grapple with the speed of tech, she said. We’re slow at doing things and tech moves quickly. It means that we have to learn quickly.

While discussing how regulation is not something AI tech companies should fear, she suggested that a constructive framework, one which protects society against some of the problems we see, is necessary and just, because self-regulation has failed.

She talked about their enquiry, which began with “fake news” and disinformation, but has grown to include:

  • wider behavioural economics;
  • how it affects democracy;
  • understanding the power of data;
  • disappointment with social media companies, who understand the power they have, and fail to be accountable.

She wants to see something that changes the way big business works, in the way that employment regulation challenged exploitation of the workforce and unsafe practices in the past.

The bias (conscious or unconscious) and power imbalance have some similarity with the effects on marginalised communities — women, BAME, disabilities — and she was looking forward to seeing the proposed solutions, and welcomed the principles.

Lord Clement-Jones, as Chair of the Select Committee on Artificial Intelligence, picked up the values the Committee had highlighted in its 2018 report, AI in the UK: ready, willing and able?

Right now there are so many different bodies, groups in parliament and others looking at this [AI / Internet / The Digital World], he said, so it was good that the topic is timely and front and centre, with a focus on women, diversity and bias.

He highlighted the importance of maintaining public trust. How do you understand bias? How do you know how algorithms are trained, and understand the issues? He fessed up to being a big fan of DotEveryone and their drive for better ‘digital understanding’.

[Though sometimes this point is over-complicated by suggesting individuals must understand how the AI works, the consensus of the evening was common-sense — and aligned with the Working Party 29 guidance — that data controllers must ensure they explain clearly and simply to individuals how the profiling or automated decision-making process works, and what its effect is for them.]

The way forward, he said, includes:

  • Designing ethics into algorithms up front.
  • Data audits need to be diverse in order to embody fairness and diversity in the AI.
  • Questions of the job market and re-skilling.
  • The enforcement of ethical frameworks.

He also asked how far bodies will act, in different debates. Deciding who decides on that is still a debate to be had.

For example, aware of the social credit agenda and scoring in China, we should avoid the same issues. He also agreed with Joanna that international cooperation is vital, and said it is important that we are not disadvantaged in this global technology. He expected that we [the Government Office for AI] would soon promote a common set of AI ethics at the G20.

Facial recognition and AI are examples of areas that require regulation for safe use of the tech and to weed out those using it for the wrong purposes, he suggested.

However, on regulation he held back. We need to be careful about too many regulators he said. We’ve got the ICO, FCA, CMA, OFCOM, you name it, we’ve already got it, and they risk tripping over one another. [What I thought as CDEI was created para 31.]

We [the Lords Committee] didn’t suggest yet another regulator for AI, he said; instead, the CDEI should grapple with those issues and encourage ethical design in micro-targeting, for example.

Roger Taylor (Chair of the CDEI), after saying it felt as if the WLinAI report was like someone had left their homework on his desk, supported the idea that the WLinAI principles are important, and agreed it was time for practical things, and for working out what needs to be done.

Can our existing regulators do their job and cover AI? he asked, suggesting new regulators will not be necessary. Bias, he rightly recognised, already exists in our laws and bodies with public obligations, and in how AI is already operating:

  • CV sorting. [problematic IMO > See Amazon, US teachers]
  • Policing.
  • Creditworthiness.

What evidence is needed, what process is required, and what is needed to assure that we know how it is actually operating? Who gets to decide whether this is fair or not? While these are complex decisions, they are ultimately not for technicians, but for society, he said.

[So far so good.]

Then he made some statements which were rather more ambiguous. The standards expected of the police will not be the same as those for marketeers micro-targeting adverts at you, for example.

[I wondered how and why.]

Start-up industries pay more to Google and Facebook than they do in taxes, he said.

[I wondered how and why.]

When we think about a knowledge economy, the output of our most valuable companies is increasingly ‘what is our collective truth? Do you have this diagnosis or not? Are you a good credit risk or not? Even who you think you are — your identity will be controlled by machines.’

What can we do as one country [to influence these questions on AI], in what is a global industry? He believes, a huge amount. We are active in the financial sector, the health service, education, and social care — and while we are at the mercy of large corporations, even large corporations obey the law, he said.

[Hmm, I thought, considering the Google DeepMind-Royal Free agreement that didn’t, and venture capitalists not renowned for their ethics, and yet advise on some of the current data / tech / AI boards. I am sceptical of corporate capture in UK policy making.]

The power to use systems to nudge our decisions, he suggested, is one that needs careful thought. The desire to use the tech to help make decisions is inbuilt into what is actually wrong with the technology that enables us to do so. [With this I strongly agree, and there is too little protection from nudge in data protection law.]

The real question here is, “What is OK to be owned in that kind of economy?” he asked.

This was arguably the neatest and most important question of the evening, and I vigorously agreed with him asking it, but then I worry about his conclusion in passing, that he was “very keen to hear from anyone attempting to use AI effectively, and encountering difficulties because of regulatory structures.”

[And unpopular or contradictory as this view may be, I find it deeply ethically problematic for the position of Chair of the CDEI to be held by someone who had a joint venture that commercially exploited confidential data from the NHS without public knowledge, and whose sale to the Department of Health was described by the Public Accounts Committee as a “hole and corner deal”. That was the route towards care.data, which his co-founder later led for NHS England. The company was then bought by Telstra, where Mr Kelsey went next on leaving NHS England. The whole commodification of the confidentiality of public data, without regard for public trust, is still a barrier to sustainable UK data policy.]

Sue Daley (techUK) agreed this year needs to be the year we see action, and that the report is a call to action on issues that warrant further discussion.

  • Business wants to do the right thing, and we need to promote it.
  • We need two things — confidence and vigilance.
  • We’re not starting from scratch — she talked about GDPR as the floor, not the ceiling; a starting point.

[I’m not quite sure what she was after here, but perhaps it was the suggestion that data regulation is fundamental in AI regulation, with which I would agree.]

What is the gap that needs filled, she asked? Gap analysis is what we need next, avoiding duplication of effort, and avoiding complexity and duplication of work with other bodies. The big, profound questions need to be answered to position the UK as the place where companies want to come.

Sue was the only speaker who went on to talk about the education system, and the need to frame what skills are needed for a future world, for a generation ‘to thrive in the world we are building for them.’

[The Silicon Valley driven entrepreneur narrative that the education system is broken, is not an uncontroversial position.]

She finished with the hope that young people watching BBC Icons the night before would see Alan Turing [winner of the title] and say, yes, I want to be part of that.

Listening to Reema Patel, representative of the Ada Lovelace Institute, was the reason I didn’t leave early, and so missed my evening class. Everything she said resonated, and was some of the best I have heard in the recent UK debate on AI.

  • Civic engagement: the role of the public is as yet unclear, with not one homogeneous public but many publics.
  • The sense of disempowerment is important, with disconnect between policy and decisions made about people’s lives.
  • Transparency and literacy are key.
  • Accountability is vague but vital.
  • What does the social contract look like on people using data?
  • Data may not only be about an individual and under their own responsibility, but about others too. What does that mean for data rights, data stewardship, and the articulation of how they connect with one another? That is lacking in the debate.
  • Legitimacy: if people don’t believe it is working for them, it won’t work at all.
  • Ensuring tech design is responsive to societal values.

2018 was a terrible year she thought. Let’s make 2019 better. [Yes!]


Comments from the floor and questions included Professor Noel Sharkey, who spoke about the reasons why it is urgent to act, especially where technology is unfair and unsafe and already in use. He pointed to Compass (Durham police), and to predictive policing using AI and facial recognition with 5% accuracy, and said that the Met was not taking these flaws seriously. Liberty produced a strong report on it, out this week.

Caroline, from Women in AI, echoed my own comments on the need to put urgent review in place for these technologies used with children in education and social care [in particular where used for prediction of child abuse and interventions in family life].

Joanna J Bryson added to the conversation on accountability, saying that people are not following existing software and audit protocols; someone just needs to go and see if people did the right thing.

The basic question of accountability is to ask whether any flaw is the fault of a corporation, of due diligence, or of the users of the tool. Telling people that this is the same problem as any other software makes it much easier to find solutions to accountability.

Tim Clement-Jones asked how many fronts we can fight on at the same time, if government has appeared to exempt itself from some of these issues and created a weak framework for itself on handling data in the Data Protection Act. Critically, he also asked: is the ICO adequately enforcing on government and public accountability, at local and national levels?

Sue Daley also reminded us that politicians need not know everything, but need to know what the right questions are to ask: what are the effects this has on my constituents, on employment, on my family? And while she also suggested that not using the technology could be unethical, a participant countered that it’s not the worst thing to have to slow technology down and ensure it is safe before we all go along with it.

My takeaways from the evening included that there is a very large body of women, of whom attendees were only a small part, who are thinking about, building and engineering solutions to some of these societal issues embedded in policy, practice and technology. They need heard.

It was genuinely electric and empowering to be in a room dominated by women — women reflecting the diversity of a variety of publics, ages, and backgrounds, and who listened to one another. It was certainly something out of the ordinary.

There was a subtle but tangible tension on whether or not regulation beyond what we have today is needed.

While regulating the human behaviour that becomes encoded in AI, we need to ensure that the ethics of human behaviour, reasonable expectations and fairness are not conflated with the technology itself [ie a question of whether AI is good or bad], but are about how it is designed, trained, employed and audited, and an assessment of whether it should be used at all.

This was the most effective group challenge I have heard to date, countering the usual assumed inevitability of a mythical omnipotence. Perhaps, Julia Powles, this is the beginning of a robust, bold, imaginative response.

Why there are not more women or people from minorities working in the sector was a really interesting, if short, part of the discussion. Why should young women and minorities want to go into an environment that they can see is hostile, in which they may not be heard, and where we still hold *them* responsible for making work work?

And while there were many voices lamenting the skills and education gaps, there were probably fewer who might see the solution more simply, as I do. Schools are foreshortening Key Stage 3 by a year, replacing a breadth of subjects with an earlier, compulsory three-year GCSE curriculum which includes RE and PSHE, but which means that at 12, many children are having to choose between GCSE courses in computer science / coding, a consumer-style iMedia, or no IT at all, for the rest of their school life. This either-or content is incredibly short-sighted, and surely some blend of non-examined digital skills should be offered through to 16 for all, at least in parallel importance with RE or PSHE.

I also still wonder about all that incredibly bright and engaged people are not talking about, not solving, and missing in policy making, while caught up in AI. We need to keep thinking broadly, and keep human rights at the centre of our thinking on machines. Anaïs Nin wrote over 70 years ago about the risk that growth in technology would expand our potential for connectivity through machines, but diminish our genuine connectedness as people.

“I don’t think the [American] obsession with politics and economics has improved anything. I am tired of this constant drafting of everyone, to think only of present day events”.

And as I wrote about nearly 3 years ago, we still seem to have no vision for sustainable public policy on data, or for establishing a social contract for its use, as Reema said, which should underpin the UK AI debate. Meanwhile, the current changing national public policies in England on identity and technology are becoming catastrophic.

Challenging the unaccountable and the ‘inevitable’ in today’s technology and AI debate, is an urgent call to action.

I look forward to hearing how Women Leading in AI plan to make it happen.


References:

Women Leading in AI website: http://womenleadinginai.org/
WLiAI Report: 10 Principles of Responsible AI
@WLinAI #WLinAI

image credits 
post: creative commons Mark Dodds/Flickr
event photo:  / GemServ

Policy shapers, product makers, and profit takers (1)

In 2018, ethics became the new fashion in UK data circles.

The launch of the Women Leading in AI principles of responsible AI has prompted me to try and finish and post these thoughts, which have been on my mind for some time. If two parts of roughly 1,000 words each is tl;dr for you, then in summary, we need more action on:

  • Ethics as a route to regulatory avoidance.
  • Framing AI and data debates as a cost to the Economy.
  • Reframing the debate around imbalance of risk.
  • Challenging the unaccountable and the ‘inevitable’.

And in the next post on:

  • Corporate Capture.
  • Corporate Accountability, and
  • Creating Authentic Accountability.

Ethics as a route to regulatory avoidance

In 2019, the calls to push aside old wisdoms for new, and for everyone to focus on the value-laden words of ‘innovation’ and ‘ethics’, appear an ever louder attempt to reframe regulation and law as barriers to business, and to ask that they be cast aside.

On Wednesday evening, at the launch of the Women Leading in AI principles of responsible AI, the chair of the CDEI said in closing, he was keen to hear from companies where, “they were attempting to use AI effectively and encountering difficulties due to regulatory structures.”

In IBM’s own words to government recently,

“A rush to further regulation can have the effect of chilling innovation and missing out on the societal and economic benefits that AI can bring.”

The vague threat is very clear: if you regulate, you’ll lose. But the societal and economic benefits are just as vague.

So far, many talking about ethics are trying to find a route to regulatory avoidance. ‘We’ll do better,’ they promise.

In Ben Wagner’s recent paper, Ethics as an Escape from Regulation: From ethics-washing to ethics-shopping, he asks how to ensure this does not become the default engagement with ethical frameworks or rights-based design. He sums up, “In this world, ‘ethics’ is the new ‘industry self-regulation’.”

Perhaps it’s ingenious PR to make sure that what is in effect self-regulation, right across the business model, looks as if it is imposed by others — by the very bodies set up to fix it.

But as I think about in part 2, is this healthy for UK public policy and the future not of an industry sector, but a whole technology, when it comes to AI?

Framing AI and data debates as a cost to the Economy

Companies, organisations and individuals arguing against regulation are framing the debate as if it would come at a great cost to society and the economy. But we rarely hear what effect they expect on their own company, or what the expected cost/benefit is for them. It’s disingenuous to have only part of that conversation. In fact the AI debate would be richer were it to be included. If companies think their innovation or profits are at risk from non-use, or regulated use, and there is risk to the national good associated with these products, we should be talking about all of that.

And in addition, we can talk about use and non-use in society. Too often, the whole debate is intangible. Show me real costs, real benefits. Real risk assessments. Real explanations that speak human. Industry should show society what’s in it for them.

You don’t want it to ‘turn out like GM crops’? Then learn their lessons on transparency, trustworthiness, and avoid the hype. And understand that sometimes there is simply tech people do not want.

Reframing the debate around imbalance of risk

And while we often hear about the imbalance of power associated with using AI, we also need to talk about the imbalance of risk.

While a small false positive rate for a company product may be a great success for them, or for a Local Authority buying the service, it might at the same time, mean lives forever changed, children removed from families, and individual reputations ruined.
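To make that concrete, here is a rough sketch in Kotlin with purely illustrative numbers — not drawn from any real product, dataset or Local Authority — showing how even a “small” one per cent false positive rate, applied at the scale of a screening programme, can wrongly flag more families than there are true cases.

fun main() {
    // Purely illustrative numbers, not taken from any real deployment.
    val familiesScreened = 100_000   // hypothetical screening caseload
    val trulyAtRisk = 500            // hypothetical prevalence: 0.5% of cases
    val falsePositiveRate = 0.01     // the vendor's "small" 1% error rate

    val wronglyFlagged = ((familiesScreened - trulyAtRisk) * falsePositiveRate).toInt()
    println("Families wrongly flagged: $wronglyFlagged")  // prints 995

    // Roughly twice as many families are wrongly flagged as there are true
    // cases, before counting any false negatives at all. A statistic that is
    // a "success" for the vendor is a life-changing event for each of them.
}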

And where company owners may see no risk from the product they assure us is safe, there are intangible risks that need factored in — for example in education, where a child’s learning pathway is determined by patterns of behaviour, and where tools shape individualised learning, as well as the model of education.

Companies may change business model, ownership, and move on to other sectors after failure. But with the levels of unfairness already felt in the relationship between the citizen and State — in programmes like Troubled Families, Universal Credit, Policing, and Prevent — where use of algorithms and ever larger datasets is increasing, long term harm from unaccountable failure will grow.

Society needs a rebalance of the system urgently to promote transparent fairness in interactions, including but not only those with new applications of technology.

We must find ways to reframe how this imbalance of risk is assessed, and is distributed between companies and the individual, or between companies and state and society, and enable access to meaningful redress when risks turn into harm.

If we are to do that, we need first to separate truth from hype, public good from self-interest and have a real discussion of risk across the full range from individual, to state, to society at large.

That’s not easy against a non-neutral backdrop and scant sources of unbiased evidence and corporate capture.

Challenging the unaccountable and the ‘inevitable’.

In 2017 the Care Quality Commission reported on online services in the NHS, and found serious concerns of unsafe and ineffective care. It has a cross-regulatory working group.

By contrast, no one appears to oversee that risk and the embedded use of automated tools involved in decision-making or decision support, in children’s services, or education. Areas where AI and cognitive behavioural science and neuroscience are already in use, without ethical approval, without parental knowledge or any transparency.

Meanwhile, as all this goes on, many academics are busy debating how to fix algorithmic bias, accountability and transparency.

Few are challenging the narrative of the ‘inevitability’ of AI.

Julia Powles and Helen Nissenbaum recently wrote that many of these current debates are an academic distraction, removed from reality. It is underappreciated how deeply these tools are already embedded in UK public policy. “Trying to “fix” A.I. distracts from the more urgent questions about the technology. It also denies us the possibility of asking: Should we be building these systems at all?”

Challenging the unaccountable and the ‘inevitable’ is the title of the conclusion of the Women Leading in AI report on principles, and makes me hopeful.

“There is nothing inevitable about how we choose to use this disruptive technology. […] And there is no excuse for failing to set clear rules so that it remains accountable, fosters our civic values and allows humanity to be stronger and better.”

[1] Powles, Nissenbaum, 2018, The Seductive Diversion of ‘Solving’ Bias in Artificial Intelligence, Medium

Next: Part 2 — Policy shapers, product makers, and profit takers, on:

  • Corporate Capture.
  • Corporate Accountability, and
  • Creating Authentic Accountability.

Policy shapers, product makers, and profit takers (2)

Corporate capture

Companies increasingly control the tech narrative in the press. They fund neutral third-sector orgs’ and think tanks’ research. They support organisations advising on online education. They are closely involved in politics. And they sit, increasingly, within the organisations set up to lead the technology vision, advising government on policy and UK data analytics, or on social media, AI and ethics.

It is all subject to corporate capture.

But is this healthy for UK public policy and the future not of an industry sector, but a whole technology, when it comes to AI?

If a company’s vital business interests seem unfazed by the risk and harm they cause to individuals — from people who no longer trust the confidentiality of the system to measurable harms — why should those companies sit on public policy boards set up to shape the ethics they claim we need, to solve the problems and restore loss of trust that these very same companies are causing?

We laud people in these companies as co-founders and forward thinkers on new data ethics institutes. They are invited to sit on our national boards, or create new ones.

What does that say about the entire board’s respect for the law which the company breached? It is hard not to see it signal acceptance of the company’s excuses or lack of accountability.

Corporate accountability

The same companies whose work has breached data protection law in multiple ways, seemingly ‘by accident’, on national data extractions, are those companies that cross the t’s and dot the i’s on even the simplest conference call, and demand everything is said in strictest confidence. Meanwhile their everyday business practices ignore millions of people’s lawful rights to confidentiality.

The extent of commercial companies’ influence on these boards is opaque. To allow this ethics bandwagon to be driven by the corporate giants surely sets aside genuine rights-based values, and undermines the long-term integrity of the body they appear to serve.

I am told that these global orgs must be in the room and at the table, to use the opportunity to make the world a better place.

These companies already have *all* the opportunity. Not only monopoly positions on their own technology, but the datasets at scale which underpin it, excluding new entrants to the market. Their pick of new hires from universities. The sponsorship of events. The political lobbying. Access to the media. The lawyers. Bottomless pockets to pay for it all. And seats at board tables set up to shape UK policy responses.

It’s a struggle for power, and a stake in our collective future. The status quo is not good enough for many parts of society, and to enable Big Tech or big government to maintain that simply through the latest tools, is a missed chance to reshape for good.

You can see it in many tech boards’ make-up, and their pervasive white male bias. We hear it echoed in London think tank conferences, even in independent tech design agencies, or set out in some Big Tech reports. All seemingly unconnected, but often funded by the same driving sources.

These companies are often those that made it worse to start with, and the very ethics issues the boards have been set up to deal with, are at the core of their business models and of their making.

The deliberate infiltration of influence on online safety policy for children, or global privacy efforts is very real, explicitly set out in the #FacebookEmails, for example.

We will not resolve these fundamental questions as long as the companies whose businesses depend on them steer national policy. The odds will be ever in their favour.

At the same time, some of these individuals are brilliant. In all senses.

So what’s the answer? If they are around the table, what should the UK public expect of their involvement, and how do we ensure whose best interests it serves? How do we achieve authentic accountability?

Whether it be social media, data analytics, or AI in public policy, can companies be safely permitted to be policy shapers if they wear all the hats; product maker, profit taker, *and* process or product auditor?

Creating Authentic Accountability

At minimum we must demand responsibility for their own actions from board members who represent or are funded by companies.

  1. They must deliver on their own product problems first before being allowed to suggest solutions to societal problems.
  2. There should be credible separation between informing policy makers, and shaping policy.
  3. There must be total transparency of funding sources across any public sector boards, of members, and those lobbying them.
  4. Board members must be meaningfully held accountable for continued company transgressions on rights and freedoms, not only harms.
  5. Oversight of board decision making must be decentralised, transparent and available to scrutiny and meaningful challenge.

While these new bodies may propose solutions that include public engagement strategies, transparency, and standards, few propose meaningful oversight. The real test is not what companies say in their ethical frameworks, but in what they continue to do.

If they fail to meet legal or regulatory frameworks, minimum accountability should mean no more access to public data sets and losing positions of policy influence.

Their behaviour needs to go above and beyond meeting the letter of the law, not scraping by or working around rights-based protections. They need to put people ahead of profit and self-interest. That’s what ethics should mean, not be a PR route to avoid regulation.

As long as companies think the consequences of their platforms and actions are tolerable and a minimal disruption to their business model, society will be expected to live with their transgressions, and our most vulnerable will continue to pay the cost.


This is part 2 of thoughts on Policy shapers, product makers, and profit takers — data and AI. Part 1 is here.

Can Data Trusts be trustworthy?

The Lords Select Committee report on AI in the UK in March 2018 suggested that, “the Government plans to adopt the Hall-Pesenti Review recommendation that ‘data trusts’ be established to facilitate the ethical sharing of data between organisations.”

Since data distribution already happens, what difference would a Data Trust model make to ‘ethical sharing’?

A ‘set of relationships underpinned by a repeatable framework, compliant with parties’ obligations’ seems little better than what we have today, with all its problems including deeply unethical policy and practice.

The ODI set out some of the characteristics Data Trusts might have or share. As importantly, we should define what Data Trusts are not. They should not simply be a new name for pooling content and a new single distribution point. Click and collect.

But is a Data Trust little more than a new description for what goes on already? Either a physical space or legal agreements for data users to pass around the personal data from the unsuspecting, and sometimes unwilling, public. Friends-with-benefits who each bring something to the party to share with the others?

As with any communal risk, it is the standards of the weakest link, the least ethical, the one that pees in the pool, that will increase reputational risk for all who take part, and spoil it for everyone.

Importantly, the Lords AI Committee report recognised that there is an inherent risk in how the public would react to Data Trusts, because there is no social license for this new data sharing.

“Under the current proposals, individuals who have their personal data contained within these trusts would have no means by which they could make their views heard, or shape the decisions of these trusts.”

Views that those keen on Data Trusts seem happy to ignore.

When the Administrative Data Research Network was set up in 2013, a new infrastructure for “deidentified” data linkage, extensive public dialogue was carried out across the UK. It concluded in a report with very similar findings to those apparent at dozens of care.data engagement events in 2014-15:

There is no public support for:

  • “Creating large databases containing many variables/data from a large number of public sector sources,
  • Establishing greater permanency of datasets,
  • Allowing administrative data to be linked with business data, or
  • Linking of passively collected administrative data, in particular geo-location data”

The other ‘red-line’ for some participants was allowing “researchers for private companies to access data, either to deliver a public service or in order to make profit. Trust in private companies’ motivations were low.”

All of the above could be central to Data Trusts. All of the above highlight that in any new push to exploit personal data, the public must not be the last to know. And until all of the above are resolved, that social-license underpinning the work will always be missing.

Take the National Pupil Database (NPD) as a case study in a Data Trust done wrong.

It is a mega-database of over 20 other datasets. Raw data has been farmed out for years under terms and conditions to third parties, including users who hold an entire copy of the database, such as the somewhat secretive and unaccountable Fischer Family Trust, and others who don’t answer to Freedom of Information, and whose terms are hidden under commercial confidentiality. Buying and benchmarking data from schools and selling it back to some, the profiling is hidden from parents and pupils, yet FFT predictive risk scoring can shape a child’s school experience from age 2. They don’t really want to answer how staff can tell if a child’s FFT profile and risk score predictions are accurate, or if they can spot errors or a wrong data input somewhere.

Even as the NPD moves towards risk reduction, its issues remain. When will children be told how data about them are used?

Is it any wonder that many people in the UK feel a resentment of institutions and orgs who feel entitled to exploit them, or nudge their behaviour, and a need to ‘take back control’?

It is naïve for those working in data policy and research to think that it does not apply to them.

We already have safe infrastructures in the UK for excellent data access. What users are missing, is the social license to do so.

Some of today’s data uses are ethically problematic.

No one should be talking about increasing access to public data before delivering increased public understanding. Data users must get over their fear of ‘what if the public found out’.

If your data use being on the front pages would make you nervous, maybe it’s a clue you should be doing something differently. If you don’t trust the public would support it, then perhaps it doesn’t deserve to be trusted. Respect individuals’ dignity and human rights. Stop doing stupid things that undermine everything.

Build the social license that care.data was missing. Be honest. Respect our right to know, and right to object. Build them into a public UK data strategy to be understood and be proud of.


Part 1. Ethically problematic
Ethics is dissolving into little more than a buzzword. Can we find solutions underpinned by law, and ethics, and put the person first?

Part 2. Can Data Trusts be trustworthy?
As long as data users ignore data subjects’ rights, Data Trusts have no social license.



Is Hancock’s App Age Appropriate?

What can Matt Hancock learn from his app privacy flaws?

Note: since starting this blog post, the privacy policy has been changed from what was live at 4.30pm, and the “last changed date” backdated on the version that is now live at 21.00. It shows the challenge I point out in point 5:

It’s hard to trust privacy policy terms and conditions that are not strong and stable. 


The Data Protection Bill about to pass through the House of Commons requires the Information Commissioner to prepare and issue codes of practice — which must be approved by the Secretary of State — before they can become statutory and enforced.

One of those new codes (clause 124) is about age-appropriate data protection design. Any provider of an Information Society Service — as outlined in GDPR Article 8, where a child’s data are collected on the legal basis of consent — must have regard to the code if they target the service at a child.

What the changes might mean for 13-18 year olds, compared with current practices, can be demonstrated by the Minister for Digital, Culture, Media and Sport’s new app, launched today.

This app is designed to be used by children aged 13+. Regardless that the terms say the app requires parental approval for 13-18 year olds [more aligned with US COPPA laws than GDPR], it still needs to work for the child.

Apps could and should be used to open up what politics is about to children. Younger users are more likely to use an app than read a paper for example. But it must not cost them their freedoms. As others have written, this app has privacy flaws by design.

Children merit specific protection with regard to their personal data, as they may be less aware of the risks, consequences and safeguards concerned and their rights in relation to the processing of personal data. (GDPR Recital 38).

The flaw in the intent to protect by age — in the app, GDPR and the UK Bill overall — is that the understanding needed for consent is not dependent on age, but on capacity. The age-based model to protect the virtual child is fundamentally flawed. It’s short-sighted, if well intentioned, but bad-by-design and does little to really protect children’s rights.

Future age verification, for example, if it is to help, not harm, or become a nuisance like a new cookie law, must be “a narrow form of ‘identity assurance’ – where only one attribute (age) need be defined.” It must also respect Recital 57, and not mean a lazy data grab like GiffGaff’s.

On these 5 things this app fails to be age appropriate:

  1. Age appropriate participation, privacy, and consent design.
  2. Excessive personal data collection and permissions. (Article 25)
  3. The purposes of each data collected must be specified, explicit and not further processed for something incompatible with them. (Principle 2).
  4. The privacy policy terms and conditions must be easily understood by a child, and be accurate. (Recital 58)
  5. It’s hard to trust privacy policy terms and conditions that are not strong and stable. Among the things that can change are terms on a free trial, which should require active and affirmative action rather than continuing the account forever, which may compel future costs. Any future changes should also be age-appropriate in themselves, and in the way that consent is re-managed.

How much profiling does the app enable and what is it used for? The Article 29 WP recommends, “Because children represent a more vulnerable group of society, organisations should, in general, refrain from profiling them for marketing purposes.” What will this mean for any software that profiles children’s meta-data to share with third parties, or for commercial apps with in-app purchases, or “bait and switch” style models? As this app’s privacy policy refers to.

The Council of Europe 2016-21 Strategy on the Rights of the Child recognises, on provision for children in the digital environment, that “ICT and digital media have added a new dimension to children’s right to education”, exposing them to new risks, “privacy and data protection issues”, and that “parents and teachers struggle to keep up with technological developments.” [6. Growing up in a Digital World, Para 21]

Data protection by design really matters to get right for children and young people.

This is a commercially produced app and will only be used on a consent and optional basis.

This app shows how hard it can be for people buying tech from developers to understand and to trust what’s legal and appropriate.

Developers, facing changing laws and standards, need clarity and support to get it right. Parents and teachers will need confidence to buy, and to let children use, safe, quality technology.

Without relevant and trustworthy guidance, it’s nigh on impossible.

For any Minister in charge of the data protection rights of children, we need the technology they approve and put out for use by children, to be age-appropriate, and of the highest standards.

This app could and should be changed to meet them.

For children across the UK, more often than not, using apps offers them no choice about whether or not to use them. Many are required by schools, which can make similar demands for their data and infringe their privacy rights for life. How much harder, then, to protect their data security and rights, and keep track of where their digital footprint and data go.

If the Data Protection Bill could have an ICO code of practice for children that goes beyond consent-based data collection — to put clarity, consistency and confidence at the heart of good edTech for children, parents and schools — it would be warmly welcomed.


Here are detailed examples of what the Minister might change to bring his app in line with GDPR, and make it age-appropriate for younger users.

1. Is the app age appropriate by design?

Unless otherwise specified in the App details on the applicable App Store, to use the App you must be 18 or older (or be 13 or older and have your parent or guardian’s consent).

Children over 13 can use the app, but this app needs parental consent. That’s different from GDPR — consent over and above the new laws as they will apply in the UK from May. That age will vary across the EU. Inconsistent age policies are going to be hard to navigate.

Many of the things that matter to privacy, have not been included in the privacy policy (detailed below), but in the terms and conditions.

What else needs changed?

2. Personal data protection by design and default

Excessive personal data collection cannot be justified through a “consent” process, by agreeing to use the app. There must be data protection by design and default using the available technology. That includes data minimisation, and limited retention. (Article 25)

The app’s powers are vast and it collects far more personal data than is needed — if you use it, even getting permission to listen to your mic. That is not data protection by design and default, which must implement data-protection principles, such as data minimisation.

If, as has been suggested, in the newest version of Android each permission is asked for at the point of use and not on first install, that could be a serious challenge for parents who think they have reviewed and approved permissions pre-install (and it falls beyond the scope of this app). An app only requires consent to install and can change the permissions behind the scenes at any time. It makes privacy and data protection by design even more important.
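For what it’s worth, here is a minimal sketch in Kotlin of that point-of-use model (the runtime permissions introduced in Android 6), assuming a standard AndroidX activity. It is illustrative only, not this app’s code; names such as startRecording() and disableAudioFeatures() are hypothetical placeholders.

import android.Manifest
import android.content.pm.PackageManager
import androidx.activity.result.contract.ActivityResultContracts
import androidx.appcompat.app.AppCompatActivity
import androidx.core.content.ContextCompat

class RecordingActivity : AppCompatActivity() {

    private val micPermissionLauncher =
        registerForActivityResult(ActivityResultContracts.RequestPermission()) { granted ->
            if (granted) startRecording() else disableAudioFeatures()
        }

    // The permission is requested at the point of use, not at install time,
    // so whatever a parent reviewed before install may not reflect what the
    // child later taps "Allow" on.
    fun onRecordButtonPressed() {
        val alreadyGranted = ContextCompat.checkSelfPermission(
            this, Manifest.permission.RECORD_AUDIO
        ) == PackageManager.PERMISSION_GRANTED

        if (alreadyGranted) startRecording()
        else micPermissionLauncher.launch(Manifest.permission.RECORD_AUDIO)
    }

    private fun startRecording() { /* hypothetical feature code */ }
    private fun disableAudioFeatures() { /* degrade gracefully without audio */ }
}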

Here’s a copy of what the Android Google Play library page says it can do, once you click into “permissions” and scroll. This is excessive. “Matt Hancock” is designed to prevent your phone from sleeping, read and modify the contents of storage, and access your microphone.

Version 2.27 can access:
 
Location
  • approximate location (network-based)
Phone
  • read phone status and identity
Photos / Media / Files
  • read the contents of your USB storage
  • modify or delete the contents of your USB storage
Storage
  • read the contents of your USB storage
  • modify or delete the contents of your USB storage
Camera
  • take pictures and videos
Microphone
  • record audio
Wi-Fi connection information
  • view Wi-Fi connections
Device ID & call information
  • read phone status and identity
Other
  • control vibration
  • manage document storage
  • receive data from Internet
  • view network connections
  • full network access
  • change your audio settings
  • control vibration
  • prevent device from sleeping

“Matt Hancock” knows where you live

The app makers – and Matt Hancock – should have no necessity to know where your phone is at all times, where it is regularly, or whose other phones you are near, unless you switch it off. That is excessive.

It’s not the same as saying “I’m a constituent”. It’s 24/7 surveillance.

The Ts&Cs say more.

It places the onus on the user to switch off location services — which you may expect for other apps such as your Strava run — rather than the developers taking responsibility for your privacy by design. [Full source policy].

[update since writing this post on February 1, the policy has been greatly added to]

It also collects ill-defined “technical information”. How should a 13 year old – or parent for that matter – know what this information is? Those data are the meta-data: the address and sender tags, and so on.

By using the App, you consent to us collecting and using technical information about your device and related information for the purpose of helping us to improve the App and provide any services to you.

As NSA General Counsel Stewart Baker has said, “metadata absolutely tells you everything about somebody’s life.” General Michael Hayden, former director of the NSA and the CIA, has famously said, “We kill people based on metadata.”

If you use this app and “approve” the use, do you really know what the location services are tracking and how those data are used? For a young person, it is impossible to know or see where their digital footprint has gone, or how knowledge about them has been used.
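As a rough illustration of how much “technical information” can reveal, here is a sketch in Kotlin of the sort of device details any Android app can read without a single permission prompt. It is not taken from this app’s code; the function name is hypothetical.

import android.os.Build
import java.time.ZoneId
import java.util.Locale

// The sort of "technical information" available to any Android app without
// any permission prompt. Combined with an IP address and usage timestamps,
// it already builds a detailed picture of the user and their habits.
fun technicalInformation(): Map<String, String> = mapOf(
    "manufacturer" to Build.MANUFACTURER,
    "model" to Build.MODEL,
    "androidVersion" to Build.VERSION.RELEASE,
    "sdkLevel" to Build.VERSION.SDK_INT.toString(),
    "language" to Locale.getDefault().toLanguageTag(),
    "timezone" to ZoneId.systemDefault().id
)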

3. Specified, explicit, and necessary purposes

As a general principle, personal data must be collected only for specified, explicit and legitimate purposes and not further processed in a manner that is incompatible with those purposes. The purposes of this very broad data collection are not clearly defined. They must be more specifically explained, especially given the data are so broad, and will include sensitive data. (Principle 2).

While the Minister has told the BBC that you maintain complete editorial control, the terms and conditions are quite different.

The app can use user photos, files, your audio and location data, and once content is shared it is “a perpetual, irrevocable” permission to use and edit. This is not age-appropriate design for children who might accidentally click yes, or not appreciate what that may permit — or later wish they could get that photo back. But now that photo is on social media, potentially worldwide — “Facebook, Twitter, Pinterest, YouTube, Instagram and on the Publisher’s own websites” — and the child’s rights to privacy and consent are lost forever.

That’s not age appropriate, and not in line with GDPR on the rights to withdraw consent, to object, or to restrict processing. In fact the terms conflict with the app privacy policy, which states those rights [see 4. App User Data Rights]. Just writing “there may be valid reasons why we may be unable to do this” is poor practice and a CYA card.

4. Any privacy policy and app must do what it says

A privacy policy and terms and conditions must be easily understood by a child, [indeed any user] and be accurate.

Journalists testing the app point out that even if the user clicks “don’t allow” when prompted to permit access to the photo library, the user is allowed to post the photo anyway.
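The journalists’ test may well have been on another platform, but the principle is platform-neutral: the feature must be gated on the permission the user actually granted. A minimal sketch of that gate in Kotlin for Android, with hypothetical function names:

import android.Manifest
import android.content.pm.PackageManager
import androidx.appcompat.app.AppCompatActivity
import androidx.core.content.ContextCompat

class PostActivity : AppCompatActivity() {
    // If the user said "don't allow", the photo path must stay closed.
    fun onAttachPhotoClicked() {
        val granted = ContextCompat.checkSelfPermission(
            this, Manifest.permission.READ_EXTERNAL_STORAGE
        ) == PackageManager.PERMISSION_GRANTED

        if (granted) openPhotoPicker() else explainWhyPhotosAreOff()
    }

    private fun openPhotoPicker() { /* hypothetical photo feature */ }
    private fun explainWhyPhotosAreOff() { /* no silent fallback that posts anyway */ }
}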


What does consent mean if you don’t know what you are consenting to? You’re not consenting at all. GDPR requires that privacy policies are written in a way that their meaning can be understood by a child user (not only their parent). They need to be jargon-free and meaningful, in “clear and plain language that the child can easily understand.” (Recital 58)

This privacy policy is not child-appropriate. It’s not even clear for adults.

5. What would age appropriate permissions for charging and other future changes look like?

It should be clear to users if there may be up front or future costs, and there should be no assumption that agreeing once to pay for an app, means granting permission forever, without affirmative action.

Couching Bait-and-Switch, Hidden Costs

This is one of the flaws that the Matt Hancock app terms and conditions share with many free education apps used in schools. At first, they’re free. You register, and you don’t even know when your child starts using the app that it’s a free trial. But after a while, as determined by the developer, the app might not be free any more.

That’s not to say this is what the Matt Hancock app will do; in fact it would be very odd if it did. But odd, then, that its privacy policy terms and conditions state it could.

The folly of boiler plate policy, or perhaps simply wanting to keep your options open?

Either way, it’s bad design for children — indeed any user — to agree to something that is in fact meaningless, because it could change at any time. Automatic renewals are convenient, but who has not found they paid for an extra month of a newspaper or something else they intended to use only for a limited time? And to avoid any charges, you must cancel before the end of the free trial — but if you don’t know it’s a trial, that’s hard to do. More so for children.

From time to time we may offer a free trial period when you first register to use the App before you pay for the subscription.[…] To avoid any charges, you must cancel before the end of the free trial.

(And as for “For more details, please see the product details in the App Store before you download the App.” — there aren’t any, in case you’re wondering.)

What would age appropriate future changes be?

It should be clear to parents what they consent to on behalf of a child, or what a child consents to, at the time of install. What that means must empower them to better digital understanding and to stay in control, not allow the company to change the agreement without the user’s clear and affirmative action.

One of the biggest flaws for parents in children using apps is that what they think they have reviewed, thought appropriate, and permitted, can change at any time, at the whim of the developer and as often as they like.

Notification “by updating the Effective Date listed above” is not any notification at all. And PS: they changed the policy today and backdated it from February 1, 2018, to July 2017. By 8 months. That’s odd.

The statements in this “changes” section contradict one another. It’s a future-dated get-out-of-jail-free card for the developer and a transparency and oversight nightmare for parents. “Your continued use” is not clear, affirmative, and freely given consent, as demanded by GDPR.
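Until privacy policies are versioned properly, the only reliable record is the one readers keep themselves. Here is a rough sketch in Kotlin of how a parent, researcher or regulator could archive a policy and spot silent edits or a backdated effective date; the URL and file names are hypothetical.

import java.io.File
import java.net.URL
import java.security.MessageDigest
import java.time.LocalDate

fun main() {
    val policyUrl = "https://example.org/app/privacy-policy"   // hypothetical URL
    val archive = File("policy-history.tsv")                    // hypothetical local log

    // Fetch today's policy text and fingerprint it.
    val text = URL(policyUrl).readText()
    val hash = MessageDigest.getInstance("SHA-256")
        .digest(text.toByteArray())
        .joinToString("") { "%02x".format(it) }

    // Compare against the last recorded fingerprint, if any.
    val lastHash = archive.takeIf { it.exists() }
        ?.readLines()?.lastOrNull()?.substringAfterLast('\t')

    if (hash != lastHash) {
        // Keep a dated copy so wording changes can be compared against the
        // policy's own claimed "last changed" or "Effective Date".
        File("policy-${LocalDate.now()}.txt").writeText(text)
        archive.appendText("${LocalDate.now()}\t$hash\n")
        println("Policy changed: archived a copy dated ${LocalDate.now()}")
    } else {
        println("No change since last check")
    }
}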

Perhaps the kindest thing to say about this policy, and its poor privacy approach to rights and responsibilities, is that maybe the Minister did not read it. Which highlights the basic flaw in privacy policies in the first place. Data usage reports — how your personal data have actually been used, versus what was promised — are of much greater value and meaning. That’s what children need in schools.