Category Archives: Tech

Policing thoughts, proactive technology, and the Online Safety Bill

“Former counter-terrorism police chief attacks Rishi Sunak’s Prevent plans”, reads a headline in today’s Guardian. “Former counter-terrorism chief Sir Peter Fahy […] said: ‘The widening of Prevent could damage its credibility and reputation. It makes it more about people’s thoughts and opinions.’” Fahy added: “The danger is the perception it creates that teachers and health workers are involved in state surveillance.”

This article leaves out that today’s reality is already far ahead of the proposals, or the perception of them. School children and staff are already surveilled in these ways. Not only is what people think, type, read or search for monitored, online and offline across the digital environment, but copies may be collected and retained by companies, and interventions made.

The products don’t only permit monitoring of trends in aggregated overviews of student activity, but of the behaviours of individual students. And that monitoring can be deeply intrusive and sensitive when you are talking about self-harm, abuse, and terrorism.

(For more on the safety tech sector, often using AI in proactive monitoring, see my previous post (May 2021) The Rise of Safety Tech.)

Intrusion through inference and interventions

From 1 July 2015 all schools have been subject to the Prevent duty under section 26 of the Counter-Terrorism and Security Act 2015, in the exercise of their functions, to have “due regard to the need to prevent people from being drawn into terrorism”. While these products monitor far more than the remit of Prevent, many companies actively market online filtering, blocking and monitoring safety products as a way of meeting that duty in the digital environment. For example: “Lightspeed Filter™ helps you meet all of the Prevent Duty’s online regulations…”

Despite there being, to date, no obligation to fulfil this duty through technology, some companies’ way of selling such tools could be read as a threat of what may happen if schools don’t use them. Like this example:

“Failure to comply with the requirements may result in intervention from the Prevent Oversight Board, prompt an Ofsted inspection or incur loss of funding.”

Such products may create and send real-time alerts to company or school staff when children attempt to reach sites or type “flagged words” related to radicalisation or extremism on any online platform.
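
To make concrete what such flagging involves, here is a minimal sketch of how a keyword-based monitoring tool of this kind could work, written for illustration only. It is not any vendor’s actual implementation: the term list, alert routing and retention of a copy of what was typed are all assumptions, but they reflect the behaviours described in this post (alerts fire on typing, whether or not anything is ever sent).

```python
# Illustrative sketch only: not any vendor's real code. The keyword library,
# alert routing and data retention below are hypothetical assumptions.
from dataclasses import dataclass
from datetime import datetime, timezone

# In real products this is a library of thousands of terms in many languages.
FLAGGED_TERMS = {"hypothetical-term-1", "hypothetical-term-2"}

@dataclass
class Alert:
    user_id: str
    matched_term: str
    context: str      # a copy of what the child typed is retained with the alert
    timestamp: str

def scan_typed_text(user_id: str, text: str) -> list[Alert]:
    """Flag every matching term in text the user typed, even if it is
    later deleted, never sent, or typed offline in a local document."""
    lowered = text.lower()
    return [
        Alert(user_id, term, lowered, datetime.now(timezone.utc).isoformat())
        for term in FLAGGED_TERMS
        if term in lowered
    ]

def route_alerts(alerts: list[Alert]) -> None:
    # Deployed systems send these in real time to school staff or the
    # vendor's own moderation team; here we just print them.
    for a in alerts:
        print(f"ALERT {a.timestamp} user={a.user_id} term={a.matched_term!r}")
```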

Under the auspices of safeguarding-in-schools data sharing and web monitoring in the Prevent programme, children may be labelled with terrorism or extremism labels, data which may be passed on to others or stored outside the UK without their knowledge. The drift in what is considered significant has been from terrorism into the vaguer and broader terms of extremism and radicalisation; away from any assessment of intent and capability to act, and into interception and interventions for potentially insignificant vulnerabilities and inferred dispositions towards such ideas. This is not something that might come to police thoughts, as Fahy suggested of Sunak’s plans. It is already doing so. Policing thoughts in the developing child, and holding them accountable in ways that are unforeseeable, is inappropriate and requires thorough investigation of its effects on children, including on mental health.

But it’s important to understand that these libraries of thousands of words, ever changing and in multiple languages, and what the systems look for and flag, often claiming to do so using Artificial Intelligence, go far beyond Prevent. ‘Legal but harmful’ content is their bread and butter: self-harm, harm to or from others.

While companies have no obligation to publish how the monitoring or flagging operates, what the words, phrases or blocked websites are, their error rates (false positives and false negatives), or the effects on children and school staff and their behaviour as a result, these companies have a great deal of influence over what gets inferred from what children do online, and over who decides what to act on.

Why does it matter?

Schools have normalised the premise that the systems they introduce should monitor activity outside the school network and outside school hours; and that strangers, or private companies’ automated systems, should be involved in inferring or deciding what children are ‘up to’ before the school staff who know the children in front of them.

In a defenddigitalme report, The State of Data 2020, we included a case study on one company that has since been bought out. And bought again. As of August 2018, eSafe was monitoring approximately one million school children plus staff across the UK. The case study the company used in its public marketing raised all sorts of questions about professional confidentiality and school boundaries, personal privacy, ethics, companies’ role and technical capability, and the lack of any safety tech accountability.

“A female student had been writing an emotionally charged letter to her Mum using Microsoft Word, in which she revealed she’d been raped. Despite the device used being offline, eSafe picked this up and alerted John and his care team who were able to quickly intervene.”

Their then CEO had told the House of Lords Communications Committee’s 2016 inquiry on Children and the Internet how the products do not only monitor children in school or during school hours:

“Bearing in mind we are doing this throughout the year, the behaviours we detect are not confined to the school bell starting in the morning and ringing in the afternoon, clearly; it is 24/7 and it is every day of the year. Lots of our incidents are escalated through activity on evenings, weekends and school holidays.”

Similar products offer a feature that captures photos of users (pupils, while using the monitored device), described as “common across most solutions in the sector” by this company:

When a critical safeguarding keyword is copied, typed or searched for across the school network, schools can turn on NetSupport DNA’s webcams capture feature (this feature is turned-off by default) to capture an image of the user (not a recording) who has triggered the keyword.

How many webcam photos have been taken of children by school staff or others through those systems, for what purposes, and kept by whom? In the U.S. in 2010, Lower Merion School District, near Philadelphia, settled a lawsuit over using laptop webcams to take photos of students. Thousands of photos had been taken, even at home, out of hours, without their knowledge.

Who decides what does and does not trigger interventions across different products? In the month of December 2017 alone, eSafe claims they added 2254 words to their threat libraries.

Famously, Impero’s system even included the word “biscuit”, which the company says is a term used to mean a gun. Its system was used by more than “half a million students and staff in the UK” in 2018. And students had better not talk about “taking a wonderful bath”. Currently there is no understanding or oversight of the accuracy of this kind of software, and black-box decision-making is often trusted without openness to human question or correction.
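
The false positives are easy to reproduce. The toy example below, with an invented two-word library (not Impero’s or anyone else’s actual list), shows how naive substring matching flags entirely innocuous sentences, and why accuracy claims need independent testing.

```python
# Hypothetical two-term library, invented for illustration; real libraries
# contain thousands of entries that change monthly.
FLAGGED_TERMS = {"biscuit", "harm"}

def naive_flag(text: str) -> list[str]:
    """Return every library term found as a substring of the text."""
    lowered = text.lower()
    return [term for term in FLAGGED_TERMS if term in lowered]

print(naive_flag("Anyone want a biscuit with their tea?"))
# ['biscuit'] - innocuous, but it may still generate an alert for review.

print(naive_flag("I picked up her prescription from the pharmacy."))
# ['harm'] - "pharmacy" contains "harm": a pure artefact of substring matching.
```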

Aside from how this range of quite different tools works, there are very basic questions about whether such policies and tools help or harm children at all. The UN Special Rapporteur’s 2014 report on children’s rights and freedom of expression stated:

“The result of vague and broad definitions of harmful information, for example in determining how to set Internet filters, can prevent children from gaining access to information that can support them to make informed choices, including honest, objective and age-appropriate information about issues such as sex education and drug use. This may exacerbate rather than diminish children’s vulnerability to risk.” (2014)

U.S. safety tech creates harms

Today in the U.S. the CDT published a report on school monitoring systems there, many of which are also used over here. The report revealed that 13 percent of students knew someone who had been outed as a result of student-monitoring software. Another conclusion the CDT draws is that monitoring is used for discipline more often than for student safety.

We don’t have that same research for the UK, but we’ve seen IT staff openly admit to using the webcam feature to take photos of young boys who are “mucking about” on the school library computer.

The Online Safety Bill scales up problems like this

The Online Safety Bill seeks to expand the use of such ‘behaviour identification technology’ beyond schools.

“Proactive technology include content moderation technology, user profiling technology or behaviour identification technology which utilises artificial intelligence or machine learning.” (p151 Online Safety Bill, August 3, 2022)

The “proactive technology requirement” is as yet rather open-ended, left to Ofcom in Codes of Practice, but the scope creep of such AI-based tools has already become ever more intrusive in education. What is ‘legal but harmful’ is decided by companies, the IWF and any number of opaque third parties whose processes and decision-making we know little about. It’s important not to conflate filtering and blocking lists of ‘unsuitable’ websites that can be accessed in schools, with the monitoring and tracking of individual behaviours.

‘Technological developments that have the capacity to interfere with our freedom of thought fall clearly within the scope of “morally unacceptable harm,”‘ according to Alegre (2017), and yet this individual interference is at the very core of school safeguarding tech and policy, by design.

In 2018, the ‘lawful but harmful’ list of activities in the Online Harms White Paper was nearly identical to the terms used by school Safety Tech companies. The Bill now appears to be trying to create a new legitimate basis for these practices, more about underpinning a developing market than about supporting children’s safety or rights.

Chilling speech is itself controlling content

While a lot of the debate about the Bill has been about the free speech impacts of content removal, there has been less about what is unwritten: how it will operate to prevent speech and participation in the digital environment for children. The chilling effect of surveillance on access and participation online is well documented. Younger people and women are more likely to be negatively affected (Penney, 2017). The chilling effect on thought and opinion is worsened by tools that trigger an alert even when what is typed is quickly deleted, or remains unsent and unshared. Thoughts are no longer private.

The ability to use end-to-end encryption on private messaging platforms is simply worked around by these kinds of tools, trading security for claims of children’s safety. Anything on screen may be read in the clear by some systems, even capturing passwords and bank details.

Graham Smith has written, “It may seem like overwrought hyperbole to suggest that the [Online Harms] Bill lays waste to several hundred years of fundamental procedural protections for speech. But consider that the presumption against prior restraint appeared in Blackstone’s Commentaries (1769). It endures today in human rights law. That presumption is overturned by legal duties that require proactive monitoring and removal before an independent tribunal has made any determination of illegality.”

More than this, there is no determination of illegality at all in ‘legal but harmful’ activity. It is opinion. The government is prone to argue that “nothing in the Bill says X…”, but you need to understand the context: such proactive behavioural monitoring tools work through threat and the resulting chilling effect, to impose unwritten control. This Bill does not create a safer digital environment; it creates threat models for users and companies, to control how we think and behave.

What do children and parents think?

Young people’s own views that don’t fit the online harms narrative have been ignored by Westminster scrutiny Committees. A 2019 survey by the Australian eSafety Commissioner found that over half (57%) of child respondents were uncomfortable with background monitoring processes, and 43% were unsure about these tools’ effectiveness in ensuring online safety.

And what of the role of parents? Article 3(2) of the UNCRC says: “States Parties undertake to ensure the child such protection and care as is necessary for his or her wellbeing, taking into account the rights and duties of his or her parents, legal guardians, or other individuals  legally responsible for him or her, and, to this end, shall take all appropriate legislative and administrative measures.” (my emphasis)

In 2018, 84% of 1,004 parents in England whom we polled through Survation agreed that children and guardians should be informed how this monitoring activity works, and wanted to know what the keywords were. (We didn’t ask whether it should happen at all.)

The wide-ranging nature [of general monitoring], rather than targeted and proportionate interference, has previously been judged to be in breach of law and a serious interference with rights. Neither policy makers nor companies should assume parents want safety tech companies to remove autonomy, or to make inferences about our children’s lives. Parents, if asked, reject the secrecy in which it happens today, and demand transparency and accountability. Teachers can feel anxious talking about it at all. There are no clear routes for error correction; in fact, corrections are not made, because some claim that in building up profiles staff should not delete anything, and should ignore claims of errors, in case a pattern of behaviour is missed. There are no independent assessments available to evidence that these tools work or are worth the cost. There are no routes for redress, and no responsibility is taken for tech-made mistakes. None of which makes children safer online.

Before broadening out where such monitoring tools are used, their use and effects on school children need to be understood and openly debated. Policy makers may justify turning a blind eye to harms created by one set of technology providers, while claiming that only the other tech providers are the problem, because it suits political agendas or industry aims; but children’s rights and wellbeing should not be sacrificed in doing so. Opaque, unlawful and unsafe practice must stop. A quid pro quo for getting access to millions of children’s intimate behaviour should be transparent access to the product workings, and accepting standards of universal, safe, accountable practice. Families need to know what’s recorded, and to have routes for redress when a daughter researching ‘cliff walks’ gets flagged as a suicide risk, or an environmentally interested teenage son searching for information on ‘black rhinos’ is asked about his potential gang membership. The tools sold as solutions to online harms shouldn’t create more harm, as in these reported real-life case studies.

Teachers are ‘involved in state surveillance’ as Fahy put it, through Prevent. Sunak was wrong to point away from the threats of the far right in his comments. But the far broader unspoken surveillance of children’s personal lives, behaviours and thoughts through general monitoring in schools, and what will be imposed through the Online Safety Bill more broadly, should concern us far more than was said.

On #IWD2022 gender bias in #edTech

I’m a mother of three girls at secondary school. For international women’s day 2022 I’ve been thinking about the role of school technology in my life.

Could some of it be improved to stop baking-in gender discrimination norms to home-school relationships?

Families come in all shapes and sizes and not every family has defined Mum and Dad roles. I wonder if edTech could be better at supporting families if it offered the choice of a multi-parent-per-child relationship by-default?

School-home communications rarely come home in school bags anymore; they arrive digitally, and are routinely sent to one parent per child. If something needs actioned, it typically goes to one parent, not both. The design of digital tools can lock in the responsibility for action to a single nominated person. Schools send the edTech company the ‘pupil parent contact’ email but, at least in my experience, never ask again what that should be after it has been collected once. (And don’t do a good job of communicating data rights each time before doing so either, but that’s another story.)

Whether it’s learning updates with report cards about the child, weekly newsletters, changes of school clubs, closures, events or other ‘things you should know’, I filter the emails I get daily from a number of different accounts for relevance, and forward them on to Dad.

To administer cashless payments to school for contributions to art, cooking, science and technology lessons, school trips, other extras or to manage my child’s lunch money, there is a single email log-in and password for a parent role allocated to the child’s account.

And it might be just my own unrepresentative circle of friends, but it’s usually Mum who’s on the receiving end of demands at all hours.

In case of illness, work commitments, or otherwise being unable to carry on as usual, it is no longer easy for a second designated parent to automatically pick up or share the responsibilities.

One common cashless payment system’s approach does permit more than one parent role, but it’s manual and awkward to set up. “For a second parent to have access it is necessary for the school to send a second letter with a second temporary username and password combo to activate a second account. In short, the only way to do this is to ask your school.”

Some messaging services allow a school-to-multiple-parent email, but the message itself often forms an individual rather than a group thread with the teacher, i.e. it is designed for a class, not a family.

Some might suggest it is easy enough to set up automatic email forwarding, but again this pushes the onus back onto the parent, and doesn’t solve the problem that only one person can perform transactions.

I wonder what difference it would make to overall parental engagement if one-way communications tools offered a second email address by default?

What if, for financial management, edTech permitted an option for a ‘temporary re-route’ to another email address, or a default second role with a notification to the other parent that something had been paid?

Why can’t one parent, once confirmed with secure access to the child-parent account, add a second parent role? This need not be a parent, but could be another relation managing the outgoing money. You can only make outgoing payments to the school, or withdraw money to the same single bank account it came from, so fraud isn’t likely.
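
For what it’s worth, the data model needed is not complicated. Here is a minimal sketch, with invented names not taken from any real product, of a child account that treats multiple guardian contacts as the default, with payments and notifications fanned out to whoever has opted in.

```python
# Minimal sketch of a multi-guardian-by-default model. All class and field
# names are hypothetical, not drawn from any real edTech product.
from dataclasses import dataclass, field

@dataclass
class Guardian:
    name: str
    email: str
    can_pay: bool = True        # may authorise payments to the school
    receives_mail: bool = True  # receives newsletters, alerts and receipts

@dataclass
class ChildAccount:
    child_name: str
    guardians: list[Guardian] = field(default_factory=list)

    def add_guardian(self, guardian: Guardian) -> None:
        # A verified guardian (or the school) adds another contact directly,
        # rather than the school posting a second letter with a second login.
        self.guardians.append(guardian)

    def notify_all(self, message: str) -> None:
        # The same message goes to every opted-in contact,
        # instead of being locked to one nominated parent.
        for g in self.guardians:
            if g.receives_mail:
                print(f"email to {g.email}: {message}")

account = ChildAccount("Child A", [Guardian("Parent One", "one@example.org")])
account.add_guardian(Guardian("Parent Two", "two@example.org"))
account.notify_all("Lunch account topped up: £10 received.")
```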

I wonder what research looking at each of these tools would find, if it assessed whether a gender divide is built into the default admin?

What could it improve in work-life balance for staff and families, if emails were restricted to send or receive in preferred time windows?

Technology can be amazing and genuinely make life easier for some. But not everyone fits the default, and I believe the defaults are rarely built to best suit users, but rather the institutions that procure them. In many cases edTech isn’t working well for the parents who make up its main user base.

If I were designing these, they would be school-based rather than third-party cloud-based, distributed systems centred on the child. I think we can do better, not only for women, but for everyone.


PS When my children come home from school today, I’ll be showing them the Gender Pay Gap Bot @PayGapApp thread, with its explanations of mode, mean and median. Worth a look.

Women Leading in AI — Challenging the unaccountable and the inevitable

Notes [and my thoughts] from the Women Leading in AI launch event of the Ten Principles of Responsible AI report and recommendations, February 6, 2019.

Speakers included Ivana Bartoletti (GemServ), Jo Stevens MP, Professor Joanna J Bryson, Lord Tim Clement-Jones, Roger Taylor (Centre for Data Ethics and Innovation, Chair), Sue Daley (techUK), Reema Patel, Nuffield Foundation and Ada Lovelace Institute.

Challenging the unaccountable and the ‘inevitable’ is the title of the conclusion of the Women Leading in AI report Ten Principles of Responsible AI, launched this week, and this makes me hopeful.

“There is nothing inevitable about how we choose to use this disruptive technology. […] And there is no excuse for failing to set clear rules so that it remains accountable, fosters our civic values and allows humanity to be stronger and better.”

Ivana Bartoletti, co-founder of Women Leading in AI, began the event, hosted at the House of Commons by Jo Stevens, MP for Cardiff Central, and spoke brilliantly of why it matters right now.

Everyone’s talking about ethics, she said, but it has limitations. I agree with that. This was by contrast very much a call to action.

It was nearly impossible not to cheer, as she set out without any of the usual bullshit, the reasons why we need to stop “churning out algorithms which discriminate against women and minorities.”

Professor Joanna J Bryson took up multiple issues, such as why

  • innovation, ‘flashes in the pan’, is not sustainable and not what we are looking for in things that work for us [society].
  • The power dynamics of data, noting Facebook, Google et al are global assets, and are also global problems, and flagged the UK consultation on taxation open now.
  • And that it is critical that we do not have another nation with access to all of our data.

She challenged the audience to think about the fact that inequality is higher now than it has been since World War I. That the rich are getting richer and that imbalance of not only wealth, but of the control individuals have in their own lives, is failing us all.

This big picture thinking while zooming in on detailed social, cultural, political and tech issues, fascinated me most that evening. It frustrated the man next to me apparently, who said to me at the end, ‘but they haven’t addressed anything on the technology.’

[I wondered if that summed up neatly, some of why fixing AI cannot be a male dominated debate. Because many of these issues for AI, are not of the technology, but of people and power.] 

Jo Stevens, MP for Cardiff Central, hosted the event and was candid about politicians’ level of knowledge and the need to catch up on some of what matters in the tech sector.

We grapple with the speed of tech, she said. We’re slow at doing things and tech moves quickly. It means that we have to learn quickly.

While discussing how regulation is not something AI tech companies should fear, she suggested that a constructive framework, one which protects society against some of the problems we see, is necessary and just, because self-regulation has failed.

She talked about their inquiry, which began with “fake news” and disinformation, but has grown to include:

  • wider behavioural economics,
  • how it affects democracy.
  • understanding the power of data.
  • disappointment with social media companies, who understand the power they have, and fail to be accountable.

She wants to see something that changes the way big business works, in the way that employment regulation challenged exploitation of the workforce and unsafe practices in the past.

The bias (conscious or unconscious) and power imbalance have some similarity with the effects on marginalised communities — women, BAME, disabilities — and she was looking forward to seeing the proposed solutions, and welcomed the principles.

Lord Clement-Jones, as Chair of the Select Committee on Artificial Intelligence, picked up the values the Committee had highlighted in its 2018 report, AI in the UK: ready, willing and able?

Right now there are so many different bodies, groups in parliament and others looking at this [AI / Internet / The Digital World] he said, so it was good that the topic is timely, front and centre with a focus on women, diversity and bias.

He highlighted the importance of maintaining public trust. How do you understand bias? How do you know how algorithms are trained, and understand the issues? He fessed up to being a big fan of DotEveryone and their drive for better ‘digital understanding’.

[Though sometimes this point is over-complicated by suggesting individuals must understand how the AI works, the consensus of the evening was common sense — and aligned with the Article 29 Working Party guidance — that data controllers must ensure they explain clearly and simply to individuals how the profiling or automated decision-making process works, and what its effect is for them.]

The way forward he said includes:

  • Designing ethics into algorithms up front.
  • Data audits need to be diverse in order to embody fairness and diversity in the AI.
  • Questions of the job market and re-skilling.
  • The enforcement of ethical frameworks.

He also asked how far bodies will act, in different debates. Deciding who decides on that is still a debate to be had.

For example, aware of the social credit agenda and scoring in China, we should avoid the same issues. He also agreed with Joanna, that international cooperation is vital, and said it is important that we are not disadvantaged in this global technology. He expected that we [the Government Office for AI] will soon promote a common set of AI ethics, at the G20.

Facial recognition and AI are examples of areas that require regulation for safe use of the tech and to weed out those using it for the wrong purposes, he suggested.

However, on regulation he held back. We need to be careful about too many regulators he said. We’ve got the ICO, FCA, CMA, OFCOM, you name it, we’ve already got it, and they risk tripping over one another. [What I thought as CDEI was created para 31.]

We [the Lords Committee] didn’t suggest yet another regulator for AI, he said and instead the CDEI should grapple with those issues and encourage ethical design in micro-targeting for example.

Roger Taylor (Chair of the CDEI), after saying it felt as if the WLinAI report was like someone had left their homework on his desk, supported the idea that the WLinAI principles are important, and agreed it was time for practical things, and for what needs done.

Can our existing regulators do their job and cover AI? he asked, suggesting new regulators will not be necessary. Bias, he rightly recognised, already exists in our laws and in bodies with public obligations, and in how AI is already operating:

  • CV sorting. [problematic IMO > see Amazon, US teachers]
  • Policing.
  • Creditworthiness.

What evidence is needed, what process is required, what is needed to assure that we know how it is actually operating? Who gets to decide to know if this is fair or not? While these are complex decisions, they are ultimately not for technicians, but a decision for society, he said.

[So far so good.]

Then he made some statements which were rather more ambiguous. The standards expected of the police will not be the same as those for marketeers micro targeting adverts at you, for example.

[I wondered how and why.]

Start-up industries pay more to Google and Facebook than they do in taxes, he said.

[I wondered how and why.]

When we think about a knowledge economy, the output of our most valuable companies is increasingly ‘what is our collective truth? Do you have this diagnosis or not? Are you a good credit risk or not? Even who you think you are — your identity will be controlled by machines.’

What can we do as one country [to influence these questions on AI], in what is a global industry? He believes, a huge amount. We are active in the financial sector, the health service, education, and social care — and while we are at the mercy of large corporations, even large corporations obey the law, he said.

[Hmm, I thought, considering the Google DeepMind-Royal Free agreement that didn’t, and venture capitalists not renowned for their ethics, and yet advise on some of the current data / tech / AI boards. I am sceptical of corporate capture in UK policy making.]

The power to use systems to nudge our decisions, he suggested, is one that needs careful thought. The desire to use the tech to help make decisions is inbuilt into what is actually wrong with the technology that enables us to do so. [With this I strongly agree, and there is too little protection from nudge in data protection law.]

The real question here is, “What is OK to be owned in that kind of economy?” he asked.

This was arguably the neatest and most important question of the evening, and I vigorously agreed with him asking it, but then I worried about his conclusion in passing, that he was “very keen to hear from anyone attempting to use AI effectively, and encountering difficulties because of regulatory structures.”

[And unpopular or contradictory a view as it may be, I find it deeply ethically problematic for the Chair of the CDEI to be someone who had a joint venture that commercially exploited confidential data from the NHS without public knowledge, and whose sale to the Department of Health was described by the Public Accounts Committee as a “hole and corner deal”. That was the route towards care.data, which his co-founder later led for NHS England. The company was then bought by Telstra, where Mr Kelsey went next on leaving NHS England. The whole commodification of the confidentiality of public data, without regard for public trust, is still a barrier to sustainable UK data policy.]

Sue Daley (techUK) agreed that this needs to be the year we see action, and that the report is a call to action on issues that warrant further discussion.

  • Business wants to do the right thing, and we need to promote it.
  • We need two things — confidence and vigilance.
  • We’re not starting from scratch, and talked about GDPR as the floor not the ceiling. A starting point.

[I’m not quite sure what she was after here, but perhaps it was the suggestion that data regulation is fundamental in AI regulation, with which I would agree.]

What is the gap that needs filled, she asked? Gap analysis is what we need next, avoiding duplication of effort; we need to avoid complexity and duplication of work with other bodies. The big, profound questions need to be answered in order to position the UK as the place where companies want to come.

Sue was the only speaker who went on to talk about the education system, and the need to frame what skills a generation will need for a future world, ‘to thrive in the world we are building for them.’

[The Silicon Valley driven entrepreneur narrative that the education system is broken, is not an uncontroversial position.]

She finished with the hope that young people watching BBC Icons the night before would see Alan Turing [winner of the title] and say: yes, I want to be part of that.

Listening to Reema Patel, representative of the Ada Lovelace Institute, was the reason I didn’t leave early, and missed my evening class. Everything she said resonated, and was some of the best I have heard in the recent UK debate on AI.

  • Civic engagement: the role of the public is as yet unclear, with not one homogeneous public, but many publics.
  • The sense of disempowerment is important, with disconnect between policy and decisions made about people’s lives.
  • Transparency and literacy are key.
  • Accountability is vague but vital.
  • What does the social contract look like on people using data?
  • Data may not only be about an individual and under their own responsibility, but about others and what does that mean for data rights, data stewardship and articulation of how they connect with one another, which is lacking in the debate.
  • Legitimacy; If people don’t believe it is working for them, it won’t work at all.
  • Ensuring tech design is responsive to societal values.

2018 was a terrible year she thought. Let’s make 2019 better. [Yes!]


Comments and questions from the floor included Professor Noel Sharkey, who spoke about the reasons why it is urgent to act, especially where technology is unfair and unsafe and already in use. He pointed to Compass (Durham police), and to predictive policing using AI and facial recognition with 5% accuracy, and said that the Met was not taking these flaws seriously. Liberty produced a strong report on it, out this week.

Caroline, from Women in AI, echoed my own comments on the need to get urgent review in place of these technologies used with children in education and social care [in particular where used for prediction of child abuse and interventions in family life].

Joanna J Bryson added to the conversation on accountability, to say people are not following existing software and audit protocols; someone just needs to go and see if people did the right thing.

The basic question of accountability is to ask whether any flaw is the fault of a corporation, of due diligence, or of the users of the tool. Telling people that this is the same problem as any other software makes it much easier to find solutions to accountability.

Tim Clement-Jones asked, how many fronts can we fight on at the same time? Government has appeared to exempt itself from some of these issues, and created a weak framework for itself on handling data in the Data Protection Act — and critically, he asked, is the ICO adequately enforcing government and public accountability, at local and national levels?

Sue Daley also reminded us that politicians need not know everything, but do need to know what the right questions are to ask. What are the effects that this has on my constituents, in employment, on my family? And while she also suggested that not using the technology could be unethical, a participant countered that it’s not the worst thing to have to slow technology down and ensure it is safe before we all go along with it.

My takeaways of the evening included that there is a very large body of women, of whom attendees were only a small part, who are thinking, building and engineering solutions to some of these societal issues embedded in policy, practice and technology. They need heard.

It was genuinely electric and empowering to be in a room dominated by women; women reflecting the diversity of a variety of publics, ages and backgrounds, and who listened to one another. It was certainly something out of the ordinary.

There was a subtle but tangible tension over whether or not regulation beyond what we have today is needed.

While regulating the human behaviour that becomes encoded in AI, we need to ensure that the ethics of human behaviour, reasonable expectations and fairness are not conflated with the technology itself [i.e. a question of whether AI is good or bad], but considered in how it is designed, trained, employed and audited, and in assessing whether it should be used at all.

This was the most effective group challenge I have heard to date to counter the usual assumed inevitability of a mythical omnipotence. Perhaps, Julia Powles, this is the beginning of a robust, bold, imaginative response.

Why there are not more women or people from minorities working in the sector was a really interesting, if short, part of the discussion. Why should young women and minorities want to go into an environment that they can see is hostile, in which they may not be heard, and in which we still hold *them* responsible for making work work?

And while there were many voices lamenting the skills and education gaps, there were probably fewer who might see the solution more simply, as I do. Schools are foreshortening Key Stage 3 by a year, replacing a breadth of subjects with an earlier, compulsory three-year GCSE curriculum which includes RE and PSHE, but which means that at 12 many children have to choose between a GCSE course in computer science/coding, a consumer-style iMedia, or no IT at all, for the rest of their school life. This either-or approach is incredibly short-sighted; surely some blend of non-examined digital skills should be offered to all through to 16, at least in parallel importance with RE or PSHE.

I also still wonder about all that incredibly bright and engaged people are not talking about, not solving, and missing in policy making, while caught up in AI. We need to keep thinking broadly, and keep human rights at the centre of our thinking on machines. Anaïs Nin wrote over 70 years ago about the risk that growth in technology would expand our potential for connectivity through machines, but diminish our genuine connectedness as people.

“I don’t think the [American] obsession with politics and economics has improved anything. I am tired of this constant drafting of everyone, to think only of present day events”.

And as I wrote nearly three years ago, we still seem to have no vision for sustainable public policy on data, or for establishing a social contract for its use, as Reema said, to underpin the UK AI debate. Meanwhile, the current changing national public policies in England on identity and technology are becoming catastrophic.

Challenging the unaccountable and the ‘inevitable’ in today’s technology and AI debate, is an urgent call to action.

I look forward to hearing how Women Leading in AI plan to make it happen.


References:

Women Leading in AI website: http://womenleadinginai.org/
WLiAI Report: 10 Principles of Responsible AI
@WLinAI #WLinAI

image credits 
post: creative commons Mark Dodds/Flickr
event photo:  / GemServ

Policy shapers, product makers, and profit takers (2)

Corporate capture

Companies are increasingly in controlling positions over the tech narrative in the press. They are funding neutral third-sector orgs’ and think tanks’ research. Supporting organisations advising on online education. Closely involved in politics. And they sit, increasingly, within the organisations set up to lead the technology vision, advising government on policy and UK data analytics, or on social media, AI and ethics.

It is all subject to corporate capture.

But is this healthy for UK public policy and the future not of an industry sector, but a whole technology, when it comes to AI?

If a company’s vital business interests seem unfazed by the risk and harm they cause to individuals — from people who no longer trust the confidentiality of the system, to measurable harms — why should those companies sit on public policy boards set up to shape the ethics they claim we need, to solve the problems, and to address the loss of trust that these very same companies are causing?

We laud people in these companies as co-founders and forward thinkers on new data ethics institutes. They are invited to sit on our national boards, or create new ones.

What does that say about the entire board’s respect for the law which the company breached? It is hard not to see it signal acceptance of the company’s excuses or lack of accountability.

Corporate accountability

The same companies whose work has breached data protection law in multiple ways, seemingly ‘by accident’, on national data extractions, are those that cross the t’s and dot the i’s on even the simplest conference call, and demand everything is said in strictest confidence. Meanwhile their everyday business practices ignore millions of people’s lawful rights to confidentiality.

The extent of commercial companies’ influence on these boards is opaque. To allow this ethics bandwagon to be driven by the corporate giants surely eschews genuine rights-based values, and the long-term integrity of the bodies they appear to serve.

I am told that these global orgs must be in the room and at the table, to use the opportunity to make the world a better place.

These companies already have *all* the opportunity. Not only monopoly positions on their own technology, but the datasets at scale which underpin it, excluding new entrants to the market. Their pick of new hires from universities. The sponsorship of events. The political lobbying. Access to the media. The lawyers. Bottomless pockets to pay for it all. And seats at board tables set up to shape UK policy responses.

It’s a struggle for power, and a stake in our collective future. The status quo is not good enough for many parts of society, and to enable Big Tech or big government to maintain that simply through the latest tools, is a missed chance to reshape for good.

You can see it in many tech boards’ make up, and pervasive white male bias. We hear it echoed in London think tank conferences, even independent tech design agencies, or set out in some Big Tech reports. All seemingly unconnected, but often funded by the same driving sources.

These companies are often those that made it worse to start with, and the very ethics issues the boards have been set up to deal with, are at the core of their business models and of their making.

The deliberate infiltration of influence on online safety policy for children, or global privacy efforts is very real, explicitly set out in the #FacebookEmails, for example.

We will not resolve these fundamental questions as long as the companies whose business depends on them steer national policy. The odds will be ever in their favour.

At the same time, some of these individuals are brilliant. In all senses.

So what’s the answer? If they are around the table, what should the UK public expect of their involvement, and how do we ensure whose best interests it serves? How do we achieve authentic accountability?

Whether it be social media, data analytics, or AI in public policy, can companies be safely permitted to be policy shapers if they wear all the hats; product maker, profit taker, *and* process or product auditor?

Creating Authentic Accountability

At minimum we must demand responsibility for their own actions from board members who represent or are funded by companies.

  1. They must deliver on their own product problems first before being allowed to suggest solutions to societal problems.
  2. There should be credible separation between informing policy makers, and shaping policy.
  3. There must be total transparency of funding sources across any public sector boards, of members, and those lobbying them.
  4. Board members must be meaningfully held accountable for continued company transgressions on rights and freedoms, not only harms.
  5. Oversight of board decision making must be decentralised, transparent and available to scrutiny and meaningful challenge.

While these new bodies may propose solutions that include public engagement strategies, transparency, and standards, few propose meaningful oversight. The real test is not what companies say in their ethical frameworks, but in what they continue to do.

If they fail to meet legal or regulatory frameworks, minimum accountability should mean no more access to public data sets and losing positions of policy influence.

Their behaviour needs to go above and beyond meeting the letter of the law, scraping by or working around rights based protections. They need to put people ahead of profit and self interests. That’s what ethics should mean, not be a PR route to avoid regulation.

As long as companies think the consequences of their platforms and actions are tolerable and a minimal disruption to their business model, society will be expected to live with their transgressions, and our most vulnerable will continue to pay the cost.


This is part 2 of thoughts on Policy shapers, product makers, and profit takers — data and AI. Part 1 is here.

The power of imagination in public policy

“A new, a vast, and a powerful language is developed for the future use of analysis, in which to wield its truths so that these may become of more speedy and accurate practical application for the purposes of mankind than the means hitherto in our possession have rendered possible.” [on Ada Lovelace, The First tech Visionary, New Yorker, 2013]

What would Ada Lovelace have argued for in today’s AI debates? I think she may have used her voice not only to call for the good use of data analysis, but to draw on her second strength: the power of her imagination.

James Ball recently wrote in The European [1]:

“It is becoming increasingly clear that the modern political war isn’t one against poverty, or against crime, or drugs, or even the tech giants – our modern political era is dominated by a war against reality.”

My overriding takeaway from three days spent at the Conservative Party Conference this week was similar. It reaffirmed the title of a school debate I lost at age 15: ‘We only believe what we want to believe.’

James writes that it is, “easy to deny something that’s a few years in the future“, and that Conservatives, “especially pro-Brexit Conservatives – are sticking to that tried-and-tested formula: denying the facts, telling a story of the world as you’d like it to be, and waiting for the votes and applause to roll in.”

These positions are not confined to one party’s politics, or speeches of future hopes, but define perception of current reality.

I spent a lot of time listening to MPs. To Ministers, to Councillors, and to party members. At fringe events, in coffee queues, on the exhibition floor. I had conversations pressed against corridor walls as small press-illuminated swarms of people passed by with Queen Johnson or Rees-Mogg at their centre.

In one panel I heard a primary school teacher deny that child poverty really exists, or affects learning in the classroom.

In another, in passing, a digital Minister suggested that Pupil Referral Units (PRU) are where most of society’s ills start, but as a Birmingham head wrote this week, “They’ll blame the housing crisis on PRUs soon!” and “for the record, there aren’t gang recruiters outside our gates.”

This is no tirade on failings of public policymakers however. While it is easy to suspect malicious intent when you are at, or feel, the sharp end of policies which do harm, success is subjective.

It is clear that an overwhelming sense of self-belief exists in those responsible, in the intent of any given policy to do good.

Where policies include technology, this is underpinned by a self-reaffirming belief in its power: power waiting to be harnessed by government and the public sector. It is even more appealing where it is sold as a cost-saving tool in cash-strapped councils. Many that have cut away human staff are now trying to use machine power to make decisions. Some of the unintended consequences of taking humans out of the process are catastrophic for human rights.

Sweeping human assumptions behind such thinking on social issues and their causes are becoming hard-coded into algorithmic solutions that identify young people said to be in danger of becoming involved in crime, using “risk factors” such as truancy, school exclusion, domestic violence and gang membership.
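
To see how quickly those assumptions harden into code, consider a deliberately crude sketch of an additive risk score. The factors, weights and threshold below are invented for illustration, not taken from any real system; the point is that someone has to choose them, and those choices are the hard-coded assumptions.

```python
# Hypothetical risk-scoring sketch: the factors, weights and threshold are
# invented for illustration, but some person or committee must always choose
# them, and that choice is where the assumptions get hard-coded.
RISK_WEIGHTS = {
    "truancy": 2.0,
    "school_exclusion": 3.0,
    "domestic_violence_at_home": 2.5,
    "alleged_gang_association": 4.0,
}
THRESHOLD = 5.0   # above this, a child is singled out for "intervention"

def risk_score(child_record: dict[str, bool]) -> float:
    """Sum the weights of every factor recorded as true for this child."""
    return sum(weight for factor, weight in RISK_WEIGHTS.items()
               if child_record.get(factor, False))

child = {"truancy": True, "school_exclusion": True}
score = risk_score(child)
print(score, "flagged" if score >= THRESHOLD else "not flagged")
# 5.0 flagged: two administrative data points, of unknown quality,
# are enough to trigger a real-life intervention.
```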

The disconnect between perception of risk, the reality of risk, and real harm, whether perceived or felt from these applied policies in real life, is not so much ‘easy to deny something that’s a few years in the future’, as Ball writes, but a denial of the reality now.

Concerningly, there is a lack of imagination of what real harms look like. There is no discussion of the cases where these predictive policies have no positive effect, or even a negative one, and make things worse.

I’m deeply concerned that there is an unwillingness to recognise any failures in current data processing in the public sector, particularly at scale, and where it regards the well-known poor quality of administrative data. Or to be accountable for its failures.

Harms, existing harms to individuals, are perceived as outliers. Any broad sweep of harms across a policy like Universal Credit seems to be perceived as political criticism, which makes the measurable failures less meaningful, less real, and less necessary to change.

There is a worrying growing trend of finger-pointing exclusively at others’ tech failures instead. In particular, social media companies.

Imagination and mistaken ideas are reinforced where the idea is plausible, and shared. An oft-heard and self-affirming belief was repeated in many fora between policymakers, media and NGOs as regards children’s online safety: “There is no regulation online”. In fact, much that applies offline applies online. The Crown Prosecution Service Social Media Guidelines are a good place to start. [2] But no one discusses where children’s lives may be put at risk, or made less safe, through the use of state information about them.

Policymakers want data to give us certainty. But many uses of big data and new tools appear to do little more than quantify moral fears, and yet they still guide real-life interventions in real lives.

Child abuse prediction, and school exclusion interventions should not be test-beds for technology the public cannot scrutinise or understand.

In one trial attempting to predict exclusion, a UK research project running from 2013 to 2016 linked the school records of 800 children in 40 London schools with Metropolitan Police arrest records of all the participants. It found the interventions created no benefit, and may have caused harm. [3]

“Anecdotal evidence from the EiE-L core workers indicated that in some instances schools informed students that they were enrolled on the intervention because they were the “worst kids”.”

“Keeping students in education, by providing them with an inclusive school environment, which would facilitate school bonds in the context of supportive student–teacher relationships, should be seen as a key goal for educators and policy makers in this area,” researchers suggested.

But policy makers seem intent on using systems that tick boxes, and that create triggers to single people out, with quantifiable impact.

Some of these systems are known to be poor, or harmful.

When it comes to predicting and preventing child abuse, there is concern about the harms in US programmes ahead of us, in both Pittsburgh and Chicago, which has scrapped its programme.

The Illinois Department of Children and Family Services ended a high-profile program that used computer data mining to identify children at risk for serious injury or death after the agency’s top official called the technology unreliable, and children still died.

“We are not doing the predictive analytics because it didn’t seem to be predicting much,” DCFS Director Beverly “B.J.” Walker told the Tribune.

Many professionals in the UK share these concerns. How long will they be ignored and children be guinea pigs without transparent error rates, or recognition of the potential harmful effects?

Helen Margetts, Director of the Oxford Internet Institute and Programme Director for Public Policy at the Alan Turing Institute, suggested at the IGF event this week that stopping the use of this AI in the public sector is impossible. We could not decide that “we’re not doing this until we’ve decided how it’s going to be. It can’t work like that.” [45:30]

Why on earth not? At least for these high risk projects.

How long should children be the test subjects of machine learning tools at scale, without transparent error rates, audit, or scrutiny of their systems and understanding of unintended consequences?

Is harm to any child a price you’re willing to pay to keep using these systems to perhaps identify others, while we don’t know?

Is there an acceptable positive versus negative outcome rate?

The evidence so far of AI in child abuse prediction is not clearly showing that more children are helped than harmed.

Surely it’s time to stop just thinking, and to demand action on this.

It doesn’t take much imagination to see the harms. Safe technology, and safe use of data, does not prevent imagination or innovation employed for good.

If we continue to ignore the views of Patrick Brown, Ruth Gilbert, Rachel Pearson and Gene Feder, Charmaine Fletcher, Mike Stein, Tina Shaw and John Simmonds, I want to know why.

Where you are willing to sacrifice certainty of human safety for the machine decision, I want someone to be accountable for why.

 


References

[1] James Ball, The European, Those waging war against reality are doomed to failure, October 4, 2018.

[2] Thanks to Graham Smith for the link. Social Media: Guidelines on prosecuting cases involving communications sent via social media, The Crown Prosecution Service (CPS), August 2018.

[3] Obsuth, I., Sutherland, A., Cope, A., et al. (2017) J Youth Adolescence 46: 538. London Education and Inclusion Project (LEIP): Results from a Cluster-Randomized Controlled Trial of an Intervention to Reduce School Exclusion and Antisocial Behavior (March 2016). https://doi.org/10.1007/s10964-016-0468-4

The Queen’s Speech, Information Society Services and GDPR

The Queen’s Speech promised new laws to ensure that the United Kingdom retains its world-class regime protecting personal data. And the government proposes a new digital charter to make the United Kingdom the safest place to be online for children.

Improving online safety for children should mean one thing: children should be able to use online services without being used by them, or by the people and organisations behind them. It should mean that their rights to be heard are prioritised in decisions about them.

As Sir Tim Berners-Lee is reported as saying, there is a need to work with companies to put “a fair level of data control back in the hands of people“. He rightly points out that today terms and conditions are “all or nothing”.

There is a gap in discussions that we fail to address when we think of consent to terms and conditions, or of “handing over data”: the assumption that these are, and can always be, conscious acts.

For children, the question of whether accepting Ts&Cs gives them control, and whether that control is meaningful, becomes even more moot. What are they agreeing to? Younger children cannot give free and informed consent. After all, most privacy policies standardly include phrases such as, “If we sell all or a portion of our business, we may transfer all of your information, including personal information, to the successor organization,” which means in effect that “accepting” a privacy policy today is a blank cheque for anything tomorrow.

The GDPR requires terms and conditions to be laid out in policies that a child can understand.

The current approach to legislation around children and the Internet is heavily weighted towards protection from seen threats. The threats we need to give more attention to, are those unseen.

“By 2024 more than 50% of home Internet traffic will be used by appliances and devices, rather than just for communication and entertainment… The IoT raises huge questions on privacy and security, that have to be addressed by government, corporations and consumers.” (WEF, 2017)

Our lives, as measured in our behaviours and opinions, purchases and likes, are connected by trillions of sensors. My parents may have described using the Internet as going online. Today’s online world no longer means our time is spent ‘on the computer’; it means being online, all day, every day. Instead of going to a desk and booting up through a long phone cable, we have wireless computers in our pockets and in our homes, with functionality built in to enable us to do other things: make a phone call, make toast, and play. In a smart city surrounded by sensors under pavements and in buildings, with cameras and tracking everywhere we go, we are living ever more inside an overarching network of cloud computers that store our data. And from all that data, decisions are made: which adverts to show us, and on which sites; what we get offered, and what we do not; and our behaviours and conscious decision-making may be nudged quite invisibly.

Data about us, whether uniquely identifiable or not, is all too often collected passively: IP addresses, linked sign-ins that extract friends lists; and some providers decide we can either accept that or not use the thing at all. It’s part of the deal. We get the service; they get to trade our identity, like Top Trumps, behind the scenes. But we often don’t see it, and under GDPR there should be no such contractual requirement tied to consent, i.e. ‘agree or don’t get the service’ is not an option.

From May 25, 2018 there will be special “conditions applicable to child’s consent in relation to information society services” in data protection law, applicable to the collection of data.

As yet, we have not had a debate in the UK about what that means in concrete terms, and if we do not have one soon, we risk it becoming an afterthought that harms more than it helps protect children’s privacy, and therefore their digital identity.

I think of five things needed by policy shapers to tackle it:

  • In-depth understanding of what ‘online’ and the Internet mean
  • Consistent understanding of what threat models and risks are connected to personal data, which today are underestimated
  • A grasp of why data privacy training is vital to safeguarding
  • Confronting the idea that user regulation as a stand-alone step will create a better online experience for users, when we know that perceived problems are created by providers or other site users
  • Tackling siloed thinking that fails to be forward thinking, or to join the dots of tactics across Departments into a cohesive, inclusive strategy

If the government's new "major new drive on internet safety" is to involve the world's largest technology companies in order to make the UK the "safest place in the world for young people to go online," then we must also ensure that these strategies and papers join things up. Above all, technical knowledge of how the Internet works needs to inform how the dots of risks and benefits are joined, to form a strategy that will actually make children safe, skilled, and able to see into their future.

When it comes to children, there is a further question over consent and parental spyware. Various walk-to-school apps, lauded by the former Secretary of State two years running, use spyware and can be used without a child's consent. Guardian Gallery, which could be used to scan for nudity in photos on any phone the 'parent' phone holder is able to install it on, can be made invisible on the 'child' phone. Imagine this in coercive relationships.

If these technologies and the online environment are not correctly assessed with regard to “online safety” threat models for all parts of our population, then they fail to address the risk for the most vulnerable who need it.

What will the GDPR really mean for online safety improvement? What will it define as online services for remuneration in the IoT? And who will be considered as children, “targeted at” or “offered to”?

An active decision is required in the UK. Will 16 remain the default age needed for consent to access Information Society Services, or will we adopt 13, which would need a legal change?

As banal as these questions sound, they need close attention and clarity between now and May 25, 2018 if the UK is to be GDPR-ready, so that providers of online services know whom they must treat as a child and how they should handle Internet access, participation and age [parental] verification.

How will the "controller" make "reasonable efforts to verify in such cases that consent is given or authorised by the holder of parental responsibility over the child", "taking into consideration available technology"?
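To make that question concrete, here is a minimal sketch of the kind of age-gating logic any information society service will need, written in Python purely as an illustration: the AGE_OF_DIGITAL_CONSENT table, the country codes and the sign_up helper are my own assumptions, not any provider's real API, and the UK entry is exactly the undecided value discussed above.

```python
# Illustrative only: ages of digital consent under GDPR Article 8, per country.
# The UK value is the open question above: 16 by default, or 13 by derogation.
AGE_OF_DIGITAL_CONSENT = {
    "DE": 16,  # Article 8(1) default
    "UK": 16,  # or 13, if the UK chooses to derogate
    "XX": 13,  # hypothetical Member State that derogates to the minimum
}

def can_consent_alone(country: str, age: int) -> bool:
    """Can the child consent to an information society service themselves?"""
    return age >= AGE_OF_DIGITAL_CONSENT.get(country, 16)

def sign_up(country: str, age: int) -> str:
    if can_consent_alone(country, age):
        return "proceed on the child's own consent"
    # Article 8(2): the controller must make "reasonable efforts" to verify that
    # consent is given or authorised by the holder of parental responsibility,
    # "taking into consideration available technology" - and what counts as
    # reasonable is exactly what remains undefined.
    return "obtain and verify parental consent before any processing"

print(sign_up("UK", 12))  # -> obtain and verify parental consent before any processing
```

The hard part is not this lookup; it is everything hidden behind the final branch, and who decides what verification is good enough.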

These are fundamental questions of what the Internet is and means to people today. And if the current government approach to security is anything to go by, safety will not mean what we think it will mean.

It will matter how these plans join up. Age verification was not being considered in UK law in relation to how we would derogate from the GDPR even as late as October 2016, despite age verification requirements already being in the Digital Economy Bill. It shows a lack of joined-up digital thinking across our government, and needs to be addressed with urgency to get into the next Parliamentary round.

In recent draft legislation I am yet to see the UK government address Internet rights and safety for young people as anything other than a protection issue, treating the online space in the same way as offline, irl, focused on stranger danger, and sexting.

The UK Digital Strategy commits to the implementation of the General Data Protection Regulation by May 2018, and frames it as a business issue, labelling data as "a global commodity". As such, its handling is framed solely as a requirement needed to ensure "that our businesses can continue to compete and communicate effectively around the world" and that adoption "will ensure a shared and higher standard of protection for consumers and their data."

The Digital Economy Bill, despite being a perfect vehicle for this, failed to take on children's rights, and in particular the GDPR's requirements on consent, at all. It was clear that if we were to do any future digital transactions we needed to level up to the GDPR, not drop to the lowest common denominator between it and existing laws.

It was utterly ignored. So were children's rights to have their own views heard in the consultation on the GDPR derogations for children, with little chance for involvement from young people's organisations, and less than a month to respond.

We must now get this right in any new Digital Strategy and bill in the coming parliament.

Crouching Tiger Hidden Dragon: the making of an IoT trust mark

The Internet of Things (IoT) brings with it unique privacy and security concerns associated with smart technology and its use of data.

  • What would it mean for you to trust an Internet connected product or service and why would you not?
  • What has damaged consumer trust in products and services and why do sellers care?
  • What do we want to see different from today, and what is necessary to bring about that change?

These three pairs of questions implicitly underpinned the intense day of  discussion at the London Zoo last Friday.

Three further questions went unasked, and could have been voiced before we started, although they were probably assumed to be self-evident:

  1. Why do you want one at all [define the problem]?
  2. What needs to change and why [define the future model]?
  3. How do you deliver that and for whom [set out the solution]?

If a group does not agree on the need and drivers for change, there will be no consensus on what that should look like, what the gap is to achieve it, and even less on making it happen.

So who do you want the trustmark to be for, why will anyone want it, and what will need to change to deliver the aims? No one wants a trustmark per se. Perhaps you want what values or promises it embodies to  demonstrate what you stand for, promote good practice, and generate consumer trust. To generate trust, you must be seen to be trustworthy. Will the principles deliver on those goals?

The Open IoT Certification Mark Principles, as a rough draft, were the outcome of the day, and are available online.

Here are my reflections, including what was missing on privacy, and the potential for it to be considered in future.

I've structured the first part, assuming readers attended the event, at ca. 1,000 words: lists and bullet points. The background comes after that, for anyone interested in reading a longer piece.

Many thanks upfront, to fellow participants, to the organisers Alexandra D-S and Usman Haque and the colleague who hosted at the London Zoo. And Usman’s Mum.  I hope there will be more constructive work to follow, and that there is space for civil society to play a supporting role and critical friend.


The mark didn’t aim to fix the IoT in a day, but deliver something better for product and service users, by those IoT companies and providers who want to sign up. Here is what I took away.

I learned three things

  1. A sense of privacy is not homogenous, even within people who like and care about privacy in theoretical and applied ways. (I very much look forward to reading suggestions promised by fellow participants, even if enforced personal openness and ‘watching the watchers’ may mean ‘privacy is theft‘.)
  2. Awareness of current data protection regulations needs to improve in the field. For example, Subject Access Requests already apply to all data controllers, public and private. Few have read the GDPR, or the e-Privacy Directive, despite their importance for security measures in personal devices, relevant to the IoT.
  3. I truly love working on this stuff, with people who care.

And it reaffirmed things I already knew

  1. Change is hard, no matter in what field.
  2. People working together towards a common goal is brilliant.
  3. Group collaboration can create some brilliantly sharp ideas. Group compromise can blunt them.
  4. Some men are particularly bad at talking over each other, never mind over the women in the conversation. Women notice more. (Note to self: When discussion is passionate, it’s hard to hold back in my own enthusiasm and not do the same myself. To fix.)
  5. The IoT context, and the risks within it, are not homogenous; it brings new risks and adversaries. The risks for manufacturers, consumers and the rest of the public are different, and cannot be easily solved with a one-size-fits-all solution. But we can try.

Concerns I came away with

  1. If the citizen / customer / individual is to benefit from the IoT trustmark, they must be put first, ahead of companies’ wants.
  2. If the IoT group controls the design, the assessment of adherence and the definition of success, how objective will it be?
  3. The group was not sufficiently diverse and, as a result, reflects too little on the risks and impact of the lack of diversity in design and effect, and the implications of dataveillance.
  4. Critical minority thoughts, although welcomed, were stripped out of the crowdsourced first-draft principles in compromise.
  5. More future thinking should be built in, so the principles stay robust over time.

IoT adversaries: via Twitter, unknown source

What was missing

There was too little discussion of privacy in perhaps the most important context of the IoT – interconnectivity and new adversaries. It's not only about *your* thing, but the things it speaks to and interacts with: those of friends, passersby, the cityscape, and other individual and state actors interested in offence and defence. While we started to discuss it, we did not have the opportunity to explore it in sufficient depth to get that thinking into applied solutions in the principles.

One of the greatest risks users face is the ubiquitous collection and storage of data about them that reveals detailed, interconnected patterns of behaviour and identity, without their being able to see how companies use it behind the scenes.

What we also missed discussing was not what we see as necessary today, but what we can foresee as necessary in the short-term future: brainstorming and crowdsourcing horizon scanning for market needs and changing stakeholder wants.

Future thinking

Here are the areas of future thinking that smart work on the IoT mark could consider.

  1. We are moving towards ever greater requirements to declare identity to use a product or service, to register and log in to use anything at all. How will that change trust in IoT devices?
  2. Single identity sign-on is becoming ever more imposed, and any attempt to present who I am in multiple ways, by choice and depending on context, is therefore restricted. [not all users want to use the same social media credentials for online shopping, with their child's school app, and their weekend entertainment]
  3. Is this imposition what the public wants or what companies sell us as what customers want in the name of convenience? What I believe the public would really want is the choice to do neither.
  4. There is increasingly no private space or time, at places of work.
  5. Limitations on private space are encroaching in secret in all public city spaces. How will ‘handoffs’ affect privacy in the IoT?
  6. Public sector (connected) services are likely to need even more exacting standards than single home services.
  7. There is too little understanding of the social effects of this connectedness and knowledge created, embedded in design.
  8. What effects may there be on the perception of the IoT as a whole, if predictive data analysis and complex machine learning and AI hidden in black boxes becomes more commonplace and not every company wants to be or can be open-by-design?
  9. Ubiquitous collection and storage of data about users that reveal detailed, inter-connected patterns of behaviour and our identity needs greater commitments to disclosure. Where the hand-offs are to other devices, and whatever else is in the surrounding ecosystem, who has responsibility for communicating interaction through privacy notices, or defining legitimate interests, where the data joined up may be much more revealing than stand-alone data in each silo?
  10. Define with greater clarity the privacy threat models for different groups of stakeholders and address the principles for each.

What would better look like?

The draft privacy principles are a start, but they’re not yet aspirational as I would have hoped. Of course the principles will only be adopted if possible, practical and by those who choose to. But where is the differentiator from what everyone is required to do, and better than the bare minimum? How will you sell this to consumers as new? How would you like your child to be treated?

The wording in these 5 bullet points is the first crowdsourced starting point.

  • The supplier of this product or service MUST be General Data Protection Regulation (GDPR) compliant.
  • This product SHALL NOT disclose data to third parties without my knowledge.
  • I SHOULD get full access to all the data collected about me.
  • I MAY operate this device without connecting to the internet.
  • My data SHALL NOT be used for profiling, marketing or advertising without transparent disclosure.

Yes, other points that came under security address some of the crossover between privacy and surveillance risks, but there is as yet little of substance that is aspirational enough to make the IoT mark a real differentiator in terms of privacy. An opportunity remains.

It was that and how young people perceive privacy that I hoped to bring to the table. Because if manufacturers are serious about future success, they cannot ignore today’s children and how they feel. How you treat them today, will shape future purchasers and their purchasing, and there is evidence you are getting it wrong.

The timing is good in that it now also offers the opportunity to promote consistent understanding, and embed the language of GDPR and ePrivacy regulations into consistent and compatible language in policy and practice in the #IoTmark principles.

User rights I would like to see considered

These are some of the points I would think privacy by design would mean. This would better articulate GDPR Article 25 to consumers.

Data sovereignty is a good concept and I believe should be considered for inclusion in explanatory blurb before any agreed privacy principles.

  1. Goods should be 'dumb* by default' until the smart functionality is switched on. [*As our group chair/scribe called it] I would describe this as, "off is the default setting out-of-the-box". (A sketch of what this could mean in practice follows after this list.)
  2. Privacy by design. Deniability by default. i.e. not only after opt-out: a company should not access the personal or identifying purchase data of anyone who opts out of data collection about their product/service use during the set-up process.
  3. The right to opt out of data collection at a later date while continuing to use services.
  4. A right to object to the sale or transfer of behavioural data, including to third-party ad networks and absolute opt-in on company transfer of ownership.
  5. A requirement that advertising should be targeted to content, [user bought fridge A] not through jigsaw data held on users by the company [how user uses fridge A, B, C and related behaviour].
  6. An absolute rejection of using children’s personal data gathered to target advertising and marketing at children
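As a thought experiment only, the first three points might look something like this in a device's settings model. The SmartFridge and PrivacySettings names, and the flags, are mine; no real product works this way. It is simply a sketch of 'dumb by default', with opt-in collection and a later opt-out that does not break the product.

```python
from dataclasses import dataclass, field

@dataclass
class PrivacySettings:
    # Point 1: 'dumb by default' - connectivity and collection are off out of the box.
    smart_features_enabled: bool = False
    data_collection_enabled: bool = False
    behavioural_data_sale_allowed: bool = False  # Point 4: opt-in only, never assumed

@dataclass
class SmartFridge:
    settings: PrivacySettings = field(default_factory=PrivacySettings)
    _telemetry: list = field(default_factory=list)

    def enable_smart_features(self, collect_data: bool = False) -> None:
        # The user switches smart functionality on explicitly; data collection
        # remains a separate, explicit choice (point 2: deniability by default).
        self.settings.smart_features_enabled = True
        self.settings.data_collection_enabled = collect_data

    def opt_out_of_data_collection(self) -> None:
        # Point 3: opting out later must not break the product.
        self.settings.data_collection_enabled = False

    def record_usage(self, event: str) -> None:
        # Usage still works when collection is off; it simply isn't recorded or sent.
        if self.settings.data_collection_enabled:
            self._telemetry.append(event)

fridge = SmartFridge()
fridge.record_usage("door opened")   # nothing stored: dumb by default
fridge.enable_smart_features()       # smart, but still not collecting
fridge.record_usage("door opened")   # still nothing stored
print(len(fridge._telemetry))        # -> 0
```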

Background: Starting points before privacy

After a brief recap on 5 years ago, we heard two talks.

The first was a presentation from Bosch. They used the insights from the IoT open definition of five years ago in their IoT thinking, and embedded it in their brand book. The presenter suggested that in five years' time, every fridge Bosch sells will be 'smart'. The second was a fascinating presentation of both EU thinking and the intellectual nudge to think beyond the practical, to what kind of society we want to see using the IoT in future: hints of hardcore ethics and philosophy that made my brain fizz, from a speaker soon to retire from the European Commission.

The principles of open sourcing, manufacturing, and sustainable life cycle were debated in the afternoon with intense arguments and clearly knowledgeable participants, including those who were quiet.  But while the group had assigned security, and started work on it weeks before, there was no one pre-assigned to privacy. For me, that said something. If they are serious about those who earn the trustmark being better for customers than their competition, then there needs to be greater emphasis on thinking like their customers, and by their customers, and what use the mark will be to customers, not companies. Plan early public engagement and testing into the design of this IoT mark, and make that testing open and diverse.

To that end, I believe it needed to be articulated more strongly, that sustainable public trust is the primary goal of the principles.

  • Trust that my device will not become unusable or worthless through updates or lack of them.
  • Trust that my device is manufactured safely and ethically and with thought given to end of life and the environment.
  • Trust that my device's source components are of a high standard.
  • Trust in what data is gathered, and how it is used, by the manufacturers.

Fundamental to 'smart' devices is their connection to the Internet, so the last point, for me, is key to successful public perception and to the mark actually making a difference beyond its PR value to companies. The value-add must be measured from the consumer's point of view.

All the openness about design functions and practice improvements, without any attempt to change privacy-infringing practices, may be wasted effort. Why? Because the perceived value of the mark will be proportionate to the risks it is seen to mitigate.

Why?

Because I assume that you know where your source components come from today. I was shocked to find out that not all do, and that 'one degree removed' is going to be an improvement. Holy cow, I thought. What about regulatory requirements for product safety recalls? These differ of course for different product areas, but I was still surprised. Having worked in global Fast Moving Consumer Goods (FMCG) and the food industry, semiconductors and optoelectronics, and medical devices, it was self-evident to me that sourcing is rigorous. So that new requirement, to know one degree removed, was a suggested minimum. But it might shock consumers to know there is not usually more by default.

Customers also have reasonable expectations of not being screwed over by a product update, or left with something that does not work because of its computing-based components. The public can take vocal, reputation-damaging action when they are let down.

In the last year alone, some of the more notable press stories include a manufacturer denying service, telling customers, “Your unit will be denied server connection,” after a critical product review. Customer support at Jawbone came in for criticism after reported failings. And even Apple has had problems in rolling out major updates.

While these are visible, the full extent of the overreach of company market and product surveillance into our whole lives, not just our living rooms, is yet to become understood by the general population. What will happen when it is?

The Internet of Things is exacerbating the power imbalance between consumers and companies, between government and citizens. As Wendy Grossman wrote recently, in one sense this may make privacy advocates’ jobs easier. It was always hard to explain why “privacy” mattered. Power, people understand.

That public discussion is long overdue. If open principles on IoT devices mean that the signed-up companies differentiate themselves by becoming market leaders in transparency, it will be a great thing. Companies need to offer full disclosure of data use in any privacy notices in clear, plain language  under GDPR anyway, but to go beyond that, and offer customers fair presentation of both risks and customer benefits, will not only be a point-of-sales benefit, but potentially improve digital literacy in customers too.

The morning discussion touched quite often on pay-for-privacy models. While product makers may see this as offering a good thing, I strove to bring discussion back to first principles.

Privacy is a human right. There can be no ethical model of discrimination based on any non-consensual invasion of privacy. Privacy is not something I should pay to have. You should not design products that reduce my rights. GDPR requires privacy-by-design and data protection by default. Now is that chance for IoT manufacturers to lead that shift towards higher standards.

We also need a new ethics thinking on acceptable fair use. It won’t change overnight, and perfect may be the enemy of better. But it’s not a battle that companies should think consumers have lost. Human rights and information security should not be on the battlefield at all in the war to win customer loyalty.  Now is the time to do better, to be better, demand better for us and in particular, for our children.

Privacy will be a genuine market differentiator

If manufacturers do not want to change their approach to exploiting customer data, they are unlikely to be seen to have changed.

Today the feelings that people in the US and Europe report in surveys are loss of empowerment, helplessness, and feeling used. That will shift to shock, resentment and, as any change curve will predict, anger.

A 2014 survey for the Royal Statistical Society by Ipsos MORI, found that trust in institutions to use data is much lower than trust in them in general.

“The poll of just over two thousand British adults carried out by Ipsos MORI found that the media, internet services such as social media and search engines and telecommunication companies were the least trusted to use personal data appropriately.” [2014, Data trust deficit with lessons for policymakers, Royal Statistical Society]

Among the British student population, one 2015 survey of university applicants in England found that, of the 37,000 who responded, the vast majority agreed that sharing personal data can benefit them and support public-benefit research into university admissions, but they want to stay firmly in control. 90% of respondents said they wanted to be asked for their consent before their personal data is provided outside of the admissions service.

In 2010, a multi-method research study with young people aged 14-18, by the Royal Academy of Engineering, found that, "despite their openness to social networking, the Facebook generation have real concerns about the privacy of their medical records." [2010, Privacy and Prejudice, RAE, Wellcome]

When people set their privacy settings on Facebook to maximum, they believe they get privacy, and understand little of what that means behind the scenes.

Are there tools designed by others, like Projects by If licenses, and ways this can be done, that you’re not even considering yet?

What if you don’t do it?

“But do you feel like you have privacy today?” I was asked the question in the afternoon. How do people feel today, and does it matter? Companies exploiting consumer data and getting caught doing things the public don’t expect with their data, has repeatedly damaged consumer trust. Data breaches and lack of information security have damaged consumer trust. Both cause reputational harm. Damage to reputation can harm customer loyalty. Damage to customer loyalty costs sales, profit and upsets the Board.

Where overreach into our living rooms has raised awareness of invasive data collection, we are yet to be able to see and understand the invasion of privacy into our thinking and nudge behaviour, into our perception of the world on social media, the effects on decision making that data analytics is enabling as data shows companies ‘how we think’, granting companies access to human minds in the abstract, even before Facebook is there in the flesh.

Governments want to see how we think too. And is thought crime really that far away, given database labels of 'domestic extremists' for activists and anti-fracking campaigners, or the growing weight of policy makers' attention given to predpol, predictive analytics, the [formerly] Cabinet Office Nudge Unit, Google DeepMind et al?

Had the internet remained decentralised, the debate might be different.

I am starting to think of the IoT not as the Internet of Things, but as the Internet of Tracking. If some have their way, it will be the Internet of Thinking.

In our centralised Internet of Things model, personal data from human interactions has become the network infrastructure, and the data flows are controlled by others. Our brains are the new data servers.

In the Internet of Tracking, people become the end nodes, not things.

And this is where future users will be so important. Do you understand and plan for the factors that will drive push-back and a crash in consumer confidence in your products, and take them seriously?

Companies have a choice: to act as Empires would – multinationals joining up even at low levels, disempowering individuals, and sucking knowledge and power to the centre. Or they can act as nation states, ensuring citizens keep their sovereignty and control over a selected sense of self.

Look at Brexit. Look at the GE2017. Tell me, what do you see is the direction of travel? Companies can fight it, but will not defeat how people feel. No matter how much they hope ‘nudge’ and predictive analytics might give them this power, the people can take back control.

What might this desire to take-back-control mean for future consumer models? The afternoon discussion whilst intense, reached fairly simplistic concluding statements on privacy. We could have done with at least another hour.

Some in the group were frustrated “we seem to be going backwards” in current approaches to privacy and with GDPR.

But if the current legislation is reactive because companies have misbehaved, how will that be rectified for future? The challenge in the IoT both in terms of security and privacy, AND in terms of public perception and reputation management, is that you are dependent on the behaviours of the network, and those around you. Good and bad. And bad practices by one, can endanger others, in all senses.

If you believe that is going back to reclaim a growing sense of citizens’ rights, rather than accepting companies have the outsourced power to control the rights of others, that may be true.

A first-principles question was asked: is any element on privacy needed at all, if the text simply states that the supplier of this product or service must be General Data Protection Regulation (GDPR) compliant? The GDPR was years in the making, after all. Does privacy matter more in the IoT, and in what ways? The room tended, understandably, to talk about it from the company perspective: "We can't", "won't", "that would stop us from XYZ." Privacy would, however, be better addressed from the personal point of view.

What do people want?

From the company point of view, the language is different and holds clues. Openness, control, and user choice and pay for privacy are not the same thing as the basic human right to be left alone. Afternoon discussion reminded me of the 2014 WAPO article, discussing Mark Zuckerberg’s theory of privacy and a Palo Alto meeting at Facebook:

“Not one person ever uttered the word “privacy” in their responses to us. Instead, they talked about “user control” or “user options” or promoted the “openness of the platform.” It was as if a memo had been circulated that morning instructing them never to use the word “privacy.””

In the afternoon working group on privacy, there was robust discussion of whether we even had consensus on what privacy means. Words like autonomy, control, and choice came up a lot. But it was only a beginning. There is opportunity for better. An academic voice raised the concept of sovereignty, with which I agreed; but working out how and where to fit it into wording that is at once minimal and applied, under a scribe who appeared frustrated and wanted a completely different approach from what he heard across the group, meant it was left out.

This group does care about privacy. But I wasn't convinced that the room cared in the way the public as a whole does, rather than only as consumers and customers do. IoT products will affect potentially everyone, even those who do not buy your stuff. Everyone in that room agreed on one thing: the status quo is not good enough. What we did not agree on was why, and what minimum change is needed to make enough of a difference to matter.

I share the deep concerns of many child rights academics who see the harm that Article 8 of the GDPR, and efforts to avoid its restrictions, will cause. It is likely to damage children's right to access information, to discriminate according to parents' prejudices or socio-economic status, and to encourage 'cheating' – requiring secrecy rather than privacy, in attempts to hide from or work around the stringent system.

In ‘The Class’ the research showed, ” teachers and young people have a lot invested in keeping their spheres of interest and identity separate, under their autonomous control, and away from the scrutiny of each other.” [2016, Livingstone and Sefton-Green, p235]

Employers require staff to use devices with single sign-on, including web and activity tracking and monitoring software. Employee personal data and employment data are blended. Who owns that data, what rights will employees have to refuse what they see as excessive, and is this manageable given the power imbalance between employer and employee?

What is this doing in the classroom and boardroom for stress, anxiety, performance and system and social avoidance strategies?

A desire for convenience creates shortcuts, and these are often met using systems that require a sign-on through the platform giants: Google, Facebook, Twitter, et al. But we are kept in the dark about how using these platforms gives them, and the companies behind them, access to see how our online and offline activity is all joined up.

Any illusion of privacy we maintain, we discussed, is not choice or control if it is based on ignorance, and the backlash against companies' lack of effort to ensure disclosure and understanding is growing.

“The lack of accountability isn’t just troubling from a philosophical perspective. It’s dangerous in a political climate where people are pushing back at the very idea of globalization. There’s no industry more globalized than tech, and no industry more vulnerable to a potential backlash.”

[Maciej Ceglowski, Notes from an Emergency, talk at re.publica]

Why do users need you to know about them?

If your connected *thing* requires registration, why does it? How about a commitment to not forcing one of these registration methods, or indeed any at all? Social media research by Pew Research in 2016 found that 56% of smartphone owners aged 18 to 29 use auto-delete apps, more than four times the share among those aged 30-49 (13%) and six times the share among those 50 or older (9%).

Does that tell us anything about the demographics of data retention preferences?

In 2012, they suggested social media has changed the public discussion about managing “privacy” online. When asked, people say that privacy is important to them; when observed, people’s actions seem to suggest otherwise.

Does that tell us anything about how well companies communicate to consumers how their data is used and what rights they have?

There is also data strongly indicating that women act more to protect their privacy, but when it comes to basic privacy settings, users of all ages are equally likely to choose a private, semi-private or public setting for their profile. There are no significant variations across age groups in the US sample.

Now think about why that matters for the IoT. I wonder who makes the bulk of purchasing decisions about household white goods, for example, and whether Bosch has factored that into its smart-fridges-only decision?

Do you *need* to know who the user is? Can the smart user choose to stay anonymous at all?

The day’s morning challenge was to attend more than one interesting discussion happening at the same time. As invariably happens, the session notes and quotes are always out of context and can’t possibly capture everything, no matter how amazing the volunteer (with thanks!). But here are some of the discussion points from the session on the body and health devices, the home, and privacy. It also included a discussion on racial discrimination, algorithmic bias, and the reasons why care.data failed patients and failed as a programme. We had lengthy discussion on ethics and privacy: smart meters, objections to models of price discrimination, and why pay-for-privacy harms the poor by design.

Smart meter data can track the use of individual appliances inside a person's home and intimate patterns of behaviour. Information about our consumption of power, what we use and when, every day, reveals personal details about our everyday lives, our interactions with others, and our personal habits.
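To see why that is intrusive rather than abstract, here is a toy illustration, with invented readings and thresholds rather than any real disaggregation algorithm, of how even coarse half-hourly consumption data starts to sketch a household's daily routine:

```python
# Toy illustration: inferring routine from half-hourly smart meter readings (kWh).
# The readings and thresholds below are invented for the example.
readings = {
    "06:30": 0.9, "07:00": 2.4, "07:30": 0.3,   # morning spike: kettle, shower
    "12:00": 0.1, "18:30": 3.1, "19:00": 2.8,   # evening spike: cooking
    "23:00": 0.1, "03:00": 0.1,
}

BASELOAD = 0.2  # fridge and standby devices ticking over

def infer_occupancy(readings: dict) -> dict:
    """Label each half-hour as active, home-but-idle, or likely out/asleep."""
    labels = {}
    for time, kwh in readings.items():
        if kwh > 1.0:
            labels[time] = "active (cooking, heating water, appliances)"
        elif kwh > BASELOAD:
            labels[time] = "home (idle)"
        else:
            labels[time] = "likely out or asleep"
    return labels

for time, label in infer_occupancy(readings).items():
    print(time, "->", label)
```

Real analytics is far more granular than this crude threshold, which is exactly the point: if a few lines of code can guess when a household wakes, cooks and sleeps, richer data in commercial or government hands can infer far more.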

Why should company convenience come above the consumer's? Why should government powers trump personal rights?

Smart meter data is among the knowledge that government is exploiting, without consent, to discover a whole range of issues, including ensuring that "Troubled Families are identified". Knowing how dodgy some of the school behaviour data that helps define who is "troubled" might be, there is a real question here: is this sound data science? How are errors identified? What about privacy? It's not your policy, but if it is your product, what are your responsibilities?

If companies do not respect children's rights, they had better shape up to be GDPR compliant

Children and young people are more vulnerable to nudge, and developing their sense of self can involve forming and questioning their identity; these influences need oversight, or need to be avoided.

In terms of the GDPR, providers are going to pay particular attention to Article 8 on 'information society services' and parental consent, Article 22 on profiling, the right to restriction of processing (Article 18), the right to erasure (Article 17 and recital 65), and the right to data portability (Article 20). However, they may need simply to reassess their exploitation of children and young people's personal and behavioural data. Article 57 requires special attention to be paid by regulators to activities specifically targeted at children, as the 'vulnerable natural persons' of recital 75.

Human Rights, regulations and conventions overlap in similar principles that demand respect for a child, and right to be let alone:

(a) The development of the child's personality, talents and mental and physical abilities to their fullest potential;

(b) The development of respect for human rights and fundamental freedoms, and for the principles enshrined in the Charter of the United Nations.

A weakness of the GDPR is that it allows derogation on age, and will create inequality and inconsistency for children as a result. By comparison, Article 1 of the Convention on the Rights of the Child (CRC) defines who is to be considered a "child" for the purposes of the CRC, and states that: "For the purposes of the present Convention, a child means every human being below the age of eighteen years unless, under the law applicable to the child, majority is attained earlier."

Article two of the CRC says that States Parties shall respect and ensure the rights set forth in the present Convention to each child within their jurisdiction without discrimination of any kind.

CRC Article 16 says that no child shall be subjected to arbitrary or unlawful interference with his or her privacy, family, home or correspondence, nor to unlawful attacks on his or her honour and reputation.

Article 8 CRC requires respect for the right of the child to preserve his or her identity […] without unlawful interference.

Article 12 CRC demands States Parties shall assure to the child who is capable of forming his or her own views the right to express those views freely in all matters affecting the child, the views of the child being given due weight in accordance with the age and maturity of the child.

That stands in potential conflict with GDPR Article 8. There is much in the GDPR on derogations, by country and for children, still to be settled.

What next for our data in the wild

Hosting the event at the zoo offered added animals, and during lunch we got out on a tour, kindly hosted by a fellow participant. We learned how smart technology is embedded in some of the animal enclosures, for example work with temperature sensors for the penguins. I love tigers, so it was a bonus that we got to see such beautiful and powerful animals up close, if a little sad for their circumstances, and, as a general principle, at seeing big animals caged as opposed to in the wild.

Freedom is a common desire in all animals. Physical, mental, and freedom from control by others.

I think any manufacturer that underestimates this element of human instinct is ignoring the ‘hidden dragon’ that some think is a myth.  Privacy is not dead. It is not extinct, or even unlike the beautiful tigers, endangered. Privacy in the IoT at its most basic, is the right to control our purchasing power. The ultimate people power waiting to be sprung. Truly a crouching tiger. People object to being used and if companies continue to do so without full disclosure, they do so at their peril. Companies seem all-powerful in the battle for privacy, but they are not.  Even insurers and data brokers must be fair and lawful, and it is for regulators to ensure that practices meet the law.

When consumers realise that our data, our purchasing power, has the potential to control, not be controlled, that balance will shift.

“Paper tigers” are superficially powerful but are prone to overextension that leads to sudden collapse. If that happens to the superficially powerful companies that choose unethical and bad practice, as a result of better data privacy and data ethics, then bring it on.

I hope that the IoT mark can champion best practices and make a difference to benefit everyone.

While the companies involved in its design may be interested in consumers, I believe it could be better for everyone, done well. The great thing about the efforts into an #IoTmark is that it is a collective effort to improve the whole ecosystem.

I hope more companies will recognise their privacy and ethical responsibilities in the world to all people, including those interested in just being, those who want to be let alone, and not just those buying.

“If a cat is called a tiger it can easily be dismissed as a paper tiger; the question remains however why one was so scared of the cat in the first place.”

The Resistance to Theory (1982), Paul de Man

Further reading: Networks of Control – A Report on Corporate Surveillance, Digital Tracking, Big Data & Privacy by Wolfie Christl and Sarah Spiekermann

Google Family Link for Under 13s: children’s privacy friend or faux?

“With the Family Link app from Google, you can stay in the loop as your kid explores on their Android* device. Family Link lets you create a Google Account for your kid that’s like your account, while also helping you set certain digital ground rules that work for your family — like managing the apps your kid can use, keeping an eye on screen time, and setting a bedtime on your kid’s device.”


John Carr shared his blog post about Google Family Link today, which was the first I had read about the new US account in beta. In his post, with an eye on GDPR, he asks: what is the right thing to do?

What is the Family Link app?

Family Link requires a US-based Google account to sign up, so outside the US we can't read the full details. However, from what is published online, it appears to offer the following three key features:

“Approve or block the apps your kid wants to download from the Google Play Store.

Keep an eye on screen time. See how much time your kid spends on their favorite apps with weekly or monthly activity reports, and set daily screen time limits for their device. “

and

“Set device bedtime: Remotely lock your kid’s device when it’s time to play, study, or sleep.”

From the privacy and disclosure information, it appears that there is not a lot of difference between a regular (over-13s) Google account and this one for under-13s. To collect data from under-13s it must be compliant with COPPA legislation.

If you google "what is COPPA" the first result says, "The Children's Online Privacy Protection Act (COPPA) is a law created to protect the privacy of children under 13."

But does this Google Family Link do that? What safeguards and controls are in place for use of this app and children’s privacy?

What data does it capture?

“In order to create a Google Account for your child, you must review the Disclosure (including the Privacy Notice) and the Google Privacy Policy, and give consent by authorizing a $0.30 charge on your credit card.”

Google captures the parent’s verified real-life credit card data.

Google captures child’s name, date of birth and email.

Google captures voice.

Google captures location.

Google may associate your child’s phone number with their account.

And lots more:

Google automatically collects and stores certain information about the services a child uses and how a child uses them, including when they save a picture in Google Photos, enter a query in Google Search, create a document in Google Drive, talk to the Google Assistant, or watch a video in YouTube Kids.

What does it offer over regular “13+ Google”?

In terms of general safeguarding, it doesn’t appear that SafeSearch is on by default but must be set and enforced by a parent.

Parents should “review and adjust your child’s Google Play settings based on what you think is right for them.”

Google rightly points out however that, “filters like SafeSearch are not perfect, so explicit, graphic, or other content you may not want your child to see makes it through sometimes.”

Ron Amadeo at Arstechnica wrote a review of the Family Link app back in February, and came to similar conclusions about added safeguarding value:

“Other than not showing “personalized” ads to kids, data collection and storage seems to work just like in a regular Google account. On the “Disclosure for Parents” page, Google notes that “your child’s Google Account will be like your own” and “Most of these products and services have not been designed or tailored for children.” Google won’t do any special content blocking on a kid’s device, so they can still get into plenty of trouble even with a monitored Google account.”

Your child will be able to share information, including photos, videos, audio, and location, publicly and with others, when signed in with their Google Account. And Google wants to see those photos.

There’s some things that parents cannot block at all.

Installs of app updates can't be controlled, which leaves a questionable grey area. Many apps are built on the classic bait and switch – start with a free version, then the upgrade contains paid features. This is therefore something to watch for.

“Regardless of the approval settings you choose for your child’s purchases and downloads, you won’t be asked to provide approval in some instances, such as if your child: re-downloads an app or other content; installs an update to an app (even an update that adds content or asks for additional data or permissions); or downloads shared content from your Google Play Family Library. “

The child “will have the ability to change their activity controls, delete their past activity in “My Activity,” and grant app permissions (including things like device location, microphone, or contacts) to third parties”.

What’s in it for children?

You could argue that this gives children “their own accounts” and autonomy. But why do they need one at all? If I give my child a device on which they can download an app, then I approve it first.

If I am not aware of my under 13 year old child’s Internet time physically, then I’m probably not a parent who’s going to care to monitor it much by remote app either. Is there enough insecurity around ‘what children under 13 really do online’, versus what I see or they tell me as a parent, that warrants 24/7 built-in surveillance software?

I can use safe settings without this app. I can use a device time limiting app without creating a Google account for my child.

If parents want to give children an email address, yes, this allows them to have a device-linked Gmail account whose content you, as a parent, cannot access. But wait a minute, what's this? Google can?

Google can read their mails and provide them “personalised product features”. More detail is probably needed but this seems clear:

“Our automated systems analyze your child’s content (including emails) to provide your child personally relevant product features, such as customized search results and spam and malware detection.”

And what happens when the under-13s turn 13? It's questionable whether it is right for Google et al. to then be able to draw on a pool of ready-made customers' data in waiting. Free from COPPA ad regulation. Free from COPPA privacy regulation.

Google knows when the child reaches 13 (the set-up requires a child’s date of birth, their first and last name, and email address, to set up the account). And they will inform the child directly when they become eligible to sign up to a regular account free of parental oversight.

What a birthday gift. But is it packaged for the child or Google?

What’s in it for Google?

The parental disclosure begins,

“At Google, your trust is a priority for us.”

If it truly is, I’d suggest they revise their privacy policy entirely.

Google's disclosure policy also makes parents read a lot before they fully understand the permissions this app gives to Google.

I do not believe Family Link gives parents adequate control of their children’s privacy at all nor does it protect children from predatory practices.

While “Google will not serve personalized ads to your child“, your child “will still see ads while using Google’s services.”

Google also tailors the Family Link apps that the child sees (and begs you to buy), based on their data:

“(including combining personal information from one service with information, including personal information, from other Google services) to offer them tailored content, such as more relevant app recommendations or search results.”

Contextual advertising using “persistent identifiers” is permitted under COPPA, and is surely a fundamental flaw. It’s certainly one I wouldn’t want to see duplicated under GDPR. Serving up ads that are relevant to the content the child is using, doesn’t protect them from predatory ads at all.

Google captures geolocators and knows where a child is and builds up their behavioural and location patterns. Google, like other online companies, captures and uses what I’ve labelled ‘your synthesised self’; the mix of online and offline identity and behavioural data about a user. In this case, the who and where and what they are doing, are the synthesised selves of under 13 year old children.

These data are made more valuable by the connection to an adult with spending power.

The Google Privacy Policy’s description of how Google services generally use information applies to your child’s Google Account.

Google gains permission via the parent’s acceptance of the privacy policy, to pass personal data around to third parties and affiliates. An affiliate is an entity that belongs to the Google group of companies. Today, that’s a lot of companies.

Google’s ad network consists of Google services, like Search, YouTube and Gmail, as well as 2+ million non-Google websites and apps that partner with Google to show ads.

I also wonder if it will undo some of the previous pro-privacy features on any linked child’s YouTube account if Google links any logged in accounts across the Family Link and YouTube platforms.

Is this pseudo-safe use a good thing?

In practical terms, I’d suggest this app is likely to lull parents into a false sense of security. Privacy safeguarding is not the default set up.

It’s questionable that Google should adopt some sort of parenting role through an app. Parental remote controls via an app isn’t an appropriate way to regulate whether my under 13 year old is using their device, rather than sleeping.

It’s also got to raise questions about children’s autonomy at say, 12. Should I as a parent know exactly every website and app that my child visits? What does that do for parental-child trust and relations?

As for my own children I see no benefit compared with letting them have supervised access as I do already.  That is without compromising my debit card details, or under a false sense of safeguarding. Their online time is based on age appropriate education and trust, and yes I have to manage their viewing time.

That said, if there are people who think parents cannot do that, is the app a step forward? I’m not convinced. It’s definitely of benefit to Google. But for families it feels more like a sop to adults who feel a duty towards safeguarding children, but aren’t sure how to do it.

Is this the best that Google can do by children?

In summary it seems to me that the Family Link app is a free gift from Google. (Well, free after the thirty cents to prove you’re a card-carrying adult.)

It gives parents three key tools: App approval (accept, pay, or block), Screen-time surveillance, and a remote Switch Off of the child's access.

In return, Google gets access to a valuable data set – a parent-child relationship with credit data attached – and can increase its potential targeted app sales. Yet Google can’t guarantee additional safeguarding, privacy, or benefits for the child while using it.

I think for families and child rights, it’s a false friend. None of these tools per se require a Google account. There are alternatives.

Children’s use of the Internet should not mean they are used and their personal data passed around or traded in hidden back room bidding by the Internet companies, with no hope of control.

There are other technical solutions to age verification and privacy too.

I’d ask, what else has Google considered and discarded?

Is this the best that a cutting edge technology giant can muster?

This isn’t designed to respect children’s rights as intended under COPPA or ready for GDPR, and it’s a shame they’re not trying.

If I were designing Family Link for children, it would collect no real identifiers. No voice. No locators. It would not permit others access to voice or images, or need accounts to be linked. It would keep children's privacy intact, and enable them, when older, to decide what they disclose. It would not target personalised apps/products at children at all.
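As a sketch of what that data minimisation could look like, offered only as my own illustration rather than anything Google has proposed, the whole account record might be as small as this:

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class MinimalChildAccount:
    """Illustrative only: a child account holding no real-world identifiers.

    No legal name, no date of birth, no voice samples, no location history,
    no linked adult payment card - just what the device needs to function.
    """
    pseudonymous_id: str                        # random, not derived from the child
    display_name: str                           # chosen by the family, not verified
    age_band: str                               # e.g. "under-13", never an exact birthday
    daily_screen_minutes: Optional[int] = None  # limit set and stored locally on the device
    safe_search: bool = True                    # protective defaults on, not opt-in

    def data_shared_off_device(self) -> dict:
        # Nothing identifying leaves the device; settings stay local.
        return {}
```

Nothing here would stop app approval, screen-time limits or bedtime locks from working; it would simply stop the account doubling as a profile.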

GDPR requires active, informed parental consent for children's online services. Consent must be revocable, personal data collection must be the minimum necessary, and the data must be portable. Privacy policies must be clear to children. This, in terms of GDPR readiness, is nowhere near 'it'.

Family Link needs to re-do their homework. And this isn’t a case of ‘please revise’.

Google is a multi-billion dollar company. If they want parental trust, and want to be GDPR and COPPA compliant, they should do the right thing.

When it comes to child rights, companies must do or do not. There is no try.


image source: ArsTechnica

Notes on Not the fake news

Notes and thoughts from Full Fact's event at Newspeak House in London on 27/3 to discuss fake news, the misinformation ecosystem, and how best to respond. The recording is here. The contributions and questions part of the evening begins at 55:55.


What is fake news? Are there solutions?

1. Clickbait: celebrity pull to draw online site visitors towards traffic to an advertising model – kill the business model
2. Mischief makers: Deceptive with hostile intent – bots, trolls, with an agenda
3. Incorrectly held views: ‘vaccinations cause autism’ despite the evidence to the contrary. How can facts reach people who only believe what they want to believe?

Why does it matter? The scrutiny of people in power matters – to politicians, charities, think tanks – as well as the public.

It is fundamental to remember that we do in general believe that the public has a sense of discernment, however there is also a disconnect between an objective truth and some people’s perception of reality. Can this conflict be resolved? Is it necessary to do so? If yes, when is it necessary to do so and who decides that?

There is a role for independent tracing of unreliable information, its sources and its distribution patterns and identifying who continues to circulate fake news even when asked to desist.

Transparency about these processes is in the public interest.

Overall, there is too little public understanding of how technology and online tools affect behaviours and decision-making.

The Role of Media in Society

How do you define the media?
How can average news consumers distinguish between self-made and distributed content compared with established news sources?
What is the role of media in a democracy?
What is the mainstream media?
Does the media really represent what I want to understand? > Does the media play a role in failure of democracy if news is not representative of all views? > see Brexit, see Trump
What are news values and do we have common press ethics?

New problems in the current press model:

Failure of the traditional media organisations in fact checking; part of the problem is that the credible media is under incredible pressure to compete to gain advertising money share.

Journalism is under-resourced. Verification skills are lacking and tools can be time-consuming. Techniques like reverse image search and verification take effort.

Press releases with numbers can be less easily scrutinised so how do we ensure there is not misinformation through poor journalism?

What about confirmation bias and reinforcement?

What about friends’ behaviours? Can and should we try to break these links if we are not getting a fair picture? The Facebook representative was keen to push responsibility for the bubble entirely to users’ choices. Is this fair given the opacity of the model?
Have we cracked the bubble of self-reinforcing stories being the only stories that mutual friends see?
Can we crack the echo chamber?
How do we start to change behaviours? Can we? Should we?

The risk is that if people start to feel nothing is trustworthy, we trust nothing. This harms relations between citizens and state, organisations and consumers, professionals and public and between us all. Community is built on relationships. Relationships are built on trust. Trust is fundamental to a functioning society and economy.

Is it game over?

Will Moy assured the audience that there is no need to descend into blind panic and there is still discernment among the public.

Then it was asked: is perhaps part of the problem that the Internet, in its current construct, is incapable of keeping this problem at bay? Is part of the solution re-architecting and re-engineering the web?

What about algorithms? Search engines start with word frequency and neutral decisions but are now much more nuanced and complex. We really must see how systems decide what is published. Search engines provide but also restrict our access to facts and ‘no one gets past page 2 of search results’. Lack of algorithmic transparency is an issue, but will not be solved due to commercial sensitivities.

Fake news creation can be lucrative. Management models that rely on user moderation or comments to give balance can be gamed.

Are there appropriate responses to the grey area between trolling and deliberate deception through fake news that is damaging? In what context and background? Are all communities treated equally?

The question came from the audience whether the panel thought regulation would come from the select committee inquiry. The general response was that it was unlikely.

What are the solutions?

The questions I came away thinking about went unanswered, because I am not sure there are solutions as long as the current news model exists and is funded in the current way by current players.

I believe one of the things that permits fake news is the growing imbalance of money between the big global news distributors and independent and public interest news sources.

This loss of balance, reduces our ability to decide for ourselves what we believe and what matters to us.

The monetisation of news through its packaging in between advertising has surely contaminated the news content itself.

Think of a Facebook promoted post – you can personalise your audience to a set of very narrow and selective characteristics. The bubble that receives that news is already likely to be connected by similar interest pages and friends and the story becomes self reinforcing, showing up in  friends’ timelines.

A modern online newsroom moves content around the webpage according to what is getting the most views, and lists of trending topics encourage viewers to see what other people are reading; again, these are self-reinforcing.

There is also a lack of transparency of power. Where we appear to see a range of outlets from which to choose a range of news, we often fail to see the one conglomerate funder which manages them all.

The discussion didn’t address at all the fundamental shift in “what is news” which has taken place over the last twenty years. In part, I believe responsibility for viewers finding fake news credible lies with the 24/7 news channels. They have shifted the balance of content from factual bulletins to discussion and opinion. So while the news channel is still seen as a source of ‘news’, much of the time the content is not factual but opinion, and often that means the promotion and discussion of the opinions of their paymaster.

Most simply, how should I answer the question that my ten-year-old asks – how do I know if something on the Internet is true or not?

Can we really say it is up to each member of the public to take on this role, and where do the needs of the vulnerable, or of children, fit into that?

Is the term fake news the wrong approach, something to move away from? Can we move solutions away from the target fixation of ‘stop fake news’, which is impossible online, and towards the problems that fake news causes?

Interference in democracy. Interference in purchasing power. Interference in decision making. Interference in our emotions.

These interferences with our autonomy are not something the web itself is responsible for, but the people behind the platforms must be accountable for how their technology works.

In the meantime, what can we do?

“if we ever want the spread of fake news to stop we have to take responsibility for calling out those who share fake news (real fake news, not just things that feel wrong), and start doing a bit of basic fact-checking ourselves.” [Eliot Higgins, founder of Bellingcat, in the IB Times]

Not everyone has the time or capacity to do that. As long as today’s imbalance of money and power exists, truly independent organisations like Bellingcat and FullFact are of untold value.


The billed Google and Twitter speakers were absent because they had been invited to a meeting with the Home Secretary on 28/3. Speakers were Will Moy, Director of Full Fact; Jenni Sargent, Managing Director of First Draft; and Richard Allan, Facebook EMEA Policy Director. The event was chaired by Bill Thompson.

Mum, are we there yet? Why should AI care.

Mike Loukides drew similarities between the current status of AI and children’s learning in an article I read this week.

The children I know are always curious to know where they are going, how long will it take, and how they will know when they get there. They ask others for guidance often.

Loukides wrote that if you look carefully at how humans learn, you see surprisingly little unsupervised learning.

If unsupervised learning is a prerequisite for general intelligence, but not the substance, what should we be looking for, he asked. It made me wonder: is it also true that general intelligence is a prerequisite for unsupervised learning? And if so, what level of learning must AI achieve before it is capable of recursive self-improvement? What is AI being encouraged to look for as it learns, and what is it learning as it looks?

What is AI looking for and how will it know when it gets there?

Loukides says he can imagine a toddler learning some rudiments of counting and addition on his or her own, but can’t imagine a child developing any sort of higher mathematics without a teacher.

I suggest a different starting point. I think children develop on their own, given a foundation. And if the foundation is accompanied by a purpose — to understand why they should learn to count, and why they should want to — and if they have the inspiration, incentive and assets, they’ll soon go off on their own and outstrip your level of knowledge. That may or may not be with a teacher, depending on what is available, cost, and how far they get compared with what they want to achieve.

It’s hard to learn something from scratch by yourself if you have no boundaries within which to place knowledge and search for more, or no way to know when to stop once you have found it.

You’ve only to start an online course, get stuck, and try to find the solution through a search engine to know how hard it can be to find the answer if you don’t know what you’re looking for. You can’t type in search terms if you don’t know the right words to describe the problem.

I described this recently to a fellow codebar-goer, more experienced than me, and she pointed out a much better approach. Don’t search for the solution or a description of what you’re trying to do; ask the search engine to find others with the same error message.

In effect she said, your search is wrong. Google knows the answer, but can’t tell you what you want to know, if you don’t ask it in the way it expects.
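
A trivial, made-up example of my own (not hers): a beginner’s script fails, and the verbatim error message turns out to be a far better search query than any description of what you were trying to do.

    # A classic beginner slip: adding a number to a string.
    age = input("How old are you? ")        # input() always returns a string
    print("Next year you will be " + age + 1)
    # Running this raises something like:
    #   TypeError: can only concatenate str (not "int") to str
    # Pasting that exact message into a search engine finds everyone else who
    # hit the same wall; searching "why won't my print add one to age" mostly
    # does not. (The fix: print("Next year you will be", int(age) + 1).)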

So what will AI expect from people, and will it care if we don’t know how to interrelate? How does AI best serve humankind, and who defines that, from whose point of view? Will AI serve only those who think most closely in AI-style steps and language? How will it serve those who don’t know how to talk about it, or with it? AI won’t care if we don’t.

If, as Loukides says, we humans are good at learning something and then applying that knowledge in a completely different area, it’s worth thinking about how we are transferring our knowledge to AI today and how it learns from that. Not only what does AI learn in content and context, but what does it learn about learning?

His comparison of a toddler learning from parents — who in effect are ‘tagging’ objects through repetition of words while looking at images in a picture book — made me wonder how we will teach AI the benefit of learning. What incentive will it have to progress?
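
A rough sketch of what that ‘tagging’ means in code, using scikit-learn purely as an illustration of my own (none of this is from Loukides’ article): supervised learning is handed the labels a parent would supply, while unsupervised learning has to find structure without them.

    # Supervised: the 'parent' supplies the labels; the model learns to map
    # inputs to those names. Unsupervised: no labels; the model can only
    # group similar-looking things together, without knowing what they are.
    from sklearn.datasets import load_digits
    from sklearn.linear_model import LogisticRegression
    from sklearn.cluster import KMeans

    digits = load_digits()                    # small images of handwritten digits
    X, y = digits.data, digits.target         # y is the human 'tagging'

    supervised = LogisticRegression(max_iter=5000).fit(X, y)
    print(supervised.predict(X[:5]), y[:5])   # predicts the names we gave it

    unsupervised = KMeans(n_clusters=10, n_init=10).fit(X)
    print(unsupervised.labels_[:5])           # clusters exist, but are nameless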

“the biggest project facing AI isn’t making the learning process faster and more efficient. It’s moving from machines that solve one problem very well (such as playing Go or generating imitation Rembrandts) to machines that are flexible and can solve many unrelated problems well, even problems they’ve never seen before.”

Is the skill to enable “transfer learning” what will matter most?
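
As a minimal sketch of what transfer learning looks like in practice (my own example, assuming PyTorch and torchvision and a hypothetical ten-class task): keep the features a network learned on one problem, and retrain only a small new part of it for another.

    # Transfer learning in miniature: reuse an ImageNet-trained network's
    # features and retrain only the final layer for a new, unrelated task.
    import torch
    from torchvision import models

    model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)

    for param in model.parameters():          # freeze the transferred knowledge
        param.requires_grad = False

    num_new_classes = 10                      # hypothetical new problem
    model.fc = torch.nn.Linear(model.fc.in_features, num_new_classes)

    # Only the new head gets trained, on the new task's (labelled) data:
    optimiser = torch.optim.Adam(model.fc.parameters(), lr=1e-3)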

For AI to become truly useful, we as a global society need to understand better *where* it might best interface with our daily lives, and most importantly *why*. And to consider *who* is teaching AI, and who is being left out of the crowdsourcing of AI’s teaching.

Who is teaching AI what it needs to know?

The natural user interfaces through which people interact with today’s more common virtual assistants (Amazon’s Alexa, Apple’s Siri, Viv, and Microsoft’s Cortana) are not just providing information to the user; through their use, those systems are learning. I wonder what percentage of today’s population is using these assistants, how representative they are, and what our AI assistants are being taught through their use. Tay was a swift lesson learned for Microsoft.

In helping shape what AI learns, and what range of language it will use to develop its reference words and knowledge, society co-shapes what AI’s purpose will be, and what the point of selling it is for AI providers. So will this technology serve everyone?

Are providers counter-balancing what AI is currently learning from crowdsourcing, if the crowd is not representative of society?

So far we can only teach machines to make decisions based on what we already know, and on what we can tell them to decide quickly against pre-known references using lots of data. Will your next image captcha teach AI to separate the sloth from the pain-au-chocolat?

One of the task items for machine processing is better search. Measurable, goal-driven tasks have boundaries, but who sets them? When does a computer know it has found enough to make a decision? If the balance of material about the Holocaust on the web, for example, were written by Holocaust deniers, would AI know who is right? How will AI know what is trusted, and by whose measure?

What will matter most is surely not going to be how to optimise knowledge transfer from human to AI — that is the baseline knowledge of supervised learning — and it won’t even be for AI to know when to use its skill set in one place and when to apply it elsewhere in a different context, so-called transfer learning, as Mike Loukides says. Rather, will AI reach the point where it cares?

  • Will AI ever care what it should know and where to stop or when it knows enough on any given subject?
  • How will it know or care if what it learns is true?
  • If, in the best interests of advancing technology or through inaction, we do not limit its boundaries, what oversight is there of its implications?

Online limits will limit what we can reach in Thinking and Learning

If you look carefully at how humans learn online, I think that rather than seeing surprisingly little unsupervised learning, you see a lot of unsupervised questioning. It is often in the questioning done in private that we discover, and through discovery we learn. Valuable discoveries are often made, whether in science or in maths, and important truths are found where there is a need to challenge the status quo. Imagine if Galileo had given up.

The freedom to think freely and to challenge authority is vital to protect, and one reason why I and others are concerned about the compulsory web monitoring starting on September 5th in all schools in England, and its potential chilling effect. Some are concerned about who might have access to these monitoring results today or in future; if stored, could they be opened up to employers or academic institutions?

If you tell children not to use these search terms and not to be curious about *this* subject, on pain of repercussions, it is censorship. I find the idea bad enough for children, but for us as adults it’s scary.

As Frankie Boyle wrote last November, we need to consider what our internet history is:

“The legislation seems to view it as a list of actions, but it’s not. It’s a document that shows what we’re thinking about.”

Children think and act in ways that they may not as an adult. People also think and act differently in private and in public. It’s concerning that our private online activity will become visible to the State in the IP Bill — whether photographs that captured momentary actions in social media platforms without the possibility to erase them, or trails of transitive thinking via our web history — and third-parties may make covert judgements and conclusions about us, correctly or not, behind the scenes without transparency, oversight or recourse.

Children worry about lack of recourse and repercussions. So do I. Things done in passing can take on a permanence they never had before and were never intended to have. If expert providers of the tech world such as Apple Inc, Facebook Inc, Google Inc, Microsoft Corp, Twitter Inc and Yahoo Inc are calling for change, why is the government not listening? This is more than very concerning: it will have disastrous implications for trust in the State, data use by others, self-censorship, and fear that it will lead to outright censorship of adults online too.

By narrowing our parameters, what will we not discover? Not debate? Not invent? Happy are the clockmakers, and the kids who create. Any restriction on the freedom to access information, to challenge and to question, will restrict children’s learning, or even their wanting to learn. It will limit how we can improve our shared knowledge and our society as a result. The same is true of adults.

So in teaching AI how to learn, I wonder how the limitations humans put on its scope (otherwise how would it learn what the developers want?), combined with showing it ‘our thinking’ through search terms, and the narrowing of that thinking if users self-censor due to surveillance, will shape what AI helps us with in future. Will it be the things that could help the most people, the poorest people, or will it serve people like those who programme the AI and who use the search terms and languages it already understands?

Who is accountable for the scope of what we allow AI to do or not do? Who is accountable for what AI learns about us from our behavioural data, if it is used without our knowledge?

How far does AI have to go?

The leap for AI will be if and when it can determine what it doesn’t know, and sees a need to fill that gap. To do that, AI will need to discover a purpose for its own learning, indeed for its own being, and be able to do so without limitation from the way humans shaped its framework for doing so. How will AI know what it needs to know and why? How will it know that what it knows is right, and which sources to trust? Against what boundaries will AI decide what it should engage with in its learning, who from, and why? Will it care? Why will it care? Will it find meaning in its reason for being? Why am I here?

We assume AI will know better. We need to care, if AI is going to.

How far away are we from a machine capable of recursive self-improvement, asks John Naughton in yesterday’s Guardian, referencing work by Yuval Harari which suggests artificial intelligence and genetic enhancement will usher in a world of inequality and powerful elites. As I was finishing this, I read his article and found myself nodding: discussion of the implications of new technology focuses too much on the technology and too little on society’s role in shaping it.

AI at the moment has a very broad meaning to the general public. Is it living with life-supporting humanoids? Do we consider assistive search tools as AI? There is a fairly general understanding of “What is A.I., really?” Some wonder if we are “probably one of the last generations of Homo sapiens,” as we know it.

If the purpose of AI is to improve human lives, who defines improvement and who will that improvement serve? Is there a consensus on the direction AI should and should not take, and how far it should go? What will the global language be to speak AI?

As AI learning progresses, every time AI turns to ask its creators, “Are we there yet?”, how will we know what to say?

image: Stephen Barling flickr.com/photos/cripsyduck (CC BY-NC 2.0)