A reflection for World Children’s Day 2021. In ten years’ time my three children will be in their twenties. What will they and the world around them have become? What will shape them in the years in between?
Today when people talk about AI, we hear fears of consciousness in AI. We picture I, Robot. The reality of any AI that will touch their lives in the next ten years is very different. The definition may be contested, but artificial intelligence in schools already involves automated decision-making at speed and scale, without compassion or conscience, and with outcomes that affect children’s lives for a long time.
The guidance of today—in policy documents, well-intentioned toolkits and guidelines, and oh yes, yet another ‘ethics’ framework—is all fairly same-y in terms of the issues identified.
Bias in training data. Discrimination in outcomes. Inequitable access or treatment. Lack of understandability or transparency of decision-making. Lack of routes for redress. More rarely, thoughts on exclusion, disability and accessible design, and the digital divide. In seeking to fill that gap, the call often concludes with a cry to ensure ‘AI for all’.
Most of this guidance fails to address the key questions in my mind with regard to AI in education.
Who gets to shape a child’s life and the environment they grow up in? The special case of children is often used for special pleading in government tech issues. Despite this, in policy discussions and documents, government fails over and over again to address children as human beings.
Children are still developing: physically, emotionally, in their sense of fairness and justice, of humour, of politics, and of who they are.
AI is shaping children in ways that schools and parents cannot see. And the issues go beyond limited agency and autonomy. Beyond UNCRC Articles 8 and 18, the role of the parent and the lost boundaries between school and home, and Articles 23 and 29 (see the full text at the end).
Published concerns about accessibility in AI are often about the individual and inclusion, in terms of design that enables participation. But once children can participate, where is the independent measurement and evaluation of the impact on their educational progress, or their physical and mental development? What is the effect of these tools?
From the overhyped, like Edgenuity, to the oversold, like ClassCharts (which didn’t actually have any AI in it, but still won Bett Show Awards), frameworks often mention, but still have no meaningful solutions for, the products that don’t work and fail.
But what about the harms from products that work as intended? These can fail human dignity or create a chilling effect, like exam proctoring tech. Or the safety tech that infers things and causes staff to intervene even if the child was only chatting about ‘a terraced house.’ Punitive systems that keep profiles of behaviour points long after a teacher would have let it go. What about those shaping the developing child’s emotions and state of mind by design, while claiming to operate within data protection law? Those that measure and track mental health, or make predictions for interventions by school staff?
Brain headbands that transfer neurosignals aren’t biometric data in data protection terms if they are not used to, or able to, uniquely identify a child.
“Wellbeing” apps are not being regulated as medical devices, yet they are designed to profile and influence mental health and mood, and schools adopt them at scale.
If AI is being used to deliver a child’s education, but only in the English language, what risk of tech-colonialism does this create in evangelising children in non-native English-speaking families through AI, not only in access to teaching, but in reshaping culture and identity?
At the institutional level, concerns are only addressed after the fact. But how should they be assessed as part of procurement when many AI products are marketed as never stopping “learning about your child”? Tech needs full life-cycle oversight, but what companies claim their products do is often only assessed to pass accreditation at a single point in time.
But the biggest gap in governance is not going to be fixed by audits or accreditation of algorithmic fairness. It is the failure to recognise the redistribution of not only agency but authority: from individuals to companies (the teacher doesn’t decide what you do next, the computer does). From public interest institutions to companies (company X determines the curriculum content, not the school). And from State to companies (accountability for outcomes has fallen through the gap in outsourcing activity to the AI company). We are automating authority, and with it shirking responsibility and liability for the machine’s flaws, and accepting it is the only way, thanks to our automation bias. Accountability must be human, but whose?
Around the world, the rush to regulate AI, or related tech in Online Harms, Digital Services, or biometrics law, is going to embed, not redistribute, power through regulatory capitalism.
We have regulatory capture, including on the government boards and bodies that shape the agenda; unrealistic expectations of competition shaping the market; and we’re ignoring the transnational colonisation of whole schools, or even regions and countries, shaping the delivery of education at scale.
We’re not regulating the questions: who does the AI serve, and how do we deal with conflicts of interest between the child’s rights, the family, school staff, the institution or State, and the company’s wants? Where do we draw the line between public interests and private interests, and who decides what are the best interests of each child?
We’re not managing the implications of the datafied child being mined and analysed in order to train companies’ AI. Is it ethical or desirable to use children’s behaviour as a source of business intelligence, free labour performed in school systems for companies to profit from, without any choice (see UNCRC Art. 32)?
As parents, we are barely aware whether a company will decide how a child is tested in a certain way, asked certain questions about their mental health, or given nudges to ‘improve’ their performance or mood. It’s not a question of ‘is it in the best interests of a child’, but rather: who designs it, and can schools assess its compatibility with a child’s fundamental rights and freedoms to develop free from interference?
It’s not about protection of ‘the data’, although data protection should be about the protection of the person, not only enabling data flows for business.
It’s about protection from strangers engineering a child’s development in closed systems.
It is about child protection from an unknown and unlimited number of persons interfering with who they will become.
Today’s laws and debate are too often about regulating someone else’s opinion; how it should be done, not if it should be done at all.
Who do I ask my top two questions on AI in education?
(a) who gets and grants permission to shape my developing child, and
(b) what happens to the duty of care in loco parentis as schools outsource authority to an algorithm?
UNCRC
Article 8
1. States Parties undertake to respect the right of the child to preserve his or her identity, including nationality, name and family relations as recognised by law without unlawful interference.
Article 18
1. States Parties shall use their best efforts to ensure recognition of the principle that both parents have common responsibilities for the upbringing and development of the child. Parents or, as the case may be, legal guardians, have the primary responsibility for the upbringing and development of the child. The best interests of the child will be their basic concern.
Article 29
1. States Parties agree that the education of the child shall be directed to:
(a) The development of the child’s personality, talents and mental and physical abilities to their fullest potential;
(c) The development of respect for the child’s parents, his or her own cultural identity, language and values, for the national values of the country in which the child is living, the country from which he or she may originate, and for civilizations different from his or her own;
Article 30
In those States in which ethnic, religious or linguistic minorities or persons of indigenous origin exist, a child belonging to such a minority or who is indigenous shall not be denied the right, in community with other members of his or her group, to enjoy his or her own culture
A slightly longer version of a talk I gave at the launch event of the OMDDAC Data-Driven Responses to COVID-19: Lessons Learned report on October 13, 2021. I was asked to respond to the findings presented on Young People, Covid-19 and Data-Driven Decision-Making by Dr Claire Bessant at Northumbria Law School.
[ ] indicates text I omitted for reasons of time, on the day.
Their final report is now available to download from the website.
You can also watch the full event here via YouTube. The part on young people presented by Claire and that I follow, is at the start.
—————————————————–
I’m really pleased to congratulate Claire and her colleagues today at OMDDAC and hope that policy makers will recognise the value of this work and it will influence change.
I will reiterate three things they found or included in their work.
1. Young people want to be heard.
2. Young people’s views on data and trust include concerns about conflated data purposes.
3. The concept of being “data driven under COVID conditions”.
This OMDDAC work, together with Investing in Children, is very timely as a rapid response, but I think it is also important to set it in context, and to recognise that some of its significance is that it reflects a continuum of similar findings over time, largely unaffected by the pandemic.
Claire’s work comprehensively backs up the consistent findings of over ten years of public engagement, including with young people.
The 2010 study with young people conducted by the Royal Academy of Engineering, supported by three Research Councils and Wellcome, discussed attitudes towards the use of medical records and concluded that these questions and concerns must be addressed by policy makers, regulators, developers and engineers before progressing with the design and implementation of record-keeping systems and the linking of any databases.
The House of Commons Science and Technology Committee’s Big Data Dilemma 2015-16 report (p9) concluded “data (some collected many years before and no longer with a clear consent trail) […] is unsatisfactory left unaddressed by Government and without a clear public-policy position.”
[This year our own work with young people, published in our report on data metaphors “the words we use in data policy”, found that young people want institutions to stop treating data about them as a commodity and start respecting data as extracts from the stories of their lives.]
The UK government and policy makers are simply ignoring the inconvenient truth that legislation and governance frameworks that exist today, such as the UN General Comment No. 25 on Children in the Digital Environment, demand that people know what is done with data about them, and that this must be applied to address children’s right to be heard and to enable them to exercise their data rights.
The public perceptions study within this new OMDDAC work, shows that it’s not only the views of children and young people that are being ignored, but adults too.
And perhaps it is worth reflecting here, that often people don’t tend to think about all this in terms of data rights and data protection, but rather human rights and protections for the human being from the use of data that gives other people power over our lives.
This project found young people’s trust in the use of their confidential personal data was affected by understanding who would use the data and why, and how people would be protected from prejudice and discrimination.
We could build easy-reporting mechanisms at public points of contact with state institutions (in education, in social care, in welfare and policing) to produce on-demand reports of the information held about a person, and to enable corrections. It would benefit institutions by giving them more accurate data, and make them more trustworthy if people can see ‘here’s what you hold on me and here’s what you did with it’.
This research shows that current policy is not what young people want. People want the ability to choose between granular levels of control over the data being shared. They value autonomy and control: knowing who will have access, how records will be kept accurate, how people will be kept informed of changes, who will maintain and regulate the database, data security, anonymisation, and having their views listened to.
Young people also fear the power of data to speak for them: that data about them are taken at face value, and listened to by those in authority more than the child in their own voice.
What do these findings mean for public policy? Without respect for what people want; for the fundamental human rights and freedoms for all, there is no social license for data policies.
Yet the government stubbornly refuses to learn and seems to believe it’s all a communications issue, a bit like the ‘Yes Minister’ English approach to foreigners when they don’t understand: just shout louder.
No, this research shows data policy failures are not fixed by, “communicate the benefits”.
Nor is it fixed by changing Data Protection law. As a comment in the report says, UK data protection law offers a “how-to” not a “don’t-do”.
Data protection law is designed to be enabling of data flows. But that can mean that where state data processing rightly often avoids using consent as the lawful basis in data protection terms, the data use is not consensual.
[For the sake of time, I didn’t include the thought in the next two paragraphs in the talk, but I think it is important to mention that in our own work we find that this contradiction is not lost on young people. — Against the backdrop of the efforts after the MeToo movement, and much said by Ministers in Education and at the DCMS about the Everyone’s Invited work earlier this year to champion consent in the relationships, sex and health education (RSHE) curriculum, adults in authority keep saying consent matters, but don’t demonstrate it, and when it comes to data, use people’s data in ways they do not want.
The report picks up that young people, and disproportionately those communities that experience harm from authorities, mistrust data sharing with the police. This is now set against the backdrop of not only the recent Wayne Couzens case, but a series of very public misuses of police power, including COVID powers.]
The data powers used “under COVID conditions” are now being used as cover for an attack on data protections in the future. The DCMS consultation on changing UK data protection law, open until November 19th, suggests that similarly reduced protections on data distribution in the emergency should become the norm. While DP law is written expressly to permit things that are out of the ordinary in extraordinary circumstances, those permissions are limited in time. The government is proposing that some things that were found convenient to do under COVID now become commonplace.
The proposals include things such as removing Article 22 from the UK GDPR, with its protections for people in processes involving automated decision-making.
Young people were the ones who felt first-hand the risks and harms of those processes in the summer of 2020, and the “mutant algorithm” is something this Observatory Report work also addressed in its research. Again, it found young people felt left out of decisions about them, despite being the group that would feel the negative effects.
[Data protection law may be enabling increased lawful data distribution across the public sector, but it is not offering people, including young people, the protections they expect of their human right to privacy. We are on a dangerous trajectory for public interest research and for society, if the “new direction” this government goes in, for data and digital policy and practice, goes against prevailing public attitudes and undermines fundamental human rights and freedoms.]
The risks and benefits of the power obtained from the use of admin data are felt disproportionately across different communities, including children, who are not a one-size-fits-all, homogeneous group.
[While views across groups will differ — and we must be careful to understand any popular context at any point in time on a single issue and unconscious bias in and between groups — policy must recognise where there are consistent findings across this research with that which has gone before it. There are red lines about data re-uses, especially on conflated purposes using the same data once collected by different people, like commercial re-use or sharing (health) data with police.]
The golden thread that runs through time and across different sectors’ data use is the legal frameworks, underpinned by democratic mandates, that uphold our human rights.
I hope the powers that be in the DCMS consultation, and wider policy makers in data and digital policy, take this work seriously and not only listen, but act on its recommendations.
2024 updates: opening paragraph edited to add current links.
A chapter written by Rachel Allsopp and Claire Bessant discussing OMDDAC’s research with children will also be published on 21st May 2024 in Governance, democracy and ethics in crisis-decision-making: The pandemic and beyond (Manchester University Press) as part of its Pandemic and Beyond series https://manchesteruniversitypress.co.uk/9781526180049/ and an article discussing the research in the open-access European Journal of Law and Technology is available here https://www.ejlt.org/index.php/ejlt/article/view/872.
The opening discussion from the launch of the Institute for Ethics in AI in the Schwarzman Centre for the Humanities in Oxford both asked many questions and left many open.
The Director recognised in his opening remarks where he expected their work to differ from the talk of ethics in AI that can become ‘matters of facile mottos hard to distinguish from corporate PR’, like “Don’t be evil.” I would like to have heard him go on to point out the reasons why, because I fear this whole enterprise is founded on just that.
My first question is whether the Institute will ever challenge its own need for existence. It is funded, therefore it is. An acceptance of the technological value and inevitability of AI is after all, built into the name of the Institute.
My second question is on the three drivers they went on to identify, in the same article, “Artificial intelligence… is backed by real-world forces of money, power, and data.”
So let’s follow the money.
The funder of the Schwarzman Centre for the Humanities, the home of the new Institute, is also funding AI ethics work across the Atlantic, at Harvard, Yale and other renowned institutions that you might expect to lead in the publication of influential research. The intention at the MIT Schwarzman College of Computing is that his investment “will reorient MIT to address the opportunities and challenges presented by the rise of artificial intelligence including critical ethical and policy considerations to ensure that the technologies are employed for the common good.” Quite where does that ‘reorientation’ seek to end up?
The panel discussed power.
The idea of ‘citizens representing citizens rather than an elite class representing citizens’, should surely itself be applied to challenge who funds work that shapes public debate. How much influence is democratic for one person to wield?
“In 2007, Mr. Schwarzman was included in TIME’s “100 Most Influential People.” In 2016, he topped Forbes Magazine’s list of the most influential people in finance and in 2018 was ranked in the Top 50 on Forbes’ list of the “World’s Most Powerful People.” [Blackstone]
The panel also talked quite a bit about data.
So I wonder what work the Institute will do in this area and the values that might steer it.
In 2020 Schwarzman’s private equity company, Blackstone, acquired a majority stake in Ancestry, a provider of ‘digital family history services with 3.6 million subscribers in over 30 countries’. DNA. The Chief Financial Officer of Alphabet Inc. and Google Inc. sits on Blackstone’s board. Big data. The biggest. Bloomberg reported in December 2020 that ‘Blackstone’s Next Product May Be Data From Companies It Buys’: “Blackstone, which holds stakes in about 97 companies through its private equity funds, ramped up its data push in 2015.”
It was Nigel Shadbolt who picked up the issues of data and of representation as they relate to putting human values at the centre of design. He suggested that there is growing disquiet that rather than everyday humans’ self-governance, or the agency of individuals, this can mean the values of ‘organised group interests’ assert control. He picked up on the values we most prize as things that matter in value-based computing, and, later on, on transparency of data flows as a form of power that is important to understand. Perhaps the striving for open data as revealing power should also apply to funding, in a more transparent, publicly accessible model?
AI in a democratic culture.
Those whose lives are most influenced by AI are often those the most excluded in discussing its harms, and rarely involved in its shaping or application. Prof Hélène Landemore (Yale University) asked perhaps the most important question in the discussion, given its wide-ranging dance around the central theme of AI and its role or effects in a democratic culture, that included Age Appropriate Design, technical security requirements, surveillance capitalism and fairness. Do we in fact have democracy or agency today at all?
It is after all not technology itself that has any intrinsic ethics but those who wield its power, those who are designing it, and shaping the future through it, those human-accountability-owners who need to uphold ethical standards in how technology controls others’ lives.
The present is already one in which human rights are infringed by machine-made and data-led decisions about us without us, without fairness, without recourse, and without redress. It is a world that includes a few individuals in control of a lot. A world in which Yassen Aslam this week said, “the conditions of work, are being hidden behind the technology.”
In the words of the 2017 International Business Times article, How Billionaire Trump Adviser Evades Ethics Law While Shaping Policies That Make Money For His Wall Street Firm, “Schwarzman has long been a fixture in Republican politics.” “Despite Schwarzman’s formal policy role in the Trump White House, he is not technically on the White House payroll.” Craig Holman of Public Citizen, was reported as saying, “We’ve never seen this type of abuse of the ethics laws”. While politics may have moved on, we are arguably now in a time Schwarzman described as a golden age that arrives, “when you have a mess.”
Will the work alone at the new ethics Institute be enough to prove that its purpose is not for the funder or his friends to use their influence to have their business interests ethics-washed in Oxford blue? Or might what the Institute chooses not to research, say just as much? It is going to have to prove its independence and own ethical position in everything it does, and does not do, indefinitely. The panel covered a wide range of already well-discussed, popular but interesting topics in the field, so we can only wait and see.
I still think, as I did in 2019, that corporate capture is unhealthy for UK public policy. If done at scale, with added global influence, it is not only unhealthy for the future of public policy, but for academia. In this case it has the potential in practice to be at best irrelevant corporate PR, but at worst to be harmful for the direction of travel in the shaping of global attitudes towards a whole field of technology.
The National Data Strategy is not about “the data”. People need to stop thinking of data only as abstract information, or even as personal data when it comes to national policy. Administrative data is the knowledge about the workings of the interactions between the public and the State and about us as people. It is the story of selected life events. To the State it is business intelligence. What resources are used, where, by whom and who costs The Treasury how much? How any government is permitted to govern that, shapes our relationship with the State and the nature of the governance we get of people, of public services. How much power we cede to the State or retain at national, local, and individual levels over our communities and our lives matters. Any change in National Data Strategy is about the rewiring of state power, and we need to understand its terms and conditions very, very carefully.
What government wants
“It’s not to say we don’t see privacy and data protection as not important,” said Phil Earl, Deputy Director at DCMS, in the panel discussion hosted by techUK as part of Birmingham Tech Week, exploring the UK Government’s recently released National Data Strategy.
I sighed so loudly I was glad to be on mute. The first of many big watch-outs for the messaging around the National Data Strategy was already touted in the text, as “the high watermark of data use set during the pandemic.” In response to COVID “a few of the perceived barriers seem to have melted away,” said Earl, who saw this reduced state of data protections as desirable beyond the pandemic. “Can we maintain that level of collaboration and willingness to share data?” he asked.
Data protection laws are at their heart protections for people, not data, and if any government is seeking to reduce those protections for people we should pay attention to messaging very carefully.
This positioning fails to recognise that data protection law is more permissive in exceptional circumstances such as pandemics, with a recognition by default that the tests in law of necessity and proportionality are different from usual, and are time bound to the pandemic.
“What’s the equivalent? How do we empower people to feel that that greater good [felt in the pandemic] outweighs their legitimate concerns about data being shared,” he said. “The whole trust thing is something that must be constantly maintained,” but you may hear between the lines, ‘preferably on our [government] terms.’
The idea that the public is ignorant about data is often repeated and still wrong. The same old mantras resurfaced: if people can make more informed decisions and understand “the benefits”, then the government can influence their judgments, trusting us to “make the decisions that we want them to make [to agree to data re-use].”
If *this* is the government set course (again), then watch out.
What people want
In fact, when asked, the majority of people, both those willing and those less willing to have data about them reused, generally want the same things: safeguards, opt-in to re-use, restricted distribution, and protections for redress and against misuse strengthened in legislation.
Read Doteveryone’s public attitudes work. Or the Ipsos MORI polls or work by Wellcome. (see below). Or even the care.data summaries.
The red lines in the “Dialogues on Data” report from workshops carried out across different regions of the UK for the 2013 ADRN (about reuse of deidentified linked public admin datasets by qualified researchers in safe settings), remain valid today, in particular with relation to:
Creating large databases containing many variables/data from a large number of public sector sources
Allowing administrative data to be linked with business data
Linking of passively collected administrative data, in particular geo-location data
“All of the above were seen as having potential privacy implications or allowing the possibility of reidentification of individuals within datasets. The other ‘red-line’ for some participants was allowing researchers for private companies to access data, either to deliver a public service or in order to make profit. Trust in private companies’ motivations were low.”
The BT spokesperson on the panel went on to say that their own survey showed 85% of people say their data is important to them, and 75% believe they have too little control.
Mr. Earl was absolutely correct in saying it puts the onus on government to be transparent and show how data will be used. But we hear *nothing* about concrete plans to deliver that. What does that look like? Saying it three times out loud doesn’t make it real.
What government does
Government needs to embrace the fact it can only get data right, if it does the right thing. That includes upholding the law. This includes examining its own purposes and practice.
The Department for Education has been giving away 15 million people’s personal confidential data since 2012 and never told them. They knew this. They chose to ignore it. And on top of that, they didn’t inform people who were in school since then that Mr Gove changed the law. So now over 21 million people’s pupil records are being given away to companies and other third parties, for use in ways we do not expect, and are misused too. In 2015, more secret data sharing began, with the Home Office. And another pilot in 2018 with the DWP. And in 2019, sharing with the police.
Promises on government data policy transparency right now are worth less than zero. What matters now is government actions. Trust will be based on what you do, not what you say. Is the government trustworthy?
After the summary findings published by the ICO of their compulsory audit of the Department for Education, the question now is what will the Department and government do to address the 139 recommendations for improvement, with over 60% classified as urgent or high priority. Is the government intentional about change?
What will government do?
So I had a question for the panel: Is the government serious about its point in the strategy, 7.1.2 “Our data regime should empower individuals and groups to control and understand how their data is used.”
I didn’t get an answer.
I want to know if the government is prepared to build the necessary infrastructure to enable that understanding and control.
Enhance and build the physical infrastructure:
access to personal reports of what data is held and how it is used.
management controls and communications over reuse [opt-in to meet the necessary permissions of legitimate interests or consent as lawful basis for further data processing, conditions for sensitive data processing, or at very least opt-out to respect objections].
secure systems (not just Excel, and WannaCry-resistant)
Enable the necessary transparency tools and create demonstrable accountability through registers of algorithms and data sharing, with oversight functions and routes for redress (a sketch of what such a register might look like follows after this list).
Empower staff with the necessary human skills at all levels in the public sector on the laws around data that do not just consist of a SharePoint page on GDPR — what about privacy law, communications law, equality and discrimination laws, among others?
Empower the public with the controls they want to have their rights respected.
Examine toxic policy that drives bad data collection and re-use.
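To make the register and personal-report points concrete, here is a minimal sketch in Python of what a single entry in a public data-sharing register, and the on-demand plain-language report built from it, could look like. Every field name, class and value below is an assumption made for illustration only; it is not an existing government schema or system.

```python
# Illustrative only: a possible shape for a data-sharing register entry and the
# personal "what you hold on me and what you did with it" report derived from it.
from dataclasses import dataclass
from datetime import date

@dataclass
class SharingRecord:
    dataset: str           # which dataset or extract was shared
    recipient: str         # who received the data
    purpose: str           # stated purpose of the share
    lawful_basis: str      # e.g. public task, consent
    shared_on: date        # when the share happened
    retention_until: date  # when the recipient must delete it

def personal_report(records: list[SharingRecord], person_ref: str) -> str:
    """Render an on-demand, plain-language report of the shares relevant to one person."""
    lines = [f"Data sharing report for {person_ref}:"]
    for r in records:
        lines.append(
            f"- {r.dataset} shared with {r.recipient} on {r.shared_on} "
            f"for '{r.purpose}' (lawful basis: {r.lawful_basis}, retained until {r.retention_until})"
        )
    return "\n".join(lines)

example = [SharingRecord("Attendance extract", "Research body X", "school absence study",
                         "public task", date(2020, 6, 1), date(2025, 6, 1))]
print(personal_report(example, "pupil reference 12345"))
```

The design point of the sketch is that the same record that gives an institution demonstrable accountability is also the source of the report a person could request, so transparency and accuracy reinforce each other rather than being separate exercises.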
Public admin data is all about people
Coming soon, we were told, is the publication of an Integrated Review, in which ‘data and security’ and other joined-up issues will feature.
A risk of this conflation is seeing the national data strategy as another dry review about data as ‘a thing’, or its management.
It should be about people. The people whom our public admin data are about. The people that want access to it. The people making the policy decisions. And its human infrastructure: the controls on power over the purposes of data reuse. Data governance is all about the roles and responsibilities of people, and the laws that oversee them and require human accountability.
These NDS missions, pillars and aims are all about “the data”.
For a national data strategy to be realised, and to thrive in all of our wide-ranging societal ‘data’ interests, it cannot be all about data as a commodity. Or all about government wants. Or seen through the lens only of research. Allow that, and they divide and conquer. It must be founded on building a social contract between government and the public in a digital environment, and on setting the expectations of these multi-purpose relationships at national and local levels.
For a forward-thinking data strategy truly building something in the long-term public interest, it is about understanding ‘wider public need‘. The purpose of ‘the data’ and its strategy is as much about the purpose of the government behind it. Data is personal. But when used to make decisions it is also business intelligence. How does the government make the business of governing work, through data?
Any national data strategy does not sit in a vacuum of other policy and public experience of government either. If Manchester‘s treatment over second-lockdown funding is seen as setting the expectation of how local needs get trumped by national wants, and people’s wishes get steamrollered, then a national approach will not win support. A long list of bad government engagement over recent months is a poor foundation, and you don’t fix that by talking about “the benefits”.
Will government listen?
Edgenuity, the U.S.-based school assessment system using AI for marking, made the news this summer when parents found it could be gamed by simply packing essays with all the right keywords; answers didn’t need to make sense or be accurate. To be received well and get a good grade, students were expected simply to tell the system the words ‘it wanted to hear’.
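The mechanism behind that gaming is easy to illustrate: a marker that rewards keyword matches rather than meaning will rate a string of disconnected rubric terms above a coherent answer that happens to miss one of them. The sketch below is a minimal illustration of that failure mode only; the rubric, scoring and code are invented for the example and are not Edgenuity’s actual method.

```python
# Illustrative only: a naive keyword-matching scorer of the kind described above.
RUBRIC_KEYWORDS = {"photosynthesis", "chlorophyll", "sunlight", "glucose", "carbon dioxide"}

def naive_score(essay: str) -> float:
    """Score an essay by the fraction of rubric keywords it mentions, ignoring meaning."""
    text = essay.lower()
    hits = sum(1 for keyword in RUBRIC_KEYWORDS if keyword in text)
    return hits / len(RUBRIC_KEYWORDS)

coherent = "Plants use sunlight and chlorophyll to turn carbon dioxide into glucose."
stuffed = "photosynthesis chlorophyll sunlight glucose carbon dioxide"  # not even a sentence

print(naive_score(coherent))  # 0.8 — misses only the word 'photosynthesis'
print(naive_score(stuffed))   # 1.0 — keyword stuffing beats the coherent answer
```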
Will the government actually look at responses to the National Data Strategy and care about getting it right? Not just care about getting what they want? Or about what commercial actors will do with it?
Government wanted to change the law on education admin data in 2012, did so, and got it wrong. Education data alone is a sin bin of bad habits and a complete lack of public and professional engagement, before even starting to address data quality and accuracy, and backwards-looking policy built on bad historic data.
Government wanted data from one Department to be collected for the purposes of another and got it wrong. People boycotted the collection until it was killed off.
If the current path is any indicator, this government is little interested in local power, or people, and certainly not in our human rights. They are interested in centralised power. We should be very cautious about giving that all away to the State on its own terms.
Doteveryone (2018-20) https://doteveryone.org.uk/project/peoplepowertech/ Their public attitudes report found that although people’s digital understanding has grown, that’s not helping them to shape their online experiences in line with their own wishes.
“Whatever the social issue we want to grasp – the answer should always begin with family.”
Not my words, but David Cameron’s. Just five years ago, Conservative policy was all about “putting families at the centre of domestic policy-making.”
Debate on the Online Harms White Paper, thanks in part to media framing of its own departmental making, is almost all about children. But I struggle with the debate that leaves out our role as parents almost entirely, other than as bereft or helpless victims ourselves.
I am conscious, wearing my other hat at defenddigitalme, that not all families are the same, and not all children have families. Yet it seems counter to conservative values, for a party that traditionally places the family at the centre of policy, to leave parents out or absolve them of responsibility for their children’s actions and care online.
Parental responsibility cannot be outsourced to tech companies, nor can we simply accept that it’s too hard to police our children’s phones. If we as parents are concerned about harms, it is our responsibility to enable access to what is not harmful, and to be aware of, and educate ourselves and our children about, what is. We are aware of what they read in books. I cast an eye over what they borrow or buy. I play a supervisory role.
Brutal as it may be, the Internet is not responsible for suicide. It’s just not that simple. We cannot bring children back from the dead. We certainly can as society and policy makers, try and create the conditions that harms are not normalised, and do not become more common. And seek to reduce risk. But few would suggest social media is a single source of children’s mental health issues.
What policy makers are trying to regulate is in essence, not a single source of online harms but 2.1 billion users’ online behaviours.
It follows that to see social media as a single source of attributable fault per se, is equally misplaced. A one-size-fits-all solution is going to be flawed, but everyone seems to have accepted its inevitability.
So how will we make the least bad law?
If we are to have sound law that can be applied around what is lawful, we must reduce the substance of debate by removing what is already unlawful and has appropriate remedy and enforcement.
Debate must also try to be free from emotive content and language.
I strongly suspect the language around ‘our way of life’ and ‘values’ in the White Paper comes from the Home Office. So while it sounds fair and just, we must remember reality in the background of TOEIC, of Windrush, of children removed from school because their national records are being misused beyond educational purposes. The Home Office is no friend of child rights, and does not foster the societal values that break down discrimination and harm. It instead creates harms of its own making, and division by design.
I’m going to quote Graham Smith, for I cannot word it better.
“Harms to society, feature heavily in the White Paper, for example: content or activity that:
“threatens our way of life in the UK, either by undermining national security, or by reducing trust and undermining our shared rights, responsibilities and opportunities to foster integration.”
Similarly:
“undermine our democratic values and debate”;
“encouraging us to make decisions that could damage our health, undermining our respect and tolerance for each other and confusing our understanding of what is happening in the wider world.”
This kind of prose may befit the soapbox or an election manifesto, but has no place in or near legislation.”
My key concern in this area is that from a feeling that ‘it is all awful’ stems a sense that ‘any regulation will be better than now’, which comes with a real risk of entrenching current practices that would not be better than now, and in fact need fixing.
More monitoring
The first is today’s general monitoring of school children’s Internet content for risk and harms, which creates unintended consequences and very real harms of its own — at the moment, without oversight.
“This is the practicality of monitoring the internet. When the duty of care required by the White Paper becomes law, companies and regulators will have to do a lot more of it. ” [April 30, HOL]
The Brennan Centre yesterday published its research on the spend by US schools purchasing social media monitoring software from 2013-18, and highlighted some of the issues:
“Aside from anecdotes promoted by the companies that sell this software, there is no proof that these surveillance tools work [compared with other practices]. But there are plenty of risks. In any context, social media is ripe for misinterpretation and misuse.” [Brennan Centre for Justice, April 30, 2019]
That monitoring software focuses on two things —
a) seeing children through the lens of terrorism and extremism, and b) harms caused by them to others, or as victims of harms by others, or self-harm.
It is the near same list of ‘harms’ topics that the White Paper covers. Co-driven by the same department interested in it in schools — the Home Office.
These concerns are set in the context of the direction of travel of law and policy making, and its own loosening of accountability and process.
It was preceded by a House of Commons discussion on Social Media and Health, led by the former Minister for Digital, Culture, Media and Sport, who seems to feel more at home in that sphere than in health.
His unilateral award of funds to the Samaritans for work with Google and Facebook on a duty of care, while the very same is still under public consultation, is surprising to say the least.
But it was his response to this question which points to the slippery slope such regulations may lead us down. Freedom of speech champions should be most concerned not even by what is potentially in any legislation ahead, but by the direction of travel and the debate around it.
“Will he look at whether tech giants such as Amazon can be brought into the remit of the Online Harms White Paper? ”
He replied, that “Amazon sells physical goods for the most part and surely has a duty of care to those who buy them, in the same way that a shop has a responsibility for what it sells. My hon. Friend makes an important point, which I will follow up.”
Debate so far has demonstrated broad gaps between what is wanted, in knowledge, and what is possible. If behaviours are to be stopped because they are undesirable rather than unlawful, we open up a whole can of worms if not done with the greatest attention to detail.
Lord Stevenson and Lord McNally both suggested that pre-legislative scrutiny of the Bill, and more discussion would be positive. Let’s hope it happens.
Here’s my personal first reflections on the Online Harms White Paper discussion so far.
Six suggestions:
Suggestion one:
The Law Commission Review, mentioned in the House of Lords debate, may provide what I had been thinking of crowdsourcing, and now may not need to: a list of laws that the Online Harms White Paper-related discussion reaches into, so that we can compare what is needed in debate versus what is being sucked in. We should aim to curtail emotive discussion of broad risk and threat that people experience online. This would enable the themes which are already covered in law to be set aside, and the focus kept on the gaps. It would make for much tighter and more effective legislation. For example, the Crown Prosecution Service offers Guidelines on prosecuting cases involving communications sent via social media, but a wider list of law is needed.
Suggestion two: After (1) defining what legislation is lacking, definitions must be very clear, narrow, and consistent across other legislation. Not for the regulator to determine ad-hoc and alone.
Suggestion three: If children’s rights are to be so central in discussion of this paper, then their wider rights, including privacy and participation, access to information and freedom of speech, must be included in the debate. This should include academic, research-based evidence of children’s experience online when making the regulations.
Suggestion four: Internet surveillance software in schools should be publicly scrutinised. A review should establish the efficacy, boundaries and oversight of policy and practice as regards Internet monitoring for harms, and such monitoring should not be embedded further without it. Boundaries should be put into legislation for clarity and consistency.
Suggestion five: Terrorist activity or child sexual exploitation and abuse (CSEA) online are already unlawful and should not need additional Home Office powers. Great caution must be exercised here.
Suggestion six: Legislation could and should encapsulate accountability and oversight for micro-targeting and algorithmic abuse.
More detail behind my thinking, follows below, after the break. [Structure rearranged on May 14, 2019]
“A new, a vast, and a powerful language is developed for the future use of analysis, in which to wield its truths so that these may become of more speedy and accurate practical application for the purposes of mankind than the means hitherto in our possession have rendered possible.” [on Ada Lovelace, The First tech Visionary, New Yorker, 2013]
What would Ada Lovelace have argued for in today’s AI debates? I think she may have used her voice not only to call for the good use of data analysis, but for her second strength: the power of her imagination.
James Ball recently wrote in The European [1]:
“It is becoming increasingly clear that the modern political war isn’t one against poverty, or against crime, or drugs, or even the tech giants – our modern political era is dominated by a war against reality.”
My overriding take away from three days spent at the Conservative Party Conference this week, was similar. It reaffirmed the title of a school debate I lost at age 15, ‘We only believe what we want to believe.’
James writes that it is, “easy to deny something that’s a few years in the future“, and that Conservatives, “especially pro-Brexit Conservatives – are sticking to that tried-and-tested formula: denying the facts, telling a story of the world as you’d like it to be, and waiting for the votes and applause to roll in.”
These positions are not confined to one party’s politics, or speeches of future hopes, but define perception of current reality.
I spent a lot of time listening to MPs. To Ministers, to Councillors, and to party members. At fringe events, in coffee queues, on the exhibition floor. I had conversations pressed against corridor walls as small press-illuminated swarms of people passed by with Queen Johnson or Rees-Mogg at their centre.
In one panel I heard a primary school teacher deny that child poverty really exists, or affects learning in the classroom.
In another, in passing, a digital Minister suggested that Pupil Referral Units (PRU) are where most of society’s ills start, but as a Birmingham head wrote this week, “They’ll blame the housing crisis on PRUs soon!” and “for the record, there aren’t gang recruiters outside our gates.”
This is no tirade on failings of public policymakers however. While it is easy to suspect malicious intent when you are at, or feel, the sharp end of policies which do harm, success is subjective.
It is clear that an overwhelming sense of self-belief exists in those responsible, in the intent of any given policy to do good.
Where policies include technology, this is underpinned by a self re-affirming belief in its power. Power waiting to be harnessed by government and the public sector. Even more appealing where it is sold as a cost-saving tool in cash strapped councils. Many that have cut away human staff are now trying to use machine power to make decisions. Some of the unintended consequences of taking humans out of the process, are catastrophic for human rights.
The disconnect between perception of risk, the reality of risk, and real harm, whether perceived or felt from these applied policies in real-life, is not so much, ‘easy to deny something that’s a few years in the future‘ as Ball writes, but a denial of the reality now.
Concerningly, there is a lack of imagination of what real harms look like. There is no discussion of where these predictive policies sometimes have no positive effect, or even a negative one, and make things worse.
I’m deeply concerned that there is an unwillingness to recognise any failures in current data processing in the public sector, particularly at scale, and where it regards the well-known poor quality of administrative data. Or to be accountable for its failures.
Harms, existing harms to individuals, are perceived as outliers. Any broad sweep of harms across a policy like Universal Credit seems to be perceived as political criticism, which makes the measurable failures less meaningful, less real, and less necessary to change.
There is a worrying growing trend of finger-pointing exclusively at others’ tech failures instead. In particular, social media companies.
Imagination and mistaken ideas are reinforced where the idea is plausible, and shared. An oft-heard and self-affirming belief was repeated in many fora between policymakers, media and NGOs as regards children’s online safety: “There is no regulation online”. In fact, much that applies offline applies online. The Crown Prosecution Service Social Media Guidelines are a good place to start. [2] But no one discusses where children’s lives may be put at risk, or made less safe, through the use of state information about them.
Policymakers want data to give us certainty. But many uses of big data, and new tools appear to do little more than quantify moral fears, and yet still guide real-life interventions in real-lives.
Child abuse prediction, and school exclusion interventions should not be test-beds for technology the public cannot scrutinise or understand.
In one trial attempting to predict exclusion, a recent UK research project (2013-16) linked the school records of 800 children in 40 London schools with Metropolitan Police arrest records of all the participants. It found the interventions created no benefit, and may have caused harm. [3]
“Anecdotal evidence from the EiE-L core workers indicated that in some instances schools informed students that they were enrolled on the intervention because they were the “worst kids”.”
“Keeping students in education, by providing them with an inclusive school environment, which would facilitate school bonds in the context of supportive student–teacher relationships, should be seen as a key goal for educators and policy makers in this area,” researchers suggested.
But policy makers seem intent to use systems that tick boxes, and create triggers to single people out, with quantifiable impact.
Some of these systems are known to be poor, or harmful.
When it comes to predicting and preventing child abuse, there is concern about the harms in US programmes ahead of us, such as in Pittsburgh, and in Chicago, which has scrapped its programme.
The Illinois Department of Children and Family Services ended a high-profile program that used computer data mining to identify children at risk for serious injury or death after the agency’s top official called the technology unreliable, and children still died.
“We are not doing the predictive analytics because it didn’t seem to be predicting much,” DCFS Director Beverly “B.J.” Walker told the Tribune.
Many professionals in the UK share these concerns. How long will they be ignored and children be guinea pigs without transparent error rates, or recognition of the potential harmful effects?
Why on earth not, at least for these high-risk projects?
How long should children be the test subjects of machine learning tools at scale, without transparent error rates, audit, or scrutiny of their systems and understanding of unintended consequences?
Is harm to any child a price you’re willing to pay to keep using these systems to perhaps identify others, while we don’t know?
Is there an acceptable positive versus negative outcome rate?
The evidence so far of AI in child abuse prediction is not clearly showing that more children are helped than harmed.
Surely it’s time to stop just thinking, and demand action on this.
It doesn’t take much imagination to see the harms. Safe technology, and safe use of data, does not prevent imagination or innovation employed for good.
Where you are willing to sacrifice certainty of human safety for the machine decision, I want someone to be accountable for why.
References
[1] James Ball, The European, Those waging war against reality are doomed to failure, October 4, 2018.
[2] Thanks to Graham Smith for the link. “Social Media – Guidelines on prosecuting cases involving communications sent via social media. The Crown Prosecution Service (CPS) , August 2018.”
[3] Obsuth, I., Sutherland, A., Cope, A. et al. J Youth Adolescence (2017) 46: 538. https://doi.org/10.1007/s10964-016-0468-4 London Education and Inclusion Project (LEIP): Results from a Cluster-Randomized Controlled Trial of an Intervention to Reduce School Exclusion and Antisocial Behavior (March 2016)
Five years on, other people’s use of the language of data ethics puts social science at risk. Event after event, we are witnessing the gradual dissolution of the value and meaning of ‘ethics’, into little more than a buzzword.
Companies and organisations are using the language of ‘ethical’ behaviour blended with ‘corporate responsibility’ modelled after their own values, as a way to present competitive advantage.
Ethics is becoming shorthand for, ‘we’re the good guys’. It is being subverted by personal data users’ self-interest. Not to address concerns over the effects of data processing on individuals or communities, but to justify doing it anyway.
An ethics race
There’s certainly a race on for who gets to define what data ethics will mean. We have at least three new UK institutes competing for a voice in the space. Digital Catapult has formed an AI ethics committee. Data charities abound. Even Google has developed an ethical AI strategy of its own, in the wake of its Project Maven controversy.
Lessons learned in public data policy should be clear by now. There should be no surprises how administrative data about us are used by others. We should expect fairness. Yet these basics still seem hard for some to accept.
The NHS Royal Free Hospital in 2015 was rightly criticised – because they tried “to commercialise personal confidentiality without personal consent,” as reported in Wired recently.
“The shortcomings we found were avoidable,” wrote Elizabeth Denham in 2017 when the ICO found six ways the Google DeepMind — Royal Free deal did not comply with the Data Protection Act. The price of innovation, she said, didn’t need to be the erosion of fundamental privacy rights underpinned by the law.
If the Centre for Data Ethics and Innovation is put on a statutory footing where does that leave the ICO, when their views differ?
It’s why the idea of DeepMind funding work in Ethics and Society seems incongruous to me. I wait to be proven wrong. In their own words, “technologists must take responsibility for the ethical and social impact of their work“. Breaking the law, however, is conspicuous by its absence, and the Centre must not be used by companies to generate pseudo-lawful or ethical acceptability.
Do we need new digital ethics?
Admittedly, not all laws are good laws. But if recognising and acting under the authority of the rule-of-law is now an optional extra, it will undermine the ICO, sink public trust, and destroy any hope of achieving the research ambitions of UK social science.
I am not convinced there is any such thing as digital ethics. The claimed gap in an ability to get things right in this complex area, is too often after people simply get caught doing something wrong. Technologists abdicate accountability saying “we’re just developers,” and sociologists say, “we’re not tech people.”
These shrugs of the shoulders by third-parties, should not be rewarded with more data access, or new contracts. Get it wrong, get out of our data.
This lack of acceptance of responsibility creates a sense of helplessness. We can’t make it work, so let’s make the technology do more. But even the most transparent algorithms will never be accountable. People can be accountable, and it must be possible to hold leaders to account for the outcomes of their decisions.
But it shouldn’t be surprising no one wants to be held to account. The consequences of some of these data uses are catastrophic.
Accountability is the number one problem to be solved right now. It includes openness about data errors, uses, outcomes, and policy. Are commercial companies with public sector contracts checking that data are accurate, and corrected with the people the data are about, before applying them in predictive tools?
Unethical practice
As Tim Harford in the FT once asked about Big Data uses in general: “Who cares about causation or sampling bias, though, when there is money to be made?”
Problem area number two, whether researchers are working towards a profit model or chasing grant funding, is this:
How can data users make unbiased decisions about whether they should use the data? The same bodies that decide on data access also oversee its governance. Conflict of self-interest is built in by default, and the allure of new data territory is tempting.
But perhaps the UK’s key public data ethics problem is that policy is currently too often about the system goal, not about improving the experience of the people using the systems. Not using technology as a tool, as if people mattered. Harmful policy can generate harmful data.
Secondary uses of data are intrinsically dependent on the ethics of the data’s operational purpose at collection. Damage-by-design is evident right now across a range of UK commercial and administrative systems. Metrics of policy success and associated data may be just wrong.
Home Office reviews wrongly identified students as potentially fraudulent, yet their TOEIC visa cancellations have irrevocably damaged lives without redress.
Some of the most ethical research aims try to reveal these problems. But we need to also recognise not all research would be welcomed by the people the research is about, and few researchers want to talk about it. Among hundreds of already-approved university research ethics board applications I’ve read, some were desperately lacking. An organisation is no more ethical than the people who make decisions in its name. People disagree on what is morally right. People can game data input and outcomes and fail reproducibility. Markets and monopolies of power bias aims. Trying to support the next cohort of PhDs and impact for the REF, shapes priorities and values.
It is still rare to find informed discussion among the brightest and best of our leading data institutions, about the extensive everyday real world secondary data use across public authorities, including where that use may be unlawful and unethical, like buying from data brokers. Research users are pushing those boundaries for more and more without public debate. Who says what’s too far?
The only way is ethics? Where next?
The latest academic-commercial mash-ups on why we need new data ethics in a new regulatory landscape, where the established is seen as past it, are a dangerous catch-all ‘get out of jail free card’.
Ethical barriers are out of step with some of today’s data politics. The law is being sidestepped and regulation diminished by lack of enforcement; gratuitous data grabs from the Internet of Things go unchallenged, and social media data are seen as a free-for-all. Data access barriers are unwanted. What is left to prevent harm?
I’m certain that we first need to take a step back if we are to move forward. Ethical values are founded on human rights that existed before data protection law: fundamental human decency; rights to privacy and to freedom from interference; common law confidentiality and tort; and professional codes of conduct on conflicts of interest and confidentiality.
Data protection law emphasises data use. But too often its first principles of necessity and proportionality are ignored. Ethical practice would ask more often, should we collect the data at all?
Let’s not pretend secondary use of data is unproblematic, while uses are decided in secret. Calls for a new infrastructure actually seek workarounds of regulation. And human rights are dismissed.
Building a social license between data subjects and data users is unavoidable if use of data about people hopes to be ethical.
The lasting solutions are underpinned by law, and ethics. Accountability for risk and harm. Put the person first in all things.
We need more than hopes and dreams and talk of ethics.
We need realism if we are to get a future UK data strategy that enables human flourishing, with public support.
Notes of desperation or exasperation are increasingly evident in discourse on data policy, and start to sound little better than ‘we want more data at all costs’. If so, the true costs would be lasting.
Perhaps then it is unsurprising that there are calls for a new infrastructure to make it happen, in the form of Data Trusts. Some thoughts on that follow too.
Part 1. Ethically problematic
Ethics is dissolving into little more than a buzzword. Can we find solutions underpinned by law, and ethics, and put the person first?
Peter Riddell, the Commissioner for Public Appointments, has completed his investigation into the recent appointments to the Board of the Office for Students and published his report.
From the “Number 10 Googlers”, to NUS affiliation (an interest in student union representation) being seen as undesirable, to “undermining the policy goals” the SpAds supported, the whole report is worth a read.
Perception of the process
The concern the Commissioner raises over the harm done to the public’s perception of the public appointments process means more needs to be done to fix these problems, both before and after appointments.
This process reinforces what people think already. Jobs for the [white Oxford] boys, and yes-men. And so what, why should I get involved anyway, and what can we hope to change?
Possibilities for improvement
What should the Department for Education (DfE) now offer and what should be required after the appointments process, for the OfS and other bodies, boards and groups et al?
Every board at the Department for Education, its name, aim, and members — internal and external — should be published.
Every board at the Department for Education should be required to publish its Terms of Appointment, and Terms of Reference.
Every board at the Department for Education should be required to publish agendas before meetings and meaningful meeting minutes promptly.
Why? Because there are all sorts of boards around and their transparency is frankly non-existent. I know because I sit on one. Foolishly, I did not make it a requirement to publish minutes before I agreed to join. But in a year it has only met twice, so you’ve not missed much. Who else sits where, on what policy, and why?
In another I used to sit on, I got increasingly frustrated that the minutes did not reflect the substance of discussion. This does the public a disservice twice over. The purpose of the board looks insipid, and the evidence for what challenge it is intended to offer, its very reason for being, is washed away. Show the public what’s hard, that there’s debate, that risks are analysed and balanced, and then decisions taken. Be open to scrutiny.
The public has a right to know
When scrutiny really matters, it is wrong — just as the Commissioner report reads — for any Department or body to try to hide the truth.
The purpose of transparency must be to hold to account and ensure checks-and-balances are upheld in a democratic system.
The DfE withdrew from a legal hearing scheduled at the First Tier Information Rights Tribunal last year a couple of weeks beforehand, and finally accepted an ICO decision notice in my favour. I had gone through a year of the Freedom-of-Information appeal process to get hold of the meeting minutes of the Department for Education Star Chamber Scrutiny Board, from November 2015.
It was the meeting in which I had been told members approved the collection of nationality and country of birth in the school census.
“The Star Chamber Scrutiny Board”. Not out of Harry Potter and the Ministry of Magic but appointed by the DfE.
It’s a board that mentions actively seeking members of certain teaching unions but omits others. It publishes no meeting minutes. Its terms of reference are 38 words long, and it was not told the whole truth before one of the most important and widely criticised decisions it ever made, one affecting the lives of millions of children across England and causing harm and division in the classroom.
Its annual report doesn’t mention the controversy at all.
After sixteen months, the DfE finally admitted it had kept the Star Chamber Scrutiny Board in the dark on at least one of the purposes of expanding the school census, and on its pre-existing, active, related data policy of passing pupil data over to the Home Office.
The minutes revealed the Board did not know anything about the data sharing agreement already in place between the DfE and Home Office, or that “(once collected) nationality data” [para 15.2.6] was intended to be shared with the Border Force Casework Removals Team.
Truth that the DfE was forced to reveal, and which only came out two years after the meeting, and a full year after the change in law.
If truth, transparency, and diversity of political opinion on boards are allowed to die, so does democracy
I spoke to Board members in 2016. They were shocked to find out what the MOU purposes were for the new data, and that regular data transfers had already begun without their knowledge, when they were asked to sign off the nationality data collection.
Their lack of raised concerns was then given in written evidence to the House of Lords Secondary Legislation Scrutiny Committee, as though the proposal had been properly reviewed.
How trustworthy is anything that the Star Chamber now “approves”, or our law-making process to expand school data? How trustworthy is the Statutory Instrument scrutiny process?
“there was no need for DfE to discuss with SCSB the sharing of data with Home Office as: a.) none of the data being considered by the SCSB as part of the proposal supporting this SI has been, or will be, shared with any third-party (including other government departments);
[omits it “was planned to be”]
and b.) even if the data was to be shared externally, those decisions are outside the SCSB terms of reference.”
Outside terms of reference that are 38 words long, and which are supposed to scrutinise, but not too closely, or to reject on the basis of what, exactly?
Not only is the public not being told the full truth about how these boards are created, and what their purpose is, it seems board members are not always told the full truth they deserve either.
Who is invited to the meeting, and who is left out? What reports are generated with what recommendations? What facts or opinion cannot be listened to, scrutinised and countered, that could be so damaging as to not even allow people to bring the truth to the table?
If publishing the meeting minutes would be so controversial and damaging to the making of public policy, then who the heck are these unelected people making such significant decisions, and how? Are they qualified, are they independent, and are they accountable?
If, alternatively, what should be ‘independent’ boards, panels or meetings set up to offer scrutiny and challenge are in fact being manipulated to manoeuvre policy and the ready-made political opinions of the day, it is a disaster for public engagement and democracy.
It should end with this ex-OfS hiring process at DfE, today.
The appointments process and the ongoing work by boards must have full transparency, if they are ever to be seen as trustworthy.
What can Matt Hancock learn from his app privacy flaws?
Note: since starting this blog post, the privacy policy has been changed from what was live at 4.30, and the “last changed” date backdated on the version that is now live at 21.00. It shows the challenge I point out in point 5:
It’s hard to trust privacy policy terms and conditions that are not strong and stable.
The Data Protection Bill about to pass through the House of Commons requires the Information Commissioner to prepare and issue codes of practice — which must be approved by the Secretary of State — before they can become statutory and enforced.
One of those new codes (clause 124) is about age-appropriate data protection design. Any provider of an Information Society Service, as outlined in GDPR Article 8, where a child’s data are collected on the legal basis of consent, must have regard to the code if the service is targeted at a child.
For 13-18 year olds, what the changes might mean compared with current practice can be demonstrated by the Minister for Digital, Culture, Media and Sport’s new app, launched today.
This app is designed to be used by children aged 13+. Regardless that the terms say the app requires parental approval for 13-18 year olds [more aligned with US COPPA laws than with GDPR], it still needs to work for the child.
Apps could and should be used to open up what politics is about to children. Younger users are more likely to use an app than read a paper for example. But it must not cost them their freedoms. As others have written, this app has privacy flaws by design.
Children merit specific protection with regard to their personal data, as they may be less aware of the risks, consequences and safeguards concerned and their rights in relation to the processing of personal data. (GDPR Recital 38).
The flaw in the intent to protect by age, in the app, GDPR and UK Bill overall, is that understanding needed for consent is not dependent on age, but on capacity. The age-based model to protect the virtual child, is fundamentally flawed. It’s shortsighted, if well intentioned, but bad-by-design and does little to really protect children’s rights.
Future age verification, for example, if it is to help rather than harm, or become a nuisance like a new cookie law, must be “a narrow form of ‘identity assurance’ – where only one attribute (age) need be defined.” It must also respect Recital 57, and not mean a lazy data grab like GiffGaff’s.
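As an illustration only, and not a description of any existing scheme, a narrow single-attribute assertion might look something like the sketch below: the relying service only ever learns whether the age condition is met, never a name, date of birth or address. Every type and field name here is invented for the example.

```kotlin
// Hypothetical sketch of “narrow identity assurance”: the relying app sees only
// a signed yes/no answer about age, never a name, date of birth or address.
// Issuer, field names and the verification callback are invented for illustration.
data class AgeAssertion(
    val ageOver: Int,          // the single attribute asserted, e.g. 13
    val satisfied: Boolean,    // true or false, and nothing more
    val issuer: String,        // e.g. a bank or mobile operator acting as verifier
    val signature: ByteArray   // issuer’s signature over the fields above
)

// The app’s only question: is this user old enough to use the service?
fun mayUseService(
    assertion: AgeAssertion,
    requiredAge: Int,
    verifySignature: (AgeAssertion) -> Boolean  // checks the issuer’s signature
): Boolean =
    verifySignature(assertion) && assertion.satisfied && assertion.ageOver >= requiredAge
```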
On these 5 things this app fails to be age appropriate:
Age appropriate participation, privacy, and consent design.
Excessive personal data collection and permissions. (Article 25)
The purposes for which each item of data is collected must be specified, explicit, and the data not further processed for something incompatible with them. (Principle 2).
The privacy policy terms and conditions must be easily understood by a child, and be accurate. (Recital 58)
It’s hard to trust privacy policy terms and conditions that are not strong and stable. Among the things that can change are terms on a free trial, which should require active and affirmative action rather than continuing the account forever and compelling future costs. Any future changes should also be age-appropriate in themselves, and in the way that consent is re-managed.
How much profiling does the app enable and what is it used for? The Article 29 WP recommends, “Because children represent a more vulnerable group of society, organisations should, in general, refrain from profiling them for marketing purposes.” What will this mean for any software that profiles children’s metadata to share with third parties, or for commercial apps with in-app purchases, or “bait and switch” style models, such as this app’s privacy policy refers to?
The Council of Europe 2016-21 Strategy on the Rights of the Child recognises “provision for children in the digital environment ICT and digital media have added a new dimension to children’s right to education”, exposing them to new risk, “privacy and data protection issues”, and that “parents and teachers struggle to keep up with technological developments.” [6. Growing up in a Digital World, Para 21]
Data protection by design really matters to get right for children and young people.
This is a commercially produced app and will only be used on a consent and optional basis.
This app shows how hard it can be for people buying tech from developers to understand and to trust what’s legal and appropriate.
Developers facing changing laws and standards need clarity and support to get it right. Parents and teachers will need confidence to buy, and to let children use, safe, quality technology.
Without relevant and trustworthy guidance, it’s nigh on impossible.
For any Minister in charge of the data protection rights of children, we need the technology they approve and put out for use by children, to be age-appropriate, and of the highest standards.
This app could and should be changed to meet them.
For children across the UK, more often than not using an app offers them no choice over whether or not to use it. Many are required by schools, which can make similar demands for their data and infringe their privacy rights for life. How much harder, then, to protect their data security and rights, and to keep track of where their digital footprint goes.
If the Data Protection Bill could have an ICO code of practice for children that goes beyond consent-based data collection, to put clarity, consistency and confidence at the heart of good edTech for children, parents and schools, it would be warmly welcomed.
Here are detailed examples of what the Minister might change to bring his app in line with GDPR, and make it age-appropriate for younger users.
1. Is the app age appropriate by design?
Unless otherwise specified in the App details on the applicable App Store, to use the App you must be 18 or older (or be 13 or older and have your parent or guardian’s consent).
Children over 13 can use the app, but this app needs parental consent. That’s different from GDPR: consent over and above the new law as it will apply in the UK from May. That age will vary across the EU. Inconsistent age policies are going to be hard to navigate.
Many of the things that matter to privacy, have not been included in the privacy policy (detailed below), but in the terms and conditions.
What else needs to change?
2. Personal data protection by design and default
Excessive personal data collection cannot be justified through a “consent” process, by agreeing to use the app. There must be data protection by design and default using the available technology. That includes data minimisation, and limited retention. (Article 25)
The app’s powers are vast and it collects far more personal data than is needed; if you use it, it even gets permission to listen to your mic. That is not data protection by design and default, which must implement data-protection principles such as data minimisation.
If, as has been suggested, in newer versions of Android each permission is asked for at the point of use rather than on first install, that could be a serious challenge for parents who think they have reviewed and approved permissions pre-install (a failing that goes well beyond the scope of this app). An app only requires consent to install, and its permissions can change behind the scenes with any update. That makes privacy and data protection by design even more important.
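For readers unfamiliar with how the runtime model works, here is a minimal sketch, assuming a standard Android 6.0+ activity and the androidx support libraries; the class and request-code names are mine, not the app’s. The point is that the system permission dialog appears at the moment of use, after any pre-install review a parent may have done.

```kotlin
import android.Manifest
import android.content.pm.PackageManager
import androidx.appcompat.app.AppCompatActivity
import androidx.core.app.ActivityCompat
import androidx.core.content.ContextCompat

// Minimal sketch of Android’s runtime permission model; names are illustrative only.
class RecordingActivity : AppCompatActivity() {

    private val recordRequestCode = 101

    // Called only when the user taps “record”, not at install time.
    fun startRecordingWithConsent() {
        val granted = ContextCompat.checkSelfPermission(
            this, Manifest.permission.RECORD_AUDIO
        ) == PackageManager.PERMISSION_GRANTED

        if (granted) {
            startRecording()
        } else {
            // The system dialog appears here, at the point of use,
            // after whatever review a parent did before install.
            ActivityCompat.requestPermissions(
                this, arrayOf(Manifest.permission.RECORD_AUDIO), recordRequestCode
            )
        }
    }

    override fun onRequestPermissionsResult(
        requestCode: Int, permissions: Array<out String>, grantResults: IntArray
    ) {
        super.onRequestPermissionsResult(requestCode, permissions, grantResults)
        if (requestCode == recordRequestCode &&
            grantResults.firstOrNull() == PackageManager.PERMISSION_GRANTED
        ) {
            startRecording()
        }
    }

    private fun startRecording() {
        // Actual recording logic is omitted in this sketch.
    }
}
```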
Here’s a copy of what the Android Google Play listing says it can do, once you click into “permissions” and scroll. This is excessive. “Matt Hancock” is designed to prevent your phone from sleeping, read and modify the contents of storage, and access your microphone.
Version 2.27 can access:
Location
approximate location (network-based)
Phone
read phone status and identity
Photos / Media / Files
read the contents of your USB storage
modify or delete the contents of your USB storage
Storage
read the contents of your USB storage
modify or delete the contents of your USB storage
Camera
take pictures and videos
Microphone
record audio
Wi-Fi connection information
view Wi-Fi connections
Device ID & call information
read phone status and identity
Other
control vibration
manage document storage
receive data from Internet
view network connections
full network access
change your audio settings
control vibration
prevent device from sleeping
“Matt Hancock” knows where you live
The app makers – and Matt Hancock – should have no necessity to know where your phone is at all times, where it is regularly, or whose other phones you are near, unless you switch it off. That is excessive.
It’s not the same as saying “I’m a constituent”. It’s 24/7 surveillance.
The Ts&Cs say more.
It places the onus on the user to switch off location services (which you may expect for other apps, such as your Strava run) rather than the developers taking responsibility for your privacy by design. [Full source policy].
[update since writing this post on February 1, the policy has been greatly added to]
It also collects ill-defined “technical information”. How should a 13 year old, or a parent for that matter, know what that information is? Those data are the metadata: the address and sender tags, and so on.
By using the App, you consent to us collecting and using technical information about your device and related information for the purpose of helping us to improve the App and provide any services to you.
As NSA General Counsel Stewart Baker has said, “metadata absolutely tells you everything about somebody’s life.” General Michael Hayden, former director of the NSA and the CIA, has famously said, “We kill people based on metadata.”
If you use this app and “approve” the use, do you really know what the location services are tracking and how those data are used? For a young person, it is impossible to know or see where their digital footprint has gone, or how knowledge about them has been used.
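To make that concrete, here is a deliberately simple, hypothetical sketch, not anything this app is known to do, of how a routine stream of timestamped location pings can be turned into a good guess at where someone lives, simply by looking at where a phone spends its nights.

```kotlin
import java.time.Instant
import java.time.ZoneId

// Hypothetical illustration only: how always-on location pings can reveal a home
// address. Data classes and thresholds are invented for this example.
data class Ping(val lat: Double, val lon: Double, val time: Instant)

// Round coordinates to roughly 100m cells and pick the cell most visited overnight.
fun guessHomeCell(
    pings: List<Ping>,
    zone: ZoneId = ZoneId.of("Europe/London")
): Pair<Double, Double>? =
    pings
        .filter { it.time.atZone(zone).hour in 0..5 }   // keep overnight pings only
        .groupingBy {
            Math.round(it.lat * 1000) / 1000.0 to Math.round(it.lon * 1000) / 1000.0
        }
        .eachCount()
        .maxByOrNull { it.value }                       // the most-visited overnight cell
        ?.key
```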
3. Specified, explicit, and necessary purposes
As a general principle, personal data must be collected only for specified, explicit and legitimate purposes and not further processed in a manner that is incompatible with those purposes. The purposes of this very broad data collection are not clearly defined. That must be more specifically explained, especially given the data are so broad and will include sensitive data. (Principle 2).
While the Minister has told the BBC that you maintain complete editorial control, the terms and conditions are quite different.
The app can use user photos, files, audio and location data, and once content is shared it grants “a perpetual, irrevocable” permission to use and edit it. This is not age-appropriate design for children who might accidentally click yes, not appreciate what that may permit, or later wish they could get that photo back. But now that photo is potentially on social media worldwide, “Facebook, Twitter, Pinterest, YouTube, Instagram and on the Publisher’s own websites,” and the child’s rights to privacy and consent are lost forever.
That’s not age appropriate, and not in line with GDPR on the rights to withdraw consent, to object, or to restrict processing. In fact the terms conflict with the app privacy policy, which states those rights [see 4. App User Data Rights]. Just writing “there may be valid reasons why we may be unable to do this” is poor practice and a CYA card.
4. Any privacy policy and app must do what it says
A privacy policy and terms and conditions must be easily understood by a child, [indeed any user] and be accurate.
Journalists testing the app point out that even if the user clicks “don’t allow”, when prompted to permit access to the photo library, the user is allowed to post the photo anyway.
What does consent mean if you don’t know what you are consenting to? You’re not consenting. GDPR requires that privacy policies are written in a way that their meaning can be understood by a child user (not only their parent). They need to be jargon-free and meaningful, in “clear and plain language that the child can easily understand.” (Recital 58)
This privacy policy is not child-appropriate. It’s not even clear for adults.
5. What would age appropriate permissions for charging and other future changes look like?
It should be clear to users if there may be up front or future costs, and there should be no assumption that agreeing once to pay for an app, means granting permission forever, without affirmative action.
Couching Bait-and-Switch, Hidden Costs
This is one of the flaws that the Matt Hancock app terms and conditions shares with many free education apps used in schools. At first, they’re free. You register, and you don’t even know when your child starts using the app, that it’s a free trial. But after a while, as determined by the developer, the app might not be free any more.
That’s not to say this is what the Matt Hancock app will do, in fact it would be very odd if it did. But odd then, that its privacy policy terms and conditions state it could.
The folly of boiler plate policy, or perhaps simply wanting to keep your options open?
Either way, it’s bad design for children, indeed any user, to agree to something that is in fact meaningless because it could change at any time. Automatic renewals are convenient, but who has not found they paid for an extra month of a newspaper or something else they intended to use for only a limited time? And to avoid any charges, you must cancel before the end of the free trial; but if you don’t know it’s a free trial, that’s hard to do. More so for children.
From time to time we may offer a free trial period when you first register to use the App before you pay for the subscription.[…] To avoid any charges, you must cancel before the end of the free trial.
(And on the “For more details, please see the product details in the App Store before you download the App.” there aren’t any, in case you’re wondering).
What would age appropriate future changes be?
It should be clear to parents what they consent to on behalf of a child, or what a child consents to, at the time of install. What that means must empower them to better digital understanding and to stay in control, not allow the company to change the agreement without the user’s clear and affirmative action.
One of the biggest flaws for parents in children using apps is that what they think they have reviewed, thought appropriate, and permitted, can change at any time, at the whim of the developer and as often as they like.
Notification “by updating the Effective Date listed above” is not any notification at all. And PS. they changed the policy and backdated it today from February 1, 2018, to July 2017. By 8 months. That’s odd.
The statements in this “changes” section contradict one another. It’s a future-dated get-out-of-jail-free card for the developer, and a transparency and oversight nightmare for parents. “Your continued use” is not clear, affirmative, and freely given consent, as demanded by GDPR.
Perhaps the kindest thing to say about this policy, and its poor privacy approach to rights and responsibilities, is that maybe the Minister did not read it. Which highlights the basic flaw in privacy policies in the first place. Data usage reports, showing how your personal data have actually been used versus what was promised, are of much greater value and meaning. That’s what children need in schools.
What would it mean for you to trust an Internet connected product or service and why would you not?
What has damaged consumer trust in products and services and why do sellers care?
What do we want to see different from today, and what is necessary to bring about that change?
These three pairs of questions implicitly underpinned the intense day of #iotmark discussion at the London Zoo last Friday.
The questions went unasked, and could have been voiced before we started, although were probably assumed to be self-evident:
Why do you want one at all [define the problem]?
What needs to change and why [define the future model]?
How do you deliver that and for whom [set out the solution]?
If a group does not agree on the need and drivers for change, there will be no consensus on what that should look like, what the gap is to achieve it, and even less on making it happen.
So who do you want the trustmark to be for, why will anyone want it, and what will need to change to deliver the aims? No one wants a trustmark per se. Perhaps you want what values or promises it embodies to demonstrate what you stand for, promote good practice, and generate consumer trust. To generate trust, you must be seen to be trustworthy. Will the principles deliver on those goals?
The Open IoT Certification Mark Principles, as a rough draft, were the outcome of the day, and are available online.
Here are my reflections, including what was missing on privacy, and the potential for it to be considered in future.
I’ve structured the first part, at ca. 1,000 words of lists and bullet points, assuming readers attended the event. The background comes after that, for anyone interested in a longer read.
Many thanks upfront, to fellow participants, to the organisers Alexandra D-S and Usman Haque and the colleague who hosted at the London Zoo. And Usman’s Mum. I hope there will be more constructive work to follow, and that there is space for civil society to play a supporting role and critical friend.
The mark didn’t aim to fix the IoT in a day, but deliver something better for product and service users, by those IoT companies and providers who want to sign up. Here is what I took away.
I learned three things
A sense of privacy is not homogenous, even within people who like and care about privacy in theoretical and applied ways. (I very much look forward to reading suggestions promised by fellow participants, even if enforced personal openness and ‘watching the watchers’ may mean ‘privacy is theft‘.)
Awareness of current data protection regulations needs to improve in the field. For example, Subject Access Requests already apply to all data controllers, public and private. Few have read the GDPR, or the e-Privacy directive, despite their importance for security measures in personal devices, relevant to IoT.
I truly love working on this stuff, with people who care.
And it reaffirmed things I already knew
Change is hard, no matter in what field.
People working together towards a common goal is brilliant.
Group collaboration can create some brilliantly sharp ideas. Group compromise can blunt them.
Some men are particularly bad at talking over each other, never mind over the women in the conversation. Women notice more. (Note to self: When discussion is passionate, it’s hard to hold back in my own enthusiasm and not do the same myself. To fix.)
The IoT context, and the risks within it, are not homogeneous, and bring new risks and adversaries. The risks for manufacturers, consumers and the rest of the public are different, and cannot be easily solved with a one-size-fits-all solution. But we can try.
Concerns I came away with
If the citizen / customer / individual is to benefit from the IoT trustmark, they must be put first, ahead of companies’ wants.
If the IoT group controls the design, the assessment of adherence and the definition of success, how objective will it be?
The group was not sufficiently diverse and, as a result, reflects too little on the risks and impact of the lack of diversity in design and effect, and the implications of dataveillance.
Critical minority thoughts, although welcomed, were stripped out of the crowdsourced first-draft principles in compromise.
More future thinking should be built-in to be robust over time.
What was missing
There was too little discussion of privacy in perhaps the most important context of IoT: interconnectivity and new adversaries. It’s not only about *your* thing, but the things it speaks to and interacts with, those of friends and passersby, the cityscape, and other individual and state actors interested in offence and defence. While we started to discuss it, we did not have the opportunity to go into sufficient depth to get that thinking applied as solutions in the principles.
One of the greatest risks that users face is the ubiquitous collection and storage of data about them that reveal detailed, interconnected patterns of behaviour and identity, without seeing how those data are used by companies behind the scenes.
What we also missed discussing is not what we see as necessary today, but what we can foresee as necessary for the near future: brainstorming and crowdsourcing horizon scanning of market needs and changing stakeholder wants.
Future thinking
Here are the areas of future thinking that the IoT mark could consider.
We are moving towards ever greater requirements to declare identity to use a product or service, to register and log in to use anything at all. How will that change trust in IoT devices?
Single identity sign-on is becoming ever more imposed, and any attempt to present multiple versions of who I am by choice, dependent on context, is therefore restricted. [Not all users want to use the same social media credentials for online shopping, their child’s school app, and their weekend entertainment.]
Is this imposition what the public wants or what companies sell us as what customers want in the name of convenience? What I believe the public would really want is the choice to do neither.
There is increasingly no private space or time, at places of work.
Limitations on private space are encroaching in secret in all public city spaces. How will ‘handoffs’ affect privacy in the IoT?
There is too little understanding of the social effects of this connectedness and knowledge created, embedded in design.
What effects may there be on the perception of the IoT as a whole, if predictive data analysis and complex machine learning and AI hidden in black boxes becomes more commonplace and not every company wants to be or can be open-by-design?
Ubiquitous collection and storage of data about users that reveal detailed, interconnected patterns of behaviour and identity needs greater commitments to disclosure. Where the hand-offs are to other devices, and whatever else is in the surrounding ecosystem, who has responsibility for communicating interaction through privacy notices, or defining legitimate interests, when the joined-up data may be much more revealing than stand-alone data in each silo?
Define with greater clarity the privacy threat models for different groups of stakeholders and address the principles for each.
What would better look like?
The draft privacy principles are a start, but they’re not yet aspirational as I would have hoped. Of course the principles will only be adopted if possible, practical and by those who choose to. But where is the differentiator from what everyone is required to do, and better than the bare minimum? How will you sell this to consumers as new? How would you like your child to be treated?
The wording in these five bullet points is the first crowdsourced starting point.
The supplier of this product or service MUST be General Data Protection Regulation (GDPR) compliant.
This product SHALL NOT disclose data to third parties without my knowledge.
I SHOULD get full access to all the data collected about me.
I MAY operate this device without connecting to the internet.
My data SHALL NOT be used for profiling, marketing or advertising without transparent disclosure.
Yes, other points that came under security address some of the crossover between privacy and surveillance risks, but there is as yet little of substance that is aspirational enough to make the IoT mark a real differentiator in terms of privacy. An opportunity remains.
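As one hypothetical way, offered as a sketch only and not part of the draft, that such principles could become checkable rather than merely aspirational, a supplier might publish a machine-readable declaration that a certifier compares against observed device behaviour. Every field and function name below is invented for the illustration.

```kotlin
// Hypothetical sketch: a machine-readable privacy declaration an #IoTmark certifier
// could check against what a device actually does. All names are invented here.
data class PrivacyDeclaration(
    val gdprCompliant: Boolean,              // MUST be GDPR compliant
    val thirdPartyRecipients: List<String>,  // SHALL NOT disclose to parties not listed
    val subjectAccessEndpoint: String?,      // where I get full access to data about me
    val worksOffline: Boolean,               // MAY operate without connecting to the internet
    val profilingAndAdUses: List<String>     // profiling/marketing only with transparent disclosure
)

// A certifier’s check is then a comparison of the declaration with observed behaviour.
fun violations(
    declared: PrivacyDeclaration,
    observedRecipients: Set<String>,
    observedOfflineUse: Boolean
): List<String> {
    val problems = mutableListOf<String>()
    if (!declared.gdprCompliant) problems += "Supplier does not declare GDPR compliance"
    val undisclosed = observedRecipients - declared.thirdPartyRecipients.toSet()
    if (undisclosed.isNotEmpty()) problems += "Data sent to undisclosed third parties: $undisclosed"
    if (declared.worksOffline && !observedOfflineUse) problems += "Claims offline operation but requires a connection"
    return problems
}
```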
It was that and how young people perceive privacy that I hoped to bring to the table. Because if manufacturers are serious about future success, they cannot ignore today’s children and how they feel. How you treat them today, will shape future purchasers and their purchasing, and there is evidence you are getting it wrong.
The timing is good in that it now also offers the opportunity to promote consistent understanding, and embed the language of GDPR and ePrivacy regulations into consistent and compatible language in policy and practice in the #IoTmark principles.
User rights I would like to see considered
These are some of the points I would think privacy by design would mean. This would better articulate GDPR Article 25 to consumers.
Data sovereignty is a good concept and I believe should be considered for inclusion in explanatory blurb before any agreed privacy principles.
Goods should be ‘dumb* by default’ until the smart functionality is switched on. [*As our group chair/scribe called it] I would describe this as, “off is the default setting out-of-the-box” (see the sketch after this list).
Privacy by design. Deniability by default. i.e. not only after opt-out, but a company should not access the personal or identifying purchase data of anyone who opts out of data collection about their product/service use during the set-up process.
The right to opt out of data collection at a later date while continuing to use services.
A right to object to the sale or transfer of behavioural data, including to third-party ad networks and absolute opt-in on company transfer of ownership.
A requirement that advertising should be targeted to content, [user bought fridge A] not through jigsaw data held on users by the company [how user uses fridge A, B, C and related behaviour].
An absolute rejection of using children’s personal data gathered to target advertising and marketing at children
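To show what ‘dumb by default’ might mean in code, here is an illustrative sketch only; the device, states and method names are my own invention, not a proposal from the group. The core function works without any connection, and nothing leaves the device until an explicit, affirmative opt-in.

```kotlin
// Illustrative sketch of “off is the default setting out-of-the-box”.
// States and behaviour are invented for this example, not taken from any real device.
enum class SmartMode { DUMB, SMART_OPTED_IN }

class ConnectedFridge {
    private var mode = SmartMode.DUMB          // ships dumb: nothing connects or collects
    private var telemetryConsent = false

    fun coolFood() = println("Cooling works in every mode")   // core function never depends on the cloud

    // Smart functionality only after an explicit, affirmative user action.
    fun enableSmartFeatures(userConfirmedConsent: Boolean) {
        if (!userConfirmedConsent) return      // silence or inaction never counts as consent
        mode = SmartMode.SMART_OPTED_IN
        telemetryConsent = true
    }

    // The user can opt out later and keep using the (dumb) device.
    fun disableSmartFeatures() {
        mode = SmartMode.DUMB
        telemetryConsent = false
    }

    fun sendTelemetry(payload: String) {
        if (mode == SmartMode.SMART_OPTED_IN && telemetryConsent) {
            println("Sending: $payload")
        }                                      // otherwise nothing leaves the device
    }
}
```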
Background: Starting points before privacy
After a brief recap on 5 years ago, we heard two talks.
The first was a presentation from Bosch. They used the insights from the IoT open definition of five years ago in their IoT thinking and embedded it in their brand book. The presenter suggested that in five years’ time, every fridge Bosch sells will be ‘smart’. The second was a fascinating presentation of both EU thinking and the intellectual nudge to think beyond the practical, to what kind of society we want to see using the IoT in future: hints of hardcore ethics and philosophy, from Gerald Santucci, soon to retire from the European Commission, that made my brain fizz.
The principles of open sourcing, manufacturing, and sustainable life cycle were debated in the afternoon with intense arguments and clearly knowledgeable participants, including those who were quiet. But while the group had assigned security, and started work on it weeks before, there was no one pre-assigned to privacy. For me, that said something. If they are serious about those who earn the trustmark being better for customers than their competition, then there needs to be greater emphasis on thinking like their customers, and by their customers, and what use the mark will be to customers, not companies. Plan early public engagement and testing into the design of this IoT mark, and make that testing open and diverse.
To that end, I believe it needed to be articulated more strongly, that sustainable public trust is the primary goal of the principles.
Trust that my device will not become unusable or worthless through updates or lack of them.
Trust that my device is manufactured safely and ethically and with thought given to end of life and the environment.
Trust that my source components are of high standards.
Trust in what data are gathered, and in how those data are used by the manufacturers.
Fundamental to ‘smart’ devices is their connection to the Internet, so the last point, for me, is key to successful public perception and to the mark actually making a difference, beyond the PR value to companies. The value-add must be measured from the consumer’s point of view.
All the openness about design functions and practice improvements, without attempting to change privacy infringing practices, may be wasted effort. Why? Because the perceived benefit of the value of the mark, will be proportionate to what risks it is seen to mitigate.
Why?
Because I assume that you know where your source components come from today. I was shocked to find out that not all do, and that knowing suppliers ‘one degree removed’ is considered an improvement. Holy cow, I thought. What about regulatory requirements for product safety recalls? These differ of course for different product areas, but I was still surprised. Having worked in global Fast Moving Consumer Goods (FMCG) and the food industry, semiconductors and optoelectronics, and medical devices, it was self-evident for me that sourcing is rigorous. So that new requirement to know suppliers one degree removed was a suggested minimum. But it might shock consumers to know there is not usually more by default.
Customers also believe they have reasonable expectations of not being screwed by a product update, left with something that does not work because of its computing based components. The public can take vocal, reputation-damaging action when they are let down.
While these are visible, the full extent of the overreach of company market and product surveillance into our whole lives, not just our living rooms, is yet to become understood by the general population. What will happen when it is?
The Internet of Things is exacerbating the power imbalance between consumers and companies, between government and citizens. As Wendy Grossman wrote recently, in one sense this may make privacy advocates’ jobs easier. It was always hard to explain why “privacy” mattered. Power, people understand.
That public discussion is long overdue. If open principles on IoT devices mean that the signed-up companies differentiate themselves by becoming market leaders in transparency, it will be a great thing. Companies need to offer full disclosure of data use in any privacy notices in clear, plain language under GDPR anyway, but to go beyond that, and offer customers fair presentation of both risks and customer benefits, will not only be a point-of-sales benefit, but potentially improve digital literacy in customers too.
The morning discussion touched quite often on pay-for-privacy models. While product makers may see this as offering a good thing, I strove to bring discussion back to first principles.
Privacy is a human right. There can be no ethical model of discrimination based on any non-consensual invasion of privacy. Privacy is not something I should pay to have. You should not design products that reduce my rights. GDPR requires privacy-by-design and data protection by default. Now is that chance for IoT manufacturers to lead that shift towards higher standards.
We also need a new ethics thinking on acceptable fair use. It won’t change overnight, and perfect may be the enemy of better. But it’s not a battle that companies should think consumers have lost. Human rights and information security should not be on the battlefield at all in the war to win customer loyalty. Now is the time to do better, to be better, demand better for us and in particular, for our children.
Privacy will be a genuine market differentiator
If manufacturers do not want to change their approach to exploiting customer data, they are unlikely to be seen to have changed.
Today, the feelings that people in the US and Europe reflect in surveys are loss of empowerment, helplessness, and feeling used. That will shift to shock, resentment and, as any change curve will predict, anger.
“The poll of just over two thousand British adults carried out by Ipsos MORI found that the media, internet services such as social media and search engines and telecommunication companies were the least trusted to use personal data appropriately.” [2014, Data trust deficit with lessons for policymakers, Royal Statistical Society]
In the British student population, one 2015 survey of university applicants in England found that, of the 37,000 who responded, the vast majority of UCAS applicants agree that sharing personal data can benefit them and support public-benefit research into university admissions, but they want to stay firmly in control. 90% of respondents said they wanted to be asked for their consent before their personal data are provided outside of the admissions service.
In 2010, a multi-method research study with young people aged 14-18 by the Royal Academy of Engineering found that, “despite their openness to social networking, the Facebook generation have real concerns about the privacy of their medical records.” [2010, Privacy and Prejudice, RAE, Wellcome]
When people use privacy settings on Facebook set to maximum, they believe they get privacy, and understand little of what that means behind the scenes.
Are there tools designed by others, like Projects by If licenses, and ways this can be done, that you’re not even considering yet?
What if you don’t do it?
“But do you feel like you have privacy today?” I was asked the question in the afternoon. How do people feel today, and does it matter? Companies exploiting consumer data and getting caught doing things the public don’t expect with their data, has repeatedly damaged consumer trust. Data breaches and lack of information security have damaged consumer trust. Both cause reputational harm. Damage to reputation can harm customer loyalty. Damage to customer loyalty costs sales, profit and upsets the Board.
Where overreach into our living rooms has raised awareness of invasive data collection, we are yet to see and understand the invasion of privacy into our thinking and nudged behaviour, into our perception of the world on social media, and the effects on decision making that data analytics is enabling as data show companies ‘how we think’, granting companies access to human minds in the abstract, even before Facebook is there in the flesh.
Governments want to see how we think too. Is thought crime really that far away, given database labels of ‘domestic extremists’ for activists and anti-fracking campaigners, or the growing weight of policy makers’ attention given to predpol, predictive analytics, the [formerly] Cabinet Office Nudge Unit, Google DeepMind et al?
Had the internet remained decentralized the debate may be different.
I am starting to think of the IoT not as the Internet of Things, but as the Internet of Tracking. If some have their way, it will be the Internet of Thinking.
Considering our centralised Internet of Things model, our personal data from human interactions have become the network infrastructure, and the data flows are controlled by others. Our brains are the new data servers.
In the Internet of Tracking, people become the end nodes, not things.
And this is where future users will be so important. Do you understand and plan for the factors that will drive push-back and a crash of consumer confidence in your products, and take them seriously?
Companies have a choice: to act as Empires would (multinationals, joining up even on low levels, disempowering individuals and sucking knowledge and power into the centre), or to act as nation states, ensuring citizens keep their sovereignty and control over a selected sense of self.
Look at Brexit. Look at the GE2017. Tell me, what do you see is the direction of travel? Companies can fight it, but will not defeat how people feel. No matter how much they hope ‘nudge’ and predictive analytics might give them this power, the people can take back control.
What might this desire to take back control mean for future consumer models? The afternoon discussion, whilst intense, reached fairly simplistic concluding statements on privacy. We could have done with at least another hour.
Some in the group were frustrated “we seem to be going backwards” in current approaches to privacy and with GDPR.
But if the current legislation is reactive because companies have misbehaved, how will that be rectified for future? The challenge in the IoT both in terms of security and privacy, AND in terms of public perception and reputation management, is that you are dependent on the behaviours of the network, and those around you. Good and bad. And bad practices by one, can endanger others, in all senses.
If you believe that is going back to reclaim a growing sense of citizens’ rights, rather than accepting companies have the outsourced power to control the rights of others, that may be true.
A first question asked whether any element on privacy was needed at all, if the text were simply to state that the supplier of this product or service must be General Data Protection Regulation (GDPR) compliant. The GDPR was years in the making, after all. Does it matter more in the IoT, and in what ways? The room tended, understandably, to talk about it from the company perspective: “we can’t”, “we won’t”, “that would stop us from XYZ.” Privacy would, however, be better addressed from the personal point of view.
What do people want?
From the company point of view, the language is different and holds clues. Openness, control, and user choice and pay for privacy are not the same thing as the basic human right to be left alone. Afternoon discussion reminded me of the 2014 WAPO article, discussing Mark Zuckerberg’s theory of privacy and a Palo Alto meeting at Facebook:
“Not one person ever uttered the word “privacy” in their responses to us. Instead, they talked about “user control” or “user options” or promoted the “openness of the platform.” It was as if a memo had been circulated that morning instructing them never to use the word “privacy.””
In the afternoon working group on privacy, there was robust discussion about whether we had consensus on what privacy even means. Words like autonomy, control, and choice came up a lot. But it was only a beginning. There is opportunity for better. An academic voice raised the concept of sovereignty, with which I agreed, but the difficulty of fitting it into wording that is at once minimal and applied, under a scribe who appeared frustrated and wanted a completely different approach from what he heard across the group, meant it was left out.
This group do care about privacy. But I wasn’t convinced that the room cared in the way that the public as a whole does, rather than only as consumers and customers do. Yet IoT products will affect potentially everyone, even those who do not buy your stuff. Everyone in that room agreed on one thing: the status quo is not good enough. What we did not agree on was why, and what the minimum change needed to make enough of a difference that matters would be.
I share the deep concerns of many child rights academics who see the harm in efforts to avoid the restrictions that Article 8 of the GDPR will impose. Those efforts are likely to be damaging for children’s right to access information, to be discriminatory according to parents’ prejudices or socio-economic status, and to encourage ‘cheating’: requiring secrecy rather than privacy, in attempts to hide from or work around the stringent system.
In ‘The Class’ the research showed, “teachers and young people have a lot invested in keeping their spheres of interest and identity separate, under their autonomous control, and away from the scrutiny of each other.” [2016, Livingstone and Sefton-Green, p235]
Employers require staff to use devices with single sign-on, including web and activity tracking and monitoring software. Employee personal data and employment data are blended. Who owns those data, what rights will employees have to refuse what they see as excessive, and is it manageable given the power imbalance between employer and employee?
What is this doing in the classroom and boardroom for stress, anxiety, performance and system and social avoidance strategies?
A desire for convenience creates shortcuts, and these are often met using systems that require a sign-on through the platform giants: Google, Facebook, Twitter, et al. But we are kept in the dark about how using these platforms gives them, and the companies behind them, access to see how our online and offline activity is all joined up.
Any illusion of privacy we maintain, we discussed, is not choice or control if it is based on ignorance, and backlash against companies’ lack of effort to ensure disclosure and understanding is growing.
“The lack of accountability isn’t just troubling from a philosophical perspective. It’s dangerous in a political climate where people are pushing back at the very idea of globalization. There’s no industry more globalized than tech, and no industry more vulnerable to a potential backlash.”
If your connected *thing* requires registration, why does it? How about a commitment to not forcing one of these registration methods or indeed any at all? Social Media Research by Pew Research in 2016 found that 56% of smartphone owners ages 18 to 29 use auto-delete apps, more than four times the share among those 30-49 (13%) and six times the share among those 50 or older (9%).
Does that tell us anything about the demographics of data retention preferences?
In 2012, they suggested social media has changed the public discussion about managing “privacy” online. When asked, people say that privacy is important to them; when observed, people’s actions seem to suggest otherwise.
Does that tell us anything about how well companies communicate to consumers how their data is used and what rights they have?
There are also data strongly indicating that women act to protect their privacy more, but when it comes to basic privacy settings, users of all ages are equally likely to choose a private, semi-private or public setting for their profile. There are no significant variations across age groups in the US sample.
Now think about why that matters for the IoT. I wonder who makes the bulk of purchasing decisions about household white goods, for example, and whether Bosch has factored that into its smart-fridges-only decision.
Do you *need* to know who the user is? Can the smart user choose to stay anonymous at all?
The day’s morning challenge was to attend more than one interesting discussion happening at the same time. As invariably happens, the session notes and quotes are always out of context and can’t possibly capture everything, no matter how amazing the volunteer (with thanks!). But here are some of the discussion points from the session on the body and health devices, the home, and privacy. It also included a discussion on racial discrimination, algorithmic bias, and the reasons why care.data failed patients and failed as a programme. We had lengthy discussion on ethics and privacy: smart meters, objections to models of price discrimination, and why pay-for-privacy harms the poor by design.
Smart meter data can track the use of individual appliances inside a person’s home, and intimate patterns of behaviour. Information about our consumption of power, what we use and when, every day, reveals personal details about everyday lives, our interactions with others, and personal habits.
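To illustrate why, here is a deliberately naive sketch of my own, not anything a supplier actually ships, of how per-second or half-hourly readings can be matched against rough appliance power signatures. Real non-intrusive load monitoring is far more sophisticated, which only strengthens the point.

```kotlin
// Naive illustration of appliance inference from smart meter readings.
// Signatures and thresholds are invented for the sketch.
data class Reading(val epochSecond: Long, val watts: Double)

// Very rough power signatures: watts drawn when each appliance switches on.
val signatures = mapOf(
    "kettle" to 2500.0,
    "washing machine" to 1800.0,
    "games console" to 150.0
)

// Flag an appliance “event” whenever consumption jumps by roughly its signature.
fun inferApplianceEvents(readings: List<Reading>, tolerance: Double = 200.0): List<Pair<Long, String>> =
    readings.zipWithNext().mapNotNull { (prev, next) ->
        val jump = next.watts - prev.watts
        signatures.entries
            .firstOrNull { (_, watts) -> kotlin.math.abs(jump - watts) < tolerance }
            ?.let { next.epochSecond to it.key }
    }
// From such events alone, one can guess when a household wakes, washes, games, or is away.
```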
Why should company convenience come above the consumer’s? Why should government powers, trump personal rights?
Smart meter data are among the knowledge that government is exploiting, without consent, to discover a whole range of issues, including ensuring that “Troubled Families are identified”. Knowing how dodgy some of the school behaviour data that help define who is “troubled” might be, there is a real question here: is this sound data science? How are errors identified? What about privacy? It’s not your policy, but if it is your product, what are your responsibilities?
If companies do not respect children’s rights, they had better shape up to be GDPR compliant
Children and young people are more vulnerable to nudge, and developing a sense of self can involve forming and questioning their identity; these influences need oversight, or should be avoided.
In terms of GDPR, providers are going to pay particular attention to Article 8 on ‘information society services’ and parental consent, to profiling and automated decisions (Articles 21 and 22), to the right to restriction of processing (Article 18), to the right to erasure (Article 17 and recital 65), and to the right to data portability (Article 20). However, they may need simply to reassess their exploitation of children and young people’s personal data and behavioural data. Article 57 requires special attention to be paid by regulators to activities specifically targeted at children, as the ‘vulnerable natural persons’ of recital 75.
Human rights, regulations and conventions overlap in similar principles that demand respect for a child, and the right to be let alone:
(a) The development of the child’s personality, talents and mental and physical abilities to their fullest potential;
(b) The development of respect for human rights and fundamental freedoms, and for the principles enshrined in the Charter of the United Nations.
A weakness of the GDPR is that it allows derogation on age and will create inequality and inconsistency for children as a result. By comparison, Article one of the Convention on the Rights of the Child (CRC) defines who is to be considered a “child” for the purposes of the CRC, and states that: “For the purposes of the present Convention, a child means every human being below the age of eighteen years unless, under the law applicable to the child, majority is attained earlier.”
Article two of the CRC says that States Parties shall respect and ensure the rights set forth in the present Convention to each child within their jurisdiction without discrimination of any kind.
CRC Article 16 says that no child shall be subjected to arbitrary or unlawful interference with his or her honour and reputation.
Article 8 CRC requires respect for the right of the child to preserve his or her identity […] without unlawful interference.
Article 12 CRC demands States Parties shall assure to the child who is capable of forming his or her own views the right to express those views freely in all matters affecting the child, the views of the child being given due weight in accordance with the age and maturity of the child.
That stands in potential conflict with GDPR article 8. There is much on GDPR on derogations by country, and or children, still to be set.
What next for our data in the wild
Hosting the event at the zoo offered added animals, and at lunchtime we got out on a tour, kindly hosted by a fellow participant. We learned how smart technology is embedded in some of the animal enclosures, with work on temperature sensors for the penguins, for example. I love tigers, so it was a bonus to see such beautiful and powerful animals up close, if a little sad for their circumstances and, as a general principle, to see big animals caged rather than in the wild.
Freedom is a common desire in all animals. Physical, mental, and freedom from control by others.
I think any manufacturer that underestimates this element of human instinct is ignoring the ‘hidden dragon’ that some think is a myth. Privacy is not dead. It is not extinct, nor even, unlike the beautiful tigers, endangered. Privacy in the IoT, at its most basic, is the right to control our purchasing power. The ultimate people power waiting to be sprung. Truly a crouching tiger. People object to being used, and if companies continue to do so without full disclosure, they do so at their peril. Companies seem all-powerful in the battle for privacy, but they are not. Even insurers and data brokers must be fair and lawful, and it is for regulators to ensure that practices meet the law.
When consumers realise our data, our purchasing power has the potential to control, not be controlled, that balance will shift.
“Paper tigers” are superficially powerful but are prone to overextension that leads to sudden collapse. If that happens to the superficially powerful companies that choose unethical and bad practice, as a result of better data privacy and data ethics, then bring it on.
I hope that the IoT mark can champion best practices and make a difference to benefit everyone.
While the companies involved in its design may be interested in consumers, I believe it could be better for everyone, done well. The great thing about the efforts into an #IoTmark is that it is a collective effort to improve the whole ecosystem.
I hope more companies will recognise their privacy and ethical responsibilities in the world to all people, including those interested in just being, those who want to be let alone, and not just those buying.
“If a cat is called a tiger it can easily be dismissed as a paper tiger; the question remains however why one was so scared of the cat in the first place.”
Further reading: Networks of Control – A Report on Corporate Surveillance, Digital Tracking, Big Data & Privacy by Wolfie Christl and Sarah Spiekermann