Hosted by the Mental Health Foundation, it’s Mental Health Awareness Week until 24th May, 2020. The theme for 2020 is ‘kindness’.
So let’s not comment on the former Education Ministers and MPs, the great-and-the-good and the-recently-resigned, involved in the Mail’s continued hatchet job on teachers. They probably believe that they are standing up for vulnerable children when they talk about the “damage that may last a generation“. Yet the evidence of much of their voting, and of their policy design to date, suggests it is much more about getting people back to work.
Of course there are massive implications for children in families unable to work, or living with the stress of financial insecurity on top of limited home schooling. But policy makers should be honest about the return to school as an economic lever, not use children’s vulnerability to pressure professionals into an early return to full school, or make up statistics to raise the stakes.
The rush to get the youngest primary-age pupils back to full school has been met with understandable resistance, and too few practical facts. Going back to school under COVID-19 measures will demand a huge amount of adjustment from very young children: to the virus, to seeing friends they cannot properly play with, to grief and stress.
When it comes to COVID-19 risk, many countries with similar population density to the UK locked down earlier and tighter, and now have lower rates of community transmission than we do. Or compare a country that didn’t, Sweden, which has a population density of 24 people per square kilometre. The population density of the United Kingdom is 274 people per square kilometre. In Italy, with 201 inhabitants per square kilometre, you needed a permission slip to leave home.
And that’s leaving aside the unknowns about COVID-19 immunity, or how to identify it, or the lack of any testing offer for the more than a million children under five, the very group expected to return first to full school.
Children have rights to education, and to life, survival and development. But the blanket target groups and target date don’t appear to take the Best Interests of the Child, for each child, into account at all. ‘Won’t someone think of the children?’ may never have been more apt.
Parenting while poor is highly political
What’s the messaging in the debate, even leaving media extremes aside?
The sweeping assumption by many commentators that ‘the poorest children will have learned nothing‘ (BBC Newsnight, 19 May) is unfair. Its blind acceptance as fact is a politicisation of parenting while poor, conflated with poor parenting, and it lets the claimed concern for these children’s vulnerability pass without question.
Many of these most vulnerable children were not receiving full-time education *before* the pandemic, but look at how the story is told.
"The St Giles Trust research provided more soundbites. Pupils involved in 'county lines' are in pupil referral units (PRUs), often doing only an hour each day, and rarely returning into mainstream education." (Steve Howell, Schools Week)
Schools have remained open for children of key workers and for more than half a million pupils labelled as ‘vulnerable’, which includes those classified as “children in need” as well as 270,000 children with an education, health and care (EHC) plan for special educational needs. Not all of those are ‘at risk’ of domestic violence, abuse or neglect. The reasons for low turnout tend to be conflated.
Assumptions abound about the importance of formal education, and about school being the best place at all for very young children in the Early Years (ages 2-5), despite UK evidence that is conflicting and thin on the ground. Research for the NFER [the same organisation running the upcoming Baseline Test of four-year-olds, still due to begin this year] (Sharp, 2002) found:
“there would appear to be no compelling educational rationale for a statutory school age of five or for the practice of admitting four-year-olds to school reception classes.” And “a late start appears to have no adverse effect on children’s progress.”
Later research from the IoE, Research Report No. DCSF-RR061 (Sylva et al., 2008), commissioned before the then ‘new’ UK Government took office in 2010, suggested better outcomes for children in excellent Early Years provision, but also pointed out that the most vulnerable are often not those in the best provision.
“quality appears to be especially important for disadvantaged groups.”
What will provision quality be like, under Coronavirus measures? How much stress-free space and time for learning will be left at all?
The questions we should be asking are: a) what has been learned for a second wave, and b) assuming nothing changes by May 2021, what would ideal schooling look like, and how do we get there?
Attainment is not the only gap
While it is not compulsory in England to be in any form of education, including home education, until your fifth birthday, most children start school at age four and turn five in the course of the year. It is one of the youngest school starts in Europe. Many hundreds of thousands of children in the UK start formal education even younger, from age two or three. Yet is it truly better for children? We are well down the PISA attainment scores and comparable regional measures. There has been little change in those outcomes in 13 years, except to find that our children are measured as being progressively less happy.
“As Education Datalab points out, the PISA 2018 cohort started school around 2008, so their period at school not only lines up with the age of austerity and government cuts, but with the “significant reforms” to GCSEs introduced by Michael Gove while he was Education Secretary.” [source: Schools Week, 2019]
There’s no doubt that some of the harmful economic effects of Brexit will be attributed to the effects of the pandemic. Similarly, many of the outcomes of ten years of policy that have increased children’s vulnerability and attainment gap, pre-COVID-19, will no doubt be conflated with harms from this crisis in the next few years.
The risk of accepting this misattribution of the gap in outcomes is a willingness to adopt misguided solutions, and to deny accountability.
Children’s vulnerability
Many experts in children’s needs have been in their jobs far longer than most MPs, and have told them for years about the harm their policies are doing to the very children those voices now claim to want to protect. Will the MPs look at that evidence and act on it?
Charities speaking out this week said that in the decade since 2010, local authority spending on early intervention services dropped by 46%, while spending on late intervention rose from 58% to 78% of spending on children and young people’s services over the same period.
If those advocating a return to school for a month before the summer really want to reduce children’s vulnerability, they might sort out CAMHS so that support is in place alongside the return to school, and address those areas in which government must first do no harm. Fix the things that increase the “damage that may last a generation“.
Case studies in damage that may last
Adoption and Children (Coronavirus) (Amendment) Regulations 2020
“These regulations make significant temporary changes to the protections given in law to some of the most vulnerable children in the country – those living in care.” “I would like to see all the regulations revoked, as I do not believe that there is sufficient justification to introduce them. This crisis must not remove protections from extremely vulnerable children, particularly as they are even more vulnerable at this time. As an urgent priority it is essential that the most concerning changes detailed above are reversed.”
“Specialist services are turning away one in four of the children referred to them by their GPs or teachers for treatment. More than 338,000 children were referred to CAMHS in 2017, but less than a third received treatment within the year. Around 75 per cent of young people experiencing a mental health problem are forced to wait so long their condition gets worse or are unable to access any treatment at all.”
“Only 6.7 per cent of mental health spending goes to children and adolescent mental health services (CAMHS). Government funding for the Early Intervention Grant has been cut by almost £500 million since 2013. It is projected to drop by a further £183 million by 2020.”
“Public health funding, which funds school nurses and public mental health services, has been reduced by £600 million from 2015/16 to 2019/20.”
“Around sixty thousand families forced to claim universal credit since mid-March because of COVID-19 will discover that they will not get the support their family needs because of the controversial ‘two-child policy’.”
“The cuts [introduced from 2010 to the 2012 budget] in housing benefit will adversely affect some of the most disadvantaged groups in society and are likely to lead to an increase in homelessness, warns the homeless charity Crisis.”
“The enactment of the Legal Aid, Punishment and Sentencing of Offenders Act 2012 (LASPO) has had widespread consequences for the provision of legal aid in the UK. One key feature of the new scheme, of particular importance to The Children’s Society, were the changes made to the eligibility criteria around legal aid for immigration cases. These changes saw unaccompanied and separated children removed from scope for legal aid unless their claim is for asylum, or if they have been identified as victims of child trafficking.”
“To fulfill its obligations under the UNCRC, the Government should reinstate legal aid for all unaccompanied and separated migrant children in matters of immigration by bringing it back within ‘scope’ under the Legal Aid, Sentencing and Punishment of Offenders Act 2012. Separated and unaccompanied children are super-vulnerable.”
“the number of public libraries and paid staff fall every year since 2010, with spending reduced by 12% in Britain in the last four years.” “We can view libraries as a bit of a canary in the coal mine for what is happening across the local government sector…” “There really needs to be some honest conversations about the direction of travel of our councils and what their role is, as the funding gap will continue to exacerbate these issues.”
No recourse to public funds: FSM and more
source: NRPF Network “No recourse to public funds (NRPF) is a condition imposed on someone due to their immigration status. Section 115 Immigration and Asylum Act 1999 states that a person will have ‘no recourse to public funds’ if they are ‘subject to immigration control’.”
“children only get the opportunity to apply for free school meals if their parents already receive certain benefits. This means that families who cannot access these benefits– because they have what is known as “no recourse to public funds” as a part of their immigration status– are left out from free school meal provision in England.”
“the reduction in hospitalisations at ages 5–11 saves the NHS approximately £5 million, about 0.4% of average annual spending on Sure Start. But the types of hospitalisations avoided – especially those for injuries – also have big lifetime costs both for the individual and the public purse”.
“Figures obtained by the All-Party Parliamentary Group (APPG) on Knife Crime show the average council has cut real-terms spending on youth services by 40% over the past three years. Some local authorities have reduced their spending – which funds services such as youth clubs and youth workers – by 91%.”
Barnardo’s Chief Executive Javed Khan said:
“These figures are alarming but sadly unsurprising. Taking away youth workers and safe spaces in the community contributes to a ‘poverty of hope’ among young people who see little or no chance of a positive future.”
In 1924 the Hibbert Journal published what is accepted as the first printed copy of a well-known joke.
A genial Irishman, cutting peat in the wilds of Connemara, was once asked by a pedestrian Englishman to direct him on his way to Letterfrack. With the wonted enthusiasm of his race the Irishman flung himself into the problem and, taking the wayfarer to the top of a hill commanding a wide prospect of bogs, lakes, and mountains, proceeded to give him, with more eloquence than precision, a copious account of the route to be taken. He then concluded as follows: ‘Tis the divil’s own country, sorr, to find your way in. But a gintleman with a face like your honour’s can’t miss the road; though, if it was meself that was going to Letterfrack, faith, I wouldn’t start from here.’
“The questions now being asked are whether you can protect learning at a time of national emergency? Can you truly connect educators working from home with their pupils?”
and he rightly noted that,
“One problem schools are now attempting to overcome is that many lack the infrastructure, experience and training to use digital resources to support a wholesale move to online teaching at short notice.”
He calls for “bold investment and co-ordination across Whitehall led by Downing Street to really set a sprint towards super-fast connectivity to schools, pupils’ homes and investment in actual devices for students. The Department for Education, too, has done much to think through our recent national edtech strategy – now it needs to own and explain it.”
But ‘own and explain it’ is the same problematic starting point that care.data had in the NHS in 2014. And we know how that went.
The edTech demands and drive for the UK are not a communications issue. Nor are they simply problems of infrastructure, or the age-old idea of shipping suitable tech at scale. The ‘fresh start’ isn’t going to be what anyone wants, least of all the edTech evangelists if we start from where they are.
The UK edTech strategy in effect avoided online learning, and the reasons for that were not made public but were likely well founded. Most edTech products are unevidenced, and what research is available often comes from the companies themselves, their partners, promoter think tanks, or related and self-interested bodies.
I’ve not seen anyone yet talk about the disadvantage and deprivation that comes from not issuing the standard curriculum textbooks to every child. Why on earth can secondary schools not afford to let each child take their textbook home? It is a darn sight cheaper than tech, independent of data costs, and a guide to exactly what the exams will demand. Should we not seek to champion the most appropriate and equitable learning solutions in addition to, rather than exclusively, the digital ones? The GCSE children I support(ed) in foreign languages each improved once they had written materials. Getting out Chromebooks, by contrast, simply interfered in the process and wasted valuable classroom time.
Technology can deliver the most vital communications at speed and scale. It can support admin, expand learning and level the playing field through accessible tools. But done wrongly, it makes things worse than having no technology at all.
Its procurement must assess any potentially harmful consequences and safeguard against them, and not accept short-term benefits at the cost of long-term harm. It should be safe, fair, and transparent.
“Responsible technology is no longer a nice thing to do to look good, it’s becoming a fundamental pillar of corporate business models. In a post-Cambridge Analytica world, consumers are demanding better technology and more transparency. Companies that do create those services are the ones that will have a better, brighter future.”
“the impacts of technology use on teaching and learning remain uncertain. Andreas Schleicher – the OECD’s director of education – caused some upset in 2015 when suggesting that ICT has negligible impact on classrooms. Yet he was simply voicing what many teachers have long known: good technology use in education is very tricky to pin down.”
That won’t stop edTech being a mainstay of the UK export strategy post-Brexit, whenever that may now be. But let’s be very clear: if the Department wants to be a world leader, it shouldn’t promote products whose founders were most recently notable for interviewing fellow students online about their porn preferences, or who are based in offshore organisations with very odd financial structures. Do your due diligence. Work with reputable people and organisations and build a trustworthy network of trustworthy products framed by the rule of law, rights-respecting and appropriate to children. But don’t start with the products.
Above all build a strategy for education, for administrative support, for respecting rights, and for teaching in which tools that may or may not be technology-based add value; but don’t start with the product promotion.
To date the aims are to serve two masters: our children’s education, and the UK edTech export strategy. You can serve both if you’re prepared to do the proper groundwork, but that groundwork is lacking right now. What is certain is that if you get it wrong for UK children, the export strategy will inevitably fail too.
COVID-19 must not be misused to direct our national edTech strategy. ‘I wouldn’t start from here’ isn’t a joke; it’s a national call for change.
Here are ten reasons where, why, and how to start instead.
1. The national edTech strategy board should start by demonstrating what it wants to see from others, with full transparency of its members, aims, terms of reference, partners and meeting minutes. There should be no need to use FOI to ask for them. Far more sensitive subjects manage to operate in the open. It unfortunately emulates other DfE strategy, and the UK edTech network, which has an in-crowd and long-standing controlling members. Both would be the richer for transparency and openness.
2. Stop bigging up the ‘Big Three’ and doing their market monopolisation for them, unless you want people to see you simply as promoting the products of your friends on the board/foundation/ethics committee. Yes, “many [educational settings] lack the infrastructure”, but that should never mean encouraging ownership and delivery by closed commercial partners alone. That is the route to losing control of your state education curriculum, staff training and (e)quality, its delivery, risk management, data, and cost control.
3. Start with designing for fairness in public sector systems. Minimum acceptable ethical standards could be framed around, for example, accessibility, design, and restrictions on commercial exploitation and in-product advertising. This needs to be in place first, before fitting products ‘on top’ of an existing unfair and imbalanced system, to avoid embedding disadvantage and the commodification of children in education even further.
5. Accessibility and Internet access are a social justice issue. Again, as we’ve argued at defenddigitalme for some time, these come *before* promoting products on top of the delivery systems:
Accessibility standards for all products used in state education should be defined and made compulsory in procurement processes, to ensure access for all and reduce digital exclusion.
All schools must be able to connect to high-speed broadband services to ensure equality of access and participation in the educational, economic, cultural and social opportunities of the world wide web.
6. Core national education infrastructure must be put on the national risk register, as we’ve argued previously at defenddigitalme (see 6.6). Dependencies such as MS Office 365, major cashless payment systems, and Google for Education all need to be assessed, as part of planning for both regular and exceptional delivery of education. We currently operate in the dark. And it should be unthinkable that companies get seats at the national UK edTech strategy table without full transparency over questions on their practices, policies and meeting the rule of law.
8. Start with teacher training. Why on earth is the national strategy all about products, when it should be starting with people?
Introduce data protection and pupil privacy into basic teacher training, to support a rights-respecting environment in policy and practice, using edTech and broader data processing, to give staff the clarity, consistency and confidence in applying the high standards they need.
Ensure ongoing training is available and accessible to all staff for continuous professional development.
A focus on people, not products, will deliver the fundamental basics needed for good tech use.
9. Safe data by design and default. I’m tired of hearing from CEOs of companies that claim to be social entrepreneurs, or non-profit, or teachers who’ve designed apps, how well intentioned their products are. Show me instead. Meet the requirements of the rule of law.
Local systems must stop shipping out (often sensitive) pupil data at scale and speed to companies, and instead stay in control of terms and conditions and data purposes, and ban uses such as product development.
Companies must stop using pupil data for their own purposes for profit, or to make inferences about autism or dyslexia, for example; if that is not the stated product aim, it is likely unlawful.
Establish fair and independent oversight mechanisms for national pupil data, so that transparency and trust are consistently maintained across the public sector and throughout the chain of data use, from collection to the end of its life cycle, including annual data usage reports for each child (a minimal sketch of what one report entry might contain follows below).
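Purely as an illustration of the kind of annual data usage report just mentioned, and not anything the strategy or any existing system specifies, here is a minimal sketch of what a single report entry could contain. Every field name here is my own assumption.

```python
from dataclasses import dataclass, field
from datetime import date
from typing import List, Optional

@dataclass
class PupilDataUsageEntry:
    """One disclosure of a pupil's data, as it might appear in a
    hypothetical annual data usage report. Illustrative only."""
    recipient: str                      # organisation that received the data
    purpose: str                        # stated purpose of the disclosure
    lawful_basis: str                   # e.g. "public task", "consent"
    data_items: List[str] = field(default_factory=list)   # categories shared
    date_shared: Optional[date] = None
    retention_until: Optional[date] = None  # when the recipient should delete it

# Example: one line of a child's annual report, with made-up details
entry = PupilDataUsageEntry(
    recipient="Example EdTech Ltd",
    purpose="homework platform account",
    lawful_basis="public task",
    data_items=["name", "year group", "class"],
    date_shared=date(2020, 1, 15),
    retention_until=date(2021, 1, 15),
)
```

Even this much, published per child per year, would let families see the chain of data use that is currently invisible to them.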
10. We need a law that works for children’s rights. Develop a legislative framework for the fair use of a child’s digital footprint from the classroom, for direct educational and administrative purposes at local level, including commercial acceptable use policies. Build the national edTech strategy on a rights-based framework and lawful basis in an Education and Privacy Act. Without this, you are building on sand.
Here are some thoughts about the Prevent programme, after the half day I spent this week at the event Youth Empowerment and Addressing Violent Youth Radicalisation in Europe.
Firstly, I appreciated the dynamic and interesting youth panel: young people themselves involved in youth work, or early-career researchers on a range of topics. Panellists shared their thoughts on:
Removal of gang databases and systemic racial targeting
Questions over online content takedown with the general assumption that “someone’s got to do it.”
The purposes of Religious Education, and the lack of religious understanding as a cause of prejudice, discrimination, and fear.
From these connections comes trust.
Next, Simon Chambers, from the British Council, the UK National Youth Agency, and Erasmus UK, talked about the Erasmus Plus programme, under the striking sub-theme, ‘from these connections comes trust’.
42% of the world’s population are under 25
Young people understand that there are wider, underlying complex factors in this area and are disproportionately affected by conflict, economic change and environmental disaster.
Many young people struggle to access education and decent work.
Young people everywhere can feel unheard and excluded from decision-making — their experience leads to disaffection and grievance, and sometimes to conflict.
We then heard a senior Home Office presenter speak about Radicalisation: the threat, drivers and Prevent programme.
On CONTEST 2018: Prevent / Pursue / Protect and Prepare
What was perhaps most surprising was his statement that the programme believes there is no checklist [though in reality there are checklists], no single profile, and no conveyor belt towards radicalisation.
“This shouldn’t be seen as some sort of predictive model,” he said. “It is not accurate to say that somehow we can predict who is going to become a terrorist, because they’ve got poor education levels, or because [they] necessarily have a deprived background.”
But he then went on to again highlight the list of identified vulnerabilities in Thomas Mair‘s life, which suggests that these characteristics are indeed seen as indicators.
When I look at the ‘safeguarding-in-school’ software that is using vulnerabilities as signals for exactly that kind of prediction of intent, the gap between theory and practice here, is deeply problematic.
One slide covered Internet content takedowns, and suggested 300K pieces of illegal terrorist material have been removed since February 2010. He later suggested that the number counts contacts with the CTIRU, rather than content removals in any defined form (for example, it isn’t clear whether an item is a picture, a page, or a whole site). This is still somewhat unclear, and important open questions remain, given the focus on takedowns in the online harms policy and discussion.
The big gap that was not discussed, and that I believe matters, is how much autonomy teachers have, for example, to make a referral. He suggested “some teachers may feel confident” to do what is needed on their own, while others “may need help” and therefore make a referral. Statistics on those decision processes are missing, and I believe it is very likely that over-referral is in part a result of fearing that non-referral, once a computer has tagged an issue as Prevent-related, would be seen as negligent, or as failing to meet the statutory Prevent duty as it applies to schools.
On the Prevent Review, he suggested that the current timeline of August 2020 still stands, even though there is currently no Reviewer. It is for Ministers to decide who will replace Lord Carlile.
Safeguarding children and young people from radicalisation
Mark Chalmers of Westminster City Council then spoke about ‘safeguarding children and young people from radicalisation.’
He started off with a profile of the local authority: its demographics, poverty and wealth, migrant turnover, and proportion of non-English-speaking households. This in itself may seem indicative of deliberate or unconscious bias.
He suggested that Prevent is not a security response, and expects that the policing role in Prevent will be reduced over time, as more is taken over by Local Authority staff and the public services. [Note: this seems inevitable after the changes in the 2019 Counter Terrorism Act, which enabled local authorities, as well as the police, to refer persons at risk of being drawn into terrorism to local Channel panels. Whether this should have happened at all was not consulted on, as far as I know.] The claim that Prevent is not a security response appears rather different in practice, when Local Authorities refuse FOI questions on the basis of the security exemption in Section 24(1) of the FOI Act.
Both speakers declined to accept my suggestion that Prevent and Channel are not consensual. Participation in the programme, they were adamant, is voluntary and confidential. The reality is that children do not feel they can make a freely given, informed choice, in the face of an authority and the severity of the referral. They also do not understand where their records go, how confidential they really are, or how long they are kept and why.
The recently concluded legal case, and the lengths one individual had to go to in order to remove their personal record from the national Prevent database, show just how problematic the authorities’ mistaken perception of a consensual programme is.
I knew nothing of the Prevent programme at all in 2015. I only began to hear about it once I started mapping the data flows into, across and out of the state education sector, and teachers started coming to me with stories from their schools.
I found it fascinating to hear from speakers at the conference who are so embedded in the programme. They seem unable to see it objectively, or to accept others’ critical points of view as valid. Perhaps that stems from the luxury and privilege of believing that you yourself will be unaffected by its consequences.
“Yes,” said O’Brien, “we can turn it off. We have that privilege” (1984)
There was no ground given at all for accepting that there are deep flaws in practice. That in fact ‘Prevent is having the opposite of its intended effect: by dividing, stigmatising and alienating segments of the population, Prevent could end up promoting extremism, rather than countering it’, as concluded in the 2016 report Preventing Education: Human Rights and Countering terrorism in UK Schools by Rights Watch UK.
Mark Chalmers’ conclusion was to suggest that Prevent may not always take its current form of a bolt-on ‘big programme’, and would instead become just another strand of child protection, like FGM. That would mean every public sector worker becomes an extended arm of Home Office policy, expected to act in counter-terrorism efforts.
But the training, the nuance, the level of application of autonomy that the speakers believe exists in staff and in children is imagined. The trust between authorities and people who need shelter, safety, medical care or schooling must be upheld for the public good.
No one asked if and how children should be seen through the lens of terrorism, extremism and radicalisation at all. No one asked if and how every child should be able to be surveilled online by school-imposed software, with covert photos taken through the webcam, in the name of children’s safeguarding. Or labelled in school as associated with ‘terrorist.’ What happens when that prevents trust, and who measures its harm?
Far too little is known about who makes decisions about the lives of others and how, about the criteria for defining inappropriate activity or referrals, and about the opaque decisions made on online content.
What effects will the Prevent programme have on our current and future society, where everyone is expected to surveil and inform upon each other? Failure to do so, to uphold the Prevent duty, becomes civic failure. How are curiosity and intent separated? How do we safeguard children from risk (which is not harm) and protect their childhood experiences, their free and full development of self?
No one wants children to be caught up in activities or radicalisation into terror groups. But is this the correct way to solve it?
“The research provides new evidence that by attempting to profile and predict violent youth radicalisation, we may in fact be breeding the very reasons that lead those at risk to violent acts.” (Professor Theo Gavrielides).
Current case studies of lived experience, and history, also say it is mistaken. Prevent, when it comes to children and schools, needs massive reform at the very least, but those most in favour of how it works today aren’t the ones who can be involved in its reshaping.
“Who denounced you?” said Winston.
“It was my little daughter,” said Parsons with a sort of doleful pride. “She listened at the keyhole. Heard what I was saying, and nipped off to the patrols the very next day. Pretty smart for a nipper of seven, eh? I don’t bear her any grudge for it. In fact I’m proud of her. It shows I brought her up in the right spirit, anyway.” (1984).
The event was the launch of the European study on violent youth radicalisation from YEIP: The project investigated the attitudes and knowledge of young Europeans, youth workers and other practitioners, while testing tools for addressing the phenomenon through positive psychology and the application of the Good Lives Model.
Its findings include that young people at risk of violent radicalisation are “managed” by the existing justice system as “risks”. This creates further alienation and division, while recidivism rates continue to spiral.
I’ve been thinking about FAT (fairness, accountability, and transparency), and the explainability of decision-making.
There may be few decisions about people at scale in the public sector today in which computer-stored data isn’t used. For some, computers are used to make, or help make, the decisions.
How we understand those decisions is a vital part of the obligation of fairness in data processing: how I know that *you* have data about me, and are processing it, in order to make a decision that affects me. There is an awful lot of good that can come from that. The staff member does their job with better understanding. The person affected has an opportunity to question, and if necessary correct, the inputs to the decision. And one hopes that the computer support can make many decisions faster, and with more information used in useful ways, than the human staff member alone.
But why, then, does it seem so hard to get this understood, and to get processes in place that make the decision-making understandable?
And more importantly, why does there seem to be no consistency in how such decision-making is documented, and communicated?
From school progress measures, to PIP and Universal Credit applications, to predictive ‘risk scores’ for identifying gang membership and child abuse. In a world where you need to be computer literate but there may be no computer to help you make an application, the computers behind the scenes are making millions of life changing decisions.
We cannot see them happen, and often don’t see the data that goes into them. From start to finish, it is a hidden process.
The current focus on FAT (fairness, accountability, and transparency of algorithmic systems) often makes accountability for the computer’s part of public sector decision-making appear to be something that has become too hard to solve and needs complex new thinking.
I want conversations to go back to something more simple. Humans taking responsibility for their actions. And to do so, we need better infrastructure for whole process delivery, where it involves decision making, in public services.
Academics, boards, conferences, are all spending time on how to make the impact of the algorithms fair, accountable, and transparent. But in the search for ways to explain legal and ethical models of fairness, and to explain the mathematics and logic behind algorithmic systems and machine learning, we’ve lost sight of why anyone needs to know. Who cares and why?
Rather in the same way that the concept of ethics has become captured and distorted by companies to suit their own agenda, so if anything, the focus on FAT has undermined the concept of whole process audit and responsibility for human choices, decisions, and actions.
The effect of a machine-made decision on those who are included in the system response (and, more rarely, on those who may be left out of it, or its community effects) has been singled out for a lot of people’s funding and attention as what matters to understand and audit in the use of data for making safe and just decisions.
It’s right to do so, but not as a stand-alone cog in the machine.
The computer and its data processing have been unjustifiably deified. Rather than being supported by them, public sector staff are disempowered in the process as a whole. It is assumed the computer knows best, and it can be used to justify a poor decision: “well, what could I do, the data told me to do it?” is rather like “it was not my job to pick up the fax from the fax machine.” That is not a position we should encourage.
We have become far too accommodating of this automated helplessness.
If society feels a need to take back control, as a country and of our own lives, we also need to see decision makers take back responsibility.
The focus on FAT emphasises the legal and ethical obligations on companies and organisations, to be accountable for what the computer says, and the narrow algorithmic decision(s) in it. But it is rare that an outcome in most things in real life, is the result of a singular decision.
So does FAT fit these systems at all?
Do I qualify for PIP? Can your child meet the criteria needed for additional help at school? Does the system tag your child as part of a ‘Troubled Family’? These outcomes are life-affecting in the public sector. It should therefore be possible to audit *if* and *how* the public sector offers to change lives, as a holistic process.
That means looking again at whether and how we audit the whole end-to-end process: from policy idea, to legislation, through design, to delivery.
There are no simple, clean, machine readable results in that.
Yet here again, the current system-process-solution encourages the public sector to use *data* to assess and incentivise the very process being measured, and to award success and failure, packaged into surveys and payment-by-results.
Data-driven measurement assesses data-driven processes, compounding the problems of this infinite human-out-of-the-loop.
This clean laser-like focus misses out on the messy complexity of our human lives. And the complexity of public service provision makes it very hard to understand the process of delivery. As long as the end-to-end system remains weighted to self preservation, to minimise financial risk to the institution for example, or to find a targeted number of interventions, people will be treated unfairly.
Through a hyper-focus on algorithms and computer-led decision accountability, the tech sector, academics and everyone involved are complicit in a debate that should be about human failure. We already have algorithms in every decision process, both human and machine-led. Before we decide whether we need a new process for fairness, accountability and transparency, we should know who is responsible now for the outcomes and failures in any given activity, and ask, ‘Does it really need to change?’
To redress some of the power imbalance on decisions about us made by authorities today, we urgently need public bodies to compile, publish and maintain, at the very minimum, some of the basic underpinning and auditable infrastructure (the ‘plumbing’) inside these processes (a minimal sketch of what one register entry might look like follows the list):
a register of data analytics systems used by Local and Central Government, including but not only those where algorithmic decision-making affects individuals.
a register of data sources used in those analytics systems.
a consistently identifiable and searchable taxonomy of the companies and third-parties delivering those analytics systems.
a diagrammatic mapping of core public service delivery activities, to understand the tasks, roles, and responsibilities within the process. It would benefit government at all levels to be able to see for themselves where decision points sit, to understand flows of data and cash, and to see which law supports each task and where accountability sits.
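None of this needs sophisticated technology. As a minimal sketch only, assuming nothing about any existing government schema, an entry in a register of data analytics systems could be as simple as the structure below; every field name is my own hypothetical choice.

```python
from dataclasses import dataclass, field
from typing import List, Optional

@dataclass
class AnalyticsSystemRegisterEntry:
    """One entry in a hypothetical public register of data analytics
    systems used by local or central government. Illustrative only."""
    public_body: str                   # the council or department using it
    system_name: str                   # product or project name
    supplier: str                      # company or third party delivering it
    purpose: str                       # what decisions the system supports
    affects_individuals: bool          # does its output feed decisions about people?
    data_sources: List[str] = field(default_factory=list)  # cross-reference to a data source register
    lawful_basis: Optional[str] = None # statutory or other basis relied on
    human_review: bool = True          # is a human decision-maker in the loop?

# Example entry, with entirely made-up details
example = AnalyticsSystemRegisterEntry(
    public_body="Anytown Borough Council",
    system_name="Family Risk Scoring Pilot",
    supplier="Example Analytics Ltd",
    purpose="prioritising early-help referrals",
    affects_individuals=True,
    data_sources=["housing benefit records", "school attendance data"],
    lawful_basis="public task",
    human_review=True,
)
```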
Why? Because without knowing what is being used at scale, how and by whom, we are poorly informed and stay helpless. It allows for enormous and often unseen risks without adequate checks and balances — like named records with the sexual orientation data of almost 3.2 million people, and religious belief data on 3.7 million sitting in multiple distributed databases — and with the massive potential for state-wide abuse by any current or future government. And the responsibility for each part of a process remains unclear.
We need to make increasingly lean systems fatter again, and stuff them with people power. Yes, we need fairness, accountability and transparency. But we need those human qualities to reach beyond the computer code. We need to restore humanity to automated systems, and it has to be reinstated across whole processes.
FAT focused only on computer decisions is a distraction from auditing the failure to deliver systems that work for people. It is a failure to manage change, a failure of governance, and a failure to be accountable when things go wrong.
What happens when FAT fails? Who cares and what do they do?
What would it mean for you to trust an Internet connected product or service and why would you not?
What has damaged consumer trust in products and services and why do sellers care?
What do we want to see different from today, and what is necessary to bring about that change?
These three pairs of questions implicitly underpinned the intense day of #iotmark discussion at the London Zoo last Friday.
The questions went unasked, and could have been voiced before we started, although they were probably assumed to be self-evident:
Why do you want one at all [define the problem]?
What needs to change and why [define the future model]?
How do you deliver that and for whom [set out the solution]?
If a group does not agree on the need and drivers for change, there will be no consensus on what that should look like, what the gap is to achieve it, and even less on making it happen.
So who do you want the trustmark to be for, why will anyone want it, and what will need to change to deliver the aims? No one wants a trustmark per se. Perhaps you want what values or promises it embodies to demonstrate what you stand for, promote good practice, and generate consumer trust. To generate trust, you must be seen to be trustworthy. Will the principles deliver on those goals?
The Open IoT Certification Mark Principles, in rough draft, were the outcome of the day, and are available online.
Here are my reflections, including what was missing on privacy, and the potential for it to be considered in future.
I’ve structured the first part, at around 1,000 words of lists and bullet points, assuming readers attended the event. The background comes after that, for anyone interested in a longer read.
Many thanks upfront, to fellow participants, to the organisers Alexandra D-S and Usman Haque and the colleague who hosted at the London Zoo. And Usman’s Mum. I hope there will be more constructive work to follow, and that there is space for civil society to play a supporting role and critical friend.
The mark didn’t aim to fix the IoT in a day, but deliver something better for product and service users, by those IoT companies and providers who want to sign up. Here is what I took away.
I learned three things
A sense of privacy is not homogenous, even within people who like and care about privacy in theoretical and applied ways. (I very much look forward to reading suggestions promised by fellow participants, even if enforced personal openness and ‘watching the watchers’ may mean ‘privacy is theft‘.)
Awareness of current data protection regulations needs to be improved in the field. For example, Subject Access Requests already apply to all data controllers, public and private. Few have read the GDPR, or the e-Privacy directive, despite their importance for security measures in personal devices relevant to the IoT.
I truly love working on this stuff, with people who care.
And it reaffirmed things I already knew
Change is hard, no matter in what field.
People working together towards a common goal is brilliant.
Group collaboration can create some brilliantly sharp ideas. Group compromise can blunt them.
Some men are particularly bad at talking over each other, never mind over the women in the conversation. Women notice more. (Note to self: When discussion is passionate, it’s hard to hold back in my own enthusiasm and not do the same myself. To fix.)
The IoT context, and the risks within it, are not homogenous; it brings new risks and adversaries. The risks for manufacturers, for consumers and for the rest of the public are different, and cannot be easily solved with a one-size-fits-all solution. But we can try.
Concerns I came away with
If the citizen / customer / individual is to benefit from the IoT trustmark, they must be put first, ahead of companies’ wants.
If the IoT group controls the design, the assessment of adherence, and the definition of success, how objective will it be?
The group was not sufficiently diverse and, as a result, reflected too little on the risks and impact of the lack of diversity in design and effect, and on the implications of dataveillance.
Critical minority thoughts, although welcomed, were stripped out of the crowdsourced first draft principles in compromise.
More future thinking should be built in, to keep the principles robust over time.
What was missing
There was too little discussion of privacy in perhaps the most important context of the IoT: interconnectivity and new adversaries. It’s not only about *your* thing, but about the things it speaks to and interacts with, those of friends and passersby, the cityscape, and other individual and state actors interested in offence and defence. While we started to discuss it, we did not have the opportunity to go into sufficient depth to get that thinking into applicable solutions in the principles.
One of the greatest risks users face is the ubiquitous collection and storage of data about them that reveals detailed, interconnected patterns of behaviour and identity, without their seeing how that data is used by companies behind the scenes.
What we also missed discussing is not what we see as necessary today, but what we can foresee as necessary in the near future: brainstorming and crowdsourcing horizon scanning for market needs and changing stakeholder wants.
Future thinking
Here are the areas of future thinking that smart work on the IoT mark could consider.
We are moving towards ever greater requirements to declare identity to use a product or service, to register and log in to use anything at all. How will that change trust in IoT devices?
Single identity sign-on is becoming ever more imposed, and any attempt to present multiple versions of who I am, by choice and depending on context, is therefore restricted. [Not all users want to use the same social media credentials for online shopping, for their child’s school app, and for their weekend entertainment.]
Is this imposition what the public wants or what companies sell us as what customers want in the name of convenience? What I believe the public would really want is the choice to do neither.
There is increasingly no private space or time, at places of work.
Limitations on private space are encroaching in secret in all public city spaces. How will ‘handoffs’ affect privacy in the IoT?
There is too little understanding of the social effects of this connectedness and knowledge created, embedded in design.
What effects may there be on the perception of the IoT as a whole, if predictive data analysis and complex machine learning and AI hidden in black boxes becomes more commonplace and not every company wants to be or can be open-by-design?
Ubiquitous collection and storage of data about users that reveals detailed, interconnected patterns of behaviour and identity needs greater commitments to disclosure. Where the hand-offs are to other devices, and to whatever else is in the surrounding ecosystem, who has responsibility for communicating that interaction through privacy notices, or for defining legitimate interests, when the joined-up data may be much more revealing than the stand-alone data in each silo?
Define with greater clarity the privacy threat models for different groups of stakeholders and address the principles for each.
What would better look like?
The draft privacy principles are a start, but they’re not yet aspirational as I would have hoped. Of course the principles will only be adopted if possible, practical and by those who choose to. But where is the differentiator from what everyone is required to do, and better than the bare minimum? How will you sell this to consumers as new? How would you like your child to be treated?
The wording in these 5 bullet points, is the first crowdsourced starting point.
The supplier of this product or service MUST be General Data Protection Regulation (GDPR) compliant.
This product SHALL NOT disclose data to third parties without my knowledge.
I SHOULD get full access to all the data collected about me.
I MAY operate this device without connecting to the internet.
My data SHALL NOT be used for profiling, marketing or advertising without transparent disclosure.
Yes, other points that came under security address some of the crossover between privacy and surveillance risks, but there is as yet little of substance that is aspirational enough to make the IoT mark a real differentiator in terms of privacy. An opportunity remains.
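To make the point that these five statements could become more than words, here is a hedged sketch of how a supplier might publish them as a machine-readable self-declaration alongside a product. Nothing like this exists in the draft principles; the structure, field names and product are entirely my own assumptions.

```python
import json

# A hypothetical self-declaration a supplier could publish for one product,
# mirroring the five crowdsourced privacy points above. Illustrative only.
privacy_declaration = {
    "product": "Example Smart Kettle",                       # made-up product
    "gdpr_compliant": True,                                   # MUST be GDPR compliant
    "discloses_to_third_parties_without_knowledge": False,    # SHALL NOT disclose without my knowledge
    "full_subject_access_supported": True,                    # I SHOULD get full access to my data
    "operates_without_internet": True,                        # I MAY use it without connecting to the internet
    "profiling_or_ads_without_disclosure": False,             # SHALL NOT profile or advertise without transparent disclosure
}

print(json.dumps(privacy_declaration, indent=2))
```

A published declaration of this kind would at least be something consumers and reviewers could test against a product’s actual behaviour.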
It was this aspiration, and how young people perceive privacy, that I hoped to bring to the table. Because if manufacturers are serious about future success, they cannot ignore today’s children and how they feel. How you treat them today will shape future purchasers and their purchasing, and there is evidence you are getting it wrong.
The timing is good in that it now also offers the opportunity to promote consistent understanding, and embed the language of GDPR and ePrivacy regulations into consistent and compatible language in policy and practice in the #IoTmark principles.
User rights I would like to see considered
These are some of the points I would think privacy by design would mean. This would better articulate GDPR Article 25 to consumers.
Data sovereignty is a good concept and I believe should be considered for inclusion in explanatory blurb before any agreed privacy principles.
Goods should be ‘dumb* by default’ until the smart functionality is switched on. [*As our group chair/scribe called it.] I would describe this as, “off is the default setting out-of-the-box”.
Privacy by design. Deniability by default. That is, not only after opt-out: a company should not access the personal or identifying purchase data of anyone who opts out of data collection about their product/service use during the set-up process. (A sketch of what these defaults could look like follows this list.)
The right to opt out of data collection at a later date while continuing to use services.
A right to object to the sale or transfer of behavioural data, including to third-party ad networks and absolute opt-in on company transfer of ownership.
A requirement that advertising should be targeted to content, [user bought fridge A] not through jigsaw data held on users by the company [how user uses fridge A, B, C and related behaviour].
An absolute rejection of using gathered children’s personal data to target advertising and marketing at children.
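As a sketch of what ‘dumb by default’ and ‘deniability by default’ could mean in practice, here is one way a device’s data settings might be modelled so that everything is off out of the box and only changes on an explicit, revocable opt-in. This is my own illustration under those assumptions, not part of the draft principles.

```python
from dataclasses import dataclass

@dataclass
class DeviceDataSettings:
    """Out-of-the-box state: no smart features, no data collection.
    Everything below must be switched on by an explicit user action,
    and can be switched off again without losing core function."""
    smart_features_enabled: bool = False    # 'dumb by default' until switched on
    usage_data_collection: bool = False     # no behavioural data gathered
    share_with_third_parties: bool = False  # no onward transfer
    targeted_advertising: bool = False      # never on for children's data

    def opt_in_smart_features(self) -> None:
        # Turning on smart features does not silently turn on data sharing.
        self.smart_features_enabled = True

    def opt_out_all_data_collection(self) -> None:
        # The user can withdraw later and keep using the basic device.
        self.usage_data_collection = False
        self.share_with_third_parties = False
        self.targeted_advertising = False

settings = DeviceDataSettings()       # factory state: everything off
settings.opt_in_smart_features()      # explicit, single-purpose opt-in
```

The design choice illustrated is that opting in to smart functionality never silently opts the user in to data sharing, and opting out later does not render the device unusable.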
Background: Starting points before privacy
After a brief recap on 5 years ago, we heard two talks.
The first was a presentation from Bosch. They used the insights from the IoT open definition of five years ago in their IoT thinking and embedded it in their brand book. The presenter suggested that in five years’ time, every fridge Bosch sells will be ‘smart’. The second was a fascinating presentation of both EU thinking and the intellectual nudge to go beyond the practical and ask what kind of society we want to see using the IoT in future: hints of hardcore ethics and philosophy that made my brain fizz, from Gerald Santucci, soon to retire from the European Commission.
The principles of open sourcing, manufacturing, and sustainable life cycle were debated in the afternoon with intense arguments and clearly knowledgeable participants, including those who were quiet. But while the group had assigned security, and started work on it weeks before, there was no one pre-assigned to privacy. For me, that said something. If they are serious about those who earn the trustmark being better for customers than their competition, then there needs to be greater emphasis on thinking like their customers, and by their customers, and what use the mark will be to customers, not companies. Plan early public engagement and testing into the design of this IoT mark, and make that testing open and diverse.
To that end, I believe it needed to be articulated more strongly, that sustainable public trust is the primary goal of the principles.
Trust that my device will not become unusable or worthless through updates or lack of them.
Trust that my device is manufactured safely and ethically and with thought given to end of life and the environment.
Trust that my device’s source components are of a high standard.
Trust in what data is gathered by the manufacturers, and how that data is used.
Fundamental to ‘smart’ devices is their connection to the Internet, so the last point, for me, is key to successful public perception and to the mark actually making a difference beyond its PR value to companies. The value-add must be measured from the consumers’ point of view.
All the openness about design functions and practice improvements, without attempting to change privacy infringing practices, may be wasted effort. Why? Because the perceived benefit of the value of the mark, will be proportionate to what risks it is seen to mitigate.
Why?
Because I assume that you know where your source components come from today. I was shocked to find out that not all do, and that knowing suppliers ‘one degree removed’ would count as an improvement. Holy cow, I thought. What about regulatory requirements for product safety recalls? These differ of course between product areas, but I was still surprised. Having worked in global fast-moving consumer goods (FMCG) and the food industry, in semiconductors and optoelectronics, and in medical devices, it was self-evident to me that sourcing is rigorous. So the new requirement to know suppliers one degree removed was a suggested minimum. But it might shock consumers to know there is not usually more by default.
Customers also believe they have reasonable expectations: of not being screwed by a product update, or being left with something that does not work because of its computing-based components. The public can take vocal, reputation-damaging action when they are let down.
While those failures are visible, the full extent of the overreach of company market and product surveillance into our whole lives, not just our living rooms, is yet to be understood by the general population. What will happen when it is?
The Internet of Things is exacerbating the power imbalance between consumers and companies, between government and citizens. As Wendy Grossman wrote recently, in one sense this may make privacy advocates’ jobs easier. It was always hard to explain why “privacy” mattered. Power, people understand.
That public discussion is long overdue. If open principles on IoT devices mean that the signed-up companies differentiate themselves by becoming market leaders in transparency, it will be a great thing. Companies need to offer full disclosure of data use in any privacy notices in clear, plain language under GDPR anyway, but to go beyond that, and offer customers fair presentation of both risks and customer benefits, will not only be a point-of-sales benefit, but potentially improve digital literacy in customers too.
The morning discussion touched quite often on pay-for-privacy models. While product makers may see this as offering a good thing, I strove to bring discussion back to first principles.
Privacy is a human right. There can be no ethical model of discrimination based on any non-consensual invasion of privacy. Privacy is not something I should pay to have. You should not design products that reduce my rights. GDPR requires privacy-by-design and data protection by default. Now is that chance for IoT manufacturers to lead that shift towards higher standards.
We also need a new ethics thinking on acceptable fair use. It won’t change overnight, and perfect may be the enemy of better. But it’s not a battle that companies should think consumers have lost. Human rights and information security should not be on the battlefield at all in the war to win customer loyalty. Now is the time to do better, to be better, demand better for us and in particular, for our children.
Privacy will be a genuine market differentiator
If manufacturers do not want to change their approach to exploiting customer data, they are unlikely to be seen to have changed.
Today, the feelings that people in the US and Europe report in surveys are loss of empowerment, helplessness, and feeling used. That will shift to shock, resentment and, as any change curve would predict, anger.
“The poll of just over two thousand British adults carried out by Ipsos MORI found that the media, internet services such as social media and search engines and telecommunication companies were the least trusted to use personal data appropriately.” [2014, Data trust deficit with lessons for policymakers, Royal Statistical Society]
Among the British student population, one 2015 survey of university applicants in England, with 37,000 respondents, found that the vast majority of UCAS applicants agree that sharing personal data can benefit them and support public-benefit research into university admissions, but they want to stay firmly in control. 90% of respondents said they wanted to be asked for their consent before their personal data is provided outside of the admissions service.
In 2010, a multi method model of research with young people aged 14-18, by the Royal Society of Engineering, found that, “despite their openness to social networking, the Facebook generation have real concerns about the privacy of their medical records.” [2010, Privacy and Prejudice, RAE, Wellcome]
When people use privacy settings on Facebook set to maximum, they believe they get privacy, and understand little of what that means behind the scenes.
Are there tools designed by others, like the licences from Projects by IF, and other ways this could be done, that you’re not even considering yet?
What if you don’t do it?
“But do you feel like you have privacy today?” I was asked the question in the afternoon. How do people feel today, and does it matter? Companies exploiting consumer data and getting caught doing things the public don’t expect with their data, has repeatedly damaged consumer trust. Data breaches and lack of information security have damaged consumer trust. Both cause reputational harm. Damage to reputation can harm customer loyalty. Damage to customer loyalty costs sales, profit and upsets the Board.
Where overreach into our living rooms has raised awareness of invasive data collection, we have yet to see and understand the invasion of privacy into our thinking and nudged behaviour, into our perception of the world on social media, and the effects on decision making that data analytics enables as data show companies 'how we think', granting companies access to human minds in the abstract, even before Facebook is there in the flesh.
Governments want to see how we think too. Is thought crime really that far away, given database labels of 'domestic extremists' for activists and anti-fracking campaigners, or the growing weight of policy makers' attention given to PredPol, predictive analytics, the [formerly] Cabinet Office Nudge Unit, Google DeepMind et al?
Had the internet remained decentralized the debate may be different.
I am starting to think of the IoT not as the Internet of Things, but as the Internet of Tracking. If some have their way, it will be the Internet of Thinking.
In our centralised Internet of Things model, our personal data from human interactions have become the network infrastructure, and the data flows are controlled by others. Our brains are the new data servers.
In the Internet of Tracking, people become the end nodes, not things.
And this is where future users will be so important. Do you understand and plan for the factors that will drive push-back and a crash in consumer confidence in your products, and do you take them seriously?
Companies have a choice: to act as Empires would – multinationals, joining up even at low levels, disempowering individuals and sucking knowledge and power to the centre. Or they can act as nation states, ensuring citizens keep their sovereignty and control over a selected sense of self.
Look at Brexit. Look at the GE2017. Tell me, what do you see is the direction of travel? Companies can fight it, but will not defeat how people feel. No matter how much they hope ‘nudge’ and predictive analytics might give them this power, the people can take back control.
What might this desire to take-back-control mean for future consumer models? The afternoon discussion whilst intense, reached fairly simplistic concluding statements on privacy. We could have done with at least another hour.
Some in the group were frustrated “we seem to be going backwards” in current approaches to privacy and with GDPR.
But if the current legislation is reactive because companies have misbehaved, how will that be rectified for the future? The challenge in the IoT, both in terms of security and privacy, AND in terms of public perception and reputation management, is that you are dependent on the behaviours of the network and those around you. Good and bad. And bad practices by one can endanger others, in all senses.
If you believe that is going back to reclaim a growing sense of citizens’ rights, rather than accepting companies have the outsourced power to control the rights of others, that may be true.
A first-principles question was asked: is any element on privacy needed at all, if the text simply states that the supplier of this product or service must be General Data Protection Regulation (GDPR) compliant? The GDPR was years in the making, after all. Does privacy matter more in the IoT, and in what ways? The room tended, understandably, to talk about it from the company perspective: "we can't", "won't", "that would stop us from XYZ". Privacy would, however, be better addressed from the personal point of view.
What do people want?
From the company point of view, the language is different and holds clues. Openness, control, user choice and pay-for-privacy are not the same thing as the basic human right to be left alone. The afternoon discussion reminded me of the 2014 WaPo article discussing Mark Zuckerberg's theory of privacy and a Palo Alto meeting at Facebook:
“Not one person ever uttered the word “privacy” in their responses to us. Instead, they talked about “user control” or “user options” or promoted the “openness of the platform.” It was as if a memo had been circulated that morning instructing them never to use the word “privacy.””
In the afternoon working group on privacy, there was robust discussion of whether we had consensus on what privacy even means. Words like autonomy, control, and choice came up a lot. But it was only a beginning. There is opportunity for better. An academic voice raised the concept of sovereignty, with which I agreed, but working out how and where to fit it into wording that is at once both minimal and applied, under a scribe who appeared frustrated and wanted a completely different approach from what he heard across the group, meant it was left out.
This group do care about privacy. But I wasn't convinced that the room cared in the way that the public as a whole does, rather than only as consumers and customers do. But IoT products will affect potentially everyone, even those who do not buy your stuff. Everyone in that room agreed on one thing: the status quo is not good enough. What we did not agree on was why, and what minimum change is needed to make enough of a difference to matter.
I share the deep concerns of many child rights academics who foresee the harm that efforts to avoid the restrictions imposed by Article 8 of the GDPR will cause. It is likely to damage children's right to access information, to discriminate according to parents' prejudices or socio-economic status, and to encourage 'cheating' – requiring secrecy rather than privacy, in attempts to hide from or work around the stringent system.
In ‘The Class’ the research showed, ” teachers and young people have a lot invested in keeping their spheres of interest and identity separate, under their autonomous control, and away from the scrutiny of each other.” [2016, Livingstone and Sefton-Green, p235]
Employers require staff to use devices with single sign-on, including web and activity tracking and monitoring software. Employee personal data and employment data are blended. Who owns that data, what rights will employees have to refuse what they see as excessive, and is it manageable given the power imbalance between employer and employee?
What is this doing in the classroom and boardroom for stress, anxiety, performance and system and social avoidance strategies?
A desire for convenience creates shortcuts, and these are often met using systems that require sign-on through the platform giants: Google, Facebook, Twitter, et al. But we are kept in the dark about how using these platforms gives them, and the companies behind them, access to see how our online and offline activity is all joined up.
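A small sketch of why those sign-on shortcuts are so revealing, assuming a standard OAuth 2.0 'Sign in with…' flow; the client_id and redirect_uri below are hypothetical placeholders, not a real registration.

```python
# Illustrative only: the kind of authorisation request a 'Sign in with…'
# button sends the user to. The client_id names the third-party site, so
# the identity provider is told which service you are signing in to, and
# when, every time you use the shortcut.
from urllib.parse import urlencode

params = {
    "client_id": "example-third-party-app",          # hypothetical placeholder
    "redirect_uri": "https://example.com/callback",  # hypothetical placeholder
    "response_type": "code",
    "scope": "openid email profile",  # what the third party asks to see
    "state": "anti-csrf-token",
}
authorisation_url = "https://accounts.google.com/o/oauth2/v2/auth?" + urlencode(params)
print(authorisation_url)
```

Every time the shortcut is used, the platform learns which service was visited and when, quite apart from whatever the third party learns in return.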
Any illusion of privacy we maintain, we discussed, is not choice or control if it is based on ignorance, and the backlash against companies' lack of effort to ensure disclosure and understanding is growing.
“The lack of accountability isn’t just troubling from a philosophical perspective. It’s dangerous in a political climate where people are pushing back at the very idea of globalization. There’s no industry more globalized than tech, and no industry more vulnerable to a potential backlash.”
If your connected *thing* requires registration, why does it? How about a commitment to not forcing one of these registration methods or indeed any at all? Social Media Research by Pew Research in 2016 found that 56% of smartphone owners ages 18 to 29 use auto-delete apps, more than four times the share among those 30-49 (13%) and six times the share among those 50 or older (9%).
Does that tell us anything about the demographics of data retention preferences?
In 2012, they suggested social media has changed the public discussion about managing “privacy” online. When asked, people say that privacy is important to them; when observed, people’s actions seem to suggest otherwise.
Does that tell us anything about how well companies communicate to consumers how their data is used and what rights they have?
There are also data strongly indicating that women act to protect their privacy more, but when it comes to basic privacy settings, users of all ages are equally likely to choose a private, semi-private or public setting for their profile. There are no significant variations across age groups in the US sample.
Now think about why that matters for the IoT. I wonder who makes the bulk of purchasing decisions about household white goods, for example, and whether Bosch has factored that into its smart-fridges-only decision?
Do you *need* to know who the user is? Can the smart user choose to stay anonymous at all?
The day’s morning challenge was to attend more than one interesting discussion happening at the same time. As invariably happens, the session notes and quotes are always out of context and can’t possibly capture everything, no matter how amazing the volunteer (with thanks!). But here are some of the discussion points from the session on the body and health devices, the home, and privacy. It also included a discussion on racial discrimination, algorithmic bias, and the reasons why care.data failed patients and failed as a programme. We had lengthy discussion on ethics and privacy: smart meters, objections to models of price discrimination, and why pay-for-privacy harms the poor by design.
Smart meter data can track the use of individual appliances inside a person's home and intimate patterns of behaviour. Information about our power consumption – what we use and when, every day – reveals personal details about our everyday lives, our interactions with others, and our personal habits.
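A toy illustration, with made-up readings rather than any supplier's actual method, of why fine-grained meter data are so revealing: even naive thresholding on per-minute watt readings picks out when high-load appliances were used, and so when someone was home and roughly what they were doing.

```python
# Toy example: hypothetical one-minute meter readings for part of an evening.
# Even crude power-range matching flags which appliance was likely running.
readings_watts = [120, 130, 125, 2100, 2150, 140, 135, 9000, 9100, 9050, 150, 145]

APPLIANCE_SIGNATURES = {          # rough, illustrative power ranges in watts
    "kettle": (1800, 3000),
    "electric shower": (7000, 10000),
}

for minute, watts in enumerate(readings_watts):
    for appliance, (low, high) in APPLIANCE_SIGNATURES.items():
        if low <= watts <= high:
            print(f"minute {minute}: looks like the {appliance} is on ({watts} W)")
```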
Why should company convenience come above the consumer’s? Why should government powers, trump personal rights?
Smart meter data are among the knowledge that government is exploiting, without consent, to investigate a whole range of issues, including ensuring that "Troubled Families are identified". Knowing how dodgy some of the school behaviour data that help define who is "troubled" might be, there is a real question here: is this sound data science? How are errors identified? What about privacy? It's not your policy, but if it is your product, what are your responsibilities?
If companies do not respect children’s rights, you’d better shape up to be GDPR compliant
Children and young people are more vulnerable to nudge, and developing their sense of self can involve forming and questioning their identity; these influences need oversight or should be avoided.
In terms of GDPR, providers are going to pay particular attention to Article 8 on 'information society services' and parental consent, Article 22 on profiling and automated decision-making, the right to restriction of processing (Article 18), the right to erasure (Article 17 and recital 65), and the right to data portability (Article 20). However, they may need simply to reassess their exploitation of children's and young people's personal data and behavioural data. Article 57 requires special attention to be paid by regulators to activities specifically targeted at children, as 'vulnerable natural persons' of recital 75.
Human Rights, regulations and conventions overlap in similar principles that demand respect for a child, and right to be let alone:
(a) The development of the child's personality, talents and mental and physical abilities to their fullest potential;
(b) The development of respect for human rights and fundamental freedoms, and for the principles enshrined in the Charter of the United Nations.
A weakness of the GDPR is that it allows derogation on age and will create inequality and inconsistency for children as a result. By comparison, Article one of the Convention on the Rights of the Child (CRC) defines who is to be considered a "child" for the purposes of the CRC, and states that: "For the purposes of the present Convention, a child means every human being below the age of eighteen years unless, under the law applicable to the child, majority is attained earlier."
Article two of the CRC says that States Parties shall respect and ensure the rights set forth in the present Convention to each child within their jurisdiction without discrimination of any kind.
CRC Article 16 says that no child shall be subjected to arbitrary or unlawful interference with his or her honour and reputation.
Article 8 CRC requires respect for the right of the child to preserve his or her identity […] without unlawful interference.
Article 12 CRC demands States Parties shall assure to the child who is capable of forming his or her own views the right to express those views freely in all matters affecting the child, the views of the child being given due weight in accordance with the age and maturity of the child.
That stands in potential conflict with GDPR Article 8. There is much in the GDPR on derogations, by country and for children, still to be settled.
What next for our data in the wild
Hosting the event at the zoo offered added animals, and at lunchtime we got out on a tour, kindly hosted by a fellow participant. We learned how smart technology is embedded in some of the animal enclosures, such as work with temperature sensors for the penguins. I love tigers, so it was a bonus to see such beautiful and powerful animals up close, if a little sad for their circumstances and, as a general principle, at seeing big animals caged rather than in the wild.
Freedom is a common desire in all animals. Physical, mental, and freedom from control by others.
I think any manufacturer that underestimates this element of human instinct is ignoring the ‘hidden dragon’ that some think is a myth. Privacy is not dead. It is not extinct, or even unlike the beautiful tigers, endangered. Privacy in the IoT at its most basic, is the right to control our purchasing power. The ultimate people power waiting to be sprung. Truly a crouching tiger. People object to being used and if companies continue to do so without full disclosure, they do so at their peril. Companies seem all-powerful in the battle for privacy, but they are not. Even insurers and data brokers must be fair and lawful, and it is for regulators to ensure that practices meet the law.
When consumers realise our data, our purchasing power has the potential to control, not be controlled, that balance will shift.
“Paper tigers” are superficially powerful but are prone to overextension that leads to sudden collapse. If that happens to the superficially powerful companies that choose unethical and bad practice, as a result of better data privacy and data ethics, then bring it on.
I hope that the IoT mark can champion best practices and make a difference to benefit everyone.
While the companies involved in its design may be interested in consumers, I believe it could be better for everyone, done well. The great thing about the efforts into an #IoTmark is that it is a collective effort to improve the whole ecosystem.
I hope more companies will recognise their privacy and ethical responsibilities to all people in the world, including those interested in just being, those who want to be let alone, and not just those buying.
“If a cat is called a tiger it can easily be dismissed as a paper tiger; the question remains however why one was so scared of the cat in the first place.”
Further reading: Networks of Control – A Report on Corporate Surveillance, Digital Tracking, Big Data & Privacy by Wolfie Christl and Sarah Spiekermann
That backtracks on what he said in Parliament on January 25th, 2014 on opt out of anonymous data transfers, despite the right to object in the NHS constitution [1].
So what’s the solution? If the new opt out methods aren’t working, then back to the old ones and making Section 10 requests? But it seems the Information Centre isn’t keen on making that work either.
All the data the HSCIC holds is sensitive and, as such, its release risks significant harm or distress to patients [2], so it shouldn't be difficult to tell them to cease and desist when it comes to data about you.
But how is NHS Digital responding to people who make the effort to write directly?
If anyone asks that their hospital data should not be used in any format and passed to third parties, that’s surely for them to decide.
Let’s take the case study of a woman who spoke to me during the whole care.data debacle, who had been let down by the records system after rape. Her subsequent NHS records about her mental health care were inaccurate, and had led to her being denied the benefit of private health insurance at a new job.
Would she have to detail why selling her medical records would cause her distress? What level of detail is fair and who decides? The whole point is, you want to keep info confidential.
Should you have to state what you fear? "I fear future distress from what you might do to me"? Once you lose control of data, it's gone. Based on past secrecy in planning and ideas for the future, like mashing up health data with retail loyalty cards as suggested at Strata in November 2013 [from 16:00] [2], no wonder people are sceptical.
Given the long list of commercial companies, charities, think tanks and others whose receipt of our sensitive data puts us at risk, and given the Information Centre’s past record, HSCIC might be grateful they have only opt out requests to deal with, and not millions of medical ethics court summonses. So far.
HSCIC / NHS Digital has extracted our identifiable records and has given them away, including for commercial product use, and continues to give them away without informing us. We’ve accepted Ministers’ statements and that a solution would be found. Two years on, patience wears thin.
“Without that external trust, we risk losing our public mandate and then cannot offer the vital insights that quality healthcare requires.”
In 2014 the public was told there should be no more surprises. This latest response is not only a surprise but enormously disrespectful.
When you’re trying to rebuild trust, assuming that we accept that ‘is’ the aim, you can’t say one thing and do another. Perhaps the Department of Health doesn’t like the public’s answer to what the public wants from opt out, but that doesn’t make the DH view right.
Perhaps NHS Digital doesn’t want to deal with lots of individual opt out requests, that doesn’t make their refusal right.
Kingsley Manning recognised in July 2014, that the Information Centre “had made big mistakes over the last 10 years.” And there was “a once-in-a-generation chance to get it right.”
I didn’t think I’d have to move into the next one before they fix it.
The recent round of 2016 public feedback was the same as care.data 1.0. Respect nuanced opt outs and you will have all the identifiable public interest research data you want. Solutions must be better for other uses, opt out requests must be respected without distressing patients further in the process, and anonymous must mean anonymous.
“A patient can object to their confidential personal information from being disclosed out of the GP Practice and/or from being shared onwards by the HSCIC for non-direct care purposes (secondary purposes).”
This blog post is also available as an audio file on soundcloud.
What constitutes the public interest must be set in a universally fair and transparent ethics framework if the benefits of research are to be realised – whether in social science, health, education or more. That framework will provide a strategy for getting the prerequisite success factors right, ensuring research in the public interest is not only fit for the future, but thrives. There has been a climate change in consent. We need to stop talking about the barriers that prevent datasharing and start talking about the boundaries within which we can share.
What is the purpose for which I provide my personal data?
‘We use math to get you dates’, says OkCupid’s tagline.
That’s the purpose of the site. It’s the reason people log in and create a profile, enter their personal data and post it online for others who are looking for dates to see. The purpose, is to get a date.
When over 68K OkCupid users registered for the site to find dates, they didn’t sign up to have their identifiable data used and published in ‘a very large dataset’ and onwardly re-used by anyone with unregistered access. The users’ data were extracted “without the express prior consent of the user […].”
Whether the purposes consented to at registration are compatible with the purposes to which the researcher put the data should be a simple enough question. Are the research purposes what the person signed up to, or would they be surprised to find out their data were used like this?
Questions the “OkCupid data snatcher”, now self-confessed ‘non-academic’ researcher, thought unimportant to consider.
But it appears in the last month, he has been in good company.
Google DeepMind, and the Royal Free, big players who do know how to handle data and consent well, paid too little attention to the very same question of purposes.
The boundaries of how the users of OkCupid had chosen to reveal information and to whom, have not been respected in this project.
Nor were these boundaries respected by the Royal Free London trust that gave out patient data for use by Google DeepMind with changing explanations, without clear purposes or permission.
The respectful ethical boundaries of consent to purposes, and respect for autonomy, have indisputably broken down, whether by commercial organisation, public body, or lone ‘researcher’.
Research purposes
The crux of data access decisions is purposes. What question is the research to address – what is the purpose for which the data will be used? The intent by Kirkegaard was to test:
“the relationship of cognitive ability to religious beliefs and political interest/participation…”
In this case the exercise appears intended rather as a test of the data, than as data opened up to answer a research question. While methodological studies matter, given the care and attention [or self-stated lack thereof] given to its extraction and any attempt to be representative and fair, it would appear this is not the point of this study either.
The data doesn’t include profiles identified as heterosexual male, because ‘the scraper was’. It is also unknown how many users hide their profiles, “so the 99.7% figure [identifying as binary male or female] should be cautiously interpreted.”
“Furthermore, due to the way we sampled the data from the site, it is not even representative of the users on the site, because users who answered more questions are overrepresented.” [sic]
The paper goes on to say photos were not gathered because they would have taken up a lot of storage space and could be done in a future scraping, and
“other data were not collected because we forgot to include them in the scraper.”
The data are knowingly of poor quality, inaccurate and incomplete. The project cannot be repeated as ‘the scraping tool no longer works’. The ethical and peer review process is unclear, and the research purpose is at best unclear. We could give the researcher the benefit of the doubt and assume the intent was entirely benevolent; it’s not clear what the intent was. I think it was clearly misplaced and foolish, but not malevolent.
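The representativeness problem the paper concedes – users who answered more questions being overrepresented – is easy to demonstrate with a toy simulation. This is a sketch with made-up numbers, not the actual OkCupid data:

```python
# Toy simulation (made-up numbers): if a scraper reaches profiles roughly in
# proportion to how many questions they answered, heavy answerers dominate
# the sample and any average computed from it is skewed upwards.
import random
random.seed(0)

# hypothetical population: 90% casual users answer ~20 questions, 10% answer ~500
population = [20] * 9000 + [500] * 1000
true_mean = sum(population) / len(population)

# sample 2,000 users with probability proportional to questions answered
sample = random.choices(population, weights=population, k=2000)
sample_mean = sum(sample) / len(sample)

print(f"true mean questions answered:  {true_mean:.0f}")    # ~68
print(f"activity-weighted sample mean: {sample_mean:.0f}")  # far higher, ~370
```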
The trouble is, it’s not enough to say, “don’t be evil.” These actions have consequences.
When the researcher asserts in his paper that, “the lack of data sharing probably slows down the progress of science immensely because other researchers would use the data if they could,” in part he is right.
Google and the Royal Free have tried more eloquently to say the same thing. It’s not research, it’s direct care, in effect, ignore that people are no longer our patients and we’re using historical data without re-consent. We know what we’re doing, we’re the good guys.
However the principles are the same, whether it’s a lone project or global giant. And they’re both wildly wrong as well. More people must take this on board. It’s the reason the public interest needs the Dame Fiona Caldicott review published sooner rather than later.
Just because there is a boundary to data sharing in place, does not mean it is a barrier to be ignored or overcome. Like the registration step to the OkCupid site, consent and the right to opt out of medical research in England and Wales is there for a reason.
We’re desperate to build public trust in UK research right now. So to assert that the lack of data sharing probably slows down the progress of science is misplaced, when it is getting ‘sharing’ wrong, that caused the lack of trust in the first place and harms research.
A climate change in consent
There has been a climate change in public attitude to consent since care.data, clouded by the smoke and mirrors of state surveillance. It cannot be ignored. The EU GDPR supports it. Researchers may not like change, but there needs to be a corresponding adjustment in expectations and practice.
Without change, there will be no change. Public trust is low. As technology advances and if we continue to see commercial companies get this wrong, we will continue to see public trust falter unless broken things get fixed. Change is possible for the better. But it has to come from companies, institutions, and people within them.
Like climate change, you may deny it if you choose to. But some things are inevitable and unavoidably true.
There is strong support for public interest research but that is not to be taken for granted. Public bodies should defend research from being sunk by commercial misappropriation if they want to future-proof public interest research.
The purposes for which people gave consent are the boundaries within which you have permission to use their data; within those limits, you have the freedom to use the data. Purposes and consent are not barriers to be overcome.
If research is to win back public trust, developing a future-proofed, robust ethical framework for data science must be a priority today.
This case study, and by contrast the recent Google DeepMind episode, demonstrate the urgency with which working out common expectations and oversight of applied ethics in research, deciding who gets to decide what is ‘in the public interest’, and public engagement with data science must be made a priority, in the UK and beyond.
Boundaries in the best interest of the subject and the user
Society needs research in the public interest. We need good decisions made on what will be funded and what will not be, what will influence public policy, and where attention is needed for change.
To do this ethically, we all need to agree what is fair use of personal data, when is it closed and when is it open, what is direct and what are secondary uses, and how advances in technology are used when they present both opportunities for benefit or risks to harm to individuals, to society and to research as a whole.
The benefits of research are potentially being compromised for the sake of arrogance, greed, or misjudgement, no matter the intent. Those benefits cannot come at any cost, or disregard public concern, or the price will be trust in all research itself.
In discussing this with social science and medical researchers, I realise not everyone agrees. For some, using deidentified data in trusted third party settings poses such a low privacy risk, that they feel the public should have no say in whether their data are used in research as long it’s ‘in the public interest’.
The DeepMind researchers and the Royal Free were confident that, even using identifiable data, this was the “right” thing to do, without consent.
For the parts of the Cabinet Office datasharing consultation that will open up national registries and share identifiable data more widely, including with commercial companies, its authors are convinced it is all the “right” thing to do, without consent.
How can researchers, society and government understand what is good ethics of data science, as technology permits ever more invasive or covert data mining and the current approach is desperately outdated?
Who decides where those boundaries lie?
“It’s research Jim, but not as we know it.” This is one aspect of data use that ethical reviewers will need to deal with as we advance the debate on data science in the UK, whether for independents or commercial organisations. Google said their work was not research. Is ‘OkCupid’ research?
If this research and data publication proves anything at all, and can offer lessons to learn from, it is perhaps these three things:
Researchers and ethics committees need to adjust to the climate change of public consent. Purposes must be respected in research particularly when sharing sensitive, identifiable data, and there should be no assumptions made that differ from the original purposes when users give consent.
Data ethics and laws are desperately behind data science technology. Governments, institutions, civil society, and society as a whole need to reach a common vision and leadership on how to manage these challenges. Who defines the boundaries that matter?
How do we move forward towards better use of data?
Our data and technology are taking on a life of their own, in space which is another frontier, and in time, as data gathered in the past might be used for quite different purposes today.
The public are being left behind in the game-changing decisions made by those who deem they know best about the world we want to live in. We need a say in what shape society wants that to take, particularly for our children as it is their future we are deciding now.
How about an ethical framework for datasharing that supports a transparent public interest, which tries to build a little kinder, less discriminating, more just world, where hope is stronger than fear?
Working with people, with consent, with public support and transparent oversight shouldn’t be too much to ask. Perhaps it is naive, but I believe that with an independent ethical driver behind good decision-making, we could get closer to datasharing like that.
Purposes and consent are not barriers to be overcome. Within these, shaped by a strong ethical framework, good data sharing practices can tackle some of the real challenges that hinder ‘good use of data’: training, understanding data protection law, communications, accountability and intra-organisational trust. More data sharing alone won’t fix these structural weaknesses in current UK datasharing which are our really tough barriers to good practice.
How our public data will be used in the public interest will not be a destination with a well-defined happy ending, but a long-term process which needs to be consensual, with a clear path for setting out together and achieving collaborative solutions.
While we are all different, I believe that society shares for the most part, commonalities in what we accept as good, and fair, and what we believe is important. The family sitting next to me have just counted out their money and bought an ice cream to share, and the staff gave them two. The little girl is beaming. It seems that even when things are difficult, there is always hope things can be better. And there is always love.
Part three: It is vital that the data sharing consultation is not seen in a silo, or even a set of silos each particular to its own stakeholder. To do it justice and ensure the questions that should be asked are answered, we must look instead at the whole story and the background setting. And we must ask each stakeholder, what does your happy ending look like?
Parts one and two, to follow, address public engagement and ethics; this post focuses on current national data practice, tailored public services, and the local impact of the change and transformation that will result.
What is your happy ending?
This data sharing consultation is gradually revealing to me how disjointed government appears in practice and strategy. Our digital future, a society that is more inclusive and more just, supported by better uses of technology and data in ‘dot everyone’, will not happen if they cannot first join the dots across all of Cabinet thinking and good practice, and align policies that are out of step with each other.
Last Thursday night’s “Government as a Platform Future” panel discussion (#GaaPFuture) took me back to memories of my old job, working in business implementations of process and cutting edge systems. Our finest hour was showing leadership why success would depend on neither. Success was down to local change management and communications, because change is about people, not the tech.
People in this data sharing consultation, means the public, means the staff of local government public bodies, as well as the people working at national stakeholders of the UKSA (statistics strand), ADRN (de-identified research strand), Home Office (GRO strand), DWP (Fraud and Debt strands), and DECC (energy) and staff at the national driver, the Cabinet Office.
I’ve attended two of the 2016 datasharing meetings, and am most interested from three points of view – because I am directly involved in the de-identified data strand, campaign for privacy, and believe in public engagement.
As for engagement with civil society: after almost two years of involvement on three projects, and an almost ten-month pause in between, the projects suddenly became six in 2016, so the most sensitive strands of the datasharing legislation have been the least openly discussed.
At the end of the first 2016 meeting, I asked one question.
How will local change management be handled and the consultation tailored to local organisations’ understanding and expectations of its outcome?
Why? Because a top-down data extraction programme from all public services opens up the extraction of personal data as business intelligence at national level, from all local services’ interactions with citizens. Or at least, from those parts they have collected or may collect in future.
That means a change in how the process works today. Global business intelligence/data extractions are designed to make processes more efficient through reductions in current delivery, yet concrete public benefits for citizens that would differ from today are hard to see, so why make this change in practice?
What it might mean for example, would be to enable collection of all citizens’ debt information into one place, and that would allow the service to centralise chasing debt and enforce its collection, outsourced to a single national commercial provider.
So what does the future look like from the top? What is the happy ending for each strand, that will be achieved should this legislation be passed? What will success for each set of plans look like?
What will we stop doing, what will we start doing differently and how will services concretely change from today, the current state, to the future?
Most importantly to understand its implications for citizens and staff, we should ask how will this transformation be managed well to see the benefits we are told it will deliver?
Can we avoid being left holding a pumpkin, after the glitter of ‘use more shiny tech’ and government love affair with the promises of Big Data wear off?
Look into the local future
Those with a vision of the future on a panel at the GDS meeting this week, the new local government model enabled by GaaP, also identified that there are implications for potential loss of local jobs, and that “turkeys won’t vote for Christmas”. So who is packaging this change to make it successfully deliverable?
If we can’t be told easily in consultation, then it is not a clear enough policy to deliver. If there is a clear end-state, then we should ask what the applied implications in practice are going to be.
It is vital that the data sharing consultation is not seen in a silo, or even a set of silos each particular to its own stakeholder, about copying datasets to share them more widely, but that we look instead at the whole story and the background setting.
The Tailored Reviews: public bodies guidance suggests massive reform of local government, looking for additional savings, looking to cut back office functions and commercial plans. It asks “What workforce reductions have already been agreed for the body? Is there potential to go further? Are these linked to digital savings referenced earlier?”
Options include ‘abolish, move out of central government, commercial model, bring in-house, merge with another body.’
So where is the local government public bodies engagement with change management plans in the datasharing consultation as a change process? Does it not exist?
I asked at the end of the first datasharing meeting in January and everyone looked a bit blank. A question ‘to take away’ turned into nothing.
Yet to make this work, the buy-in of local public bodies is vital. So why skirt round this issue in local government, if there are plans to address it properly?
If there are none, then with all the data in the world, public services delivery will not be improved, because the friction lies not in interference by consent or privacy issues, but in working practices.
If the idea is to avoid this ‘friction’ by removing it, then where is the change management plan for public services and our public staff?
Trust depends on transparency
John Pullinger, our National Statistician, this week also said on datasharing we need a social charter on data to develop trust.
Trust can only be built between public and state if the organisations, and all the people in them, are trustworthy.
To implement process change successfully, the people involved in these affected organisations, the staff, must trust that change will mean positive improvement, with the risks explained.
For the public, the defined levels of data access, privacy protection, and scope limitation that this new consultation will permit in practice are clearly going to be vital to define, if the public is to trust its purposes.
The consultation does not do this, and there is no draft code of conduct yet, and no one is willing to define ‘research’ or ‘public interest’.
Public interest models or ‘charters’ for the collection and use of research data in health have concluded that for ethical purposes, time also matters. Benefits must be specific, measurable, attainable, relevant and time-bound. So let’s talk about the intended end state that is to be achieved from these changes, and identify how its benefits are to meet those objectives – change without an intended end state will almost never be successful; you have to start by knowing what it looks like.
For public trust, that means scope boundaries. Sharing now, under today’s laws and ethics, is only fully meaningful if we trust that any future changes to governance, ethics and safeguards will be to the benefit of the citizen, not ever greater powers for the state at the expense of the individual. Where is scope defined?
There is very little information about where limits would be on what data could not be shared, or when it would not be possible to do so without explicit consent. Permissive powers put the onus onto the data controller to share, and given ‘a new law says you should share’ would become the mantra, it is likely to mean less individual accountability. Where are those lines to be drawn to support the staff and public, the data user and the data subject?
So to summarise, so far I have six key questions:
What does your happy ending look like for each data strand?
How will bad practices which conflict with the current consultation proposals be stopped?
How will the ongoing balance of use of data for government purposes, privacy and information rights be decided and by whom?
In what context will the ethical principles be shaped today?
How will the transformation from the current to that future end state be supported, paid for and delivered?
Who will oversee new policies and ensure good data science practices, protection and ethics are applied in practice?
This datasharing consultation is not entirely about something new, but an expansion of what is done already. And in some places that is done very badly.
How will the old stories and new be reconciled?
Wearing my privacy and public engagement hats, here’s an idea.
Perhaps before the central State starts collecting more, sharing more, and using more of our personal data for ‘tailored public services’ and more, the government should ask for a data amnesty?
It’s time to draw a line under bad practice. Clear out the ethics drawers of bad historical practice, and start again, with a fresh chapter. Because current practices are not future-proofed and covering them up in the language of ‘better data ethics’ will fail.
The consultation assures us that: “These proposals are not about selling public or personal data, collecting new data from citizens or weakening the Data Protection Act 1998.”
However, it does already sell out personal data, from at least BIS. How will these contradictory positions across all Departments be resolved?
The left hand gives out de-identified data in safe settings for public benefit research while the right hands out over 10 million records to the Telegraph and The Times without parental or schools’ consent. Only in la-la land are these both considered ethical.
Will somebody at the data sharing meeting please ask, “when will this stop?” It is wrong. These are our individual children’s identifiable personal data. Stop giving them away to press and charities and commercial users without informed consent. It’s ludicrous. Yet it is real.
Policy makers should provide an assurance there are plans for this to change as part of this consultation.
Without it, the consultation line about commercial use is at best disingenuous, at worst a barefaced lie.
“These powers will also ensure we can improve the safe handling of citizen data by bringing consistency and improved safeguards to the way it is handled.”
Will it? Show me how and I might believe it.
Privacy, it was said at the RSS event, is the biggest concern in this consultation:
“includes proposals to expand the use of appropriate and ethical data science techniques to help tailor interventions to the public”
“also to start fixing government’s data infrastructure to better support public services.”
The techniques need to be outlined – what do they mean in practice? – and practices fixed now, because many stand on shaky legal ground. These privacy issues have come about under successive governments of different parties in the last ten years, so the problems are non-partisan, but they need practical fixes.
Today our government already gives our children’s personal data to commercial third parties and sells our higher education data without informed consent, while the DfE and BIS both know they fail to inform people of the processing and its potential consequences: the European Court reaffirmed in 2015 that “persons whose personal data are subject to transfer and processing between two public administrative bodies must be informed in advance”, in its Judgment in Case C-201/14.
In a time that actively cultivates universal public fear, it is time for individuals to be brave and ask the awkward questions, because you either solve them up front, or hit the problems later. The child who stood up and said the Emperor had no clothes on was right.
What’s missing?
The consultation conversation will only be genuine once the policy makers acknowledge and address solutions regarding:
those data practices that are currently unethical and must change
how the tailored public services datasharing legislation will shape the delivery of government services’ infrastructure and staff, as well as the service to the individual in the public.
If we start by understanding what the happy ending looks like, we are much more likely to arrive there, and to know how to measure success.
How the codes of conduct, and ethics, are to be shaped, and by whom, if outwith the consultation?
What is planned to manage and pay for the future changes in our data infrastructures; ie the models of local government delivery?
What is the happy ending that each data strand wants to achieve through this and how will the success criteria be measured?
Public benefit is supposed to be at the heart of this change. For UK statistics, for academic public benefit research, they are clear.
For some of the other strands, local public benefits that outweigh the privacy risks and do not jeopardise public trust seem like magical unicorns dancing in the land far, far away of centralised government; hard to imagine, and even harder to capture.
*****
Part one: A data sharing fairytale: Engagement
Part two: A data sharing fairytale: Ethics
Part three: A data sharing fairytale: Impact (this post)
Atlas, the robot created by Boston Dynamics, won hearts and minds this week as it stoically survived man being mean. Our collective human response was an emotional defence of the machine, and criticism of its unfair treatment by its tester.
The concepts of fairness and of decision making algorithms for ‘abuse avoidance’ are interesting from perspectives of data mining, AI and the wider access to and use of tech in general, and in health specifically.
If decisions to avoid abuse can be taken out of an individual’s human hands and based on unfathomable amounts of big data, where are the limits of their application to human behaviour and activity?
When it is decided that an individual’s decision-making capability is impaired or has been forfeited, their consent may be revoked in their best interest.
Who has oversight of the boundaries of what is acceptable for one person, or for an organisation, to decide what is in someone else’s best interest, or indeed, the public interest?
Where these boundaries overlap – personal abuse avoidance, individual best interest and the public interest – and how society manages them, with what oversight, is yet to be widely debated.
We must get involved, and it must be the start of a debate and dialogue, not simply a tick-box on a done deal, if data derived from us are to be used as a platform for the future to “achieve great results for the NHS and everyone who depends on it.”
Administering applied “abuse avoidance” and Restraining Abilities
Administrative uses and secondary research using the public’s personal data are applied not only in health, but across the board of public bodies, including big plans for tech in the justice system.
The use of this technology as a monitoring tool should not of itself be a punishment. It is said compliance is not intended to affect the dignity of individuals who are being monitored, but through the collection of personal and health data it will ensure the deprivation of alcohol – avoiding its abuse for a person’s own good and in the public interest. Is it fair?
Abstinence orders might be applied to those convicted of crimes such as assault, being drunk and disorderly and drunk driving.
We’re yet to see much discussion of how these varying degrees of integration of tech with the human body, and human enhancement will happen through robot elements in our human lives.
How will the boundaries of what is possible and desirable be determined and by whom with what oversight?
What else might be considered as harmful as alcohol to individuals and to society? Drugs? Nicotine? Excess sugar?
As we wonder about the ethics of how humanoids will act and the aesthetics of how human they look, I wonder how humane are we being, in all our ‘public’ tech design and deployment?
Umberto Eco who died on Friday wrote in ‘The birth of ethics’ that there are universal ideas on constraints, effectively that people should not harm other people, through deprivation, restrictions or psychological torture. And that we should not impose anything on others that “diminishes or stifles our capacity to think.”
How will we as a society collectively agree what that should look like, how far some can impose on others, without consent?
Enhancing the Boundaries of Being Human
Technology might be used to impose bodily boundaries on some people, but tech can also be used for the enhancement of others. Antonio Santos retweeted this week, the brilliant Angel Giuffria’s arm.
While the technology in this case is literally hands-on in its application, increasingly it is not the technology itself but the data that it creates or captures which enables action through data-based decision making.
Robots that are tiny may be given big responsibilities to monitor and report massive amounts of data. What if we could swallow them?
Data if analysed and understood, become knowledge.
Knowledge can be used to inform decisions and take action.
So where are the boundaries of what data may be extracted, information collated, and applied as individual interventions?
Defining the Boundaries of “in the Public Interest”
Where are boundaries of what data may be created, stored, and linked to create a detailed picture about us as individuals, if the purpose is determined to be in the public interest?
Who decides which purposes are in the public interest? What qualifies as research purposes? Who qualifies as meeting the criteria of ‘researcher’?
How far can research and interventions go without consent?
Should security services and law enforcement agencies always be entitled to get access to individuals’ data ‘in the public interest’?
That’s something Apple is currently testing in the US.
Should research bodies always be entitled to get access to individuals’ data ‘in the public interest’?
That’s something care.data assumed the public supported, tried, and failed at; it has yet to be re-tested, which is impossible before respecting the opt out that was promised over two years ago, in March 2014.
The question how much data research bodies may be ‘entitled to’ will be tested again in the datasharing consultation in the UK.
Where is the boundary between access and use of data not in enforcement of acts already committed but in their prediction and prevention?
If you believe there should be an assumption of law enforcement access to data when data are used for prediction and prevention, what about health?
Should there be any difference between researchers’ access to data when data are used for past analysis and for use in prediction?
If ethics define the boundary between what is acceptable and where actions by one person may impose something on another that “diminishes or stifles our capacity to think” – that takes away our decision making capacity – that nudges behaviour, or acts on behaviour that has not yet happened, who decides what is ethical?
How does a public that is poorly informed about current data practices, become well enough informed to participate in the debate of how data management should be designed today for their future?
How Deeply Mined should our Personal Data be?
The application of technology, non-specific but not yet AI, was also announced this week in the Google DeepMind work in the NHS.
A co-founder of its first key launch app provided a report that established the operating framework for the Behavioural Insights Team set up by Prime Minister David Cameron.
A number of highly respected public figures have been engaged to act in the public interest as unpaid Independent Reviewers of Google DeepMind Health. It will be interesting to see what their role is and how transparent its workings and public engagement will be.
The recent consultation on the NHS gave overwhelming feedback that the public does not support the direction of current NHS change. Even having removed all responses associated with ‘lefty’ campaigns, the concerns listed on page 11 are consistent, including a request that the Government “should end further involvement of the private sector in healthcare”. It appears from the response that this engagement exercise will feed little into practice.
The strength of feeling should however be a clear message to new projects that people are passionate that equal access to healthcare for all matters and that the public wants to be informed and have their voices heard.
How will public involvement be ensured as complexity increases in these healthcare add-ons and changing technology?
Will Google DeepMind pave the way to a new approach to health research? A combination of ‘nudge’ behavioural insights, advanced neural networks, Big Data and technology is powerful. How will that power be used?
I was recently told that if new research is not pushing the boundaries of what is possible and permissible then it may not be worth doing, as it’s probably been done before.
Should everything new that becomes possible be realised?
I wonder how the balance will be weighted in requests for patient data and their application, in such a high profile project.
Will NHS Research Ethics Committees turn down in-house research proposals in hospitals that benefit the institution or advance its reputation? Will the HSCIC ever feel able to say no to data use by Google DeepMind?
Ethics committees safeguard the rights, safety, dignity and well-being of research participants, independently of research sponsors whereas these representatives are not all independent of commercial supporters. And it has not claimed it’s trying to be an ethics panel. But oversight is certainly needed.
The boundaries of ownership between what is seen to benefit commercial and state in modern health investment is perhaps more than blurred to an untrained eye. Genomics England – the government’s flagship programme giving commercial access to the genome of 100K people – stockholding companies, data analytics companies, genome analytic companies, genome collection, and human tissue research, commercial and academic research, often share directors, working partnerships and funders. That’s perhaps unsurprising given such a specialist small world.
It’s exciting to think of the possibilities if, “through a focus on patient outcomes, effective oversight, and the highest ethical principles, we can achieve great results for the NHS and everyone who depends on it.”
Where will an ageing society go, if medics can successfully treat more cancer for example? What diseases will be prioritised and others left behind in what is economically most viable to prevent? How much investment will be made in diseases of the poor or in countries where governments cannot afford to fund programmes?
What will we die from instead? What happens when some causes of ‘preventative death’ are deemed more socially acceptable than others? Where might prevention become socially enforced through nudging behaviour into new socially acceptable or ethical norms?
Don’t be Evil
Given the leading edge of the company and its curiosity-by-design to see how far “can we” will reach, “don’t be evil” may be very important. But “be good” might be better. Where is that boundary?
The boundaries of what ‘being human’ means and how Big Data will decide and influence that, are unclear and changing. How will the law and regulation keep up and society be engaged in support?
Data principles – such as fairness, keeping data accurate, complete and up to date, and ensuring data are not excessive and are retained for no longer than necessary for the purpose – are being widely ignored or exempted under the banner of ‘research’.
Can data use retain a principled approach despite this and if we accept commercial users, profit making based on public data, will those principles from academic research remain in practice?
Exempt from the obligation to give a copy of personal data to an individual on request if data are for ‘research’ purposes, data about us and our children, are extracted and stored ‘without us’. Forever. That means in a future that we cannot see, but Google DeepMind among others, is designing.
Lay understanding, and that of many clinical professionals, is likely to be left far behind if advanced technologies and the use of big-data decision-making algorithms are hidden in black boxes.
Public transparency of the use of our data and future planned purposes are needed to create trust that these purposes are wise.
Data are increasingly linked and more valuable when identifiable.
Any organisation that wants to future-proof its reputational risk will make sure data collection and use today are with consent, since the outcomes derived in future are likely to be interventions for individuals or society. Catching up on consent will be hard unless it is designed in now.
A Dialogue on the Boundaries of Being Human and Big Data
Where the commercial, personal, and public interests are blurred, the highest ethical principles are going to be needed to ensure ‘abuse avoidance’ in the use of new technology, in increased data linkage and resultant data use in research of many different kinds.
How we as a society achieve the benefits of tech and datasharing and where its boundaries lie in “the public interest” needs public debate to co-design the direction we collectively want to partake in.
Once that is over, change needs to be supported by a method of oversight that is responsive to new technology, data use, and its challenges.
What a channel for ongoing public dialogue, challenge and potentially recourse might look like, should be part of that debate.
If so, is this breathtaking arrogance and a U-turn of unforeseeable magnitude? Had our PM not said before the last GE that he planned on stepping down before the end of the next Parliament, you could think so. But this way they cannot lose.
This is in fact bloody brilliant positioning by the whole party.
A Yes vote underpins Cameron’s renegotiation as ‘the right thing to do’, best for business and for his own statesmanship, while showing that we’re not losing sovereignty because staying in is on our terms.
Renegotiating our relationship with the EU was a key Conservative election promise.
This pacifies the majority of that part of the population which wants out of the EU ‘controlling our country’ and resents being beholden to EU law, but keeps us stable and financially secure.
The hardline Out campaigners are seen as a bag of all-sorts whom few take seriously. But then comes Boris.
So now there is some weight in the Out circle and, if the country votes ‘No’, a way to manage the outcome with a ready-made leader in waiting. But significantly, it’s not a consistent call for Out across the group. Boris is not for spinning in the same clear ‘out’ direction as the Galloway group.
Boris can keep a foot in the circle, saying his heart is pro-In and he really wants In, but on better terms. He can lead a future party for Outers and Inners and, whatever the outcome, be seen to welcome all. Quite a gentleman’s agreement, perhaps.
His Out just means out of some things, not others. Given all his past positioning and his role as Mayor of London, with its ties to the City, out wouldn’t mean wanting to risk any of the financial and business-related bits.
So what does that leave? Pay attention in his speech to the three long paragraphs on the Charter of Fundamental Human Rights.
His rambling explanation indirectly revealed quite brilliantly the bits he wants ‘out’ to mean. Out means, in the Boris-controlled circle, only getting out from those parts of EU directives that the party players don’t like: the bits where they get told what to do, or told off for doing something wrong, or for not playing nicely with the group.
The human rights rulings and oversight from the CJEU, for example, or views that are not aligned with the ECHR.
As Joshua Rozenberg wrote on sovereignty, “Human rights reform has been inextricably intertwined with renegotiating the UK’s membership of the EU. And it is all the government’s fault.”
Rozenberg writes that Mr Gove told the Lords constitution committee in December that David Cameron asked him whether “we should use the British Bill of Rights in order to create a constitutional long stop […] and whether the UK Supreme Court should be that body.”
“Our top judges were relabelled a “Supreme Court” not long ago; they’ve been urged to assert themselves against the European Court of Human Rights, and are already doing so against EU law”, commented Carl Gardner elsewhere.
The Gang of Six cabinet ministers are known for their anti-EU disaffection, most often directed at its attachment to human rights: Michael Gove, Iain Duncan Smith, Chris Grayling, Theresa Villiers, Priti Patel and John Whittingdale, plus a further 20 junior ministers and potentially dozens of backbenchers.
We can therefore expect the Out campaign to present situations in which British ‘sovereignty’ was undermined by court rulings that some perceive as silly or seriously flawed.
Every case in which a British court convicted someone and was overturned ‘by Europe’ in a way that appeared nonsensical will be wheeled out by the current Justice Secretary, Mr Gove.
Every tougher ‘terrorist’-type case in which human rights denied by a UK ruling were upheld might fall within the more ‘extreme’ remit of the former Justice Secretary, to be mentioned whenever Grayling makes his case for Out, especially where opinions may conflict with interpretations of the EU Charter.
Priti Patel has tough views on crime and punishment, and is reportedly in favour of the death penalty.
IDS isn’t famous for a generous application of disability rights.
John Whittingdale gave his views on the present relationship with the EU and the CJEU here, in a debate in 2015, and said (22:10) he was profoundly “concerned the CJEU is writing laws which we consider to be against our national interest.”
Data protection and privacy are about to get new EU legislation that will strengthen some aspects of citizens’ data rights: things like the right to be informed what information is stored about us, or to have mistakes corrected.
Don’t forget, after all, that Mr Gove is the Education Secretary of State who signed off giving away the confidential personal data of what is now 20 million children from the National Pupil Database to commercial third parties. Clearly not an Article 8 fan.
We are told that we are over-reacting to our loss of rights to privacy. That we are over-generous in protections for people who don’t deserve them. Ignoring that rights are universal and indivisible, we are encouraged to see them as something that must be earned; as such, something which may or may not be respected, and which can therefore be removed.
Convince the majority of that, and the legislation underpinning our rights will be easier to take away without enough mass outcry to make a difference.
To be clear, a no vote would make no actual legal difference, “Leaving the EU (if that’s what the people vote for) is NOT at all inconsistent with the United Kingdom’s position as a signatory to the European Convention on Human Rights (ECHR), a creature of the COUNCIL of EUROPE and NOT the European Union.” [ObiterJ]
But by conflating ‘the single market’, ‘the Lisbon Treaty’ and the ‘ability to vindicate people’s rights under the 55-clause “Charter of Fundamental Human Rights”’, Boris has made the No vote stand for a set of conflated things: European Union membership = loss of sovereignty = a need to reduce the control or influence of all organisations seen as ‘European’ (even if, like the ECHR, they belong to the Council of Europe Convention signed post-WWII, long before EU membership), and all because we are a signatory to a package deal.
Boris will bob in and out of both the IN group for business and the OUT group for sovereignty, trying not to fall out with anyone too much, and giving serious statesmanship to the injustices inflicted on the UK. There will be banter and backbiting, but party views will be put ahead of personalities.
And the public? What we vote won’t really matter. I think we’ll be persuaded to be IN, or to get a two-step Out-In.
Either way it will give the relevant party leader, present or future, the mandate to do what he wants. Our engagement is optional.
As with the General Election, the people’s majority viewed as a ‘mandate’ seems to have become a little confused with a sign-off to dictate a singular directive, rather than to represent a real majority. It cannot be otherwise, since the majority didn’t vote for the government that we have.
In this EU Referendum, No won’t mean No. It’ll mean a second vote to split the package of no-to-all-things into a no-to-some-things, wrapped up in layers of ‘sovereignty’ discussion. Unpack them, and those things will be, for the most part, human rights things. How they will then be handled at a later date is utterly unclear, but the mandate will have been received.
Imagine if Boris can persuade enough of the undecideds that he is less bonkers than some of the EU rulings on rights. He’ll perhaps get an Out mandate, possibly meaning a second vote just to be sure, splitting off the parts everyone obviously wants to protect, the UK business interests, and allowing the government to negotiate an opt-out from legislation protecting human rights: things that may appear to make more people dependent on the state, contrary to the ideology of shrinking state support.
A long-promised review of the British Human Rights Act 1998 will inevitably follow, and only makes sense if we are first made exempt from the European umbrella.
Perhaps we will hear over the next four months more about what that might mean.
Either way, the Out group will, I’m sure, take the opportunity to air their views and demand a shake-up wherever human rights laws are out of line with the shape of the future UK nation they wish to see us become.
Some suggest Boris has made a decision that will cost him his political career. I disagree. I think it’s incredibly clever. Not a conspiracy, simply clever party planning to make every outcome a win for the good of the party and the good of the nation, and a nod to Boris as future leader in any case. After all, he didn’t actually say he wanted #Brexit, just reform.
It’s not all about Boris, but it is party-political staging at its best, and it simultaneously sets the scene for future change in the human rights debate.