
Thoughts from the YEIP Event: Preventing trust.

Here are some thoughts about the Prevent programme, after the half day I spent at the event this week, Youth Empowerment and Addressing Violent Youth Radicalisation in Europe.

It was hosted by the Youth Empowerment and Innovation Project at the University of East London, to mark the launch of the European study on violent youth radicalisation from YEIP.

Firstly, I appreciated the dynamic and interesting youth panel: young people who are themselves involved in youth work, or early-career researchers on a range of topics. Panelists shared their thoughts on:

  • Removal of gang databases and systemic racial targeting
  • Questions over online content takedown with the general assumption that “someone’s got to do it.”
  • The purposes of Religious Education and lack of religious understanding as cause of prejudice, discrimination, and fear.

From these connections comes trust.

Next, Simon Chambers, from the British Council, the UK National Youth Agency, and Erasmus UK, talked about the Erasmus Plus programme, under the striking sub-theme, from these connections comes trust.

  • 42% of the world’s population are under 25
  • Young people understand that there are wider, underlying complex factors in this area and are disproportionately affected by conflict, economic change and environmental disaster.
  • Many young people struggle to access education and decent work.
  • Young people everywhere can feel unheard and excluded from decision-making — their experience leads to disaffection and grievance, and sometimes to conflict.

We then heard a senior Home Office presenter speak about Radicalisation: the threat, drivers and Prevent programme.

On CONTEST 2018: Prevent, Pursue, Protect and Prepare

What was perhaps most surprising was his statement that the programme believes there is no checklist [though in reality there are checklists], no single profile, and no conveyor belt towards radicalisation.

“This shouldn’t be seen as some sort of predictive model,” he said. “It is not accurate to say that somehow we can predict who is going to become a terrorist, because they’ve got poor education levels, or because [they] necessarily have a deprived background.”

But he then went on to again highlight the list of identified vulnerabilities in Thomas Mair‘s life, which suggests that these characteristics are indeed seen as indicators.

When I look at the ‘safeguarding-in-school’ software that is using vulnerabilities as signals for exactly that kind of prediction of intent, the gap between theory and practice here is deeply problematic.

One slide covered Internet content takedowns, and suggested 300,000 pieces of illegal terrorist material have been removed since February 2010. He later suggested that this number counts referrals through contact with the CTIRU, rather than content removals defined in any particular form (for example, it isn’t clear whether an item is a picture, a page, or a whole site). This is still somewhat unclear, and important open questions remain, given the figure’s prominence in the online harms policy and discussion.

The big gap that was not discussed, and that I believe matters, is how much autonomy teachers have, for example, to make a referral. He suggested “some teachers may feel confident” to do what is needed on their own but others “may need help” and therefore make a referral. Statistics on those decision processes are missing, and I believe over-referral is very likely in part the result of fearing that non-referral, once a computer has tagged an issue as Prevent-related, would be seen as negligent, or as not meeting the statutory Prevent duty as it applies to schools.

On the Prevent Review, he suggested that the current timeline of August 2020 still stands, even though there is currently no Reviewer. It is for Ministers to decide who will replace Lord Carlile.

Safeguarding children and young people from radicalisation

Mark Chalmers of Westminster City Council then spoke about ‘safeguarding children and young people from radicalisation.’

He started off with a profile of the local authority demographics: poverty and wealth, migrant turnover, and the proportion of non-English speaking households. This of itself may seem indicative of deliberate or unconscious bias.

He suggested that Prevent is not a security response, and expects that the policing role in Prevent will be reduced over time, as more is taken over by Local Authority staff and the public services. [Note: this seems inevitable after the changes in the 2019 Counter Terrorism Act, which enable local authorities, as well as the police, to refer persons at risk of being drawn into terrorism to local Channel panels. Whether this should have happened at all was not consulted on, as far as I know.] The claim that Prevent is not a security response appears different in practice, when Local Authorities refuse FOI questions on the basis of the security exemption in the FOI Act, Section 24(1).

Both speakers declined to accept my suggestion that Prevent and Channel are not consensual. Participation in the programme, they were adamant, is voluntary and confidential. The reality is that children do not feel they can make a freely given, informed choice in the face of an authority and the severity of the referral. They also do not understand where their records go, how confidential they really are, and how long or why they are kept.

The recently concluded legal case, and the lengths one individual had to go to in order to remove their personal record from the Prevent national database, show just how problematic the authorities’ mistaken perception of a consensual programme is.

I knew nothing of the Prevent programme at all in 2015. I only began to hear about it once I started mapping the data flows into, across and out of the state education sector, and teachers started coming to me with stories from their schools.

I found it fascinating to hear speakers at the conference who are so embedded in the programme. They seem unable to see it objectively, or to accept others’ critical points of view as valid. Perhaps that stems from the luxury of the privilege of believing that you yourself will be unaffected by its consequences.

“Yes,” said O’Brien, “we can turn it off. We have that privilege” (1984)

There was no ground given at all for accepting that there are deep flaws in practice: that in fact ‘Prevent is having the opposite of its intended effect: by dividing, stigmatising and alienating segments of the population, Prevent could end up promoting extremism, rather than countering it’, as concluded in the 2016 report Preventing Education: Human Rights and Countering Terrorism in UK Schools by Rights Watch UK.

Mark Chalmers’ conclusion was to suggest that Prevent may not always take its current form of a bolt-on ‘big programme’, and would instead become just another form of child protection, like FGM. That would mean every public sector worker becomes an extended arm of Home Office policy, expected to act in counter-terrorism efforts.

But the training, the nuance, and the level of autonomy that the speakers believe exist in staff and in children are imagined. The trust between authorities and people who need shelter, safety, medical care or schooling must be upheld for the public good.

No one asked if and how children should be seen through the lens of terrorism, extremism and radicalisation at all. No one asked whether every child should be surveilled online by school-imposed software, with covert photos taken through the webcam in the name of children’s safeguarding, or labelled in school as associated with ‘terrorist’. What happens when that prevents trust, and who measures its harm?

smoothwall monitor dashboard with terrorist labels on child profile


Far too little is known about who makes decisions about the lives of others and how, about the criteria for defining inappropriate activity or referrals, and about the opaque decisions made on online content.

What effects will the Prevent programme have on our current and future society, where everyone is expected to surveil and inform upon each other? Failure to do so, to uphold the Prevent duty, becomes civic failure. How are curiosity and intent separated? How do we safeguard children from risk (that is not harm) and protect their childhood experiences, their free and full development of self?

No one wants children to be caught up in activities or radicalisation into terror groups. But is this the correct way to solve it?

This comprehensive new research by the YEIP suggests otherwise. The fact that the Home Office disengaged with the project in the last year, speaks volumes.

“The research provides new evidence that by attempting to profile and predict violent youth radicalisation, we may in fact be breeding the very reasons that lead those at risk to violent acts.” (Professor Theo Gavrielides).

Current case studies of lived experience, and history, also say it is mistaken. Prevent, when it comes to children and schools, needs massive reform at the very least, but those most in favour of how it works today aren’t the ones who should be involved in its reshaping.

“Who denounced you?” said Winston.

“It was my little daughter,” said Parsons with a sort of doleful pride. “She listened at the keyhole. Heard what I was saying, and nipped off to the patrols the very next day. Pretty smart for a nipper of seven, eh? I don’t bear her any grudge for it. In fact I’m proud of her. It shows I brought her up in the right spirit, anyway.” (1984).

 



The event was the launch of the European study on violent youth radicalisation from YEIP. The project investigated the attitudes and knowledge of young Europeans, youth workers and other practitioners, while testing tools for addressing the phenomenon through positive psychology and the application of the Good Lives Model.

Its findings include that young people at risk of violent radicalisation are “managed” by the existing justice system as “risks”. This creates further alienation and division, while recidivism rates continue to spiral.

The Queen’s Speech, Information Society Services and GDPR

The Queen’s Speech promised new laws to ensure that the United Kingdom retains its world-class regime protecting personal data. And the government proposes a new digital charter to make the United Kingdom the safest place to be online for children.

Improving online safety for children should mean one thing: children should be able to use online services without being used by them and by the people and organisations behind them. It should mean that their rights to be heard are prioritised in decisions about them.

As Sir Tim Berners-Lee is reported as saying, there is a need to work with companies to put “a fair level of data control back in the hands of people“. He rightly points out that today terms and conditions are “all or nothing”.

There is a gap in discussions that we fail to address when we think of consent to terms and conditions, or “handing over data”: the assumption that these are, and can always be, conscious acts.

For children, the question of whether accepting Ts&Cs gives them control, and whether that control is meaningful, becomes even more moot. What are they agreeing to? Younger children cannot give free and informed consent. After all, most privacy policies standardly include phrases such as, “If we sell all or a portion of our business, we may transfer all of your information, including personal information, to the successor organization,” which means that “accepting” a privacy policy today is effectively a blank cheque for anything tomorrow.

The GDPR requires terms and conditions to be laid out in policies that a child can understand.

The current approach to legislation around children and the Internet is heavily weighted towards protection from seen threats. The threats we need to give more attention to, are those unseen.

By 2024 more than 50% of home Internet traffic will be used by appliances and devices, rather than just for communication and entertainment…The IoT raises huge questions on privacy and security, that have to be addressed by government, corporations and consumers. (WEF, 2017)

Our lives, as measured in our behaviours and opinions, purchases and likes, are connected by trillions of sensors. My parents may have described using the Internet as going online. Today’s online world no longer means our time is spent ‘on the computer’; we are online all day, every day. Instead of going to a desk and booting up through a long phone cable, we have wireless computers in our pockets and in our homes, with functionality built in to enable us to do other things: make a phone call, make toast, and play. In a smart city surrounded by sensors under pavements and in buildings, with cameras and tracking everywhere we go, we are living ever more inside an overarching network of cloud computers that store our data. And from all that data decisions are made: which adverts to show us, on which sites, what we get offered and do not, and how our behaviours and our conscious decision-making may be nudged quite invisibly.

Data about us, whether uniquely identifiable or not, is all too often collected passively: IP addresses, linked sign-ins that extract friends lists. Some providers decide that we either accept this or do not use the thing at all. It’s part of the deal: we get the service, they get to trade our identity, like Top Trumps, behind the scenes. But we often don’t see it, and under GDPR there should be no contractual requirement bundled into consent; that is, ‘agree or don’t get the service’ is not an option.

From May 25, 2018 there will be special “conditions applicable to child’s consent in relation to information society services,” in Data Protection law which are applicable to the collection of data.

As yet, we have not had a debate in the UK about what that means in concrete terms, and if we do not soon, we risk it becoming an afterthought that harms more than it helps protect children’s privacy, and therefore their digital identity.

I think of five things needed by policy shapers to tackle it:

  • In-depth understanding of what ‘online’ and the Internet mean
  • Consistent understanding of what threat models and risks are connected to personal data, which today are underestimated
  • A grasp of why data privacy training is vital to safeguarding
  • A willingness to confront the idea that user regulation as a stand-alone step will create a better online experience for users, when we know that perceived problems are created by providers or other site users
  • An end to siloed thinking that fails to be forward thinking or to join the dots of tactics across Departments into a cohesive, inclusive strategy

If the government’s “major new drive on internet safety” is to involve the world’s largest technology companies in order to make the UK the “safest place in the world for young people to go online,” then these strategies and papers must join things up. Above all, technical knowledge of how the Internet works needs to join the dots of risks and benefits, in order to form a strategy that will actually make children safe and skilled, and able to see into their future.

When it comes to children, there is a further question over consent and parental spyware. Various walk-to-school apps, lauded by the former Secretary of State two years running, use spyware and can be used without a child’s consent. Guardian Gallery, which can scan for nudity in photos on any phone the ‘parent’ phone holder has access to install it on, can be made invisible on the ‘child’ phone. Imagine this in coercive relationships.

If these technologies and the online environment are not correctly assessed against “online safety” threat models for all parts of our population, then they fail to address the risk for the most vulnerable, who need that protection most.

What will the GDPR really mean for online safety improvement? What will it define as online services for remuneration in the IoT? And who will be considered as children, “targeted at” or “offered to”?

An active decision is required in the UK: will 16 remain the default age needed for consent to access Information Society Services, or will we adopt 13, which needs a legal change?

As banal as these questions sound, they need close attention and clarity between now and May 25, 2018 if the UK is to be GDPR-ready, so that providers of online services know who they are dealing with and how they should treat Internet access, participation, and age [parental] verification.

How will the “controller” make “reasonable efforts to verify in such cases that consent is given or authorised by the holder of parental responsibility over the child”, “taking into consideration available technology”?
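To make the question concrete, here is a minimal sketch of the decision a provider of an information society service has to encode, assuming a per-country age threshold and some unspecified verification step. The country values, the UK threshold of 13, and the function names are my own illustrative assumptions, not anything defined in the Regulation, which leaves “reasonable efforts” and “available technology” open.

    # Hypothetical sketch of the GDPR Article 8 decision an online service must make.
    # Per-country thresholds, the verification method, and all names are illustrative
    # assumptions; the Regulation does not define what "reasonable efforts" are.

    DEFAULT_AGE_OF_CONSENT = 16          # GDPR default for information society services
    MEMBER_STATE_DEROGATIONS = {         # example values only; states may set 13 to 16
        "UK": 13,                        # assumption: if the UK chose to derogate to 13
        "DE": 16,
    }

    def age_of_digital_consent(country_code: str) -> int:
        return MEMBER_STATE_DEROGATIONS.get(country_code, DEFAULT_AGE_OF_CONSENT)

    def process_signup(age: int, country_code: str, parental_consent_verified: bool) -> str:
        if age >= age_of_digital_consent(country_code):
            return "proceed: the child's own consent is valid"
        if parental_consent_verified:
            # 'Reasonable efforts' to verify the holder of parental responsibility,
            # e.g. a confirmed email, card check or ID step, are left to the controller.
            return "proceed: parental consent verified"
        return "block, or offer a limited, data-minimised service"

    print(process_signup(age=14, country_code="UK", parental_consent_verified=False))

Everything above the verification step is simple bookkeeping; the verification step itself is where the privacy cost of capturing parent-child relationships is incurred.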

These are fundamental questions of what the Internet is and means to people today. And if the current government approach to security is anything to go by, safety will not mean what we think it will mean.

It will matter how these plans join up. Age verification was not being considered in UK law in relation to how we would derogate from the GDPR even as late as October 2016, despite age verification requirements already being in the Digital Economy Bill. It shows a lack of joined-up digital thinking across our government, and needs to be addressed with urgency to get into the next Parliamentary round.

In recent draft legislation I have yet to see the UK government address Internet rights and safety for young people as anything other than a protection issue, treating the online space in the same way as offline, ‘IRL’, focused on stranger danger and sexting.

The UK Digital Strategy commits to the implementation of the General Data Protection Regulation by May 2018, and frames it as a business issue, labelling data as “a global commodity”. As such, its handling is framed solely as a requirement needed to ensure “that our businesses can continue to compete and communicate effectively around the world” and that adoption “will ensure a shared and higher standard of protection for consumers and their data.”

The Digital Economy Bill, despite being a perfect vehicle for this, has failed to take on children’s rights, and in particular the GDPR’s requirements for consent, at all. It was clear that if we are to do any future digital transactions we need to level up to the GDPR, not drop to the lowest common denominator between it and existing laws.

It was utterly ignored. So were children’s rights to have their own views heard in the consultation on the GDPR derogations for children, with little chance for involvement from young people’s organisations, and less than a month to respond.

We must now get this right in any new Digital Strategy and bill in the coming parliament.

Crouching Tiger Hidden Dragon: the making of an IoT trust mark

The Internet of Things (IoT) brings with it unique privacy and security concerns associated with smart technology and its use of data.

  • What would it mean for you to trust an Internet connected product or service and why would you not?
  • What has damaged consumer trust in products and services and why do sellers care?
  • What do we want to see different from today, and what is necessary to bring about that change?

These three pairs of questions implicitly underpinned the intense day of discussion at the London Zoo last Friday.

These questions went unasked; they could have been voiced before we started, although they were probably assumed to be self-evident:

  1. Why do you want one at all [define the problem]?
  2. What needs to change and why [define the future model]?
  3. How do you deliver that and for whom [set out the solution]?

If a group does not agree on the need and drivers for change, there will be no consensus on what that should look like, what the gap is to achieve it, and even less on making it happen.

So who do you want the trustmark to be for, why will anyone want it, and what will need to change to deliver the aims? No one wants a trustmark per se. Perhaps you want what values or promises it embodies to  demonstrate what you stand for, promote good practice, and generate consumer trust. To generate trust, you must be seen to be trustworthy. Will the principles deliver on those goals?

The Open IoT Certification Mark Principles, as a rough draft, were the outcome of the day, and are available online.

Here are my reflections, including what was missing on privacy, and the potential for it to be considered in future.

I’ve structured the first part, at ca. 1,000 words of lists and bullet points, assuming readers attended the event. The background comes after that, for anyone interested in reading a longer piece.

Many thanks upfront to fellow participants, to the organisers Alexandra D-S and Usman Haque, and to the colleague who hosted at the London Zoo. And to Usman’s Mum. I hope there will be more constructive work to follow, and that there is space for civil society to play a supporting role and critical friend.


The mark didn’t aim to fix the IoT in a day, but to deliver something better for product and service users, from those IoT companies and providers who want to sign up. Here is what I took away.

I learned three things

  1. A sense of privacy is not homogenous, even within people who like and care about privacy in theoretical and applied ways. (I very much look forward to reading suggestions promised by fellow participants, even if enforced personal openness and ‘watching the watchers’ may mean ‘privacy is theft‘.)
  2. Awareness of current data protection regulations needs to be improved in the field. For example, Subject Access Requests already apply to all data controllers, public and private. Few have read the GDPR or the ePrivacy Directive, despite their importance for security measures in personal devices, relevant to the IoT.
  3. I truly love working on this stuff, with people who care.

And it reaffirmed things I already knew

  1. Change is hard, no matter in what field.
  2. People working together towards a common goal is brilliant.
  3. Group collaboration can create some brilliantly sharp ideas. Group compromise can blunt them.
  4. Some men are particularly bad at talking over each other, never mind over the women in the conversation. Women notice more. (Note to self: When discussion is passionate, it’s hard to hold back in my own enthusiasm and not do the same myself. To fix.)
  5. The IoT context, and the risks within it, are not homogeneous; it brings new risks and adversaries. The risks for manufacturers, for consumers, and for the rest of the public are different, and cannot easily be solved with a one-size-fits-all solution. But we can try.

Concerns I came away with

  1. If the citizen / customer / individual is to benefit from the IoT trustmark, they must be put first, ahead of companies’ wants.
  2. If the IoT group controls the design, the assessment of adherence, and the definition of success, how objective will it be?
  3. The group was not sufficiently diverse and, as a result, reflects too little on the risks and impact of the lack of diversity in design and effect, and the implications of dataveillance.
  4. Critical minority thoughts, although welcomed, were stripped out of the crowdsourced first-draft principles in compromise.
  5. More future thinking should be built-in to be robust over time.

IoT adversaries: via Twitter, unknown source

What was missing

There was too little discussion of privacy in perhaps the most important context of the IoT: interconnectivity and new adversaries. It’s not only about *your* thing, but about the things it speaks to and interacts with (those of friends, of passersby, of the cityscape), and about other individual and state actors interested in offence and defence. While we started to discuss it, we did not have the opportunity to go into sufficient depth to get that thinking into applicable solutions in the principles.

One of the greatest risks users face is the ubiquitous collection and storage of data about them that reveals detailed, interconnected patterns of behaviour and identity, without their seeing how it is used by companies behind the scenes.

We also missed discussing not what we see as necessary today, but what we can foresee as necessary for the short-term future: brainstorming and crowdsourcing horizon scanning for market needs and changing stakeholder wants.

Future thinking

Here are the areas of future thinking that smart work on the IoT mark could consider.

  1. We are moving towards ever greater requirements to declare identity to use a product or service, to register and log in to use anything at all. How will that change trust in IoT devices?
  2. Single-identity sign-on is becoming ever more imposed, and any attempt to present who I am in multiple ways, by choice and depending on context, is therefore restricted. [Not all users want to use the same social media credentials for online shopping, with their child’s school app, and for their weekend entertainment.]
  3. Is this imposition what the public wants or what companies sell us as what customers want in the name of convenience? What I believe the public would really want is the choice to do neither.
  4. There is increasingly no private space or time at places of work.
  5. Limitations on private space are encroaching in secret in all public city spaces. How will ‘handoffs’ affect privacy in the IoT?
  6. Public sector (connected) services are likely to need even more exacting standards than single home services.
  7. There is too little understanding of the social effects of this connectedness and knowledge created, embedded in design.
  8. What effects may there be on the perception of the IoT as a whole, if predictive data analysis and complex machine learning and AI hidden in black boxes becomes more commonplace and not every company wants to be or can be open-by-design?
  9. Ubiquitous collection and storage of data about users that reveal detailed, inter-connected patterns of behaviour and our identity needs greater commitments to disclosure. Where the hand-offs are to other devices, and whatever else is in the surrounding ecosystem, who has responsibility for communicating interaction through privacy notices, or defining legitimate interests, where the data joined up may be much more revealing than stand-alone data in each silo?
  10. Define with greater clarity the privacy threat models for different groups of stakeholders and address the principles for each.

What would better look like?

The draft privacy principles are a start, but they’re not yet as aspirational as I would have hoped. Of course the principles will only be adopted if they are possible and practical, and only by those who choose to. But where is the differentiator from what everyone is already required to do, and better than the bare minimum? How will you sell this to consumers as new? How would you like your child to be treated?

The wording in these five bullet points is the first crowdsourced starting point.

  • The supplier of this product or service MUST be General Data Protection Regulation (GDPR) compliant.
  • This product SHALL NOT disclose data to third parties without my knowledge.
  • I SHOULD get full access to all the data collected about me.
  • I MAY operate this device without connecting to the internet.
  • My data SHALL NOT be used for profiling, marketing or advertising without transparent disclosure.

Yes, other points that came under security address some of the crossover between privacy and surveillance risks, but there is as yet little of substance that is aspirational enough to make the IoT mark a real differentiator in terms of privacy. An opportunity remains.

It was that, and how young people perceive privacy, that I hoped to bring to the table. Because if manufacturers are serious about future success, they cannot ignore today’s children and how they feel. How you treat them today will shape future purchasers and their purchasing, and there is evidence you are getting it wrong.

The timing is good in that it now also offers the opportunity to promote consistent understanding, and embed the language of GDPR and ePrivacy regulations into consistent and compatible language in policy and practice in the #IoTmark principles.

User rights I would like to see considered

These are some of the points I would think privacy by design would mean. This would better articulate GDPR Article 25 to consumers.

Data sovereignty is a good concept and I believe should be considered for inclusion in explanatory blurb before any agreed privacy principles.

  1. Goods should be ‘dumb* by default’ until the smart functionality is switched on. [*As our group chair/scribe called it.] I would describe this as, “off is the default setting out-of-the-box” (see the sketch after this list).
  2. Privacy by design. Deniability by default. That is, not only after opt-out: a company should not access the personal or identifying purchase data of anyone who opts out of data collection about their product/service use during the set-up process.
  3. The right to opt out of data collection at a later date while continuing to use services.
  4. A right to object to the sale or transfer of behavioural data, including to third-party ad networks and absolute opt-in on company transfer of ownership.
  5. A requirement that advertising should be targeted to content, [user bought fridge A] not through jigsaw data held on users by the company [how user uses fridge A, B, C and related behaviour].
  6. An absolute rejection of using children’s personal data gathered to target advertising and marketing at children
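As a purely illustrative sketch of the first three points above (the field names and structure are my own assumptions, not anything from the draft principles), this is roughly what “off is the default setting out-of-the-box”, with later opt-in and opt-out, could look like in a device’s configuration:

    from dataclasses import dataclass, field

    # Hypothetical device privacy configuration. The schema is illustrative only;
    # the draft #IoTmark principles do not define one.

    @dataclass
    class PrivacyConfig:
        smart_features_enabled: bool = False    # 'dumb by default' until switched on
        cloud_connection_enabled: bool = False  # the device MAY be operated offline
        data_collection_enabled: bool = False   # nothing collected before explicit opt-in
        share_with_third_parties: bool = False  # no disclosure without my knowledge
        profiling_and_ads_enabled: bool = False # no profiling or marketing by default
        consent_log: list = field(default_factory=list)

        def opt_in(self, purpose: str) -> None:
            """Record an explicit, purpose-specific opt-in."""
            self.consent_log.append(("opt-in", purpose))
            if purpose == "data_collection":
                self.data_collection_enabled = True

        def opt_out(self, purpose: str) -> None:
            """Opt out at a later date while continuing to use the service."""
            self.consent_log.append(("opt-out", purpose))
            if purpose == "data_collection":
                self.data_collection_enabled = False

    config = PrivacyConfig()           # out of the box: everything off
    config.opt_in("data_collection")   # only after a freely given choice

The point is only that the defaults, not the user, do the protective work.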

Background: Starting points before privacy

After a brief recap on 5 years ago, we heard two talks.

The first was a presentation from Bosch. They used the insights from the IoT open definition of five years ago in their IoT thinking and embedded it in their brand book; the presenter suggested that in five years’ time, every fridge Bosch sells will be ‘smart’. The second was a fascinating presentation of both EU thinking and an intellectual nudge to think beyond the practical, to what kind of society we want to see using the IoT in future: hints of hardcore ethics and philosophy that made my brain fizz, from a speaker soon to retire from the European Commission.

The principles of open sourcing, manufacturing, and sustainable life cycle were debated in the afternoon with intense arguments and clearly knowledgeable participants, including those who were quiet.  But while the group had assigned security, and started work on it weeks before, there was no one pre-assigned to privacy. For me, that said something. If they are serious about those who earn the trustmark being better for customers than their competition, then there needs to be greater emphasis on thinking like their customers, and by their customers, and what use the mark will be to customers, not companies. Plan early public engagement and testing into the design of this IoT mark, and make that testing open and diverse.

To that end, I believe it needed to be articulated more strongly, that sustainable public trust is the primary goal of the principles.

  • Trust that my device will not become unusable or worthless through updates or lack of them.
  • Trust that my device is manufactured safely and ethically and with thought given to end of life and the environment.
  • Trust that my source components are of high standards.
  • Trust in what data is gathered, and in how that data is gathered and used by the manufacturers.

Fundamental to ‘smart’ devices is their connection to the Internet, so the last point, for me, is key to successful public perception and to the mark actually making a difference beyond its PR value to companies. The value-add must be measured from the consumer’s point of view.

All the openness about design functions and practice improvements, without attempting to change privacy-infringing practices, may be wasted effort. Why? Because the perceived value of the mark will be proportionate to the risks it is seen to mitigate.

Why?

Because I assume that you know where your source components come from today. I was shocked to find out that not all do, and that ‘one degree removed’ is considered an improvement. Holy cow, I thought. What about regulatory requirements for product safety recalls? These differ, of course, for different product areas, but I was still surprised. Having worked in global Fast Moving Consumer Goods (FMCG) and the food industry, in semiconductors and optoelectronics, and in medical devices, it was self-evident to me that sourcing is rigorous. So the new requirement to know one degree removed was a suggested minimum. But it might shock consumers to know there is not usually more by default.

Customers also believe they have reasonable expectations of not being screwed by a product update, or of not being left with something that does not work because of its computing-based components. The public can take vocal, reputation-damaging action when they are let down.

In the last year alone, some of the more notable press stories include a manufacturer denying service, telling customers, “Your unit will be denied server connection,” after a critical product review. Customer support at Jawbone came in for criticism after reported failings. And even Apple has had problems in rolling out major updates.

While these are visible, the full extent of the overreach of company market and product surveillance into our whole lives, not just our living rooms, is yet to become understood by the general population. What will happen when it is?

The Internet of Things is exacerbating the power imbalance between consumers and companies, between government and citizens. As Wendy Grossman wrote recently, in one sense this may make privacy advocates’ jobs easier. It was always hard to explain why “privacy” mattered. Power, people understand.

That public discussion is long overdue. If open principles on IoT devices mean that the signed-up companies differentiate themselves by becoming market leaders in transparency, it will be a great thing. Companies need to offer full disclosure of data use in any privacy notices, in clear, plain language, under GDPR anyway; but to go beyond that and offer customers a fair presentation of both risks and customer benefits will not only be a point-of-sale benefit, but could potentially improve digital literacy in customers too.

The morning discussion touched quite often on pay-for-privacy models. While product makers may see this as offering a good thing, I strove to bring discussion back to first principles.

Privacy is a human right. There can be no ethical model of discrimination based on any non-consensual invasion of privacy. Privacy is not something I should pay to have. You should not design products that reduce my rights. GDPR requires privacy-by-design and data protection by default. Now is that chance for IoT manufacturers to lead that shift towards higher standards.

We also need a new ethics thinking on acceptable fair use. It won’t change overnight, and perfect may be the enemy of better. But it’s not a battle that companies should think consumers have lost. Human rights and information security should not be on the battlefield at all in the war to win customer loyalty.  Now is the time to do better, to be better, demand better for us and in particular, for our children.

Privacy will be a genuine market differentiator

If manufacturers do not want to change their approach to exploiting customer data, they are unlikely to be seen to have changed.

Today, the feelings that people in the US and Europe reflect in surveys are loss of empowerment, helplessness, and feeling used. That will shift to shock and resentment and, as any change curve will predict, anger.

A 2014 survey for the Royal Statistical Society by Ipsos MORI found that trust in institutions to use data is much lower than trust in them in general.

“The poll of just over two thousand British adults carried out by Ipsos MORI found that the media, internet services such as social media and search engines and telecommunication companies were the least trusted to use personal data appropriately.” [2014, Data trust deficit with lessons for policymakers, Royal Statistical Society]

In the British student population, one 2015 survey of university applicants in England found that, of the 37,000 who responded, the vast majority of UCAS applicants agree that sharing personal data can benefit them and support public-benefit research into university admissions, but they want to stay firmly in control. 90% of respondents said they wanted to be asked for their consent before their personal data is provided outside of the admissions service.

In 2010, multi-method research with young people aged 14-18 by the Royal Academy of Engineering found that, “despite their openness to social networking, the Facebook generation have real concerns about the privacy of their medical records.” [2010, Privacy and Prejudice, RAE, Wellcome]

When people use privacy settings on Facebook set to maximum, they believe they get privacy, and understand little of what that means behind the scenes.

Are there tools designed by others, like Projects by IF’s licences, and ways this can be done, that you’re not even considering yet?

What if you don’t do it?

“But do you feel like you have privacy today?” I was asked the question in the afternoon. How do people feel today, and does it matter? Companies exploiting consumer data and getting caught doing things the public don’t expect with their data, has repeatedly damaged consumer trust. Data breaches and lack of information security have damaged consumer trust. Both cause reputational harm. Damage to reputation can harm customer loyalty. Damage to customer loyalty costs sales, profit and upsets the Board.

While overreach into our living rooms has raised awareness of invasive data collection, we are yet to see and understand the invasion of privacy into our thinking and nudged behaviour, into our perception of the world on social media, and the effects on decision-making that data analytics is enabling as data shows companies ‘how we think’, granting companies access to human minds in the abstract, even before Facebook is there in the flesh.

Governments want to see how we think too. Is thought crime really that far away, given database labels of ‘domestic extremists’ for activists and anti-fracking campaigners, or the growing weight of policymakers’ attention given to PredPol, predictive analytics, the [formerly] Cabinet Office Nudge Unit, Google DeepMind et al?

Had the internet remained decentralised, the debate might be different.

I am starting to think of the IoT not as the Internet of Things, but as the Internet of Tracking. If some have their way, it will be the Internet of Thinking.

In our centralised Internet of Things model, personal data from human interactions has become the network infrastructure, and the data flows are controlled by others. Our brains are the new data servers.

In the Internet of Tracking, people become the end nodes, not things.

And this is where future users will be so important. Do you understand and plan for the factors that will drive push-back and a crash in consumer confidence in your products, and do you take them seriously?

Companies have a choice to act as empires would – multinationals joining up, even at low levels, disempowering individuals and sucking knowledge and power to the centre. Or they can act as nation states, ensuring citizens keep their sovereignty and control over a selected sense of self.

Look at Brexit. Look at GE2017. Tell me, what do you see as the direction of travel? Companies can fight it, but they will not defeat how people feel. No matter how much they hope ‘nudge’ and predictive analytics might give them this power, the people can take back control.

What might this desire to take back control mean for future consumer models? The afternoon discussion, whilst intense, reached fairly simplistic concluding statements on privacy. We could have done with at least another hour.

Some in the group were frustrated “we seem to be going backwards” in current approaches to privacy and with GDPR.

But if the current legislation is reactive because companies have misbehaved, how will that be rectified for future? The challenge in the IoT both in terms of security and privacy, AND in terms of public perception and reputation management, is that you are dependent on the behaviours of the network, and those around you. Good and bad. And bad practices by one, can endanger others, in all senses.

If you believe that ‘going backwards’ means reclaiming a growing sense of citizens’ rights, rather than accepting that companies have the outsourced power to control the rights of others, then perhaps that is true.

A first-principles question was asked: is any element on privacy needed at all, if the text simply states that the supplier of this product or service must be General Data Protection Regulation (GDPR) compliant? The GDPR was years in the making, after all. Does privacy matter more in the IoT, and in what ways? The room tended, understandably, to talk about it from the company perspective: “We can’t”, “won’t”, “that would stop us from XYZ.” Privacy would, however, be better addressed from the personal point of view.

What do people want?

From the company point of view, the language is different and holds clues. Openness, control, user choice and pay-for-privacy are not the same thing as the basic human right to be left alone. The afternoon discussion reminded me of the 2014 WaPo article discussing Mark Zuckerberg’s theory of privacy and a Palo Alto meeting at Facebook:

“Not one person ever uttered the word “privacy” in their responses to us. Instead, they talked about “user control” or “user options” or promoted the “openness of the platform.” It was as if a memo had been circulated that morning instructing them never to use the word “privacy.””

In the afternoon working group on privacy, there was robust discussion of whether we even had consensus on what privacy means. Words like autonomy, control, and choice came up a lot. But it was only a beginning; there is opportunity for better. An academic voice raised the concept of sovereignty, with which I agreed, but working out how and where to fit it into wording that is at once both minimal and applied, under a scribe who appeared frustrated and wanted a completely different approach from what he heard across the group, meant it was left out.

This group does care about privacy. But I wasn’t convinced that the room cared in the way the public as a whole does, rather than only as consumers and customers do. IoT products will potentially affect everyone, even those who do not buy your stuff. Everyone in that room agreed on one thing: the status quo is not good enough. What we did not agree on was why, and what the minimum change is that would make enough of a difference to matter.

I share the deep concerns of many child rights academics who foresee the harm from efforts to avoid the restrictions that Article 8 of the GDPR will impose. It is likely to be damaging to children’s right to access information, to be discriminatory according to parents’ prejudices or socio-economic status, and to encourage ‘cheating’: requiring secrecy rather than privacy, in attempts to hide from or work around the stringent system.

In ‘The Class’, the research showed that “teachers and young people have a lot invested in keeping their spheres of interest and identity separate, under their autonomous control, and away from the scrutiny of each other.” [2016, Livingstone and Sefton-Green, p235]

Employers require staff to use devices with single sign-on, including web and activity tracking and monitoring software. Employee personal data and employment data are blended. Who owns that data, what rights will employees have to refuse what they see as excessive, and is it manageable given the power imbalance between employer and employee?

What is this doing in the classroom and boardroom for stress, anxiety, performance and system and social avoidance strategies?

A desire for convenience creates shortcuts, and these are often met using systems that require sign-on through the platform giants: Google, Facebook, Twitter, et al. But we are kept in the dark about how using these platforms gives them, and the companies behind them, access to see how our online and offline activity is all joined up.

Any illusion of privacy we maintain, we discussed, is not choice or control if it is based on ignorance, and backlash against companies’ lack of effort to ensure disclosure and understanding is growing.

“The lack of accountability isn’t just troubling from a philosophical perspective. It’s dangerous in a political climate where people are pushing back at the very idea of globalization. There’s no industry more globalized than tech, and no industry more vulnerable to a potential backlash.”

[Maciej Ceglowski, Notes from an Emergency, talk at re.publica]

Why do users need you to know about them?

If your connected *thing* requires registration, why does it? How about a commitment not to force one of these registration methods, or indeed any at all? Social media research by Pew Research in 2016 found that 56% of smartphone owners aged 18 to 29 use auto-delete apps, more than four times the share among those aged 30-49 (13%) and six times the share among those 50 or older (9%).

Does that tell us anything about the demographics of data retention preferences?

In 2012, they suggested social media has changed the public discussion about managing “privacy” online. When asked, people say that privacy is important to them; when observed, people’s actions seem to suggest otherwise.

Does that tell us anything about how well companies communicate to consumers how their data is used and what rights they have?

There is also data strongly indicating that women act more to protect their privacy, but when it comes to basic privacy settings, users of all ages are equally likely to choose a private, semi-private or public setting for their profile. There are no significant variations across age groups in the US sample.

Now think about why that matters for the IoT. I wonder who makes the bulk of purchasing decisions about household white goods, for example, and whether Bosch has factored that into its smart-fridges-only decision.

Do you *need* to know who the user is? Can the smart user choose to stay anonymous at all?

The day’s morning challenge was to attend more than one interesting discussion happening at the same time. As invariably happens, the session notes and quotes are always out of context and can’t possibly capture everything, no matter how amazing the volunteer (with thanks!). But here are some of the discussion points from the session on the body and health devices, the home, and privacy. It also included a discussion on racial discrimination, algorithmic bias, and the reasons why care.data failed patients and failed as a programme. We had lengthy discussion on ethics and privacy: smart meters, objections to models of price discrimination, and why pay-for-privacy harms the poor by design.

Smart meter data can track the use of individual appliances inside a person’s home, and intimate patterns of behaviour. Information about our consumption of power, what and when every day, reveals personal details about everyday lives, our interactions with others, and personal habits.
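As a rough illustration of why this is possible (a deliberately simplified sketch, not how any supplier actually processes readings; real non-intrusive load monitoring uses learned appliance signatures and much finer sampling, and the readings and wattage bands here are invented), detecting step changes in whole-home power readings is enough to guess which appliance switched on, and when:

    # Simplified sketch of appliance inference from smart meter readings.
    # The readings and wattage bands are made-up illustrative values.

    readings = [  # (minute, whole-home watts)
        (0, 150), (1, 150), (2, 2150), (3, 2150), (4, 150), (5, 1250), (6, 150),
    ]

    APPLIANCE_BANDS = {           # step size in watts -> guessed appliance
        (1800, 2500): "kettle",
        (1000, 1500): "microwave",
        (400, 800): "washing machine element",
    }

    def label_step(delta: float) -> str:
        for (low, high), name in APPLIANCE_BANDS.items():
            if low <= delta <= high:
                return name
        return "unknown load"

    previous = readings[0][1]
    for minute, watts in readings[1:]:
        delta = watts - previous
        if delta > 300:           # a load switched on
            print(f"minute {minute}: +{delta} W, looks like the {label_step(delta)}")
        previous = watts

Aggregated over days, even this crude approach starts to reveal when a household wakes, eats, washes and is away, which is the point made above.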

Why should company convenience come above the consumer’s? Why should government powers trump personal rights?

Smart meter data is among the knowledge that government is exploiting, without consent, to discover a whole range of issues, including ensuring that “Troubled Families are identified”. Knowing how dodgy some of the school behaviour data that helps define who is “troubled” might be, there is a real question here: is this sound data science? How are errors identified? What about privacy? It’s not your policy, but if it is your product, what are your responsibilities?

If companies do not respect children’s rights, they’d better shape up to be GDPR compliant

For children and young people, who are more vulnerable to nudge, and for whom developing a sense of self involves forming and questioning identity, these influences need oversight or should be avoided.

In terms of the GDPR, providers are going to pay particular attention to Article 8 on ‘information society services’ and parental consent, Article 22 on profiling, the right to restriction of processing (Article 18), the right to erasure (Article 17 and recital 65), and the right to data portability (Article 20). However, they may need simply to reassess their exploitation of children and young people’s personal data and behavioural data. Article 57 requires special attention to be paid by regulators to activities specifically targeted at children, as the ‘vulnerable natural persons’ of recital 75.

Human rights regulations and conventions overlap in similar principles that demand respect for a child, and the right to be let alone:

(a) The development of the child’s personality, talents and mental and physical abilities to their fullest potential;

(b) The development of respect for human rights and fundamental freedoms, and for the principles enshrined in the Charter of the United Nations.

A weakness of the GDPR is that it allows derogation on age, and will create inequality and inconsistency for children as a result. By comparison, Article one of the Convention on the Rights of the Child (CRC) defines who is to be considered a “child” for the purposes of the CRC, and states that: “For the purposes of the present Convention, a child means every human being below the age of eighteen years unless, under the law applicable to the child, majority is attained earlier.”

Article two of the CRC says that States Parties shall respect and ensure the rights set forth in the present Convention to each child within their jurisdiction without discrimination of any kind.

CRC Article 16 says that no child shall be subjected to arbitrary or unlawful interference with his or her privacy, family, home or correspondence, nor to unlawful attacks on his or her honour and reputation.

Article 8 CRC requires respect for the right of the child to preserve his or her identity […] without unlawful interference.

Article 12 CRC demands States Parties shall assure to the child who is capable of forming his or her own views the right to express those views freely in all matters affecting the child, the views of the child being given due weight in accordance with the age and maturity of the child.

That stands in potential conflict with GDPR Article 8. There is much in the GDPR on derogations, by country and for children, still to be set.

What next for our data in the wild

Hosting the event at the zoo offered added animals, and at lunchtime we got out on a tour, kindly hosted by a fellow participant. We learned how smart technology is embedded in some of the animal enclosures, for example in work with temperature sensors in the penguin enclosure. I love tigers, so it was a bonus to see such beautiful and powerful animals up close, even if it was a little sad for their circumstances, and, as a general principle, to see big animals caged rather than in the wild.

Freedom is a common desire in all animals. Physical, mental, and freedom from control by others.

I think any manufacturer that underestimates this element of human instinct is ignoring the ‘hidden dragon’ that some think is a myth. Privacy is not dead. It is not extinct, nor even, unlike the beautiful tigers, endangered. Privacy in the IoT, at its most basic, is the right to control our purchasing power: the ultimate people power waiting to be sprung. Truly a crouching tiger. People object to being used, and if companies continue to do so without full disclosure, they do so at their peril. Companies seem all-powerful in the battle for privacy, but they are not. Even insurers and data brokers must be fair and lawful, and it is for regulators to ensure that practices meet the law.

When consumers realise our data, our purchasing power has the potential to control, not be controlled, that balance will shift.

“Paper tigers” are superficially powerful but are prone to overextension that leads to sudden collapse. If that happens to the superficially powerful companies that choose unethical and bad practice, as a result of better data privacy and data ethics, then bring it on.

I hope that the IoT mark can champion best practices and make a difference to benefit everyone.

While the companies involved in its design may be interested in consumers, I believe it could be better for everyone, done well. The great thing about the efforts into an #IoTmark is that it is a collective effort to improve the whole ecosystem.

I hope more companies will realise their privacy and ethical responsibilities in the world to all people: those interested in just being, those who want to be let alone, and not just those buying.

“If a cat is called a tiger it can easily be dismissed as a paper tiger; the question remains however why one was so scared of the cat in the first place.”

The Resistance to Theory (1982), Paul de Man

Further reading: Networks of Control – A Report on Corporate Surveillance, Digital Tracking, Big Data & Privacy by Wolfie Christl and Sarah Spiekermann

Information society services: Children in the GDPR, Digital Economy Bill & Digital Strategy

In preparation for the General Data Protection Regulation (GDPR), there must be an active UK policy decision in the coming months for children and the Internet – the provision of ‘Information Society Services’. From May 25, 2018 the age of consent for online content aimed at children will be 16 by default, unless UK law is made to lower it.

Age verification for online information services under the GDPR will mean capturing parent-child relationships. This could mean a parent’s email or credit card, unless other choices are made. What will that mean for children’s access to services, and for privacy? It is likely to offer companies an opportunity for a data grab, and to mean privacy loss for the public, as more data about family relationships will be created and collected than the content provider would otherwise get.

Our interactions create a blended identity of online and offline attributes which, I suggested in a previous post, creates synthesised versions of our selves and raises questions of data privacy and security.

The goal may be to protect the physical child. The outcome will simultaneously expose children and parents, through increased personal data collection, to risks we would not otherwise face. By increasing the data collected, it increases the associated risks of loss, theft, and harm to identity integrity. How will legislation balance these risks against rights to participation?

The UK government has various work in progress before then that could address these questions.

But will they?

As Sonia Livingstone wrote in the post on the LSE media blog about what to expect from the GDPR and its online challenges for children:

“Now the UK, along with other Member States, has until May 2018 to get its house in order”.

What will that order look like?

The Digital Strategy and Ed Tech

The Digital Strategy commits to changes in National Pupil Data management. That is, changes in the handling and secondary uses of data collected from pupils in the school census, like using it for national research and planning.

It also means giving data to commercial companies and the press: companies such as private tutor-pupil matching services and data intermediaries, and journalists at the Times and the Telegraph.

Access to NPD via the ONS VML would mean safe data use, in safe settings, by safe (trained and accredited) users.

Sensitive data — it remains to be seen how DfE intends to interpret ‘sensitive’ and whether that is the DPA1998 term or lay term meaning ‘identifying’ as it should — will no longer be seen by users for secondary uses outside safe settings.

However, a grey area on privacy and security remains in the “Data Exchange” which will enable EdTech products to “talk to each other”.

The aim of changes in data access is to ensure that children’s data integrity and identity are secure.  Let’s hope the intention that “at all times, the need to preserve appropriate privacy and security will remain paramount and will be non-negotiable” applies across all closed pupil data, and not only to that which may be made available via the VML.

This strategy is still far from clear or set in place.

The Digital Strategy and consumer data rights

The Digital Strategy commits, under the heading of “Unlocking the power of data in the UK economy and improving public confidence in its use”, to the implementation of the General Data Protection Regulation by May 2018. The Strategy frames this as a business issue, labelling data as “a global commodity”. As such, its handling is framed solely as a requirement needed to ensure “that our businesses can continue to compete and communicate effectively around the world” and that adoption “will ensure a shared and higher standard of protection for consumers and their data.”

As far as children go, the GDPR is far more about the protection of children as people. It focuses on returning control over children's own identity, and the ability to revoke control by others, rather than on consumer rights.

That said, there are data rights issues which are also consumer issues, and product safety failures posing real risk of harm.

Neither the Digital Economy Bill nor the Digital Strategy addresses these rights and security issues with any meaningful effect, particularly those posed by the Internet of Things.

In fact, the chapter on the Internet of Things and Smart Infrastructure [9/19] singularly misses out anything on security and safety:

“We want the UK to remain an international leader in R&D and adoption of IoT. We are funding research and innovation through the three year, £30 million IoT UK Programme.”

There was much more thoughtful detail in the 2014 Blackett Review on the IoT to which I was signposted today after yesterday’s post.

If it's not scary enough for the public to think that their sex secrets and devices are hackable, public trust in connected devices will surely fall further when they find strangers talking to their children through a baby monitor or toy. [BEUC campaign report on #Toyfail]

“The internet-connected toys ‘My Friend Cayla’ and ‘i-Que’ fail miserably when it comes to safeguarding basic consumer rights, security, and privacy. Both toys are sold widely in the EU.”

Digital skills and training in the strategy don't touch on any form of change management plan for existing sectors in which we expect machine learning and AI to change the job market. This is something the digital and industrial strategies must address hand in glove.

The tactics and training providers listed sound super, but there does not appear to be an aspirational strategy hidden between the lines.

The Digital Economy Bill and citizens’ data rights

While the rest of Europe has recognised in this legislation that a future-thinking digital world without boundaries needs future thinking on data protection, and empowered citizens with better control of identity, the UK government appears intent on taking ours away.

To take only one example for children: in Cabinet Office-led meetings, the Digital Economy Bill was explicit about use for identifying and tracking individuals labelled under "Troubled Families" and interventions with them. It is baffling, and in conflict with both the spirit and letter of the GDPR, that while consent is required to work directly with people, that consent is ignored in order to access their information. Students and applicants will see their personal data sent to the Student Loans Company without their consent or knowledge. This overrides the current consent model in place at UCAS.

It is baffling that the government is relentlessly pursuing the Digital Economy Bill's data copying clauses, which remove confidentiality by default and, through Chapter 2 opening up the Civil Registry, will release our identities in birth, marriage and death data for third party use without consent, and without any safeguards in the bill.

Government has not only excluded important aspects of Parliamentary scrutiny from the bill, it is trying to introduce "almost untrammeled powers" (paragraph 21) that will "very significantly broaden the scope for the sharing of information" with "specified persons", applying "whether the service provider concerned is in the public sector or is a charity or a commercial organisation", and with non-specific purposes for which the information may be disclosed or used. [Reference: Scrutiny committee comments]

Future changes need future joined up thinking

While it is important to learn from the past, I worry that the effort some social scientists put into looking backwards is not matched by enthusiasm to look ahead and make active recommendations for a better future.

Society appears to have its eyes wide shut to the risks of coercive control and nudge as research among academics and government departments moves in the direction of predictive data analysis.

Uses of administrative big data and publicly available social media data, for example in research and statistics, need further regulation in practice and policy, but instead the Digital Economy Bill looks only at how more data can be got out of departmental silos.

A certain intransigence about data sharing with researchers from government departments is understandable. What’s the incentive for DWP to release data showing its policy may kill people?

Westminster may fear it has more to lose from data releases than it has to gain, and does not seek out the political capital to be had from good news.

The ethics of data science are applied patchily at best in government, and inconsistently in academic expectations.

Some researchers have identified this, but there seems little will to act on it:

 “It will no longer be possible to assume that secondary data use is ethically unproblematic.”

[Data Horizons: New forms of Data for Social Research, Elliot, M., Purdam, K., Mackey, E., School of Social Sciences, The University Of Manchester, 2013.]

Research and legislation alike seem hell bent on the low hanging fruit, but miss out the really hard things. What meaningful benefit will come from spending millions of pounds on exploiting these personal data and opening our identities to risk, just to find out whether course X means people are employed in tax bracket Y five years later, versus course Z where everyone ends up a self-employed artist? What ethics will be applied to the outcomes of the questions asked, and why?

And while government is busy joining up children's education data throughout their lifetimes, from age 2 across school, FE and HE, into their HMRC and DWP interactions, there is no public plan in the Digital Strategy for the employment market of the coming 10 to 20 years, when many believe, as do these authors in Scientific American, "around half of today's jobs will be threatened by algorithms. 40% of today's top 500 companies will have vanished in a decade."

What benefit is there in knowing only what was, or in workforce and digital skills plans that list ad hoc tactics but no strategy?

We must safeguard jobs and societal needs, but just teaching people to code is not a solution to a fundamental gap in what our purpose will be, or to our place as a world-leading tech nation after Brexit. We are going to have fewer talented people from across the world staying on after completing academic studies, because they're not coming at all.

There may be investment in A.I. but where is the investment in good data practices around automation and machine learning in the Digital Economy Bill?

To do this Digital Strategy well, we need joined up thinking.

Improving online safety for children in The Green Paper on Children’s Internet Safety should mean one thing:

Children should be able to use online services without being used and abused by them.

This article arrived on my Twitter timeline via a number of people. Doteveryone CEO Rachel Coldicutt summed up various strands of thought I started to hear hints of last month at #CPDP2017 in Brussels:

“As designers and engineers, we’ve contributed to a post-thought world. In 2017, it’s time to start making people think again.

“We need to find new ways of putting friction and thoughtfulness back into the products we make.” [Glanceable truthiness, 30.1.2017]

Let’s keep the human in discussions about technology, and people first in our products

All too often in technology and even privacy discussions, people have become ‘consumers’ and ‘customers’ instead of people.

The Digital Strategy may seek to unlock “the power of data in the UK economy” but policy and legislation must put equal if not more emphasis on “improving public confidence in its use” if that long term opportunity is to be achieved.

And in technology discussions about AI and algorithms we hear very little about people at all. The discussions I hear seem siloed instead into three camps: the academics; the designers and developers; the politicians and policy makers. And then comes the lowest circle: 'the public' and 'society'.

It is therefore unsurprising that human rights have fallen down the ranking of importance in some areas of technology development.

It’s time to get this house in order.

Information. Society. Services. Children in the Internet of Things.

In this post, I think out loud about what improving online safety for children in The Green Paper on Children’s Internet Safety means ahead of the General Data Protection Regulation in 2018. Children should be able to use online services without being used and abused by them. If this regulation and other UK Government policy and strategy are to be meaningful for children, I think we need to completely rethink the State approach to what data privacy means in the Internet of Things.
[listen on soundcloud]


Children in the Internet of Things

In 1979 Star Trek: The Motion Picture created a striking image of A.I. as Commander Decker merged with V’Ger and the artificial copy of Lieutenant Ilia, blending human and computer intelligence and creating an integrated, synthesised form of life.

Ten years later, Sir Tim Berners-Lee wrote his proposal and created the world wide web, designing the way for people to share and access knowledge with each other through networks of computers.

In the 90s my parents described using the Internet as spending time ‘on the computer’, and going online meant from a fixed phone point.

Today our wireless computers, in our homes, pockets and school bags, have built-in added functionality to enable us to do other things with them at the same time: make toast, play a game, make a phone call. We live in the Internet of Things.

Although we talk about it as if it were an environment of inanimate appliances,  it would be more accurate to think of the interconnected web of information that these things capture, create and share about our interactions 24/7, as vibrant snapshots of our lives, labelled with retrievable tags, and stored within the Internet.

Data about every moment of how and when we use an appliance is captured at a rapid rate, or measured by smart meters, and shared within a network of computers: computers that not only capture data, but create, analyse and exchange new data about the people using them and how they interact with the appliance.

In this environment, children’s lives in the Internet of Things no longer involve a conscious choice to go online. Using the Internet is no longer about going online, but being online. The web knows us. In using the web, we become part of the web.

To the computers that gather their data, our children have simply become extensions of the things they use, about which data is gathered and sold by the companies who make and sell those things. Things whose makers can even choose who may use them, or not, and how. In the Internet of Things, children have become things of the Internet.

A child's use of a smart hairbrush becomes part of the company's knowledge base of how the hairbrush works. A child's voice is captured and becomes part of the database used for the development and training of the doll or robot they play with.

Our biometrics, measurements of the unique physical parts of our identities, provide a further example of the offline self recently incorporated physically into banking services. Over 1 million UK children's biometrics are estimated to be used in school canteens and library services through, often compulsory, fingerprinting.

Our interactions create a blended identity of online and offline attributes.

The web has created synthesised versions of our selves.

I say synthesised not synthetic, because our online self is blended with our real self and ‘synthetic’ gives the impression of being less real. If you take my own children’s everyday life as an example,  there is no ‘real’ life that is without a digital self.  The two are inseparable. And we might have multiple versions.

Our synthesised self is not only about our interactions with appliances and what we do, but who we know and how we think based on how we take decisions.

Data is created and captured not only about how we live, but where we live. These online data can be further linked with data about our behaviours offline, generated from trillions of sensors and physical network interactions with our portable devices. Our synthesised self is tracked from real life geolocations. In cities surrounded by sensors under pavements, in buildings, and by cameras mapping and tracking everywhere we go, our behaviours are converted into data and stored inside an overarching network of cloud computers, so that our online lives take on a life of their own.

Data about us, whether uniquely identifiable on its own or not, is created and collected both actively and passively. Online site visits record IP addresses and use linked platform log-ins that can even extract friends lists without consent or affirmative action from those friends.

Using a tool like Privacy Badger from the EFF gives you some insight into how many sites create new data about online behaviour once that synthesised self logs in, and then track your synthesised self across the Internet: how you move from page to page, with what referring and exit pages and URLs, which adverts you click on or ignore, platform types, numbers of clicks, cookies, and invisible on-page gifs and web beacons. Data that computers see, interpret and act on better than we do.
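
As a rough illustration of that data trail, this sketch models what a third-party tracking endpoint might record on a single page view. The field names and the fingerprint fallback are hypothetical; real trackers vary, and none of this is taken from Privacy Badger or any specific product.

```python
import datetime
import hashlib

def build_tracking_event(request_headers: dict, cookies: dict, page_url: str) -> dict:
    """Assemble one behavioural data point from an ordinary page view.

    A toy model of what a tracking pixel or analytics beacon receives;
    the field names are illustrative, not taken from any specific product.
    """
    visitor_id = cookies.get("tracker_uid") or hashlib.sha256(
        (request_headers.get("User-Agent", "")
         + request_headers.get("X-Forwarded-For", "")).encode()
    ).hexdigest()[:16]  # no cookie yet? fall back to a crude fingerprint

    return {
        "visitor_id": visitor_id,                      # the 'synthesised self'
        "timestamp": datetime.datetime.utcnow().isoformat(),
        "page": page_url,
        "referrer": request_headers.get("Referer"),    # the page you came from
        "ip": request_headers.get("X-Forwarded-For"),  # approximate location
        "user_agent": request_headers.get("User-Agent"),
        "logged_in": "session_id" in cookies,          # ties browsing to a known identity
    }
```

Each page view adds one more such record to a profile the user never sees.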

Those synthesised identities are tracked online,  just as we move about a shopping mall offline.

Sir Tim Berners-Lee said this week that there is a need to put "a fair level of data control back in the hands of people." It is not merely a need; it is vital to our future flourishing, even our very survival. Data control is not about protecting a list of information or facts about ourselves and our identity for its own sake; it is about choosing who can exert influence and control over our life, our choices, and the future of democracy.

And while today that 'who' may be companies, it is increasingly A.I. itself that has a degree of control over our lives, as decisions are machine made.

Understanding how the Internet uses people

We get the service, the web gets our identity and our behaviours. And in what is in effect a hidden slave trade, they get access to use our synthesised selves in secret, and forever.

This grasp of what the Internet is, what the web is, is key to getting a rounded view of children’s online safety. Namely, we need to get away from the sole focus of online safeguarding as about children’s use of the web, and also look at how the web uses children.

Online services use children to:

  • mine, exchange, repackage, and trade profile data, offline behavioural data (location, likes), and invisible Internet-use behavioural data (cookies, website analytics)
  • extend marketing influence in human decision-making earlier in life, even before children carry payment cards of their own,
  • enjoy the insights of parent-child relationships connected by an email account, sometimes a credit card, used as age verification or in online payments.

What are the risks?

Exploitation of identity and behavioural tracking not only puts our synthesised child at risk of exploitation, it puts our real life child's future adult identity and data integrity at risk. If we cannot know who holds the keys to our digital identity, how can we trust that systems and services will be fair to us, will not discriminate against or defraud us, or will not make errors that we cannot understand in order to correct them?

Leaks, losses and hacks abound, and manufacturers are slow to respond. Software that monitors children can also be used in coercive control. Organisations whose data are insecure can be held to ransom. Children's products should do what we expect them to do and nothing more; there should be "no surprises" in how data are used.

Companies tailor and target their marketing activity to those identity profiles. Our data is sold on in secret without consent to data brokers we never see, who in turn sell us on to others who monitor, track and target our synthesised selves every time we show up at their sites, in a never-ending cycle.

And from exploiting the knowledge of our synthesised self, companies make decisions: they target their audience, and select which search results or adverts to show us, or to hide, on which network sites and how often, to actively and quite invisibly nudge our behaviours.
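
To make the mechanism visible, here is a deliberately crude sketch of profile-based targeting: an inferred profile decides what is shown, hidden, or promoted. The profile fields, targeting tags and scoring rule are invented for illustration only.

```python
def select_content(profile: dict, candidates: list[dict]) -> list[dict]:
    """Rank content for one user by predicted engagement: the invisible 'nudge'.

    The profile fields and targeting tags are hypothetical. The point is that
    what each person sees, and never sees, is decided by their inferred traits.
    """
    def score(item: dict) -> float:
        tags = set(item.get("targeting", []))
        interests = set(profile.get("inferred_interests", []))
        relevance = len(tags & interests)              # match against inferred interests
        if profile.get("age_band") in item.get("exclude_age_bands", []):
            return -1.0                                # silently hidden from this user
        return relevance + item.get("bid", 0.0)        # paying advertisers can outrank relevance

    return sorted(candidates, key=score, reverse=True)
```

Nothing in that ranking is ever shown to the person it is applied to.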

Nudge misuse is one of the greatest threats to our autonomy and, with it, democratic control of the society we live in. Who decides on the "choice architecture" that may shape another's decisions and actions, and on what ethical basis? So once asked these authors, who now seem to want to be the decision makers.

Thinking about Sir Tim Berners-Lee's comments today on the things that threaten the web, including how to address the loss of control over our personal data, we must frame it not as a user-led loss of control, but as autonomy taken by others: by developers, by product sellers, and by the biggest 'nudge controllers' of all, the Internet giants themselves.

Loss of identity is near impossible to reclaim. Our synthesised selves are sold into unending data slavery and we seem powerless to stop it. Our autonomy and with it our self worth, seem diminished.

How can we protect children better online?

Safeguarding must include ending the data slavery of our synthesised self. I think of six things needed by policy shapers to tackle it.

  1. Understanding what ‘online’ and the Internet mean and how the web works – i.e. what data does a visit to a web page collect about the user and what happens to that data?
  2. Threat models and risk must go beyond the usual real-life protection issues. Threats posed by undermining citizens' autonomy, by loss of public trust and of control over our identity, and by misuse of nudge, some of which are intrinsic to the current web business model or to government policy, go unseen and are underestimated.
  3. On user regulation (age verification / filtering) we must confront the fact that, as a stand-alone step, it will not create a better online experience for the user: it will not prevent the misuse of our synthesised selves and may increase risks. Regulation must shift the point of responsibility onto the misuse.
  4. Meaningful data privacy training, and its role in children's safeguarding, must be mandatory for anyone in contact with children.
  5. Siloed thinking must go. Forward thinking must join the dots across Departments into a cohesive, inclusive digital strategy, and that doesn't just mean 'let's join all of the data, all of the time'.
  6. Respect our synthesised selves. Data slavery includes government misuse and must end if we respect children’s rights.

In the words of James T. Kirk, “the human adventure is just beginning.”

When our synthesised self is an inseparable blend of offline and online identity, every child is a synthesised child. And they are people. It is vital that government realises its obligation to protect rights to privacy, provision and participation under the Convention on the Rights of the Child, and addresses our children's real online life.

Governments, policy makers, and commercial companies must not use children’s offline safety as an excuse in a binary trade off to infringe on those digital rights or ignore risk and harm to the synthesised self in law, policy, and practice.

If future society is to thrive we must do all that is technologically possible to safeguard the best of what makes us human in this blend; our free will.


Part 2 follows with thoughts specific to the upcoming regulations, the Digital Economy Bill and the Digital Strategy.

References:

[1] Internet of things WEF film, starting from 19:30

“What do an umbrella, a shark, a houseplant, the brake pads in a mining truck and a smoke detector all have in common?  They can all be connected online, and in this example, in this WEF film, they are.

“By 2024 more than 50% of home Internet traffic will be used by appliances and devices, rather than just for communication and entertainment…The IoT raises huge questions on privacy and security, that have to be addressed by government, corporations and consumers.”

[2] The government has today announced a “major new drive on internet safety”  [The Register, Martin, A. 27.02.2017]

[3] GDPR page 38 footnote (1) indicates the definition of Information Society Services as laid out in the Directive (EU) 2015/1535 of the European Parliament and of the Council of 9 September 2015 laying down a procedure for the provision of information in the field of technical regulations and of rules on Information Society services (OJ L 241, 17.9.2015, p. 1 and Annex 1)

image source: Startrek.com

The illusion that might cheat us: ethical data science vision and practice

This blog post is also available as an audio file on soundcloud.


Anaïs Nin wrote in her 1946 diary of the dangers she saw in the growth of technology: expanding our potential for connectivity through machines, but diminishing our genuine connectedness as people. She could hardly have been more contemporary for today:

“This is the illusion that might cheat us of being in touch deeply with the one breathing next to us. The dangerous time when mechanical voices, radios, telephone, take the place of human intimacies, and the concept of being in touch with millions brings a greater and greater poverty in intimacy and human vision.”
[Extract from volume IV 1944-1947]

Echoes from over 70 years ago, can be heard in the more recent comments of entrepreneur Elon Musk. Both are concerned with simulation, a lack of connection between the perceived, and reality, and the jeopardy this presents for humanity. But both also have a dream. A dream based on the positive potential society has.

How will we use our potential?

Data is the connection between us as humans and what machines and their masters know about us. The values with which those masters underpin their machine design will determine the effect the machines, and the knowledge they deliver, have on society.

In seeking ever greater personalisation, a wider dragnet of data is putting together ever more detailed pieces of information about each individual person. At the same time, data science is becoming ever more impersonal in how we treat people as individuals. We risk losing sight of how we respect and treat the very people whom the work should benefit.

Nin grasped the risk that a wider reach can mean more superficial depth. Facebook might be the model today: the large circle of friends you might gather, but how few you trust with confidences, with personal knowledge about your own personal life, and the privilege it is when someone chooses to entrust that knowledge to you. Machine data mining increasingly tries to get an understanding of depth, and may also add new layers of meaning through profiling, comparing our characteristics with others in risk stratification.
Data science, research using data, is often talked about as if it is something separate from using information from individual people. Yet it is all about exploiting those confidences.
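
For readers unfamiliar with the term, here is a deliberately simplistic sketch of the risk stratification mentioned above: an individual is scored against weights derived from other people's data and then banded, and it is the band that drives what happens next. The attributes, weights and cut-offs are invented.

```python
def risk_band(person: dict) -> str:
    """Place one person into a band using weights derived from everyone else's data.

    Illustrative only: the attributes, weights and cut-offs are invented, but
    the mechanism, profiling an individual by comparison with others, is the point.
    """
    weights = {"missed_appointments": 0.4, "deprivation_index": 0.3, "age_over_65": 0.3}
    score = sum(weights[k] * float(person.get(k, 0)) for k in weights)
    if score >= 0.6:
        return "high"
    if score >= 0.3:
        return "medium"
    return "low"

cohort = [{"missed_appointments": 1, "deprivation_index": 0.8}, {"age_over_65": 1}]
print([risk_band(p) for p in cohort])  # ['high', 'medium'] -- and decisions follow the label
```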

Today, as the reach of what a few people in institutions can gather about most people in the public has grown, whether in scientific research or in surveillance of different kinds, we hear experts repeatedly talk of the risk of losing the valuable part: the knowledge, the insights that benefit us as a society if we can act upon them.

We might know more, but do we know any better? To use a well known quote from her contemporary, T S Eliot, ‘Where is the wisdom we have lost in knowledge? Where is the knowledge we have lost in information?’

What can humans achieve? We don’t yet know our own limits. What don’t we yet know?  We have future priorities we aren’t yet aware of.

To explore the best of what Nin saw as 'human vision' and what Musk sees in technology, the benefits we gain from our connectivity, our collaboration and shared learning, need to be driven with an element of humility: accepting values that shape the boundaries of what we should do, while constantly evolving what we could do.

The essence of this applied risk is that technology could harm you, more than it helps you. How do we avoid this and develop instead the best of what human vision makes possible? Can we also exceed our own expectations of today, to advance in moral progress?

Continue reading The illusion that might cheat us: ethical data science vision and practice

OkCupid and Google DeepMind: Happily ever after? Purposes and ethics in datasharing

This blog post is also available as an audio file on soundcloud.


What constitutes the public interest must be set in a universally fair and transparent ethics framework if the benefits of research, whether in social science, health, education or beyond, are to be realised. That framework will provide a strategy for getting the pre-requisite success factors right, ensuring research in the public interest is not only fit for the future, but thrives. There has been a climate change in consent. We need to stop talking about the barriers that prevent datasharing and start talking about the boundaries within which we can share.

What is the purpose for which I provide my personal data?

‘We use math to get you dates’, says OkCupid’s tagline.

That’s the purpose of the site. It’s the reason people log in and create a profile, enter their personal data and post it online for others who are looking for dates to see. The purpose, is to get a date.

When over 68,000 OkCupid users registered for the site to find dates, they didn't sign up to have their identifiable data used and published in ‘a very large dataset’ and onwardly re-used by anyone with unregistered access. The users' data were extracted “without the express prior consent of the user […].”

Whether the registration consent purposes are compatible with the purposes to which the researcher put the data should be a simple enough question. Are the research purposes what the person signed up to, or would they be surprised to find out their data were used like this?

Questions the “OkCupid data snatcher”, now self-confessed ‘non-academic’ researcher, thought unimportant to consider.

But it appears in the last month, he has been in good company.

Google DeepMind, and the Royal Free, big players who do know how to handle data and consent well, paid too little attention to the very same question of purposes.

The boundaries of how the users of OkCupid had chosen to reveal information and to whom, have not been respected in this project.

Nor were these boundaries respected by the Royal Free London trust that gave out patient data for use by Google DeepMind with changing explanations, without clear purposes or permission.

The legal boundaries in these recent stories appear unclear or to have been ignored. The privacy boundaries deemed irrelevant. Regulatory oversight lacking.

The respectful ethical boundaries of consent to purposes, and of respect for autonomy, have indisputably broken down, whether by a commercial organisation, a public body, or a lone 'researcher'.

Research purposes

The crux of data access decisions is purposes. What question is the research to address – what is the purpose for which the data will be used? The intent by Kirkegaard was to test:

“the relationship of cognitive ability to religious beliefs and political interest/participation…”

In this case the question appears intended rather as a test of the data, not the data opened up to answer the question. While methodological studies matter, given the care and attention [or self-stated lack thereof] given to its extraction, and any attempt to be representative and fair, it would appear this is not the point of this study either.

The data doesn’t include profiles identified as heterosexual male, because ‘the scraper was’. It is also unknown how many users hide their profiles, “so the 99.7% figure [identifying as binary male or female] should be cautiously interpreted.”

“Furthermore, due to the way we sampled the data from the site, it is not even representative of the users on the site, because users who answered more questions are overrepresented.” [sic]

The paper goes on to say photos were not gathered because they would have taken up a lot of storage space and could be done in a future scraping, and

“other data were not collected because we forgot to include them in the scraper.”

The data are knowingly of poor quality, inaccurate and incomplete. The project cannot be repeated as 'the scraping tool no longer works'. The ethical and peer review process is unclear, and the research purpose is at best unclear too. We can give someone the benefit of the doubt and say the intent appears to have been entirely benevolent; it's not clear what the intent was. I think it was clearly misplaced and foolish, but not malevolent.

The trouble is, it’s not enough to say, “don’t be evil.” These actions have consequences.

When the researcher asserts in his paper that, “the lack of data sharing probably slows down the progress of science immensely because other researchers would use the data if they could,”  in part he is right.

Google and the Royal Free have tried more eloquently to say the same thing. It's not research, it's direct care; in effect: ignore that these people are no longer our patients and that we're using historical data without re-consent. We know what we're doing, we're the good guys.

However the principles are the same, whether it’s a lone project or global giant. And they’re both wildly wrong as well. More people must take this on board. It’s the reason the public interest needs the Dame Fiona Caldicott review published sooner rather than later.

Just because there is a boundary to data sharing in place, does not mean it is a barrier to be ignored or overcome. Like the registration step to the OkCupid site, consent and the right to opt out of medical research in England and Wales is there for a reason.

We're desperate to build public trust in UK research right now. So to assert that a lack of data sharing slows down the progress of science is misplaced, when it is getting 'sharing' wrong that caused the lack of trust in the first place and harms research.

A climate change in consent

There has been a climate change in public attitudes to consent since care.data, clouded by the smoke and mirrors of state surveillance. It cannot be ignored. The EU GDPR supports it. Researchers may not like change, but there needs to be a corresponding adjustment in expectations and practice.

Without change, there will be no change. Public trust is low. As technology advances and if we continue to see commercial companies get this wrong, we will continue to see public trust falter unless broken things get fixed. Change is possible for the better. But it has to come from companies, institutions, and people within them.

Like climate change, you may deny it if you choose to. But some things are inevitable and unavoidably true.

There is strong support for public interest research but that is not to be taken for granted. Public bodies should defend research from being sunk by commercial misappropriation if they want to future-proof public interest research.

The purposes for which people gave consent are the boundaries within which you have permission to use the data; within those limits, you have freedom to use it. Purposes and consent are not barriers to be overcome.

If research is to win back public trust, developing a future-proofed, robust ethical framework for data science must be a priority today.

Commercial companies must overcome the low levels of public trust they have generated to date if they are to ask 'trust us because we're not evil'. If you can't rule out the use of data for other purposes, it's not helping. If you delay independent oversight, it's not helping.

This case study, and by contrast the recent Google DeepMind episode, demonstrate the urgency with which common expectations and oversight of applied ethics in research, who gets to decide what is 'in the public interest', and data science public engagement must be worked out and made a priority, in the UK and beyond.

Boundaries in the best interest of the subject and the user

Society needs research in the public interest. We need good decisions made on what will be funded and what will not be, on what will influence public policy, and on where attention is needed for change.

To do this ethically, we all need to agree what is fair use of personal data, when is it closed and when is it open, what is direct and what are secondary uses, and how advances in technology are used when they present both opportunities for benefit or risks to harm to individuals, to society and to research as a whole.

The benefits of research are being compromised for the sake of arrogance, greed, or misjudgement, no matter the intent. Those benefits cannot come at any cost, or disregard public concern, or the price will be trust in research itself.

In discussing this with social science and medical researchers, I realise not everyone agrees. For some, using de-identified data in trusted third party settings poses such a low privacy risk that they feel the public should have no say in whether their data are used in research, as long as it's 'in the public interest'.

The DeepMind researchers and the Royal Free were confident that even using identifiable data, without consent, was the "right" thing to do.

For the Cabinet Office datasharing consultation, in the parts that will open up national registries and share identifiable data more widely, including with commercial companies, they are convinced it is all the "right" thing to do, without consent.

How can researchers, society and government understand what good data science ethics looks like, as technology permits ever more invasive or covert data mining and the current approach is desperately outdated?

Who decides where those boundaries lie?

“It’s research Jim, but not as we know it.” This is one aspect of data use that ethical reviewers will need to deal with as we advance the debate on data science in the UK, whether for independents or commercial organisations. Google said their work was not research. Is ‘OkCupid’ research?

If this research and data publication proves anything at all, and can offer lessons to learn from, it is perhaps these three things:

Who is accredited as a researcher or 'prescribed person' matters, particularly if we are considering new datasharing legislation and, for example, to whom the UK government is granting access to millions of children's personal data today. Your idea of a 'prescribed person' may not be the same as the rest of the public's.

Researchers and ethics committees need to adjust to the climate change of public consent. Purposes must be respected in research, particularly when sharing sensitive, identifiable data, and no assumptions should be made that differ from the original purposes to which users gave consent.

Data ethics and laws are desperately behind data science technology. Governments, institutions, civil society, and society as a whole need to reach a common vision, and leadership, on how to manage these challenges. Who defines the boundaries that matter?

How do we move forward towards better use of data?

Our data and technology are taking on a life of their own, in space which is another frontier, and in time, as data gathered in the past might be used for quite different purposes today.

The public are being left behind in the game-changing decisions made by those who deem they know best about the world we want to live in. We need a say in what shape society wants that to take, particularly for our children as it is their future we are deciding now.

How about an ethical framework for datasharing that supports a transparent public interest, which tries to build a little kinder, less discriminating, more just world, where hope is stronger than fear?

Working with people, with consent, with public support and transparent oversight shouldn’t be too much to ask. Perhaps it is naive, but I believe that with an independent ethical driver behind good decision-making, we could get closer to datasharing like that.

That would bring Better use of data in government.

Purposes and consent are not barriers to be overcome. Within these, shaped by a strong ethical framework, good data sharing practices can tackle some of the real challenges that hinder 'good use of data': training, understanding data protection law, communications, accountability and intra-organisational trust. More data sharing alone won't fix these structural weaknesses in current UK datasharing, which are the really tough barriers to good practice.

How our public data will be used in the public interest will not be a destination with a well defined happy ending, but a long term process, one which needs to be consensual and to have a clear path to setting out together and achieving collaborative solutions.

While we are all different, I believe that society shares for the most part, commonalities in what we accept as good, and fair, and what we believe is important. The family sitting next to me have just counted out their money and bought an ice cream to share, and the staff gave them two. The little girl is beaming. It seems that even when things are difficult, there is always hope things can be better. And there is always love.

Even if some might give it a bad name.

********

img credit: flickr/sofi01/ Beauty and The Beast  under creative commons

The nhs.uk digital platform: a personalised gateway to a new NHS?

In recent weeks, rebranding the poverty definitions and the living wage in the UK deservedly received more attention than the rebrand of the website NHS Choices into 'nhs.uk'.

The site, which will be available only in England and Wales despite its domain name, will be the doorway into a personalised digital NHS offering.

As the plans proceed without public debate, I took some time to consider the proposal announced through the National Information Board (NIB), because it may be a gateway to a whole new world in our future NHS. And if not, will it be a big splash of cash that creates nothing more than a storm in a teacup?

In my previous post I’d addressed some barriers to digital access. Will this be another? What will it offer that isn’t on offer already today and how will the nhs.uk platform avoid the problems of its predecessor HealthSpace?

Everyone it seems is agreed, the coming cuts are going to be ruthless. So, like Alice, I’m curious. What is down the rabbit hole ahead?

What’s the move from NHS Choices to nhs.uk about?

The new web platform nhs.uk would invite users to log on, using a system that requires proof of identity. If compulsory, that would be another barrier to access simply from a convenience point of view, even leaving digital security risks aside.

What will nhs.uk offer to incentivise users, and what benefit will it offer as a trade-off against these risks, to make them go down the new path into the unknown and like it?

“At the heart of the domain, will be the development of nhs.uk into a new integrated health and care digital platform that will be a source of access to information, directorate, national services and locally accredited applications.”

In that there is nothing new compared with information, top down governance and signposting done by NHS Choices today.  

What else?

“Nhs.uk will also become the citizen’s gateway to the creation of their own personal health record, drawing on information from the electronic health records in primary and secondary care.”

nhs.uk will be an access point to patient personal confidential records

Today's Patient Online programme, we are told, offers 97% of patients access to their own GP-created records. So what will nhs.uk offer beyond what is supposed to be on offer already today? Adding wearables data into the health record is already possible for some EMIS users, so again, that won't be new. It does state it will draw on both primary and secondary records, which means getting some sort of interoperability to show both hospital systems data and GP records. How will the platform do this?

Until care.data, many people didn't know their hospital record was stored anywhere outside the hospital. In all the care.data debates the public was told that HES/SUS was not like a normal record in the sense we think of it. So what system will secondary care records come from? [Some places may have far to go. My local hospital pushes patients round with beige paper folders.] The answer appears to be either an unpublished known, or an unknown.
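
A toy sketch of what 'drawing on primary and secondary care records' would require in practice: matching data held in separate systems, under a common identifier, into one view. The data shapes here are invented, and nothing in the published plans says how, or whether, this will actually be done.

```python
def blended_record(nhs_number: str, gp_system: dict, hospital_system: dict) -> dict:
    """Join a GP record and hospital (secondary care) data into a single patient view.

    A toy illustration of the interoperability the platform would need. The
    shapes of `gp_system` and `hospital_system` are invented; real GP systems
    and HES/SUS-style hospital data do not share a common format like this.
    """
    gp = gp_system.get(nhs_number, {})
    hospital = hospital_system.get(nhs_number, {})
    return {
        "nhs_number": nhs_number,
        "medications": gp.get("medications", []),      # primary care view
        "allergies": gp.get("allergies", []),
        "admissions": hospital.get("episodes", []),    # secondary care, HES-style episodes
        "sources": [name for name, data in (("GP", gp), ("hospital", hospital)) if data],
    }
```

Even this trivial join assumes a shared identifier and agreed formats, which is precisely the unanswered part.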

What else?

nhs.uk will be an access point to tailored ‘signposting’ of services

In addition to access to your personal medical records, in the new "pull not push" approach the nhs.uk platform will also offer information and services, in effect 'advertising' local services, to draw users to want to use it rather than forcing its use. And through the power of web tracking tools combined with log-in, it can all be 'tailored' or 'targeted' to you, the user.

“Creating an account will let you save information, receive emails on your chosen topics and health goals and comment on our content.”

Do you want to receive emails on your chosen topics or comment on content today? How does it offer more than can already be done by signing up now to NHS Choices?

NHS Choices today already offers information on local services, on care provision and a symptom checker.

What else?

Future nhs.uk users will be able to “Find, Book, Apply, Pay, Order, Register, Report and Access,” according to the NIB platform headers.

[image: NIB nhs.uk platform headers]

“Convenient digital transactions will be offered like ordering and paying for prescriptions, registering with GPs, claiming funds for treatment abroad, registering as an organ and blood donor and reporting the side effects of drugs. This new transactional focus will complement nhs.uk’s existing role as the authoritative source of condition and treatment information, NHS services and health and care quality information.

“This will enable citizens to communicate with clinicians and practices via email, secure video links and fill out pre-consultation questionnaires. They will also be able to include data from their personal applications and wearable devices in their personal record. Personal health records will be able to be linked with care accounts to help people manage their personal budget.”

Let’s consider those future offerings more carefully.

Separating out the transactions that for most people will be one-off, extremely rare, or never events (my blue) leaves other activities which you can already do, or will be able to do, via the Patient Online programme (in purple).

The question is: although video and email consultations are not yet widespread, where they do work today, and would in future, would they not be done via a GP practice system rather than a centralised service? Or is the plan not that you could have an online consultation with 'your' named GP through nhs.uk, but perhaps just with 'any' GP from a centrally provided GP pool? Something like this?

That leaves two other things, which are both payment tools (my bold).

i. digital transactions will be offered like ordering and paying for prescriptions
ii. …linked with care accounts to help people manage their personal budget.”

Is the core of the new offering about managing money at individual and central level?

Beverly Bryant, Director of Strategic Systems and Technology at NHS England, said at the #kfdigi2015 June 16th event that implementing these conveniences had cost-saving benefits as well: “The driver is customer service, but when you do it it actually costs less.”

How are GP consultations to cost less, significantly less, so as to be really cost effective once the central platform to enable them is paid for, when GP time is the most valuable part and remains unchanged, spent on the patient consultation, paperwork and referral, for example?

That most valuable part to the patient, may be seen as what is most costly to ‘the system’.

If the emphasis is on the service saving money, it’s not clear what is in it for people to want to use it and it risks becoming another Healthspace, a high cost top down IT rollout without a clear customer driven need.

The stated aim is that it will personalise the user content and experience.

That gives the impression that the person using the system will get access to information and benefits unique and relevant to them.

If this is to be something patients want to use (pull) and are not to be forced to use (push) I wonder what’s really at its core, what’s in it for them, that is truly new and not part of the existing NHS Choices and Patient online offering?

What kind of personalised tailoring do today’s NHS Choices Ts&Cs sign users up to?

“Any information provided, or any information the NHS.uk site may infer from it, are used to provide content and information to your account pages or, if you choose to, by email.  Users may also be invited to take part in surveys if signed up for emails.

“You will have an option to submit personal information, including postcode, age, date of birth, phone number, email address, mobile phone number. In addition you may submit information about your diet and lifestyle, including drinking or exercise habits.”

“Additionally, you may submit health information, including your height and weight, or declare your interest in one or more health goals, conditions or treatments. “

“With your permission, academic institutions may occasionally use our data in relevant studies. In these instances, we shall inform you in advance and you will have the choice to opt out of the study. The information that is used will be made anonymous and will be confidential.”

Today’s NHS Choices terms and conditions say that “we shall inform you in advance and you will have the choice to opt out of the study.”

If that happens already and the NHS is honest about its intent to give patients that opt out right whether to take part in studies using data gathered from registered users of NHS Choices, why is it failing to do so for the 700,000 objections to secondary use of personal data via HSCIC?

If the future system is all about personal choice NIB should perhaps start by enforcing action over the choice the public may have already made in the past.

Past lessons learned – platforms and HealthSpace

In the past, the previous NHS personal platform, HealthSpace, came in for some fairly straightforward criticism including that it offered too little functionality.

The Devil’s in the Detail remarks are as relevant today on what users want as they were in 2010. It looked at the then available Summary Care Record (prescriptions allergies and reactions) and the web platform HealthSpace which tried to create a way for users to access it.

Past questions from Healthspace remain unanswered for today’s care.data or indeed the future nhs.uk data: What happens if there is a mistake in the record and the patient wants it deleted? How will access be given to third party carers/users on behalf of individuals without capacity to consent to their records access?

Reasons given by non-users of HealthSpace included lack of interest in managing their health in this way, a perception that health information was the realm of health professionals and lack of interest or confidence in using IT.

“In summary, these findings show that ‘self management’ is a much more complex, dynamic, and socially embedded activity than original policy documents and technical specifications appear to have assumed.”

What lessons have been learned? People today are still questioning the value of a centrally imposed system. Are they being listened to?

Digital Health reported that Maurice Smith, GP and governing body member for Liverpool CCG, speaking in a session on self-care platforms at the King's Fund event, said that driving people towards one national hub for online services was not an option he would prefer, and that he had no objection to a national portal, “but if you try drive everybody to a national portal and expect everybody to be happy with that I think you will be disappointed.”

How will the past problems that hit Healthspace be avoided for the future?

How will the powers-at-be avoid repeating the same problems for its ongoing roll out of care.data and future projects? I have asked this same question to NHS England/NIB leaders three times in the last year and it remains unanswered.

How will you tell patients in advance of any future changes who will access their data records behind the scenes, for what purpose, to future proof any programmes that plan to use the data?

One of the Healthspace 2010 concerns was: “Efforts of local teams to find creative new uses for the SCR sat in uneasy tension with implicit or explicit allegations of ‘scope creep’.”

Any programme using records can’t ethically sign users up to one thing and change it later without informing them before the change. Who will pay for that and how will it be done? care.data pilots, I’d want that answered before starting pilot communications.

As an example of changes to 'what', or content scope creep, future plans will see 'social care flags added' to the SCR record, states p.17 of the NIB 2020 timeline. What's the 'discovery for the use of genomic data complete' about on p.11? Scope creep of 'who' will access records is very current. Recent changes allow pharmacists to access the SCR, yet the change went by with little public discussion. Will they in future see social care flags or mental health data under their SCR access? Do I trust the chemist as I trust a GP?

Changes without adequate public consultation and communication cause surprises. Bad idea. Sir Nick Partridge said ensuring ‘no surprises’ is key to citizens’ trust after the audit of HES/SUS data uses. He is right.

The core at the heart of this nhs.uk plan is that it needs to be used by people, and enough people to make the investment vs cost worthwhile. That is what Healthspace failed to achieve.

The change you want to see doesn’t address the needs of the user as a change issue. (slide 4) This is all imposed change. Not user need-driven change.

Dear NIB, done this way it seems to ignore the learning from HealthSpace. The evidence shown is self-referring, to Dr. Foster and NHS Choices. The only other two examples listed are from Wisconsin and the Netherlands, hardly comparable models of UK lifestyle or healthcare systems.

What is really behind the new front door of the nhs.uk platform?

The future nhs.uk looks very much as though it seeks to provide a central front door to data access, in effect an expanded Summary Care Record (GP and secondary care records) – all medical records for direct care – together with a way for users to add their own wider user data.

Will nhs.uk also allow individuals to share their data with digital service providers of other kinds through the nhs.uk platform and apps? Will their data be mined to offer a personalised front door of tailored information and service nudges? Will patients be profiled to know their health needs, use and costs?

If yes, then who will be doing the mining and who will be using that data for what purposes?

If not, then what value will this service offer if it is not personal?

What will drive the need to log on to another new platform, compared with using the existing services of patient online today to access our health records, access GPs via video tools, and without any log-in requirement, browse similar content of information and nudges towards local services offered via NHS Choices today?

If this is core to the future of our "patient experience" of the NHS, the public should be given the full and transparent facts to understand where the public benefit and the business case for nhs.uk lie, and what lies behind the change expected via online GP consultations.

This NIB programme is building the foundation of the NHS offering for the next ten years. What kind of NHS are the NIB and NHS England planning for our children and our retirement through their current digital designs?

If there is a significant difference between the new nhs.uk platform offering, what HealthSpace offered, and what Patient Online already offers, it appears to be around managing cost and payments, not delivering any better user service.

Managing more of our payments with pharmacies, and personalised budgets, would reflect the talk of a push towards a patient-responsible self-management direction of travel for the NHS as a whole.

More use of personal budgets is after all what Simon Stevens called a "radical new option", and we would expect to see "wider scale rollout of successful projects is envisaged from 2016-17".

When the system has finely drawn profiles of its users, will that have any effect for individuals in our universal risk-shared system? Will a wider roll out of personalised budgets mean more choice, or could it start to mirror a private insurance system, in which a detailed user profile determines your level of risk and your personal budget, once reached, means no more service?

What I’d like to see and why

To date, there is a poor track record on transparency about central IT and change programme business plans. While one thing is said, another happens in practice. Can that be changed? Why all the effort on NHS Citizen and 'listening', if the public is not to be engaged in 'grown up debate' to understand the single biggest driver of planned service changes today: cost?

It’s at best patronising in the extreme, to prevent the public from seeing plans which spend public money.

We risk a wasteful, wearing repeat of the past top down failure of an imposed NPfIT-style HealthSpace, spending public money on a project which purports to be designed to save it.

To understand the practical future we can look back, to avoid what didn't work, and compare it with current plans. I'd suggest they spell out very clearly what the failures of HealthSpace were, and why nhs.uk is different.

If the site will offer an additional new pathway to access services than we already have, it will cost more, not less. If it has genuine expected cost reduction compared with today, where precisely will it come from?

I’d suggest you publish the detailed business plan for the nhs.uk platform and have the debate up front. Not only the headline numbers towards the end of these slides, but where and how it fits together in the big picture of Stevens’ “radical new option”.  This is public money and you *need* the public on side for it to work.

Publish the business cases for the NIB plans before the public engagement meet ups, because otherwise what facts will opinion be based on?

What discussion can be of value without them, when we are continually told by leadership that those very details are at the crux of the needed change: the affordability of the future of the UK health and care system?

Now, as with past projects, The Devil’s in the Detail.

***

NIB detail on nhs.uk and other concepts: https://www.gov.uk/government/uploads/system/uploads/attachment_data/file/437067/nib-delivering.pdf

The Devil’s in the Detail: Final report of the independent evaluation of the Summary Care Record and HealthSpace programmes 2010

Off the record – a case study in NHS patient data access

Patient online medical records’ access in England was promised by April 2015.

Just last month headlines abounded: “GPs ensure 97% of patients can access summary record online”. Speeches carried the same statistics. So what did that actually mean? The HSCIC figures released in May 2015 showed that while around 57 million patients can potentially access something of their care record, only 2.5 million, or 4.5% of patients, had actively signed up for the service.

In that gap lies a gulf of a difference. You cannot access the patient record unless you have signed up for it, so to give the impression that 97% of patients can access a summary record online is untrue. Only 4.5% can, and have done so. While, yes, it is stated that patients must request access, the impression given is somewhat misrepresentative.

Here’s my look at what that involved and once signed up, what ‘access your medical records’ actually may mean in practice.

The process to getting access

First I wrote a note to the practice manager about a month ago, and received a phone call a few days later to pop in any time. A week later, I called to check before I would ‘pop in’ and found that the practice manager was not in every day, and it would actually have to be when she was.

I offered to call back to arrange a suitable date and time. On the next call, we usefully agreed a potential date I could go in, but I'd have to wait to be sure that the paper records had first been retrieved from the external store (since another practice closed down, ours had become busier than ever and run out of space). I was asked whether I had already received permission from the practice manager, and to confirm that I knew there would be a £10 charge.

So, one letter, four phone calls and ten pounds in hard cash later, I signed a disclosure form this morning to say I was me and had asked to see my records, and sat in a corner of the lovely practice manager’s office with a small, thinly stuffed Lloyd George envelope, a few photocopied or printed-out A4 pages (so I didn’t get to look at the actual on-screen record the GP uses), and a receipt.

What did my paper records look like?

My oldest paper notes went back as far as 1998 and were for the most part handwritten. Having lived abroad, there was then a ten-year gap until my new registration, after which the notes became paper print-outs of electronic records.

These included referrals for secondary care, and correspondence between consultants and my GP, and/or to and from me.

The practice manager was very supportive and tolerant of me taking up a corner of her office for half an hour. Clutching a page with my new log-in for EMIS Web patient record access, I put the papers back, said my thank-yous and set off home.

Next step: online

I logged on at home to the patient access system. I had first been given access in 2009 when I registered, but hadn’t used the system since, as it had very limited functionality and I had been in good health. Now I took the opportunity to try it again.

By asking at the GP practice reception, I had been assigned a PIN and given the Practice ID, an Access ID, and confirmation of my NHS number, all of which needed to be entered in Step 1:

emis1

 

Step 2: After these, on the second screen, I was asked for my name and date of birth, and to create a password.

emis2

 

Step 3: the system generated a long numeric user ID, which I noted down.

Step 4: I looked for the data sharing and privacy policy. I didn’t spot with whom any data entered would be shared, for what purposes, or what retention periods or restrictions on purposes apply. I’d like to see that added.

emis3
Success:

Logged in using my new long user ID and password, I could see an overview page with personal contact details, which were all accurate. There were sections for current medications, allergies, appointments, the medical record, a personal health record and repeat prescriptions, plus space for an overview of height, BMI and basic lifestyle (alcohol and smoking).

emis4c

 

A note from 2010 read: “refused consent to upload national. sharing. electronic record.” Appropriately, some may think, this was recorded in the “problems” section, which was otherwise empty.

Drilling down to the medication record, the only data held was the single most recent prescription, with no history.

emis4b

 

And the only other section to view was allergies, similarly and correctly empty:

emis5

The only error I noted was a line to say I was due an MMR immunization in June 2015. [I will follow up to check whether one of my children should be due for it, rather than me.]

What else was possible?

Order repeat prescription: if your practice offers this service, there is a link called Make a request in the Repeat Prescriptions section of the home page after you have signed in. This was possible, though our practice already does it directly with the pharmacy.

Book an appointment: with your own GP from dates in a drop down.

Apple Health app integration: the most interesting part of the online access was the suggestion that a patient’s Apple Health app data could be uploaded and, with active patient consent, shared with the GP (a rough sketch of what requesting that consent might look like follows the list and caveats below).

emis6

 

It claims: “You can consent to the following health data types being shared to Patient Access and added to your Personal Health Record (PHR):”

  • Height
  • Weight
  • BMI
  • Blood Glucose
  • Blood Pressure (Diastolic & Systolic)
  • Distance (walked per day)
  • Forced expired volume
  • Forced Vital capacity
  • Heart Rate
  • Oxygen Saturation
  • Peak Expiratory Flow
  • Respiratory rate
  • Steps (taken per day)

“This new feature is only available to users of IOS8 who are using the Apple Health app and the Patient Access app.”

 

With the important caveat for some: IOS 8.1 has removed the ability to manually enter Blood Glucose data via the Health app. Health will continue to support Blood Glucose measurements added via 3rd party apps such as MySugr and iHealth.

Patient Access will still be able to collect any data entered and we recommend entering Blood Glucose data via one of those free apps until Apple reinstate the capability within Health.
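For the technically minded, here is a minimal sketch, in Swift, of how an app might ask Apple’s HealthKit framework for permission to read the data types listed above. The type identifiers are standard HealthKit ones, but everything else, including how Patient Access actually implements its consent flow, is my assumption for illustration only.

```swift
import HealthKit

// Minimal sketch only: request the phone owner's consent to read the
// data types Patient Access says it can collect. Not EMIS's actual code.
let healthStore = HKHealthStore()

let readTypes: Set<HKObjectType> = [
    HKObjectType.quantityType(forIdentifier: .height)!,
    HKObjectType.quantityType(forIdentifier: .bodyMass)!,               // weight
    HKObjectType.quantityType(forIdentifier: .bodyMassIndex)!,
    HKObjectType.quantityType(forIdentifier: .bloodGlucose)!,
    HKObjectType.quantityType(forIdentifier: .bloodPressureDiastolic)!,
    HKObjectType.quantityType(forIdentifier: .bloodPressureSystolic)!,
    HKObjectType.quantityType(forIdentifier: .distanceWalkingRunning)!, // distance walked per day
    HKObjectType.quantityType(forIdentifier: .forcedExpiratoryVolume1)!,
    HKObjectType.quantityType(forIdentifier: .forcedVitalCapacity)!,
    HKObjectType.quantityType(forIdentifier: .heartRate)!,
    HKObjectType.quantityType(forIdentifier: .oxygenSaturation)!,
    HKObjectType.quantityType(forIdentifier: .peakExpiratoryFlowRate)!,
    HKObjectType.quantityType(forIdentifier: .respiratoryRate)!,
    HKObjectType.quantityType(forIdentifier: .stepCount)!
]

// iOS shows its own consent screen. For read access it deliberately does
// not tell the app which individual types the user refused.
healthStore.requestAuthorization(toShare: nil, read: readTypes) { completed, error in
    if let error = error {
        print("Authorisation request failed: \(error)")
    } else {
        print("Consent screen finished: \(completed)")
    }
}
```

The point of the sketch is simply that the consent step happens on the phone, under Apple’s rules, before anything reaches the GP record.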

What was not possible:

To update contact details: the practice configures which details you are allowed to change. It may be their policy that some details can only be changed in person at the practice.

Viewing my primary care record: other than a current medication there was nothing of my current records in the online record. Things like test results were not in my online record at all, only on paper. Pulse noted sensible concerns about this area in 2013.

Make a correction: clearly the MMR jab note is wrong, but I’ll need to ask for help to remove it.

“Currently the Patient Access app only supports the addition of new information; however, we envisage quickly extending this functionality to delete information via the Patient Access service.” How this will ensure accuracy and avoid self editing I am unsure.

Questions: Who can access this data?

While the system stated that “the information is stored securely in our accredited data centre that deals solely with clinical data”, there is no indication of where that is, who manages it, or who may access it and why.

In 2014 it was announced that pharmacies would begin to have access to the summary care record.

“A total of 100 pharmacies across Somerset, Northampton, North Derbyshire, Sheffield and West Yorkshire will be able to view a patient’s summary care record (SCR), which contains information such as a patient’s current medications and allergies.”

Yet clearly, in the Summary Care Record consent process in 2010 as recorded in my notes, pharmacists were not mentioned.

Does the patient online access also use the Summary Care Record or not? If so, did I by asking for online access, just create a SCR without asking for one? Or is it a different subset of data? If they are different, which is the definitive record?

Overall:

From stories we read, it appears there are broad discrepancies between what is possible in one area of the country and another, and between one practice and another.

Clearly, to give the impression that 97% of patients can access summary records online is untrue to date, if only 4.5% can actually get onto an electronic system and see any part of their records on demand today.

How much value is added for the patients and practitioners in that 4.5% may vary enormously, depending upon what functionality has been enabled at different locations.

For me, as a rare user of the practice, there is no obvious benefit right now. I can book appointments during the day by telephone, and meds are ordered through the chemist. The online record contained no other information.

I don’t know what evidence base came from patients to decide that Patient Online should be a priority.

How many really want and need real-time, online access to their records? Would patients not far rather that, in these times of austerity, the cash, time and IT expertise be focused on IT in direct care, visible to their medics, so that when they visit hospital their records are available to the different departments within it?

I know which I would rather have.

What would be good to see?

I’d like to see a much clearer distinction between the purposes for which we share data, direct care versus indirect uses, and on what legal basis each rests.

Not least because it needs to be understandable within the context of data protection legislation. There is often confusion in discussions of what consent can be implied for direct care and where to draw its limit.

The outcome of the consultation launched in June 2014 is still to be published, though it closed in August 2014, and the consultation itself blurred the lines between direct care and secondary purposes (https://www.gov.uk/government/consultations/protecting-personal-health-and-care-data).

Secondly, if patients start to generate potentially huge quantities of data in the Apple link and upload it to GP electronic records, we need to get this approach correct from the start. Will that data be onwardly shared by GPs through care.data for example?

But first, let’s start with tighter use of language in communications: not only for the sake of accuracy, but so that expectations are properly set for policy makers, practitioners and patients making future decisions.

There are many impressive visions and great ideas how data are to be used for the benefit of individuals and the public good.

We need an established, easy-to-understand legal and ethical framework for data sharing in the NHS to build on, to turn those benefits into an achievable reality.

The Economic Value of Data vs the Public Good? [2] Pay-for-privacy, defining purposes

Differentiation. Telling customers apart and grouping them by similarities is what commercial data managers want.

It enables them to target customers with advertising and sales promotion most effectively. They segment the market into chunks and treat one group differently from another.

They use market research data, including our loyalty card data, to build that detailed picture of customers, and to decide how to target each group and for what purposes.
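To make “telling customers apart and grouping them by similarities” concrete, here is a toy sketch in Swift. The Customer type and the crude rules are mine, invented for illustration; real loyalty-card analytics is far more sophisticated, but the principle of segmenting shoppers and treating each group differently is the same.

```swift
// Toy segmentation sketch with made-up data: band shoppers by spend and
// one inferred life-stage signal, then target each band differently.
struct Customer {
    let id: String
    let monthlySpend: Double
    let buysBabyProducts: Bool
}

let customers = [
    Customer(id: "A", monthlySpend: 420, buysBabyProducts: true),
    Customer(id: "B", monthlySpend: 95,  buysBabyProducts: false),
    Customer(id: "C", monthlySpend: 260, buysBabyProducts: true)
]

// Group customers by a crude segment label.
let segments = Dictionary(grouping: customers) { customer -> String in
    let band = customer.monthlySpend > 250 ? "high-spend" : "low-spend"
    return customer.buysBabyProducts ? "\(band) young-family" : band
}

// Each segment then gets its own offers, mailings or prices.
for (segment, members) in segments {
    print("\(segment): \(members.map { $0.id })")
}
```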

As the EU states debate how research data should be used and how individuals should be both enabled and protected through it, they might consider separating research purposes by type.

While people are broadly happy for the state to use their data without active consent for bona fide research, they are not happy for it to be used for commercial consumer research purposes [ref part 1].

Separating consumer and commercial market research from the definition of research purposes for the public good by the state, could be key to rebuilding people’s trust in government data use.

Having separate purposes would permit separate consent and control procedures to govern them.
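As a thought experiment only, here is a sketch of what “separate consent and control procedures” per purpose could look like as a simple data structure. The three purpose categories come from the argument above; the names and the structure itself are hypothetical.

```swift
// Hypothetical sketch: consent recorded separately per purpose, so that
// agreeing to public-interest research does not imply agreeing to
// commercial market research.
enum DataPurpose: String {
    case directCare
    case publicInterestResearch
    case commercialMarketResearch
}

struct ConsentRecord {
    var decisions: [DataPurpose: Bool]

    // Any purpose the citizen has not explicitly answered defaults to "no".
    func permits(_ purpose: DataPurpose) -> Bool {
        decisions[purpose] ?? false
    }
}

let myConsent = ConsentRecord(decisions: [
    .directCare: true,
    .publicInterestResearch: true,
    .commercialMarketResearch: false
])

print(myConsent.permits(.commercialMarketResearch)) // false
```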

But what role will profit play in the state’s definition of ‘in the public interest’? Is it in the public interest if UK plc makes money from its citizens? And how far along any gauge of public feeling will a government be prepared to go in pushing money-making for UK plc at our own personal cost?

Pay-for-privacy?

In January this year, the Executive Vice President at Dunnhumby, Nishat Mehta, wrote in this article [7], about how he sees the future of data sharing between consumers and commercial traders:

“Imagine a world where data and services that are currently free had a price tag. You could choose to use Google or Facebook freely if you allowed them to monetize your expressed data through third-party advertisers […]. Alternatively, you could choose to pay a fair price for these services, but use of the data would be forbidden or limited to internal purposes.”

He too talked about health data, specifically about its value when accurate, expressed and consensual:

“As consumers create and own even more data from health and fitness wearables, connected devices and offline social interactions, market dynamics would set the fair price that would compel customers to share that data. The data is more accurate, and therefore valuable, because it is expressed, rather than inferred, unable to be collected any other way and comes with clear permission from the user for its use.”

What his pay-for-privacy model appears to have forgotten, is that this future consensual sharing is based on the understanding that privacy has a monetary value. And that depends on understanding the status quo.

It is based on the individual realising that there is money made from their personal data by third parties today, and that there is a choice.

The extent of this commercial sharing and re-selling will be a surprise to most loyalty card holders.

“For years, market research firms and retailers have used loyalty cards to offer money back schemes or discounts in return for customer data.”

However, despite being signed up for years, most of the public are, I believe, unaware of the implied deal. It may be in the small print, but everyone knows that few read it in the rush to sign up and save money.

Most shoppers believe the supermarket is buying our loyalty. We return to spend more cash because of the points. Points mean prizes, petrol coupons, or pounds off.

We don’t realise our personal identity and habits are being invisibly analysed to the nth degree and sold by supermarkets as part of those sweet deals.

But is pay-for-privacy discriminatory? By creating the freedom to choose privacy as a pay-for option, it excludes those who cannot afford it.

Privacy should be seen as a human right, not as a pay-only privilege.

Today we use free services online but our data is used behind the scenes to target sales and ads often with no choice and without our awareness.

Today we can choose to opt in to loyalty schemes and trade our personal data for points and with it we accept marketing emails, and flyers through the door, and unwanted calls in our private time.

The free option is never to sign up at all, but by doing so customers pay a premium by missing out on the vouchers and discounts, or by trading away the convenience of online shopping.

There is a personal cost in all three cases, albeit in a rather opaque trade off.

 

Does the consumer really benefit in any of these scenarios or does the commercial company get a better deal?

In a sustainable future, only a consensual system based on understanding and trust will work well. That is assuming that by ‘well’ we mean organisations wish to prevent the kind of PR disaster and practical disruption that befell NHS data in the last year through care.data.

For some people the personal cost of the infringement of privacy by commercial firms is great; others care less. But once informed, there is a choice on offer even today: pay for privacy from commercial business, either by paying a premium for goods and staying off loyalty schemes, or by continuing to pay with our privacy.

In future we may see a more direct pay-for-privacy offering along the lines Nishat Mehta describes.

And if so, citizens will be asking ever more about how their data is used in all sorts of places beyond the supermarket.

So how can the state profit from the economic value of our data but not exploit citizens?

‘Every little bit of data’ may help consumer marketing companies. But gaining it or using it in ways that are unethical, or that knowingly continue bad practices, won’t win back consumers’ and citizens’ trust.

And whether it is a commercial consumer company or the state, people feel exploited when their information is used to make money without their knowledge and for purposes with which they disagree.

Consumer commercial use and use in bona fide research are separate in the average citizen’s mind and understood in theory.

Achieving differentiation in practice in the definition of research purposes could be key to rebuilding consumers’ trust.

And that would be valid for all their data, not only what data protection labels as ‘personal’. For the average citizen, all data about them is personal.

Separating in practice how consumer businesses use data about customers for company profit, how any benefits are shared with individuals in the trade for our privacy, and how bona fide public research benefits us all, would help win continued access to our data.

Citizens need and want to be offered paths to see how our data are used in ways which are transparent and easy to access.

Cutting away purposes which appear exploitative from purposes in the public interest could benefit commerce, industry and science.

By reducing the private cost to individuals of the loss of control and privacy of our data, citizens will be more willing to share.

That will create more opportunity for data to be used in the public interest, increasing the public good, both economic and social, that the government hopes to see expand.

And that could mean a happy ending for everyone.

The Economic Value of Data vs the Public Good? They need not be mutually exclusive. But if one exploits the other, it has the potential to continue to be corrosive. UK plc cannot continue to assume its subjects are willing creators and repositories of information to be used for making money. [ref 1] Doing so has already lost trust in all uses, not only those in which citizens felt exploited. [6]

The economic value of data used in science and health, whether to individual app creators, big business, or the commissioning state in planning and purchasing, is clear. It is perhaps not often quantified or discussed in the public domain, but it clearly exists.

Those uses can co-exist with good practices to help people understand what they are signed up to.

By defining ‘research purposes’, by making how data are used transparent, and by giving real choice in practice to consent to differentiated data for secondary uses, both commercial and state will secure their long term access to data.

Privacy, consent and separation of purposes will be wise investments for that growth across commercial and state sectors.

Let’s hope they are part of the coming ‘long-term economic plan’.

****

Related to this:

Part one: The Economic Value of Data vs the Public Good? [1] Concerns and the cost of Consent

Part two: The Economic Value of Data vs the Public Good? [2] Pay-for-privacy and Defining Purposes.

Part three: The Economic Value of Data vs the Public Good? [3] The value of public voice.

****

image via Tesco media

[6] Ipsos MORI research with the Royal Statistical Society into the Trust deficit with lessons for policy makers https://www.ipsos-mori.com/researchpublications/researcharchive/3422/New-research-finds-data-trust-deficit-with-lessons-for-policymakers.aspx

[7] AdExchanger, January 2015 http://adexchanger.com/data-driven-thinking/the-newest-asset-class-data/

[8] Tesco clubcard data sale https://jenpersson.com/public_data_in_private_hands/  / Computing 14.01.2015 – article by Sooraj Shah: http://www.computing.co.uk/ctg/feature/2390197/what-does-tescos-sale-of-dunnhumby-mean-for-its-data-strategy

[9] Direct Marketing 2013 http://www.dmnews.com/tesco-every-little-bit-of-customer-data-helps/article/317823/