
Crouching Tiger Hidden Dragon: the making of an IoT trust mark

The Internet of Things (IoT) brings with it unique privacy and security concerns associated with smart technology and its use of data.

  • What would it mean for you to trust an Internet connected product or service and why would you not?
  • What has damaged consumer trust in products and services and why do sellers care?
  • What do we want to see different from today, and what is necessary to bring about that change?

These three pairs of questions implicitly underpinned the intense day of discussion at the London Zoo last Friday.

These questions went unasked. They could have been voiced before we started, but were probably assumed to be self-evident:

  1. Why do you want one at all [define the problem]?
  2. What needs to change and why [define the future model]?
  3. How do you deliver that and for whom [set out the solution]?

If a group does not agree on the need and drivers for change, there will be no consensus on what that should look like, what the gap is to achieve it, and even less on making it happen.

So who do you want the trustmark to be for, why will anyone want it, and what will need to change to deliver the aims? No one wants a trustmark per se. Perhaps what you want is for the values or promises it embodies to demonstrate what you stand for, promote good practice, and generate consumer trust. To generate trust, you must be seen to be trustworthy. Will the principles deliver on those goals?

A rough draft of the Open IoT Certification Mark Principles was the outcome of the day, and is available online.

Here are my reflections, including what was missing on privacy, and the potential for it to be considered in future.

I’ve structured the first part, at around 1,000 words of lists and bullet points, assuming readers attended the event. The background comes after that, for anyone interested in reading a longer piece.

Many thanks upfront to fellow participants, to the organisers Alexandra D-S and Usman Haque, and to the colleague who hosted at the London Zoo. And Usman’s Mum. I hope there will be more constructive work to follow, and that there is space for civil society to play a supporting role and act as critical friend.


The mark didn’t aim to fix the IoT in a day, but to deliver something better for product and service users, through those IoT companies and providers who want to sign up. Here is what I took away.

I learned three things

  1. A sense of privacy is not homogeneous, even among people who like and care about privacy in theoretical and applied ways. (I very much look forward to reading suggestions promised by fellow participants, even if enforced personal openness and ‘watching the watchers’ may mean ‘privacy is theft’.)
  2. Awareness of current data protection regulations needs to be improved in the field. For example, Subject Access Requests already apply to all data controllers, public and private. Few have read the GDPR or the e-Privacy directive, despite their importance for security measures in personal devices, relevant to the IoT.
  3. I truly love working on this stuff, with people who care.

And it reaffirmed things I already knew

  1. Change is hard, no matter in what field.
  2. People working together towards a common goal is brilliant.
  3. Group collaboration can create some brilliantly sharp ideas. Group compromise can blunt them.
  4. Some men are particularly bad at talking over each other, never mind over the women in the conversation. Women notice more. (Note to self: When discussion is passionate, it’s hard to hold back in my own enthusiasm and not do the same myself. To fix.)
  5. The IoT context, and the risks within it, are not homogeneous, but bring new risks and adversaries. The risks for manufacturers, for consumers, and for the rest of the public are different, and cannot be easily solved with a one-size-fits-all solution. But we can try.

Concerns I came away with

  1. If the citizen / customer / individual is to benefit from the IoT trustmark, they must be put first, ahead of companies’ wants.
  2. If the IoT group controls the design, the assessment of adherence, and the definition of success, how objective will it be?
  3. The group was not sufficiently diverse and, as a result, reflected too little on the risks and impact of that lack of diversity in design and effect, and on the implications of dataveillance.
  4. Critical minority thoughts, although welcomed, were stripped out of the crowdsourced first draft principles in compromise.
  5. More future thinking should be built in, to keep the principles robust over time.

[Image: IoT adversaries: via Twitter, unknown source]

What was missing

There was too little discussion of privacy in perhaps the most important context of the IoT – interconnectivity and new adversaries. It’s not only about *your* thing, but about the things it speaks to and interacts with, those of friends, passersby, the cityscape, and other individual and state actors interested in offence and defence. While we started to discuss it, we did not have the opportunity to go into sufficient depth to get that thinking into applied solutions in the principles.

One of the greatest risks users face is the ubiquitous collection and storage of data about them that reveals detailed, interconnected patterns of behaviour and identity, without their seeing how that is used by companies behind the scenes.

We also missed discussing not just what we see as necessary today, but what we can foresee as necessary for the near future: brainstorming and crowdsourcing horizon scanning for market needs and changing stakeholder wants.

Future thinking

Here are the areas of future thinking that smart thinking on the IoT mark could consider.

  1. We are moving towards ever greater requirements to declare identity to use a product or service, to register and log in to use anything at all. How will that change trust in IoT devices?
  2. Single identity sign-on is becoming ever more imposed, and any attempt to present who I am in multiple ways, by choice and depending on context, is therefore restricted. [Not all users want to use the same social media credentials for online shopping, for their child’s school app, and for their weekend entertainment.]
  3. Is this imposition what the public wants or what companies sell us as what customers want in the name of convenience? What I believe the public would really want is the choice to do neither.
  4. There is increasingly no private space or time at places of work.
  5. Limitations on private space are encroaching, in secret, on all public city spaces. How will ‘handoffs’ affect privacy in the IoT?
  6. Public sector (connected) services are likely to need even more exacting standards than single home services.
  7. There is too little understanding of the social effects of this connectedness, and of the knowledge it creates, embedded in design.
  8. What effects may there be on the perception of the IoT as a whole, if predictive data analysis and complex machine learning and AI hidden in black boxes becomes more commonplace and not every company wants to be or can be open-by-design?
  9. Ubiquitous collection and storage of data about users that reveal detailed, inter-connected patterns of behaviour and our identity needs greater commitments to disclosure. Where the hand-offs are to other devices, and whatever else is in the surrounding ecosystem, who has responsibility for communicating interaction through privacy notices, or defining legitimate interests, where the data joined up may be much more revealing than stand-alone data in each silo?
  10. Define with greater clarity the privacy threat models for different groups of stakeholders and address the principles for each.

What would better look like?

The draft privacy principles are a start, but they’re not yet as aspirational as I would have hoped. Of course the principles will only be adopted where possible and practical, and by those who choose to. But where is the differentiator from what everyone is already required to do, and better than the bare minimum? How will you sell this to consumers as new? How would you like your child to be treated?

The wording in these five bullet points is the first crowdsourced starting point.

  • The supplier of this product or service MUST be General Data Protection Regulation (GDPR) compliant.
  • This product SHALL NOT disclose data to third parties without my knowledge.
  • I SHOULD get full access to all the data collected about me.
  • I MAY operate this device without connecting to the internet.
  • My data SHALL NOT be used for profiling, marketing or advertising without transparent disclosure.

Yes, other points that came under security address some of the crossover between privacy and surveillance risks, but there is as yet little of substance that is aspirational enough to make the IoT mark a real differentiator in terms of privacy. An opportunity remains.

It was that, and how young people perceive privacy, that I hoped to bring to the table. Because if manufacturers are serious about future success, they cannot ignore today’s children and how they feel. How you treat them today will shape future purchasers and their purchasing, and there is evidence you are getting it wrong.

The timing is good in that it now also offers the opportunity to promote shared understanding, and to embed the language of the GDPR and ePrivacy regulations, in consistent and compatible terms, into policy and practice in the #IoTmark principles.

User rights I would like to see considered

These are some of the points I think privacy by design should mean. They would better articulate GDPR Article 25 to consumers.

Data sovereignty is a good concept and I believe should be considered for inclusion in explanatory blurb before any agreed privacy principles.

  1. Goods should be ‘dumb* by default’ until the smart functionality is switched on. [*As our group chair/scribe called it.] I would describe this as, “off is the default setting out-of-the-box”. (A minimal illustrative sketch follows this list.)
  2. Privacy by design. Deniability by default. i.e. not only after opt-out, but a company should not access the personal or identifying purchase data of anyone who opts out of data collection about their product/service use during the set-up process.
  3. The right to opt out of data collection at a later date while continuing to use services.
  4. A right to object to the sale or transfer of behavioural data, including to third-party ad networks and absolute opt-in on company transfer of ownership.
  5. A requirement that advertising should be targeted to content, [user bought fridge A] not through jigsaw data held on users by the company [how user uses fridge A, B, C and related behaviour].
  6. An absolute rejection of using children’s personal data gathered to target advertising and marketing at children.
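As a thought experiment only, here is a minimal sketch in Python of what “off is the default setting out-of-the-box” and deniability by default could look like in a device’s settings model. Everything in it, including the class and field names, is hypothetical illustration, not any existing product’s API or the draft principles themselves.

```python
from dataclasses import dataclass


@dataclass
class DeviceSettings:
    """Hypothetical settings for a connected appliance, illustrating
    'dumb by default': every data-sharing feature ships switched off."""
    connectivity_enabled: bool = False      # device works offline out of the box
    data_collection_enabled: bool = False   # no usage data gathered until opted in
    third_party_sharing: bool = False       # never on by default
    profiling_and_ads: bool = False         # would require a separate, explicit opt-in

    def opt_in(self, feature: str) -> None:
        # The user, not the vendor, flips each switch, one at a time.
        if not hasattr(self, feature):
            raise ValueError(f"unknown feature: {feature}")
        setattr(self, feature, True)

    def opt_out_of_collection(self) -> None:
        # Deniability by default: opting out later must stop collection
        # without breaking the core (offline) functionality.
        self.data_collection_enabled = False
        self.third_party_sharing = False
        self.profiling_and_ads = False


if __name__ == "__main__":
    settings = DeviceSettings()                 # out of the box: everything off
    print(settings)
    settings.opt_in("connectivity_enabled")     # smart features only after explicit choice
    settings.opt_out_of_collection()            # and collection can be withdrawn later
```

The point of the sketch is the defaults: a product built this way still works with every flag set to false, which is what “dumb by default” would mean in practice.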

Background: Starting points before privacy

After a brief recap of the work from five years ago, we heard two talks.

The first was a presentation from Bosch. They used the insights from the IoT open definition of five years ago in their IoT thinking and embedded it in their brand book. The presenter suggested that in five years’ time, every fridge Bosch sells will be ‘smart’. The second was a fascinating presentation, of both EU thinking and the intellectual nudge to think beyond the practical, about what kind of society we want to see using the IoT in future. Hints of hardcore ethics and philosophy that made my brain fizz, from a speaker soon to retire from the European Commission.

The principles of open sourcing, manufacturing, and sustainable life cycle were debated in the afternoon with intense arguments and clearly knowledgeable participants, including those who were quiet. But while the group had assigned security, and started work on it weeks before, there was no one pre-assigned to privacy. For me, that said something. If they are serious about those who earn the trustmark being better for customers than their competition, then there needs to be greater emphasis on thinking like their customers, and by their customers, and on what use the mark will be to customers, not companies. Plan early public engagement and testing into the design of this IoT mark, and make that testing open and diverse.

To that end, I believe it needed to be articulated more strongly, that sustainable public trust is the primary goal of the principles.

  • Trust that my device will not become unusable or worthless through updates or lack of them.
  • Trust that my device is manufactured safely and ethically and with thought given to end of life and the environment.
  • Trust that my source components are of high standards.
  • Trust in what data is gathered, and how that data is used by the manufacturers.

Fundamental to ‘smart’ devices is their connection to the Internet, so the last point, for me, is key to successful public perception and to the mark actually making a difference, beyond its PR value to companies. The value-add must be measured from the consumer’s point of view.

All the openness about design functions and practice improvements, without attempting to change privacy-infringing practices, may be wasted effort. Why? Because the perceived value of the mark will be proportionate to the risks it is seen to mitigate.

Why?

Because I assume that you know where your source components come from today. I was shocked to find out that not all do, and that ‘one degree removed’ is going to be an improvement? Holy cow, I thought. What about regulatory requirements for product safety recalls? These differ of course for different product areas, but I was still surprised. Having worked in global Fast Moving Consumer Goods (FMCG) and the food industry, semiconductors and optoelectronics, and medical devices, it was self-evident to me that sourcing is rigorous. So that new requirement, to know suppliers one degree removed, was a suggested minimum. But it might shock consumers to know there is not usually more by default.

Customers also believe they have reasonable expectations of not being screwed by a product update, or left with something that does not work because of its computing-based components. The public can take vocal, reputation-damaging action when they are let down.

In the last year alone, some of the more notable press stories include a manufacturer denying service, telling customers, “Your unit will be denied server connection,” after a critical product review. Customer support at Jawbone came in for criticism after reported failings. And even Apple has had problems in rolling out major updates.

While these are visible, the full extent of the overreach of company market and product surveillance into our whole lives, not just our living rooms, is yet to become understood by the general population. What will happen when it is?

The Internet of Things is exacerbating the power imbalance between consumers and companies, between government and citizens. As Wendy Grossman wrote recently, in one sense this may make privacy advocates’ jobs easier. It was always hard to explain why “privacy” mattered. Power, people understand.

That public discussion is long overdue. If open principles on IoT devices mean that the signed-up companies differentiate themselves by becoming market leaders in transparency, it will be a great thing. Companies need to offer full disclosure of data use in any privacy notices, in clear, plain language, under the GDPR anyway; but to go beyond that, and offer customers a fair presentation of both risks and customer benefits, will not only be a point-of-sale benefit, but could potentially improve digital literacy in customers too.

The morning discussion touched quite often on pay-for-privacy models. While product makers may see this as offering a good thing, I strove to bring discussion back to first principles.

Privacy is a human right. There can be no ethical model of discrimination based on any non-consensual invasion of privacy. Privacy is not something I should pay to have. You should not design products that reduce my rights. GDPR requires privacy-by-design and data protection by default. Now is that chance for IoT manufacturers to lead that shift towards higher standards.

We also need new ethical thinking on acceptable fair use. It won’t change overnight, and perfect may be the enemy of better. But it’s not a battle that companies should think consumers have lost. Human rights and information security should not be on the battlefield at all in the war to win customer loyalty. Now is the time to do better, to be better, and to demand better for ourselves and, in particular, for our children.

Privacy will be a genuine market differentiator

If manufacturers do not want to change their approach to exploiting customer data, they are unlikely to be seen to have changed.

The feelings that people in the US and Europe reflect in surveys today are loss of empowerment, helplessness, and feeling used. That will shift to shock, resentment and, as any change curve will predict, anger.

A 2014 survey for the Royal Statistical Society by Ipsos MORI found that trust in institutions to use data is much lower than trust in them in general.

“The poll of just over two thousand British adults carried out by Ipsos MORI found that the media, internet services such as social media and search engines and telecommunication companies were the least trusted to use personal data appropriately.” [2014, Data trust deficit with lessons for policymakers, Royal Statistical Society]

In the British student population, one 2015 survey of university applicants in England found that, of the 37,000 who responded, the vast majority of UCAS applicants agree that sharing personal data can benefit them and support public-benefit research into university admissions, but they want to stay firmly in control. 90% of respondents said they wanted to be asked for their consent before their personal data is provided outside of the admissions service.

In 2010, a multi-method research study with young people aged 14-18, by the Royal Academy of Engineering, found that, “despite their openness to social networking, the Facebook generation have real concerns about the privacy of their medical records.” [2010, Privacy and Prejudice, RAE, Wellcome]

When people set their privacy settings on Facebook to maximum, they believe they get privacy, yet understand little of what that means behind the scenes.

Are there tools designed by others, like the licences from Projects by IF, and ways this can be done, that you’re not even considering yet?

What if you don’t do it?

“But do you feel like you have privacy today?” I was asked the question in the afternoon. How do people feel today, and does it matter? Companies exploiting consumer data, and getting caught doing things the public don’t expect with their data, have repeatedly damaged consumer trust. Data breaches and lack of information security have damaged consumer trust. Both cause reputational harm. Damage to reputation can harm customer loyalty. Damage to customer loyalty costs sales, profit and upsets the Board.

Where overreach into our living rooms has raised awareness of invasive data collection, we are yet to see and understand the invasion of privacy into our thinking and nudged behaviour, into our perception of the world on social media, and the effects on decision making that data analytics is enabling as data shows companies ‘how we think’, granting companies access to human minds in the abstract, even before Facebook is there in the flesh.

Governments want to see how we think too. Is thought crime really that far away, given database labels of ‘domestic extremists’ for activists and anti-fracking campaigners, or the growing weight of policy makers’ attention given to PredPol, predictive analytics, the [formerly] Cabinet Office Nudge Unit, Google DeepMind et al?

Had the internet remained decentralised, the debate might be different.

I am starting to think of the IoT not as the Internet of Things, but as the Internet of Tracking. If some have their way, it will be the Internet of Thinking.

In our centralised Internet of Things model, our personal data from human interactions has become the network infrastructure, and the data flows are controlled by others. Our brains are the new data servers.

In the Internet of Tracking, people become the end nodes, not things.

And it is here that future users will be so important. Do you understand and plan for the factors that will drive push-back and a crash of consumer confidence in your products, and do you take them seriously?

Companies have a choice: to act as Empires would – multinationals, joining up even on low levels, disempowering individuals and sucking knowledge and power to the centre. Or they can act as Nation states, ensuring citizens keep their sovereignty and control over a selected sense of self.

Look at Brexit. Look at the GE2017. Tell me, what do you see is the direction of travel? Companies can fight it, but will not defeat how people feel. No matter how much they hope ‘nudge’ and predictive analytics might give them this power, the people can take back control.

What might this desire to take back control mean for future consumer models? The afternoon discussion, whilst intense, reached fairly simplistic concluding statements on privacy. We could have done with at least another hour.

Some in the group were frustrated “we seem to be going backwards” in current approaches to privacy and with GDPR.

But if the current legislation is reactive because companies have misbehaved, how will that be rectified for the future? The challenge in the IoT, both in terms of security and privacy, AND in terms of public perception and reputation management, is that you are dependent on the behaviours of the network, and of those around you. Good and bad. And bad practices by one can endanger others, in all senses.

If you believe that is going back to reclaim a growing sense of citizens’ rights, rather than accepting companies have the outsourced power to control the rights of others, that may be true.

A first question asked was whether any element on privacy was needed at all, if the text simply stated that the supplier of this product or service must be General Data Protection Regulation (GDPR) compliant. The GDPR was years in the making, after all. Does privacy matter more in the IoT, and in what ways? The room tended, understandably, to talk about it from the company perspective: “We can’t”, “won’t”, “that would stop us from XYZ.” Privacy would, however, be better addressed from the personal point of view.

What do people want?

From the company point of view, the language is different and holds clues. Openness, control, user choice and pay-for-privacy are not the same thing as the basic human right to be left alone. The afternoon discussion reminded me of the 2014 Washington Post article discussing Mark Zuckerberg’s theory of privacy and a Palo Alto meeting at Facebook:

“Not one person ever uttered the word “privacy” in their responses to us. Instead, they talked about “user control” or “user options” or promoted the “openness of the platform.” It was as if a memo had been circulated that morning instructing them never to use the word “privacy.””

In the afternoon working group on privacy, there was robust discussion of whether we had consensus on what privacy even means. Words like autonomy, control, and choice came up a lot. But it was only a beginning. There is opportunity for better. An academic voice raised the concept of sovereignty, with which I agreed; but the difficulty of fitting it into wording that is at once minimal and applied, under a scribe who appeared frustrated and wanted a completely different approach from what he heard across the group, meant it was left out.

This group does care about privacy. But I wasn’t convinced that the room cared in the way the public as a whole does; rather, only as consumers and customers do. Yet IoT products will affect potentially everyone, even those who do not buy your stuff. Everyone in that room agreed on one thing: the status quo is not good enough. What we did not agree on was why, and what minimum change is needed to make enough of a difference to matter.

I share the deep concerns of many child rights academics who see the harm from efforts to avoid the restrictions that Article 8 of the GDPR will impose. It is likely to be damaging for children’s right to access information, to be discriminatory according to parents’ prejudices or socio-economic status, and to encourage ‘cheating’ – requiring secrecy rather than privacy, in attempts to hide from or work around the stringent system.

In ‘The Class’, the research showed that “teachers and young people have a lot invested in keeping their spheres of interest and identity separate, under their autonomous control, and away from the scrutiny of each other.” [2016, Livingstone and Sefton-Green, p235]

Employers require staff to use devices with single sign-on, including web and activity tracking and monitoring software. Employee personal data and employment data are blended. Who owns that data, what rights will employees have to refuse what they see as excessive, and is it manageable given the power imbalance between employer and employee?

What is this doing in the classroom and boardroom for stress, anxiety, performance and system and social avoidance strategies?

A desire for convenience creates shortcuts, and these are often met using systems that require a sign-on through the platform giants: Google, Facebook, Twitter, et al. But we are kept in the dark about how using these platforms gives them, and the companies behind them, access to see how our online and offline activity is all joined up.

Any illusion of privacy we maintain, we discussed, is not choice or control if it is based on ignorance, and the backlash against companies’ lack of effort to ensure disclosure and understanding is growing.

“The lack of accountability isn’t just troubling from a philosophical perspective. It’s dangerous in a political climate where people are pushing back at the very idea of globalization. There’s no industry more globalized than tech, and no industry more vulnerable to a potential backlash.”

[Maciej Ceglowski, Notes from an Emergency, talk at re.publica]

Why do users need you to know about them?

If your connected *thing* requires registration, why does it? How about a commitment to not forcing one of these registration methods, or indeed any at all? Social media research by the Pew Research Center in 2016 found that 56% of smartphone owners aged 18 to 29 use auto-delete apps, more than four times the share among those aged 30-49 (13%) and six times the share among those 50 or older (9%).

Does that tell us anything about the demographics of data retention preferences?

In 2012, Pew suggested social media had changed the public discussion about managing “privacy” online. When asked, people say that privacy is important to them; when observed, people’s actions seem to suggest otherwise.

Does that tell us anything about how well companies communicate to consumers how their data is used and what rights they have?

There is also data strongly indicating that women act more to protect their privacy; but when it comes to basic privacy settings, users of all ages are equally likely to choose a private, semi-private or public setting for their profile. There are no significant variations across age groups in the US sample.

Now think about why that matters for the IoT. I wonder who makes the bulk of purchasing decisions about household white goods, for example, and whether Bosch has factored that into its smart-fridges-only decision?

Do you *need* to know who the user is? Can the smart user choose to stay anonymous at all?

The day’s morning challenge was to attend more than one interesting discussion happening at the same time. As invariably happens, the session notes and quotes are always out of context and can’t possibly capture everything, no matter how amazing the volunteer (with thanks!). But here are some of the discussion points from the session on the body and health devices, the home, and privacy. It also included a discussion on racial discrimination, algorithmic bias, and the reasons why care.data failed patients and failed as a programme. We had lengthy discussion on ethics and privacy: smart meters, objections to models of price discrimination, and why pay-for-privacy harms the poor by design.

Smart meter data can track the use of individual appliances inside a person’s home, and intimate patterns of behaviour. Information about our consumption of power, what and when, every day, reveals personal details about everyday lives, our interactions with others, and personal habits.
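To illustrate why that claim holds, here is a toy sketch in Python of the kind of inference, a crude cousin of what researchers call non-intrusive load monitoring, that fine-grained meter readings make possible. The readings, thresholds and appliance labels are invented for illustration only.

```python
# Toy illustration: guessing appliance activity from a household's
# electricity readings (watts, one sample per minute). Values are invented.
readings = [
    ("07:00", 150), ("07:01", 2150), ("07:02", 2100), ("07:03", 180),   # kettle-sized spike
    ("19:30", 160), ("19:31", 1260), ("19:45", 1240), ("20:10", 170),   # oven-sized plateau
]

# Very rough appliance "signatures": (minimum jump in watts, label).
SIGNATURES = [
    (1800, "kettle or similar high-power appliance"),
    (900, "oven / washing machine-sized load"),
]


def label_jumps(samples):
    """Flag sudden increases in load and guess what switched on."""
    events = []
    for (t_prev, w_prev), (t, w) in zip(samples, samples[1:]):
        jump = w - w_prev
        for threshold, label in SIGNATURES:
            if jump >= threshold:
                events.append((t, label))
                break
    return events


for time, guess in label_jumps(readings):
    print(f"{time}: probable {guess} switched on")
```

Even this crude pattern matching says roughly when someone got up and when they cooked; real disaggregation research goes much further, which is the privacy point.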

Why should company convenience come above the consumer’s? Why should government powers trump personal rights?

Smart meter data is among the knowledge that government is exploiting, without consent, to discover a whole range of issues, including ensuring that “Troubled Families are identified”. Knowing how dodgy some of the school behaviour data that helps define who is “troubled” might be, there is a real question here: is this sound data science? How are errors identified? What about privacy? It’s not your policy, but if it is your product, what are your responsibilities?

If your company does not respect children’s rights, you’d better shape up to be GDPR compliant

Children and young people are more vulnerable to nudge, and while developing their sense of self they may be forming and questioning their identity; these influences need oversight, or should be avoided.

In terms of the GDPR, providers are going to pay particular attention to Article 8 on ‘information society services’ and parental consent, to profiling and automated decision-making (Articles 21 and 22), and to the rights to restriction of processing (Article 18), erasure (Article 17 and recital 65) and data portability (Article 20). However, they may need simply to reassess their exploitation of children and young people’s personal data and behavioural data. Article 57 requires special attention to be paid by regulators to activities specifically targeted at children, as the ‘vulnerable natural persons’ of recital 75.

Human rights, regulations and conventions overlap in similar principles that demand respect for a child, and the right to be let alone:

(a) The development of the child’s personality, talents and mental and physical abilities to their fullest potential;

(b) The development of respect for human rights and fundamental freedoms, and for the principles enshrined in the Charter of the United Nations.

A weakness of the GDPR is that it allows derogation on age and will create inequality and inconsistency for children as a result. By comparison, Article one of the Convention on the Rights of the Child (CRC) defines who is to be considered a “child” for the purposes of the CRC, and states that: “For the purposes of the present Convention, a child means every human being below the age of eighteen years unless, under the law applicable to the child, majority is attained earlier.”

Article two of the CRC says that States Parties shall respect and ensure the rights set forth in the present Convention to each child within their jurisdiction without discrimination of any kind.

CRC Article 16 says that no child shall be subjected to arbitrary or unlawful interference with his or her privacy, family, home or correspondence, nor to unlawful attacks on his or her honour and reputation.

Article 8 CRC requires respect for the right of the child to preserve his or her identity […] without unlawful interference.

Article 12 CRC demands States Parties shall assure to the child who is capable of forming his or her own views the right to express those views freely in all matters affecting the child, the views of the child being given due weight in accordance with the age and maturity of the child.

That stands in potential conflict with GDPR Article 8. There is much in the GDPR on derogations by country, and for children, still to be set.

What next for our data in the wild

Hosting the event at the zoo offered added animals, and during lunch we got out on a tour, kindly hosted by a fellow participant. We learned how smart technology is embedded in some of the animal enclosures, such as work on temperature sensors with the penguins. I love tigers, so it was a bonus to see such beautiful and powerful animals up close, even if a little sad for their circumstances and, as a general principle, at seeing big animals caged rather than in the wild.

Freedom is a common desire in all animals. Physical, mental, and freedom from control by others.

I think any manufacturer that underestimates this element of human instinct is ignoring the ‘hidden dragon’ that some think is a myth. Privacy is not dead. It is not extinct, nor even, unlike the beautiful tigers, endangered. Privacy in the IoT, at its most basic, is the right to control our purchasing power. The ultimate people power, waiting to be sprung. Truly a crouching tiger. People object to being used, and if companies continue to do so without full disclosure, they do so at their peril. Companies seem all-powerful in the battle for privacy, but they are not. Even insurers and data brokers must be fair and lawful, and it is for regulators to ensure that practices meet the law.

When consumers realise that our data, our purchasing power, has the potential to control, not be controlled, that balance will shift.

“Paper tigers” are superficially powerful but are prone to overextension that leads to sudden collapse. If that happens to the superficially powerful companies that choose unethical and bad practice, as a result of better data privacy and data ethics, then bring it on.

I hope that the IoT mark can champion best practices and make a difference to benefit everyone.

While the companies involved in its design may be interested in consumers, I believe it could be better for everyone, done well. The great thing about the efforts into an #IoTmark is that it is a collective effort to improve the whole ecosystem.

I hope more companies will recognise their privacy and ethical responsibilities in the world to all people, including those interested in just being, those who want to be let alone, and not just those buying.

“If a cat is called a tiger it can easily be dismissed as a paper tiger; the question remains however why one was so scared of the cat in the first place.”

Further reading: Networks of Control – A Report on Corporate Surveillance, Digital Tracking, Big Data & Privacy by Wolfie Christl and Sarah Spiekermann

Are UK teacher and pupil profile data stolen, lost and exposed?

Update received from Edmodo, VP Marketing & Adoption, June 1:


While everyone is focused on #WannaCry ransomware, it appears that a global edTech company has had a potential global data breach that few are yet talking about.

Edmodo is still claiming on its website it is, “The safest and easiest way for teachers to connect and collaborate with students, parents, and each other.” But is it true, and who verifies that safe is safe?

Edmodo data from 78 million users for sale

Matt Burgess wrote in VICE: “Education website Edmodo promises a way for “educators to connect and collaborate with students, parents, and each other”. However, 78 million of its customers have had their user account details stolen. Vice’s Motherboard reports that usernames, email addresses, and hashed passwords were taken from the service and have been put up for sale on the dark web for around $1,000 (£700).

“Data breach notification website LeakBase also has a copy of the data and provided it to Motherboard. According to LeakBase around 40 million of the accounts have email addresses connected to them. The company said it is aware of a “potential security incident” and is investigating.”

The Motherboard article by Joseph Cox, says it happened last month. What has been done since? Why is there no public information or notification about the breach on the company website?

Joseph doesn’t think profile photos are at risk, unless someone can log into an account. He was given usernames, email addresses, and hashed passwords, and as far as he knows, that was all that was stolen.

“The passwords have apparently been hashed with the robust bcrypt algorithm, and a string of random characters known as a salt, meaning hackers will have a much harder time obtaining user’s actual login credentials. Not all of the records include a user email address.”
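For readers wondering what “hashed with bcrypt and a salt” means in practice, here is a minimal sketch using the widely available Python bcrypt library. It illustrates the general technique only; it is not a claim about how Edmodo’s systems actually work.

```python
import bcrypt

password = b"correct horse battery staple"

# A salt is a random string mixed into the hash, so identical passwords
# produce different stored values and precomputed lookup tables are useless.
salt = bcrypt.gensalt(rounds=12)     # higher rounds = deliberately slower hashing
stored_hash = bcrypt.hashpw(password, salt)
print(stored_hash)                   # this, not the password, is what gets stored

# Verification re-runs the slow hash and compares, without ever needing
# to store the plain-text password.
assert bcrypt.checkpw(password, stored_hash)
assert not bcrypt.checkpw(b"wrong guess", stored_hash)
```

The point is that an attacker holding stolen hashes must repeat this deliberately slow computation for every guess, which is why salted bcrypt buys users time, though it is no reason to skip breach notification.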

Going further back, it looks like Edmodo’s weaknesses had already been identified 4 years ago. Did anything change?

So far I’ve been unable to find out from Edmodo directly. There is no telephone technical support. There is no human that can be reached dialling the headquarters telephone number.

Where’s the parental update?

No one has yet responded to say whether UK pupils and teachers’ data was among that reportedly stolen. (Update June 1, the company did respond with confirmation of UK users involved.)

While there is no mention of the other data the site holds being in the breach, details are as yet sketchy, and Edmodo holds children’s data. Where is the company’s assurance about what was and was not stolen?

As it’s a platform log-on, I would want to know when parents will be told exactly what was compromised and how details have been exposed. I would want clarification on whether this could potentially be a weakness for further breaches of other integrated systems, or not.

Are edTech and IoT toys fit for UK children?

In 2016, more than 727,000 UK children had their information, including images, compromised following a cyber attack on VTech. These toys are sold as educational, even if targeted at an early age.

In spring 2017, CloudPets, the maker of Internet of Things teddy bear “smart toys”, left more than two million voice recordings from children online without any security protections, exposing children’s personal details.

As yet, UK ministers have declined our civil society recommendations to act and take steps on the public sector security of national pupil data, or on the private security of Internet-connected toys and things, the latter in line with Germany, for example.

It is right that the approach is considered. The UK government must take these risks seriously in an evidence based and informed way, and act, not with knee jerk reactions. But it must act.

Two months after Germany banned the Cayla doll, we still had them for sale here.

Parents are often accused of being uninformed, but we must be able to expect that our products pass a minimum standard of tech and data security testing as part of pre-sale consumer safety testing.

Parents have a responsibility to educate themselves to a reasonable level of user knowledge. But the opportunities are limited when there’s no transparency. Much of the use of a child’s personal data, and of system data’s interaction with our online behaviour, in toys, things, and even plain websites, remains hidden from most of us.

So too, the Edmodo privacy policy contained no mention of profiling or behavioural web tracking, for example. Only when this savvy parent spotted it happening did the company, it appears, respond properly and fix it. Given strict COPPA rules that response is perhaps unsurprising, though it shouldn’t have happened at all.

How will the uses of these smart toys, and edTech apps be made safe, and is the government going to update regulations to do so?

Are public sector policy, practice and people, fit for managing UK children’s data privacy needs?

While these private edTech companies used directly in schools can expose children to risk, so too does public data collected in schools being handed out to commercial companies by government departments. Our UK government does not model good practice.

Two years on, I’m still asking for fixes to basic national pupil data handling. To make data policy safe, this is far too slow.

The Department for Education is still cagey about transparency, not telling schools that it gives away national pupil data, including to commercial companies, without pupil or parental knowledge, and it hides the Home Office’s use, now monthly, by not publishing it on a regular basis.

These uses of data are not safe, and expose children to potentially greater theft, loss and sale of their personal data. It must change.

Whether the government hands out children’s data to commercial companies at national level and doesn’t tell schools, or staff in schools do it directly through in-class app registrations, it is often done without consent, and without any privacy impact assessment or due diligence up front. Some send data to the US or Australia. Schools still tell parents these are ‘required’, without offering any choice. But have they ensured that an equal and adequate level of data protection is offered to the personal data they extract from the school information management systems (SIMs)?

 

School staff and teachers manage, collect and administer personal data daily, including signing up children as users of web accounts with technology providers, very often telling parents after the event, and with no choice. How can they do so and not put others at risk, if untrained in the basics of good data handling practice?

In our UK schools, just like in the health system, the basics are still not being fixed, nor good practices offered to staff. Teachers in the UK get no data privacy or data protection training in their basic teacher training. That’s according to what I’ve been told so far by teacher trainers, CPD leaders, union members and teachers themselves.

Would you train fire fighters without ever letting them have hose practice?

Infrastructure is known to be exposed and under-invested, but it’s not all about the tech. Security investment must also be in people.

The systemic failures revealed this week by WannaCry are not limited to the NHS. This from George Danezis could, with a few tweaks, be copy-pasted into education. So the question is not if, but when, the same happens in education, unless it’s fixed.

“…from poor security standards in heath informatics industries; poor procurement processes in heath organizations; lack of liability on any of the software vendors (incl. Microsoft) for providing insecure software or devices; cost-cutting from the government on NHS cyber security with no constructive alternatives to mitigate risks; and finally the UK/US cyber-offense doctrine that inevitably leads to proliferation of cyber-weapons and their use on civilian critical infrastructures.” [Original post]

The power behind today’s AI in public services

Thinking about whether education in England is preparing us for the jobs of the future, means also thinking about how technology will influence it.

Time and again, thinking and discussion about these topics is siloed. At the Turing Institute, the Royal Society, the ADRN and the EPSRC, in government departments’ discussions on data, and within education practitioner and public circles, we are all having similar discussions about data and ethics, but with little ownership and no goals for future outcomes. If government doesn’t get it, or have time for it, or policy lacks ethics by design, is it in the public interest for private companies, Google et al., to offer a fait accompli?

There is lots of talk about Machine Learning (ML), Artificial Intelligence (AI) and ethics. But what is being done to ensure that real values — respect for rights, human dignity, and autonomy — are built into practice in public service delivery?

In most recent data policy it is entirely absent. The Digital Economy Act s33 risks enabling, through the removal of inter- and intra-departmental data protections, an unprecedented expansion of public data transfers, with “untrammelled powers”. Powers without the codes of practice promised over a year ago. That has fallout for the trustworthiness of the legislative process, and for data practices across public services.

Predictive analytics is growing but poorly understood in the public and public sector.

There is already dependence on computers in aspects of public sector work, and their interaction with people in sensitive situations demands better knowledge of how systems operate and can be wrong. Debt recovery and social care, to take two known examples.

Risk-averse staff appear to choose not to question the outcome of ‘algorithmic decision making’, or do not have the ability to do so. There is reportedly no analytical training for practitioners to understand the basis or bias of its conclusions. The potential is that, instead of making us more informed, decision-making by machine makes us humans less clever.

What does it do to professionals, if they feel therefore less empowered? When is that a good thing if it overrides discriminatory human decisions? How can we tell the difference and balance these risks if we don’t understand or feel able to challenge them?

In education, what is it doing to children whose attainment is profiled, predicted, and acted on to target extra, or less, focus from school staff who have no ML training, and without the informed consent of pupils or parents?

If authorities use data in ways the public do not expect, such as to ID homes of multiple occupancy without informed consent, they will fail to deliver future uses for good. The ‘public interest’, ‘user need,’ and ethics can come into conflict, according to your point of view. The public, data protection law, and ethics all object to harms from the use of data. This type of application has the potential to be mind-blowingly invasive and to reveal all sorts of other findings.

Widely informed thinking must be made into meaningful public policy for the greatest public good

Our politicians are caught up in the General Election and buried in Brexit.

Meanwhile, the commercial companies taking first rights over AI to capitalise on existing commercial advantage could potentially strip public assets, use up our personal data and public trust, and leave the public with little public good. We are already used by global data players, and by machine-learning companies, without our knowledge or consent. That knowledge can be used to profit business models that pay little tax into the public purse.

There are valid macroeconomic arguments about whether private spend and investment are preferable to a state’s ability to do the same. But these companies make more than enough to do it all. Does not paying just amounts of tax signal a failure of commitment to the wider community? Is it a red flag for a company’s commitment to the public good?

What that public good should look like, depends on who is invited to participate in the room, and not to tick boxes, but to think and to build.

The Royal Society’s Report on AI and Machine Learning published on April 25, showed a working group of 14 participants, including two Google DeepMind representatives, one from Amazon, private equity investors, and academics from cognitive science and genetics backgrounds.

“Our #machinelearning working group chair, professor Peter Donnelly FRS, on today’s major #RSMachinelearning report https://t.co/PBYjzlESmB pic.twitter.com/RM9osnvOMX” — The Royal Society (@royalsociety), April 25, 2017

If we are going to form objective policies, the inputs that form their basis must be informed, but must also be well balanced, and be seen to be balanced. Not as an add-on, but in the same room.

As Natasha Lomas in TechCrunch noted, “Public opinion is understandably a big preoccupation for the report authors — unsurprisingly so, given that a technology that potentially erodes people’s privacy and impacts their jobs risks being drastically unpopular.”

“The report also calls on researchers to consider the wider impact of their work and to receive training in recognising the ethical implications.”

What are those ethical implications? Who decides which matter most? How do we eliminate recognised discriminatory bias? What should data be used for and AI be working on at all? Who is it going to benefit? What questions are we not asking? Why are young people left out of this debate?

Who decides what the public should or should not know?

AI and ML depend on data. Data is often talked about as a panacea for the problems of working better together. But data alone does not make people better informed, in the same way that people fail if they don’t feel it is their job to pick up the fax. A fundamental building block of our future public and private prosperity is understanding data and how we, and the AI, interact. What is the data telling us, how do we interpret it, and how do we know it is accurate?

How and where will we start to educate young people about data and ML, if not about their own data and its use by government and commercial companies?

The whole of Chapter 5 in the report is very good as a starting point for policy makers who have not yet engaged in the area. Privacy, while summed up too briefly in the conclusions, is scattered throughout.

Blind spots remain, however.

  • Over-willingness to accommodate existing big private players, as their expertise leads design and development, alongside a desire to ‘re-write regulation’.
  • Slowness to react to needed regulation in the public sector (caught up in Brexit) while commercial drivers and technology change forge ahead.
  • ‘How do we develop technology that benefits everyone’ must not only think UK, but the global South, especially given the bias in how AI is being taught, and broad socio-economic barriers to application.
  • Predictive analytics and professional application = unwillingness to question the computer result. In children’s social care this is already contributing to a damaging upturn in the family courts (s31).
  • Data and technology knowledge and ethics training, must be embedded across the public sector, not only post grad students in machine learning.
  • Harms being done to young people today and potential for intense future exploitation, are being ignored by policy makers and some academics. Safeguarding is often only about blocking in case of liability to the provider, stopping children seeing content, or preventing physical exploitation. It ignores exploitation by online platform firms, and app providers and games creators, of a child’s synthesised online life and use. Laws and government departments’ own practices can be deeply flawed.
  • Young people are left out of discussions which, after all, are about their future. [They might have some of the best ideas, we miss at our peril.]

There is no time to waste

Children and young people have the most to lose while their education, skills, jobs market, economy, culture, care, and society go through a series of gradual but seismic shifts in purpose, culture, and acceptance before finding new norms post-Brexit. They will also gain the most if the foundations are right. One of these must be getting age verification right in the GDPR, not allowing it to enable a massive data grab of child-parent privacy.

Although the RS Report considers young people in the context of a future workforce who need skills training, they are otherwise left out of this report.

“The next curriculum reform needs to consider the educational needs of young people through the lens of the implications of machine learning and associated technologies for the future of work.”

Yes it does, but it must give young people, and the implications of ML for their future, broader consideration than the classroom or workplace.

Facebook has targeted vulnerable young people, it is alleged, to facilitate predatory advertising practices. Some argue that emotive computing or MOOCs belong in the classroom. Who decides?

We are not yet talking about the effects of teaching technology to learn, and its effect on public services and interactions with the public. Questions that Sam Smith asked in Shadow of the smart machine: Will machine learning end?

At the end of this Information Age we are at a point when machine learning, AI and biotechnology are potentially life enhancing, or could have catastrophic effects, if indeed AI will cause people “more pain than happiness”, as described by Alibaba’s founder Jack Ma.

The conflict between commercial profit and public good, what commercial companies say they will do and actually do, and fears and assurances over predicted outcomes is personified in the debate between Demis Hassabis, co-founder of DeepMind Technologies, (a London-based machine learning AI startup), and Elon Musk, discussing the perils of artificial intelligence.

Vanity Fair reported that Elon Musk began warning about the possibility of A.I. running amok three years ago. It probably hadn’t eased his mind when one of Hassabis’s partners in DeepMind, Shane Legg, stated flatly, “I think human extinction will probably occur, and technology will likely play a part in this.”

Musk was of the opinion that A.I. was probably humanity’s “biggest existential threat.”

We are not yet joining up multi-disciplinary and cross-sector discussions of threats and opportunities

Jobs, shifts in the skill sets education needs, how we think, interact, value each other, and accept or reject ownership and power models; and later, threats from the technology itself. Conversely, we are not yet talking about the opportunities that these seismic shifts offer in real terms, or how and why to accept, reject or regulate them.

Where private companies are taking over personal data given in trust to public services, it is reckless for the future of public interest research to assume there is no public objection. How can we object, if not asked? How can children make an informed choice? How will public interest be assured to be put ahead of private profit? If it is intended on balance to be all about altruism from these global giants, then they must be open and accountable.

Private companies are shaping how and where we find machine learning and AI gathering data about our behaviours in our homes and public spaces.

SPACE10, an innovation hub for IKEA, is currently running a survey on how the public perceives AI and “wants their AI to look, be, and act”, with an eye on building AI into their products, for us to bring flat-pack into our houses.

As the surveillance technology built into the Things in our homes attached to the Internet becomes more integral to daily life, authorities are now using it to gather evidence in investigations; from mobile phones, laptops, social media, smart speakers, and games. The IoT so far seems less about the benefits of collaboration, and all about the behavioural data it collects and uses to target us to sell us more things. Our behaviours tell much more than how we act. They show how we think inside the private space of our minds.

Do you want Google to know how you think, and to have control over that? The companies of the world that have access to massive amounts of data are now using that data to teach AI how to ‘think’. What is AI learning? And how much should the State see or know about how you think, or try to predict it?

Who cares, wins?

It is not overstated to say society and future public good of public services, depends on getting any co-dependencies right. As I wrote in the time of care.data, the economic value of data, personal rights and the public interest are not opposed to one another, but have synergies and co-dependency. One player getting it wrong, can create harm for all. Government must start to care about this, beyond the side effects of saving political embarrassment.

Without joining up all aspects, we cannot limit harms and make the most of benefits. There are nuances and unknowns. There is opaque decision making and secrecy, packaged in the wording of commercial sensitivity, and behind it people who can be brilliant but who, at the end of the day, are also human, with all our strengths and weaknesses.

And we can get this right, if data practices get better, with joined up efforts.

Our future society, like our present one, is based on webs of trust, on our social networks on- and offline, that enable business, education, culture, and our interactions. Children must trust they will not be used by systems. We must build trustworthy systems that enable future digital integrity.

The immediate harm that comes from blind trust in AI companies is not their AI, but the hidden powers that commercial companies have to nudge public and policy maker behaviours and acceptance, towards private gain. Their ability and opportunity to influence regulation and future direction outweighs most others. But lack of transparency about their profit motives is concerning. Carefully staged public engagement is not real engagement but a fig leaf to show ‘the public say yes’.

The unwillingness by Google DeepMind, when asked at their public engagement event, to discuss their past use of NHS patient data, or the profit model plan or their terms of NHS deals with London hospitals, should be a warning that these questions need answers and accountability urgently.

As TechCrunch suggested after the event, this is all “pretty standard playbook for tech firms seeking to workaround business barriers created by regulation.” Calls for more data might mean an ever greater shift of power.

Companies that have already extracted and benefited from personal data in the public sector have already made private profit. They and their machines have already learned from it, for their future business and product development.

A transparent accountable future for all players, private and public, using public data is a necessary requirement for both the public good and private profit. It is not acceptable for departments to hide their practices, just as it is unacceptable if firms refuse algorithmic transparency.

“Rebooting antitrust for the information age will not be easy. It will entail new risks: more data sharing, for instance, could threaten privacy. But if governments don’t want a data economy dominated by a few giants, they will need to act soon.” [The Economist, May 6]

If the State creates a single data source of truth, or a giant private tech firm thinks it can side-step regulation, and they get it wrong, their practices screw up public trust. It harms public interest research, and with it our future public good.

But will they care?

If we care, then across public and private sectors, we must cherish shared values and better collaboration. Embed ethical human values into development, design and policy. Ensure transparency of where, how, who and why my personal data has gone.

We must ensure that as the future becomes “smarter”, we educate ourselves and our children to stay intelligent about how we use data and AI.

We must start today, knowing how we are used by both machines, and man.


First published on Medium for a change.

Information. Society. Services. Children in the Internet of Things.

In this post, I think out loud about what improving online safety for children in The Green Paper on Children’s Internet Safety means ahead of the General Data Protection Regulation in 2018. Children should be able to use online services without being used and abused by them. If this regulation and other UK Government policy and strategy are to be meaningful for children, I think we need to completely rethink the State approach to what data privacy means in the Internet of Things.
[listen on soundcloud]


Children in the Internet of Things

In 1979 Star Trek: The Motion Picture created a striking image of A.I. as Commander Decker merged with V’Ger and the artificial copy of Lieutenant Ilia, blending human and computer intelligence and creating an integrated, synthesised form of life.

Ten years later, Sir Tim Berners-Lee wrote his proposal and created the world wide web, designing the way for people to share and access knowledge with each other through networks of computers.

In the 90s my parents described using the Internet as spending time ‘on the computer’, and going online meant from a fixed phone point.

Today our wireless computers in our homes, pockets and school bags, have built-in added functionality to enable us to do other things with them at the same time; make toast, play a game, and make a phone call, and we live in the Internet of Things.

Although we talk about it as if it were an environment of inanimate appliances,  it would be more accurate to think of the interconnected web of information that these things capture, create and share about our interactions 24/7, as vibrant snapshots of our lives, labelled with retrievable tags, and stored within the Internet.

Data about every moment of how and when we use an appliance, is captured at a rapid rate, or measured by smart meters, and shared within a network of computers. Computers that not only capture data but create, analyse and exchange new data about the people using them and how they interact with the appliance.

In this environment, children’s lives in the Internet of Things no longer involve a conscious choice to go online. Using the Internet is no longer about going online, but being online. The web knows us. In using the web, we become part of the web.

To the computers that gather their data, our children have simply become extensions of the things they use: things about which data are gathered and sold by the companies who make and sell them. Things whose makers can even choose who uses them or not, and how. In the Internet of Things, children have become things of the Internet.

A child’s use of a smart hairbrush will become part of the company’s knowledge base of how the hairbrush is used. A child’s voice is captured and becomes part of the training database for the development of the doll or robot they play with.

Our biometrics, measurements of the unique physical parts of our identities, provides a further example of the recent offline-self physically incorporated into banking services. Over 1 million UK children’s biometrics are estimated to be used in school canteens and library services through, often compulsory, fingerprinting.

Our interactions create a blended identity of online and offline attributes.

The web has created synthesised versions of our selves.

I say synthesised not synthetic, because our online self is blended with our real self and ‘synthetic’ gives the impression of being less real. If you take my own children’s everyday life as an example,  there is no ‘real’ life that is without a digital self.  The two are inseparable. And we might have multiple versions.

Our synthesised self is not only about our interactions with appliances and what we do, but who we know and how we think based on how we take decisions.

Data is created and captured not only about how we live, but where we live. These online data can be further linked with data about our behaviours offline, generated from trillions of sensors and physical network interactions with our portable devices. Our synthesised self is tracked from real-life geolocations. In cities surrounded by sensors under pavements, in buildings, cameras, mapping and tracking everywhere we go, our behaviours are converted into data and stored inside an overarching network of cloud computers, so that our online lives take on a life of their own.

Data about us, whether uniquely identifiable on its own or not, is created and collected actively and passively. Online site visits record IP Address and use linked platform log-ins that can even extract friends lists without consent or affirmative action from them.

Using a tool like Privacy Badger from the EFF gives you some insight into how many sites create new data about online behaviour once that synthesised self logs in, and then track your synthesised self across the Internet. How you move from page to page, with what referring and exit pages and URLs, what adverts you click on or ignore, platform types, number of clicks, cookies, invisible on-page gifs and web beacons. Data that computers see, interpret and act on better than we do.
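As a purely illustrative aside, here is a minimal sketch, in Python, of the kind of request metadata a single page visit hands over before any analytics script even runs. It is not any real site’s or tracker’s code; every name in it is hypothetical.

```python
# A minimal sketch of the metadata exposed by one page visit: IP address,
# the page requested, the referring page, browser details, and any cookies
# set on earlier visits. A real tracker would store, link and resell this.
from http.server import BaseHTTPRequestHandler, HTTPServer

class VisitLogger(BaseHTTPRequestHandler):
    def do_GET(self):
        visit = {
            "ip": self.client_address[0],                  # who is visiting
            "page": self.path,                             # what they asked for
            "referer": self.headers.get("Referer"),        # the page they came from
            "user_agent": self.headers.get("User-Agent"),  # browser/device fingerprint material
            "cookies": self.headers.get("Cookie"),         # identifiers from earlier visits
        }
        print(visit)
        self.send_response(200)
        self.send_header("Content-Type", "text/html")
        self.end_headers()
        # An 'invisible gif' web beacon is just another request like this one,
        # embedded in a page or email so the logging happens unnoticed.
        self.wfile.write(b"<html><body>hello</body></html>")

if __name__ == "__main__":
    HTTPServer(("localhost", 8000), VisitLogger).serve_forever()
```

Running this locally and loading http://localhost:8000/ in a browser shows, in one line of output, much of what every third-party tracker embedded in a page also receives.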

Those synthesised identities are tracked online,  just as we move about a shopping mall offline.

Sir Tim Berners-Lee said this week there is a need to put “a fair level of data control back in the hands of people.” It is not merely a need, but vital to our future flourishing, even our very survival. Data control is not about protecting a list of information or facts about ourselves and our identity for its own sake; it is about choosing who can exert influence and control over our life, our choices, and the future of democracy.

And while today that who may be companies, it is increasingly A.I. itself that has a degree of control over our lives, as decisions are machine made.

Understanding how the Internet uses people

We get the service, the web gets our identity and our behaviours. And in what is in effect a hidden slave trade, they get access to use our synthesised selves in secret, and forever.

This grasp of what the Internet is, what the web is, is key to getting a rounded view of children’s online safety. Namely, we need to get away from the sole focus of online safeguarding as about children’s use of the web, and also look at how the web uses children.

Online services use children to:

  • mine, and exchange, repackage, and trade profile data, offline behavioural data (location, likes), and invisible Internet-use behavioural data (cookies, website analytics)
  • extend marketing influence in human decision-making earlier in life, even before children carry payment cards of their own,
  • enjoy the insights of parent-child relationships connected by an email account, sometimes a credit card, used as age verification or in online payments.

What are the risks?

Exploitation of identity and behavioural tracking not only puts our synthesised child at risk of exploitation, it puts our real-life child’s future adult identity and data integrity at risk. If we cannot know who holds the keys to our digital identity, how can we trust that systems and services will be fair to us, will not discriminate or defraud us, or will not make errors that we cannot understand in order to correct them?

Leaks, loss and hacks abound and manufacturers are slow to respond. Software that monitors children can also be used in coercive control. Organisations whose data are insecure can be held to ransom. Children’s products should do what we expect them to and nothing more; there should be “no surprises” in how data are used.

Companies tailor and target their marketing activity to those identity profiles. Our data is sold on in secret without consent to data brokers we never see, who in turn sell us on to others who monitor, track and target our synthesised selves every time we show up at their sites, in a never-ending cycle.

And from exploiting the knowledge of our synthesised self, decisions are made by companies that target their audience, select which search results or adverts to show us, or hide, on which network sites and how often, to actively nudge our behaviours quite invisibly.

Nudge misuse is one of the greatest threats to our autonomy and with it democratic control of the society we live in. “Who decides on the ‘choice architecture’ that may shape another’s decisions and actions, and on what ethical basis?” these authors once asked; they now seem to want to be the decision makers.

Thinking about Sir Tim Berners-Lee’s comments today on things that threaten the web, including how to address the loss of control over our personal data, we must frame it not as a user-led loss of control, but as autonomy taken by others: by developers, by product sellers, and by the biggest ‘nudge controllers’, the Internet giants themselves.

Loss of identity is near impossible to reclaim. Our synthesised selves are sold into unending data slavery and we seem powerless to stop it. Our autonomy and with it our self worth, seem diminished.

How can we protect children better online?

Safeguarding must include ending data slavery of our synthesised self. I think of six things needed by policy shapers to tackle it.

  1. Understanding what ‘online’ and the Internet mean and how the web works – i.e. what data does a visit to a web page collect about the user and what happens to that data?
  2. Threat models and risk must go beyond the usual real-world protection issues. The threats posed by undermining citizens’ autonomy, loss of public trust, loss of control over our identity, and misuse of nudge are underestimated, and some are intrinsic to the current web business model, unseen by site users and by government policy.
  3. On user regulation (age verification / filtering) we must confront the fact that, as a stand-alone step, it will not create a better online experience for the user, since it will not prevent the misuse of our synthesised selves and may increase risks – regulation of misuse must shift the point of responsibility
  4. Meaningful data privacy training must be mandatory for anyone in contact with children, and its role in children’s safeguarding recognised
  5. Siloed thinking must go. Forward thinking must join the dots across Departments into a cohesive, inclusive digital strategy – and that doesn’t just mean ‘let’s join all of the data, all of the time’
  6. Respect our synthesised selves. Data slavery includes government misuse and must end if we respect children’s rights.

In the words of James T. Kirk, “the human adventure is just beginning.”

When our synthesised self is an inseparable blend of offline and online identity, every child is a synthesised child. And they are people. It is vital that government realises its obligation to protect rights to privacy, provision and participation under the Convention on the Rights of the Child, and addresses our children’s real online life.

Governments, policy makers, and commercial companies must not use children’s offline safety as an excuse in a binary trade off to infringe on those digital rights or ignore risk and harm to the synthesised self in law, policy, and practice.

If future society is to thrive we must do all that is technologically possible to safeguard the best of what makes us human in this blend; our free will.


Part 2 follows with thoughts specific to the upcoming regulations, the Digital Economy Bill and the Digital Strategy.

References:

[1] Internet of things WEF film, starting from 19:30

“What do an umbrella, a shark, a houseplant, the brake pads in a mining truck and a smoke detector all have in common?  They can all be connected online, and in this example, in this WEF film, they are.

“By 2024 more than 50% of home Internet traffic will be used by appliances and devices, rather than just for communication and entertainment…The IoT raises huge questions on privacy and security, that have to be addressed by government, corporations and consumers.”

[2] The government has today announced a “major new drive on internet safety”  [The Register, Martin, A. 27.02.2017]

[3] GDPR page 38 footnote (1) indicates the definition of Information Society Services as laid out in the Directive (EU) 2015/1535 of the European Parliament and of the Council of 9 September 2015 laying down a procedure for the provision of information in the field of technical regulations and of rules on Information Society services (OJ L 241, 17.9.2015, p. 1 and Annex 1)

image source: Startrek.com

The perfect storm: three bills that will destroy student data privacy in England

Lords have voiced criticism and concern at plans for ‘free market’ universities that will prioritise competition over collaboration and private interests over social good. But while both Houses have identified the institutional effects, they are yet to discuss the effects on individuals of a bill in which “too much power is concentrated in the centre”.

The Higher Education and Research Bill sucks in personal data to the centre, as well as power. It creates an authoritarian panopticon of the people within the higher education and further education systems. Section 1, parts 72-74, creates risks but offers no safeguards.

Applicants’ and students’ personal data are being shifted into a top-down management model, at the same time as the horizontal safeguards for their distribution are to be scrapped.

Through deregulation and the building of a centralised framework, these bills will weaken the purposes for which personal data are collected, and weaken existing requirements on consent for how the data may be used at national level. Without amendments, every student who enters this system will find their personal data used at the discretion of any future Secretary of State for Education without safeguards or oversight, and forever. Goodbye privacy.

Part of the data extraction plans is for use in public interest research in safe settings, with published purpose, governance, and benefit. These plans are well intentioned, and this year’s intake of students will have had to accept that use as part of the service in the privacy policy.

But in addition and separately, the Bill will permit data to be used at the discretion of the Secretary of State, which waters down and removes nuances of consent for what data may or may not be used today when applicants sign up to UCAS.

Applicants today are told in the privacy policy they can consent separately to sharing their data with the Student Loans company for example. This Bill will remove that right when it permits all Applicant data to be used by the State.

This removal of today’s consent process denies all students their rights to decide who may use their personal data beyond the purposes for which they permit its sharing.

And it explicitly overrides the express wishes registered by the 28,000 applicants, 66% of respondents to a 2015 UCAS survey, who said as an example, that they should be asked before any data was provided to third parties for student loan applications (or even that their data should never be provided for this).

Not only can the future purposes be changed without limitation; by definition, when combined with other legislation – namely the Digital Economy Bill, in the Lords at the same time right now – this shift will pass personal data, together with DWP data and in connection with HMRC data, expressly to the Student Loans Company.

In just this one example, the Higher Education and Research Bill is being used as a man in the middle. But it will enable all data for broad purposes, and if those expand in future, we’ll never know.

This change, far from making more data available to public interest research, shifts the balance of power between state and citizen and undermines the very fabric of its source of knowledge; the creation and collection of personal data.

Further, a number of amendments have been proposed in the Lords to clause 9 (the transparency duty) which raise more detailed privacy issues for all prospective students – concerns that UCAS shares.

Why this lack of privacy by design is damaging

This shift takes away our control, and gives it to the State at the very time when ‘take back control’ is in vogue. These bills are building a foundation for a data Brexit.

If the public do not trust who will use their data and why, or are told that when they provide data they must waive any rights to its future control, they will withhold or fake data. 8% of applicants even said it would put them off applying through UCAS at all.

And without future limitation, what might be imposed is unknown.

This shortsightedness will ultimately cause damage to data integrity and the damage won’t come in education data from the Higher Education Bill alone. The Higher Education and Research Bill is just one of three bills sweeping through Parliament right now which build a cumulative anti-privacy storm together, in what is labelled overtly as data sharing legislation or is hidden in tucked away clauses.

The Technical and Further Education Bill – Part 3

In addition to entirely new Applicant datasets moving from UCAS to the DfE in clauses 73 and 74 of the  Higher Education and Research Bill,  Apprentice and FE student data already under the Secretary of State for Education will see potentially broader use under changed purposes of Part 3 of the Technical and Further Education Bill.

Unlike the Higher Education and Research Bill, it may not fundamentally change how the State gathers information on further education, but it has the potential to change how that information is used.

The change is a generalisation of purposes. Currently, subsection 1 of section 54 refers to “purposes of the exercise of any of the functions of the Secretary of State under Part 4 of the Apprenticeships, Skills, Children and Learning Act 2009”.

Therefore, the government argues, “it would not hold good in circumstances where certain further education functions were transferred from the Secretary of State to some combined authorities in England, which is due to happen in 2018.”

This is why clause 38 will amend that wording to “purposes connected with further education”.

Whatever the details of the reason, the purposes are broader.

Again, combined with the Digital Economy Bill’s open ended purposes, it means the Secretary of State could agree to pass these data on to every other government department, a range of public bodies, and some private organisations.

The TFE Bill is at Report stage in the House of Commons on January 9, 2017.

What could possibly go wrong?

These loose purposes – without future restrictions, without definitions of which third parties data could be given to or why, and without any clear need to consult the public or Parliament on future scope changes – are a repeat of similar legislative changes which have resulted in poor data practices using school pupil data in England, ages 2-19, since 2000.

Policy makers should consider whether the intent of these three bills is to give out identifiable, individual-level, confidential data of young people under 18 for commercial use, without their consent. Or to give journalists and charities access? Should it mean unfettered access by government departments and agencies, such as police and Home Office Removals Casework teams, without any transparent register of access, any oversight, or any accountability?

These are today’s uses by third-parties of school children’s individual, identifiable and sensitive data from the National Pupil Database.

Uses of data not as statistics, but named individuals for interventions in individual lives.

If the Home Secretaries past and present have put international students at the centre of plans to cut migration to the tens of thousands and government refuses to take student numbers out of migration figures, despite them being seen as irrelevant in the substance of the numbers debate, this should be deeply worrying.

Will the MOU between the DfE and the Home Office Removals Casework team be a model for access to all student data held at the Department for Education, even all areas of public administrative data?

Hoping that the data transfers to the Home Office won’t result in the deportation of thousands we would not predict today, may be naive.

Under the new open wording, the Secretary of State for Education might even  decide to sell the nation’s entire Technical and Further Education student data to Trump University for the purposes of their ‘research’ to target marketing at UK students or institutions that may be potential US post-grad applicants. The Secretary of State will have the data simply because she “may require [it] for purposes connected with further education.”

And to think US buyers or others would not be interested is too late.

In 2015 Stanford University made a request of the National Pupil Database for both academic staff and students’ data. It was rejected. We know this only from the third party release register. Without any duty to publish requests, approved users or purposes of data release, where is the oversight for use of these other datasets?

If these are not the intended purposes of these three bills, and if there should be any limitation on purposes of use and future scope change, then safeguards and oversight need to be built into the face of the bills to ensure data privacy is protected, and to avoid repeating the same mistakes again.

Hoping that the decision is always going to be, ‘they wouldn’t approve a request like that’, is not enough to protect millions of students’ privacy.

The three bills are a perfect privacy storm

As other Europeans seek to strengthen the fundamental rights of their citizens to take back control of their personal data under the GDPR coming into force in May 2018, the UK government is pre-emptively undermining ours in these three bills.

Young people, and data-dependent institutions, are asking for solutions to show what personal data is held where, used by whom, and for what purposes. That buys in the benefit message that builds trust: showing that what you said you’d do with my data is what you did with my data. [1] [2]

Reality is that in post-truth politics it seems anything goes, on both sides of the Pond. So how will we trust what our data is used for?

2015-16 advice from the cross party Science and Technology Committee suggested data privacy is unsatisfactory, “to be left unaddressed by Government and without a clear public-policy position set out“.  We hear the need for data privacy debated about use of consumer data, social media, and on using age verification. It’s necessary to secure the public trust needed for long term public benefit and for economic value derived from data to be achieved.

But the British government seems intent on shortsighted legislation which does entirely the opposite for its own use: in the Higher Education Bill, the Technical and Further Education Bill and in the Digital Economy Bill.

These bills share what Baroness Chakrabarti said of the Higher Education Bill in its Lords second reading on the 6th December, “quite an achievement for a policy to combine both unnecessary authoritarianism with dangerous degrees of deregulation.”

Unchecked these Bills create the conditions needed for catastrophic failure of public trust. They shift ever more personal data away from personal control, into the centralised control of the Secretary of State for unclear purposes and use by undefined third parties. They jeopardise the collection and integrity of public administrative data.

To future-proof the immediate integrity of student personal data collection and use, the DfE reputation, and public and professional trust in DfE political leadership, action must be taken on safeguards and oversight, and should consider:

  • Transparency register: a public record of access, purposes, and benefits to be achieved from use
  • Subject Access Requests: Providing the public ways to access copies of their own data
  • Consent procedures should be strengthened for collection and cannot say one thing, and do another
  • Ability to withdraw consent from secondary purposes should be built in by design, looking to GDPR from 2018
  • Clarification of the legislative purpose of intended current use by the Secretary of State, and its boundaries, should be clear
  • Future purpose and scope change limitations should require consultation – data collected today must not be used quite differently tomorrow without scrutiny and the ability to opt out (i.e. population-wide registries of religion, ethnicity, disability)
  • Review or sunset clause

If the legislation in these three bills pass without amendment, the potential damage to privacy will be lasting.


[1] http://www.parliament.uk/business/publications/written-questions-answers-statements/written-question/Commons/2016-07-15/42942/  Parliamentary written question 42942 on the collection of pupil nationality data in the school census starting in September 2016:   “what limitations will be placed by her Department on disclosure of such information to (a) other government departments?”

Schools Minister Nick Gibb responded on July 25th 2016:

“These new data items will provide valuable statistical information on the characteristics of these groups of children […] “The data will be collected solely for internal Departmental use for the analytical, statistical and research purposes described above. There are currently no plans to share the data with other government Departments”

[2] December 15, publication of MOU between the Home Office  Casework Removals Team and the DfE, reveals “the previous agreement “did state that DfE would provide nationality information to the Home Office”, but that this was changed “following discussions” between the two departments.” http://schoolsweek.co.uk/dfe-had-agreement-to-share-pupil-nationality-data-with-home-office/ 

The agreement was changed on 7th October 2016 to not pass nationality data over. It makes no mention of not using the data within the DfE for the same purposes.

OkCupid and Google DeepMind: Happily ever after? Purposes and ethics in datasharing

This blog post is also available as an audio file on soundcloud.


What constitutes the public interest must be set in a universally fair and transparent ethics framework if the benefits of research are to be realised – whether in social science, health, education and more. That framework will provide a strategy for getting the prerequisite success factors right, ensuring research in the public interest is not only fit for the future, but thrives. There has been a climate change in consent. We need to stop talking about barriers that prevent datasharing and start talking about the boundaries within which we can share.

What is the purpose for which I provide my personal data?

‘We use math to get you dates’, says OkCupid’s tagline.

That’s the purpose of the site. It’s the reason people log in and create a profile, enter their personal data and post it online for others who are looking for dates to see. The purpose, is to get a date.

When over 68K OkCupid users registered for the site to find dates, they didn’t sign up to have their identifiable data used and published in ‘a very large dataset’ and onwardly re-used by anyone with unregistered access. The users’ data were extracted “without the express prior consent of the user […].”

Whether the registration consent purposes are compatible with the purposes to which the researcher put the data should be a simple enough question. Are the research purposes what the person signed up to, or would they be surprised to find out their data were used like this?

Questions the “OkCupid data snatcher”, now self-confessed ‘non-academic’ researcher, thought unimportant to consider.

But it appears in the last month, he has been in good company.

Google DeepMind, and the Royal Free, big players who do know how to handle data and consent well, paid too little attention to the very same question of purposes.

The boundaries of how the users of OkCupid had chosen to reveal information and to whom, have not been respected in this project.

Nor were these boundaries respected by the Royal Free London trust that gave out patient data for use by Google DeepMind with changing explanations, without clear purposes or permission.

The legal boundaries in these recent stories appear unclear or to have been ignored. The privacy boundaries deemed irrelevant. Regulatory oversight lacking.

The respectful ethical boundaries of consent to purposes have indisputably broken down, with autonomy disregarded – whether by a commercial organisation, a public body, or a lone ‘researcher’.

Research purposes

The crux of data access decisions is purposes. What question is the research to address – what is the purpose for which the data will be used? The intent by Kirkegaard was to test:

“the relationship of cognitive ability to religious beliefs and political interest/participation…”

In this case the question appears intended rather as a test of the data, not the data opened up to answer the question. While methodological studies matter, given the care and attention [or self-stated lack thereof] given to its extraction and any attempt to be representative and fair, it would appear this is not the point of this study either.

The data doesn’t include profiles identified as heterosexual male, because ‘the scraper was’. It is also unknown how many users hide their profiles, “so the 99.7% figure [identifying as binary male or female] should be cautiously interpreted.”

“Furthermore, due to the way we sampled the data from the site, it is not even representative of the users on the site, because users who answered more questions are overrepresented.” [sic]

The paper goes on to say photos were not gathered because they would have taken up a lot of storage space and could be done in a future scraping, and

“other data were not collected because we forgot to include them in the scraper.”

The data are knowingly of poor quality, inaccurate and incomplete. The project cannot be repeated as ‘the scraping tool no longer works’. There is an unclear ethical or peer review process, and the research purpose is at best unclear. We can certainly give someone the benefit of the doubt and say intent appears to have been entirely benevolent. It’s not clear what the intent was. I think it is clearly misplaced and foolish, but not malevolent.

The trouble is, it’s not enough to say, “don’t be evil.” These actions have consequences.

When the researcher asserts in his paper that, “the lack of data sharing probably slows down the progress of science immensely because other researchers would use the data if they could,”  in part he is right.

Google and the Royal Free have tried more eloquently to say the same thing. It’s not research, it’s direct care, in effect, ignore that people are no longer our patients and we’re using historical data without re-consent. We know what we’re doing, we’re the good guys.

However the principles are the same, whether it’s a lone project or global giant. And they’re both wildly wrong as well. More people must take this on board. It’s the reason the public interest needs the Dame Fiona Caldicott review published sooner rather than later.

Just because there is a boundary to data sharing in place, does not mean it is a barrier to be ignored or overcome. Like the registration step to the OkCupid site, consent and the right to opt out of medical research in England and Wales is there for a reason.

We’re desperate to build public trust in UK research right now. So to assert that the lack of data sharing probably slows down the progress of science is misplaced, when it is getting ‘sharing’ wrong that caused the lack of trust in the first place, and that harms research.

A climate change in consent

There has been a climate change in public attitude to consent since care.data, clouded by the smoke and mirrors of state surveillance. It cannot be ignored. The EU GDPR supports it. Researchers may not like change, but there needs to be an according adjustment in expectations and practice.

Without change, there will be no change. Public trust is low. As technology advances and if we continue to see commercial companies get this wrong, we will continue to see public trust falter unless broken things get fixed. Change is possible for the better. But it has to come from companies, institutions, and people within them.

Like climate change, you may deny it if you choose to. But some things are inevitable and unavoidably true.

There is strong support for public interest research but that is not to be taken for granted. Public bodies should defend research from being sunk by commercial misappropriation if they want to future-proof public interest research.

The purposes for which people gave consent are the boundaries within which you have permission to use their data; that gives you freedom, within those limits, to use the data. Purposes and consent are not barriers to be overcome.

If research is to win back public trust, developing a future-proofed, robust ethical framework for data science must be a priority today.

Commercial companies must overcome the low levels of public trust they have generated in the public to date if they ask ‘trust us because we’re not evil‘. If you can’t rule out the use of data for other purposes, it’s not helping. If you delay independent oversight it’s not helping.

This case study, and by contrast the recent Google DeepMind episode, demonstrate the urgency with which common expectations, oversight of applied ethics in research, who gets to decide what is ‘in the public interest’, and public engagement in data science must be worked out and made a priority, in the UK and beyond.

Boundaries in the best interest of the subject and the user

Society needs research in the public interest. We need good decisions made on what will be funded and what will not be. What will influence public policy and where needs attention for change.

To do this ethically, we all need to agree what is fair use of personal data, when is it closed and when is it open, what is direct and what are secondary uses, and how advances in technology are used when they present both opportunities for benefit or risks to harm to individuals, to society and to research as a whole.

The potential benefits of research are potentially being compromised for the sake of arrogance, greed, or misjudgement, no matter intent. Those benefits cannot come at any cost, or disregard public concern, or the price will be trust in all research itself.

In discussing this with social science and medical researchers, I realise not everyone agrees. For some, using deidentified data in trusted third-party settings poses such a low privacy risk that they feel the public should have no say in whether their data are used in research, as long as it’s ‘in the public interest’.

For the DeepMind researchers and Royal Free, they were confident even using identifiable data, this is the “right” thing to do, without consent.

For the Cabinet Office datasharing consultation, the parts that will open up national registries, share identifiable data more widely and with commercial companies, they are convinced it is all the “right” thing to do, without consent.

How can researchers, society and government understand what is good ethics of data science, as technology permits ever more invasive or covert data mining and the current approach is desperately outdated?

Who decides where those boundaries lie?

“It’s research Jim, but not as we know it.” This is one aspect of data use that ethical reviewers will need to deal with, as we advance the debate on data science in the UK – whether for independents or commercial organisations. Google said their work was not research. Is ‘OkCupid’ research?

If this research and data publication proves anything at all, and can offer lessons to learn from, it is perhaps these three things:

Who is accredited as a researcher or ‘prescribed person’ matters – especially if we are considering new datasharing legislation and, for example, to whom the UK government is granting access to millions of children’s personal data today. Your idea of a ‘prescribed person’ may not be the same as the rest of the public’s.

Researchers and ethics committees need to adjust to the climate change of public consent. Purposes must be respected in research particularly when sharing sensitive, identifiable data, and there should be no assumptions made that differ from the original purposes when users give consent.

Data ethics and laws are desperately behind data science technology. Governments, institutions, civil society, and society as a whole need to reach a common vision, and leadership, on how to manage these challenges. Who defines these boundaries that matter?

How do we move forward towards better use of data?

Our data and technology are taking on a life of their own, in space which is another frontier, and in time, as data gathered in the past might be used for quite different purposes today.

The public are being left behind in the game-changing decisions made by those who deem they know best about the world we want to live in. We need a say in what shape society wants that to take, particularly for our children as it is their future we are deciding now.

How about an ethical framework for datasharing that supports a transparent public interest, which tries to build a little kinder, less discriminating, more just world, where hope is stronger than fear?

Working with people, with consent, with public support and transparent oversight shouldn’t be too much to ask. Perhaps it is naive, but I believe that with an independent ethical driver behind good decision-making, we could get closer to datasharing like that.

That would bring Better use of data in government.

Purposes and consent are not barriers to be overcome. Within these, shaped by a strong ethical framework, good data sharing practices can tackle some of the real challenges that hinder ‘good use of data’: training, understanding data protection law, communications, accountability and intra-organisational trust. More data sharing alone won’t fix these structural weaknesses in current UK datasharing which are our really tough barriers to good practice.

How our public data will be used in the public interest is not a destination with a well-defined happy ending, but a long-term process which needs to be consensual, and there needs to be a clear path for setting out together and achieving collaborative solutions.

While we are all different, I believe that society shares for the most part, commonalities in what we accept as good, and fair, and what we believe is important. The family sitting next to me have just counted out their money and bought an ice cream to share, and the staff gave them two. The little girl is beaming. It seems that even when things are difficult, there is always hope things can be better. And there is always love.

Even if some might give it a bad name.

********

img credit: flickr/sofi01/ Beauty and The Beast  under creative commons

Can new datasharing laws win social legitimacy, public trust and support without public engagement?

I’ve been struck by stories I’ve heard on the datasharing consultation, on data science, and on data infrastructures as part of ‘government as a platform’ (#GaaPFuture) in recent weeks. The audio recorded by the Royal Statistical Society on March 17th is excellent, and there were some good questions asked.

There were even questions from insurance backed panels to open up more data for commercial users, and calls for journalists to be seen as accredited researchers, as well as to include health data sharing. Three things that some stakeholders, all users of data, feel are  missing from consultation, and possibly some of those with the most widespread public concern and lowest levels of public trust. [1]

What I feel is missing in consultation discussions are:

  1. a representative range of independent public voice
  2. a compelling story of needs – why tailored public services benefits citizens from whom data is taken, not only benefits data users
  3. the impacts we expect to see in local government
  4. any cost/risk/benefit assessment of those impacts, or for citizens
  5. how the changes will be independently evaluated – as some are to be reviewed

The Royal Statistical Society and ODI have good summaries here of their thoughts, more geared towards the statistical and research aspects of data,  infrastructure and the consultation.

I focus on the other strands that use identifiable data for targeted interventions. Tailored public services, Debt, Fraud, Energy Companies’ use. I think we talk too little of people, and real needs.

Why the State wants more datasharing is not yet a compelling story and public need and benefit seem weak.

So far the creation of new data intermediaries, giving copies of our personal data to other public bodies  – and let’s be clear that this often means through commercial representatives like G4S, Atos, Management consultancies and more –  is yet to convince me of true public needs for the people, versus wants from parts of the State.

What the consultation hopes to achieve, is new powers of law, to give increased data sharing increased legal authority. However this alone will not bring about the social legitimacy of datasharing that the consultation appears to seek through ‘open policy making’.

Legitimacy is badly needed if there is to be public and professional support for change and increased use of our personal data as held by the State, which is missing today,  as care.data starkly exposed. [2]

The gap between Social Legitimacy and the Law

Almost 8 months ago now, before I knew about the datasharing consultation work-in-progress, I suggested to BIS that there was an opportunity for the UK to drive excellence in public involvement in the use of public data by getting real engagement, through pro-active consent.

The carrot for this, is achieving the goal that government wants – greater legal clarity, the use of a significant number of consented people’s personal data for complex range of secondary uses as a secondary benefit.

It was ignored.

If some feel entitled to the right to infringe on citizens’ privacy through a new legal gateway because they believe the public benefit outweighs private rights, then they must also take on the increased balance of risk of doing so, and a responsibility to  do so safely. It is in principle a slippery slope. Any new safeguards and ethics for how this will be done are however unclear in those data strands which are for targeted individual interventions. Especially if predictive.

Upcoming discussions on codes of practice [which have still to be shared] should demonstrate how this is to happen in practice, but codes are not sufficient. Laws which enable will be pushed to their borderline of legal and beyond that of ethical.

In England, who would have thought that the 2013 changes that permitted individual children’s data to be given to third parties [3] for educational purposes would mean giving highly sensitive, identifiable data to journalists, without pupils’ or parents’ consent? The wording allows it. It is legal. However it fails the Data Protection Act’s legal requirement of fair processing. Above all, it lacks social legitimacy and common sense.

In Scotland, there is current anger over the intrusive ‘named person’ laws which lack both professional and public support and intrude on privacy. Concerns raised should be lessons to learn from in England.

Common sense says laws must take into account social legitimacy.

We have been told at the open policy meetings that this change will not remove the need for informed consent. To be informed, means creating the opportunity for proper communications, and also knowing how you can use the service without coercion, i.e. not having to consent to secondary data uses in order to get the service, and knowing to withdraw consent at any later date. How will that be offered with ways of achieving the removal of data after sharing?

The stick for change is the legal duty of fair processing that the recent 2015 CJEU ruling reiterated [4]. Not just a nice-to-have, but State bodies’ responsibility to inform citizens when their personal data are used for purposes other than those for which those data had initially been consented and given. New legislation will not remove this legal duty.

How will it be achieved without public engagement?

Engagement is not PR

Failure to act on what you hear from listening to the public is costly.

Engagement is not done *to* people; don’t think ‘explain why we need the data and its public benefit’ will work. Policy makers must engage with fears and not seek to dismiss or diminish them, but acknowledge and mitigate them by designing technically acceptable solutions. Solutions that enable data sharing in a strong framework of privacy and ethics, not ones that see these concepts as barriers. Solutions that have social legitimacy because people support them.

Mr Hunt’s promised February 2014 opt out of anonymised data being used in health research, has yet to be put in place and has had immeasurable costs for delayed public research, and public trust.

How long before people consider suing the DH as data controller for misuse? From where does the arrogance stem that decides to ignore legal rights, moral rights and public opinion of more people than those who voted for the Minister responsible for its delay?


This attitude is what fails care.data, and the harm to public trust, and to confidence in researchers’ continued access to data, is ongoing.

The same failure was pointed out by the public members of the tiny Genomics England public engagement meeting two years ago in March 2014, called to respond to concerns over the lack of engagement and potential harm for existing research. The comms lead made a suggestion that the new model of the commercialisation of the human genome in England, to be embedded in the NHS by 2017 as standard clinical practice, was like steam trains in Victorian England opening up the country to new commercial markets. The analogy was felt by the lay attendees to be, and I quote, ‘ridiculous.’

Exploiting confidential personal data for public good must have support and good two-way engagement if it is to get that support, and what is said and agreed must be acted on to be trustworthy.

Policy makers must take into account broad public opinion, and that is unlikely to be submitted to a Parliamentary consultation. (Personally, I first knew such  processes existed only when care.data was brought before the Select Committee in 2014.) We already know what many in the public think about sharing their confidential data from the work with care.data and objections to third party access, to lack of consent. Just because some policy makers don’t like what was said, doesn’t make that public opinion any less valid.

We must bring to the table the public voice from past but recent public engagement work on administrative datasharing [5], the voice of the non-research community, and the voice of those who are not the stakeholders who will use the data but the ‘data subjects’: the public whose data are to be used.

Policy Making must be built on Public Trust

Open policy making is not open just because it says it is. Who has been invited, participated, and how their views actually make a difference on content and implementation is what matters.

Adding controversial ideas at the last minute is terrible engagement; it makes the process less trustworthy and diminishes its legitimacy.

This last minute change suggests some datasharing will be dictated despite critical views in the policy making and without any public engagement. If so, we should ask policy makers on what mandate?

Democracy depends on social legitimacy. Once you lose public trust, it is not easy to restore.

Can new datasharing laws win social legitimacy, public trust and support without public engagement?

In my next post I’ll look at some of the public engagement work done on datasharing to date, and think about ethics in how data are applied.

*************

References:

[1] The Royal Statistical Society data trust deficit

[2] “The social licence for research: why care.data ran into trouble,” by Carter et al.

[3] FAQs: Campaign for safe and ethical National Pupil Data

[4] CJEU Bara 2015 Ruling – fair processing between public bodies

[5] Public Dialogues using Administrative data (ESRC / ADRN)

img credit: flickr.com/photos/internetarchivebookimages/

Destination smart-cities: design, desire and democracy (Part four)

Who is using all this Big Data? What decisions are being made on the back of it that we never see?

In everyday life and in the press, it often seems that the general public does not understand data, and can easily be told things which we misinterpret.

There are tools in social media influencing public discussions and leading conversations in a different direction from the one they had taken, and they operate without regulation.

It is perhaps meaningful that pro-reform Wellington School last week opted out of one of the greatest uses of Big Data sharing in the UK: league tables. Citing their failures. Deciding they were “in fact, a key driver for poor educational practice.”

Most often we cannot tell from the data provided what we are told those Big Data should be telling us. And we can’t tell if the data are accurate, genuine and reliable.

Yet big companies are making big money selling the dream that Big Data is the key to decision making. Cumulatively through lack of skills to spot inaccuracy, and inability to do necessary interpretation, we’re being misled by what we find in Big Data.

Being misled is devastating for public trust, as the botched beginnings of care.data found in 2014. Trust has come to be understood as vital for a future based on datasharing. Public involvement in how we are used in Big Data in the future needs to include how our data are used, in order to trust they are used well. And interpreting those data well is vital. Those lessons of the past and present must be learned, and not forgotten.

It’s time to invest some time in thinking about safeguarding trust in the future, in the unknown, and the unseen.

We need to be told which private companies like Cinven and FFT have copies of datasets like HES, the entire 62m national hospital records, or the NPD, our entire schools database population of 20 million, or even just its current cohort of 8+ million.

If the public is to trust the government and public bodies to use our data well, we need to know exactly how those data are used today and all these future plans that others have for our personal data.

When we talk about public bodies sharing data they hold for administrative purposes, do we know which private companies this may mean in reality?

The UK government has big plans for big data sharing, sharing across all public bodies, some tailored for individual interventions.

While there are interesting opportunities for public benefit from at-scale systems, the public benefit is at risk not only from lack of trust in how systems gather data and use them, but that interoperability gets lost in market competition.

Openness and transparency can be absent in public-private partnerships until things go wrong. Given the scale of smart-cities, we must have more than hope that data management and security will not be one of those things.

But how will we know if new plans are designed well, or not?

Who exactly holds and manages those data and where is the oversight of how they are being used?

Using Big Data to be predictive and personal

How do we define “best use of data” in “public services” right across the board, in a world in which the boundaries between private and public in the provision of services have become increasingly blurred?

UK researchers and police are already analysing big data for predictive factors at postcode level for those at risk or harm, for example in combining health and education data.

What has grown across the Atlantic is now spreading here. When I lived there I could already see some of what is deeply flawed.

When your system has been as racist in its policing and equity of punishment as institutionally systemic as it is in the US, years of cumulative data bias translates into ‘heat lists’ and means “communities of color will be systematically penalized by any risk assessment tool that uses criminal history as a legitimate criterion.”

How can we ensure British policing does not pursue flawed predictive policies and methodologies, without seeing them?

What transparency is there in our use of predictive prisons and justice data?

What oversight will the planned new increase in use of satellite tags, and biometrics access in prisons have?

What policies can we have in place to hold data-driven decision-making processes accountable?

What tools do we need to seek redress for decisions made using flawed algorithms that are apparently indisputable?

Is government truly committed to being open and talking about how far the nudge unit work is incorporated into any government predictive data use? If not, why not?

There is a need for a broad debate on the direction of big data and predictive technology, and on whether the public understands and wants it. If we don’t understand, it’s time someone explained it.

If I can’t opt out of O2 picking up my travel data ad infinitum on the Tube, I will opt out of their business model and try to find a less invasive provider. If I can’t opt out of EE picking up my personal data as I move around Hyde park, it won’t be them.

Most people just want to be left alone and their space is personal.

A public consultation on smart-technology, and its growth into public space and effect on privacy could be insightful.

Feed me Seymour?

With the encroachment of integrated smart technology over our cities – our roads, our parking, our shopping, our parks, our classrooms, our TV and our entertainment, even our children’s toys – surveillance and sharing information from systems we cannot see  start defining what others may view, or decide about us, behind the scenes in everything we do.

As it expands city wide, it will be watched closely if data are to be open for public benefit, but not invade privacy if “The data stored in this infrastructure won’t be confidential.”

If the destination of digital in all parts of our lives is smart-cities then we have to collectively decide, what do we want, what do we design, and how do we keep it democratic?

What price is our freedom to decide how far its growth should reach into public space and private lives?

The cost of smart cities to individuals and the public is not what it costs in investment made by private conglomerates.

Already the cost of smart technology is privacy inside our homes, our finances, and autonomy of decision making.

Facebook and social media may run algorithms we never see that influence our mood or decision making. Influencing that decision making is significant enough when it’s done through advertising encouraging us to decide which sausages to buy for our kids’ tea.

It is even more significant when you’re talking about influencing voting.

Whoever influences the most voters wins the election. If we can’t see the technology behind the influence, have we also lost sight of how democracy is decided? The power behind the mechanics of the cogs of Whitehall may weaken inexplicably as computer-driven decision making from the tech companies’ hidden tools takes hold.

What opportunity and risk to “every part of government” does ever expanding digital bring?

The design and development of smart technology that makes decisions for us and about us lies in the hands of large private corporations, not government.

This means the public-interest values that could be built in by design, and their protection and oversight, are currently outside our control.

There is no disincentive for companies that have taken private information that is none of their business and, quite literally, made it their business, to stop collecting ever more data about us. It is outside our control.

We must plan by design for the values we hope for, and for ethics, to be embedded in systems, in policies, and in public planning and oversight of service provision by all providers. And we must ensure a fair framework of values is used when giving permission to private providers who operate in public spaces.

We must plan for transparency and interoperability.

We must plan by design for the safe use of data that does not choke creativity and innovation, but both protects and champions privacy as a fundamental building block of trust for these new relationships between providers of private and public services, private and public things, in private and public space.

If “digital is changing how we deliver every part of government,” and we want to “harness the best of digital and technology, and the best use of data to improve public services right across the board” then we must see integration in the planning of policy and its application.

Across the board “the best use of data” must truly value privacy, and enable us to keep our autonomy as individuals.

Without this, the cost of smart cities growing unchecked will be an ever-growing transfer of power to the funders behind corporations and campaign politics.

The ultimate price of this loss of privacy, will be democracy itself.

****

This is the conclusion to a four-part set of thoughts: on smart technology and data from the Sprint16 session (part one). I thought about this in more depth in “Smart systems and Public Services” here (part two), and in the design and development of smart technology making “The Best Use of Data” here, looking at today in a UK company case study (part three), and this, part four, “The Best Use of Data” used in predictions and the future.

Destination smart-cities: design, desire and democracy (Part three)

Smart Technology we have now: A UK Case Study

In places today where climate surveillance sensors are used to predict and decide on which smog days cars should be banned from cities, automatic number-plate recognition (ANPR) can identify cars driven on the wrong days and issue automatic penalties.

Similarly, ANPR technology is used in our UK tunnels and congestion charging systems. One British company encouraging the installation of ANPR in India is the same provider of a most significant part of our British public administrative data and surveillance software in a range of sectors.

About itself, the company says:

“Northgate Public Services has a unique experience of delivering ANPR software to all Home Office police forces. We developed and managed the NADC, the mission critical solution providing continuous surveillance of the UK’s road network.  The NADC is integrated with other databases, including the Police National Computer, and supports more than 30 million reads a day across the country.”

30 million snapshots from ‘continuous surveillance of the UK’s road network’. That surprised me. That’s over half the population of England, not all of whom drive. 30 million, every day. It’s massive, unreasonable, and risks backlash.
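As a rough sense of scale, a back-of-envelope sketch in Python, using approximate figures that are my own assumptions rather than official statistics:

# Back-of-envelope arithmetic; the population figure is a rough assumption, not official data.
anpr_reads_per_day = 30_000_000      # Northgate's own figure
england_population = 55_000_000      # approximate mid-2010s estimate
print(anpr_reads_per_day / england_population)  # roughly 0.55 reads per resident, per day
print(anpr_reads_per_day * 365)                 # roughly 11 billion reads a year

However the reads are distributed across vehicles, that is an extraordinary volume of movement data for one supplier to hold.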

Northgate Public Services’ clients also include 80% of UK water companies, as well as many other energy and utility suppliers.

And in the social housing market they stretch to debt collection, or ‘income management’.

So who, I wondered, is this company that owns all this data-driven access to our homes, our roads, our utilities, life insurance, hospital records and registries, half of all UK calls to emergency services?

Northgate Information Solutions announced the sale of its Public Services division in December 2014 to the private equity firm Cinven. Cinven also owns a 62% shareholding in the UK private healthcare provider Spire, with all sorts of influence given its active share of services and markets.

Not only does this private equity firm hold this vast range of data systems across a wide range of sectors, but it’s making decisions about how our public policies and money are being driven.

Using health screening data, they’re even making decisions that affect our future, our behaviour and our private lives: “software provides the information and tools that housing officers need to proactively support residents, such as sending emails, letters or rent reminders by SMS and freeing up time for face-to-face support.”

Of their ANPR systems, Northgate says the data should be even more widely used “to turn CONNECT: ANPR into a critical source of intelligence for proactive policing.”

If the company were to start to ‘proactively’ use all the data it owns across these sectors, we should be asking: is ‘smart’ sensible and safe?

Where is the boundary between proactive and predictive? Or public and private?

Where do companies draw the line between public and personal space?

The public services provided by the company seem to encroach into our private lives in many ways. In Northgate’s own words, “It’s also deeply personal.”

Who’s driving decision making is clear. The source of their decision making is data. And it’s data about us.

Already today, whether data are collected proactively by companies, as with ANPR, or through managing data we give them with consent for a direct administrative purpose, private companies are the guardians of massive amounts of our personal and public data.

What is shocking to me is how data collected in one area of public services are also used for entirely different secondary purposes, without informed consent or even an FYI, for example in schools.

If we don’t know which companies manage our data, how can we trust that it is looked after well and that we are told if things go wrong?

Steps must be taken in administrative personal data security, transparency and public engagement to shore up public trust, as the foundation for future data sharing and part of the critical infrastructure of any future strategy, whether for public or commercial application. Strategy must include more than the minimum transparency about the processing of our data, and more public involvement, if ‘digital citizenship’ is to be meaningful.

How would our understanding of data improve if anyone using personal data were required to publish clear public statements about their collection, use and analysis of data? If the principles of data protection were actually upheld, in particular that individuals should be informed? How would our understanding improve, especially as regards automated decision making and monitoring technology? Not ninety-page privacy policies. Plain English. If you need ninety pages, you’re doing too much with my data.

Independent privacy impact assessments should be mandatory, and published before data are collected and shared with any party other than the one to which they were given for a specific purpose. Extensions broadening that purpose should require consultation and consent. If that place is a street, then make it public, in plain sight.

Above all, planning committees in local government, in policy making and practical application, need to think of data, and its ethical implications, in every public decision they make. We need more robust decision-making in the face of corporate data grabs, to keep data collected in public space safe, and to keep some of it private.

How much less fun is a summer’s picnic spent smooching, if you feel watched? How much more anxious will we make our children if they’re never allowed to have their own time to themselves, and every word they type on a school computer is monitored?

How much individual creativity and innovation does that stifle? We are effectively censoring children before they have written a word.

Large corporations have played historically significant and often shadowy roles in surveillance that retrospectively were seen as unethical.

We should consider, sooner rather than later, whether corporations such as BAE Systems, Siemens and the IMSs of the world act in ways worthy of our trust, given such massive reach into our lives with little transparency and oversight.

“Big data is big opportunity but Government should tackle misuse”

The Select Committee warned in its recent report on Big Data that distrust arising from concerns about privacy and security is often well-founded and must be resolved by industry and Government.

If ‘digital’ means smart technology is used in “every part of government” in the future, as announced at #Sprint16, what will be the effects of the involvement and influence of these massive corporations on democracy itself?

******

I thought about this in more depth in part one here, and in “Smart systems and Public Services” here (part two), and continue after this by looking at “The Best Use of Data” used in predictions and the future (part four).

Destination smart-cities: design, desire and democracy (Part two)

Smart cities: private reach in public space and personal lives

Smart-cities are growing in the UK through private investment and encroachment on public space. They are being built by design at home, and supported by UK money abroad, with enormous expansion plans in India, for example, in almost 100 cities.

With this rapid expansion of “smart” technology not only within our living rooms but across our living space, and indeed all areas of life, how do we ensure equitable service delivery (which citizens generally want, as demonstrated by the strength of feeling on the NHS) continues in public ownership, when the boundary in current policy between public and private corporate ownership is ever more blurred?

How can we know, and plan by design, that the values we hope for are good values, and that they will be embedded in systems, in policies and planning? Values that most people really care about. How do we ensure “smart” does not ultimately mean less good? That “smart” does not, in the end, mean less human?

Economic benefits seem to be the key driver in current government thinking around technology – more efficient = costs less.

While technology progressing towards replacing repetitive work may be positive, how will we accommodate those whose skills will no longer be needed? In particular its gendered aspect, and the more vulnerable in the workforce, since it is women and other minorities who work disproportionately in our part-time, low-skill jobs. Even jobs we think of as intrinsically human, such as carers, jobs mainly held by women, are being trialled for outsourcing or assistance by technology. These robots monitor people in their own homes, and reduce staffing levels and care home occupancy. We’ll no doubt hear how good it is that we need fewer carers because, after all, we have a shortage of care staff. We’ll find out whether it is positive for the cared-for, or whether they find it less ‘human’[e]. How will we measure those costs?

The ideal future of us all therefore having more leisure time sounds fab, but if we can’t afford it, we won’t be spending more of our time employed in leisure. Some think we’ll simply be unemployed. And more people live in the slums of Calcutta than in Soho.

One of the greatest benefits of technology is how more connected the world can be, but will it also be more equitable?

There are benefits in remote sensors monitoring changes in the atmosphere that dictate when cars should be taken off the roads on smog days, or that indicate when asthma risk factors are high.

Crowdsourcing information about things which are broken, like fix-my-street, or lifts that are out of order, is invaluable in cities for wheelchair users.

Innovative thinking, and building through technology, can create tools which solve simple problems and add value for the person using them.

But what of the people that cannot afford data, cannot be included in the skilled workforce, or will not navigate apps on a phone?

How this disincentivises the person using the technology affects not only their disappointment with the tool, but service delivery too, and potentially wider still, even societal exclusion or stigma. These were the findings of the e-red book in Glasgow, explained at the Digital event in health held at the King’s Fund in summer 2015.

Further along the scale of systems and potential for negative user experience, how do we expect citizens to react to finding punishments handed out by unseen monitoring systems, to finding out our behaviour was ‘nudged’, or to finding decisions taken about us, without us?

And what oversight and system of redress is there for people using these systems, or whose data are inaccurate in a system and cause injustice?

And wider still, while we encourage big money spent on big data in our part of the world, how is it contributing to solving problems for the millions for whom it will never matter? Digital and social media make our one connected world increasingly transparent, with even less excuse for closing our eyes.

Approximately 15 million girls worldwide are married each year – that’s one girl, aged under 18, married off against her will every two seconds. [Huff Post, 2015]

Tinder-type apps are luxury optional extras for many in the world.

Without embedding values and oversight into some of what we do through digital tools implemented by private corporations for profit, ‘smart’ could mean less fair, less inclusive, less kind. Less global.

If digital becomes a destination, and how much of it is implemented is seen as a measure of success, then measuring how “smart” we become risks losing sight of technology as solutions and steps towards solving real problems for real people.

We need to be both clever and sensible, in our ‘smart’.

Are public oversight and regulation built in, to make ‘smart’ also safe?

If there were a public consultation on how a “smart” society should look, would we all agree on whether and how we want it?

Thinking globally, we need to ask whether we are prioritising the wrong problems. Are we creating more tech for problems we have already invented solutions for, in places where governments are willing to spend on them? And will it, in those places, make society more connected across class and improve it for all, or enhance the lives of the ‘haves’ by giving them more, while the ‘have-nots’ are excluded?

Does it matter how smart your TV, your carer, or your car gets, if you cannot afford any of these convenient add-ons to Life v1.1?

As we are ever more connected, we are a global society, and being ‘smart’ in one area may be reckless if it comes at the expense, or in ignorance, of another.

People need to understand what “Smart” means

“Consistent with the wider global discourse on ‘smart’ cities, in India urban problems are constructed in specific ways to facilitate the adoption of “smart hi-tech solutions”. ‘Smart’ is thus likely to mean technocratic and centralized, undergirded by alliances between the Indian government and hi-technology corporations.”  [Saurabh Arora, Senior Lecturer in Technology and Innovation for Development at SPRU]

Those investing in both countries are often the same large corporations. Very often, venture capitalists.

Systems designed and owned by private companies provide the information technology infrastructure that is:

‘the basis for providing essential services to residents. There are many technological platforms involved, including but not limited to automated sensor networks and data centres.’

What happens when the commercial and public interest conflict and who decides that they do?

Decision making, Mining and Value

Massive amounts of generated data are being mined to make predictions, take decisions and influence public policy: in effect, using Big Data for research purposes.

Using population-wide datasets for social and economic research today is done in safe settings, using de-identified data, in the public interest, with independent analysis of the risks and benefits of projects as part of the data access process.

Each project goes before an ethics committee to assess its privacy considerations, and not only whether the project can be done, but whether it should be done, before it comes for central review.

Similarly, our smart cities need ethics committee review, assessing the privacy impact and potential of projects before smart technology is commissioned or approved. Not only assessing whether they are feasible, and that we ‘can’ do it, but whether we ‘should’ do it. Not only assessing the use of the data generated by the projects, but the ethical and privacy implications of the technology implementation itself.

The Committee’s recent recommendations on Big Data proposed that a ‘Council of Data Ethics’ should be created to address these consent and trust issues head on. But how?

Unseen smart-technology continues to grow unchecked often taking root in the cracks between public-private partnerships.

We keep hearing about Big Data improving public services, but that “public” data is often held by private companies. In fact, our personal data for public administration has been widely outsourced to private companies over which we have little oversight.

We’re told we paid the price in terms of skills and are catching up.

But if we simply roll forward in first gear into the connected city that sees all, we may find we arrive at a destination that was neither designed nor desired by the majority.

We may find that the “revolution, not evolution” hoped for in digital services will be of the unwanted kind, if companies keep pushing for more and more data without the individual’s consent and our collective public buy-in to decisions made about data use.

Having written all this, I’ve now read the Royal Statistical Society’s publication, which eloquently summarises their recent work and thinking. But I wonder how we tie all this into practical application.

How we do governance and regulation is tied tightly into the practicality of public-private relationships, but also into deciding what society should look like. That is ultimately what our collective and policy decisions about what smart cities should be, and may do, are defining.

I don’t think we are yet addressing in depth the complexity of regulation and governance that will be sufficient to make Big Data and public spaces safe, because companies say too much regulation risks choking off innovation and creativity.

But that risk need not be realised if it is managed well.

Rather, we must see action to manage the application of smart technology thoughtfully, and quickly, because if we do not, very soon we’ll have lost any say in how our service providers deliver.

*******

I began my thoughts about this in part one, on smart technology and data from the Sprint16 session, and after this (part two) continue by looking at the design and development of smart technology making “The Best Use of Data” with a UK company case study (part three), and “The Best Use of Data” used in predictions and the future (part four).