Category Archives: Tech

Gotta know it all? Pokémon GO, privacy and behavioural research

I caught my first Pokémon and I liked it. Well, OK, someone else handed me a phone and insisted I have a go. Turns out my curve ball is pretty good. Pokémon GO is enabling all sorts of new discoveries.

Discoveries reportedly including a dead man, a robbery, picking up new friends, and scrapes and bruises. While players are out hunting animated creatures in augmented reality, enjoying the novelty, and discovering fun facts about their vicinity, Pokémon GO is gathering a lot of data. It’s influencing human activity in ways that other games can only envy, taking in-game interaction to a whole new level.

And it’s popular.

But what is it learning about us as we do it?

This week questions have been asked about the depth of interaction that the app gets by accessing users’ login credentials.

What I would like to know is what access goes in the other direction?

Google, heavily invested in AI and machine intelligence research, has “learning systems placed at the core of interactive services in a fast changing and sometimes adversarial environment, [where] combinations of techniques including deep learning and statistical models need to be combined with ideas from control and game theory.”

The app, which is free to download, has raised concerns over suggestions that it could access a user’s entire Google account, including email and passwords. Then it seemed it couldn’t. But Niantic is reported to have made changes to permissions to limit access to basic profile information anyway.

If Niantic gets access to data owned by Google through its use of Google login credentials, does Niantic’s investor, Google’s parent Alphabet, get the reverse: user data from the Google login interaction with the app? And if so, what does Google learn through the interaction?

Who gets access to what data and why?

Brian Crecente writes that Apple, Google and Niantic are likely making more on Pokémon GO than Nintendo, with 30 percent of revenue from in-app purchases on their online stores.

The next stop is making money from marketing deals between Niantic and the offline stores used as in-game focal points, gyms and more, according to Bryan Menegus at Gizmodo, who reported that Redditors had discovered decompiled code in the Android and iOS versions of Pokémon GO earlier this week “that indicated a potential sponsorship deal with global burger chain McDonald’s.”

The logical progression of this is that the offline store partners, i.e. McDonald’s and friends, will be making money from players: the people who get led to their shops, restaurants and cafes, where players will hang out longer than at a Pokéstop, because human interaction with other humans, the battles between your collected creatures, and teamwork are at the heart of the game. Since you can’t visit gyms until you are level 5 and have chosen a team, players are building up profiles over time and getting social in real life. That location data may build up patterns about the players.

This evening the two players that I spoke to were already real-life friends on their way home from work (that now takes at least an hour longer every evening) and they’re finding the real-life location facts quite fun, including that thing they pass on the bus every day, and umm, the Scientology centre. Well, more about that later**.

Every player I spotted looking at the phone with that finger flick action gave themselves away with shared wry smiles. All thirty-something men. There is possibly something of a legacy effect in this, they said, since the original Pokémon game, released 20 years ago, is drawing players who were tweens then.

Since the app is online and open to all, children can play too. What this might mean for them in the offline world is something the NSPCC picked up on here before the UK launch. Its focus of concern is the physical safety of young players, citing the risk of misuse of in-game lures. I am not sure how much of an increased risk this is compared with existing scenarios, and whether children will be increasingly unsupervised or not. It’s not a totally new concept. Players of all ages must be mindful of where they are playing**. People getting together in the small hours of the night has generated some stories which, for now, are mostly fun. (Go Red Team.) Others are worried about hacking. And it raises all sorts of questions if private and public space has become a Pokéstop.

While the NSPCC includes considerations on the approach to privacy in a recent more general review of apps, it hasn’t yet mentioned the less obvious considerations of privacy and ethics in Pokémon GO: encouraging anyone, but particularly children, out of their homes or protected environments and into commercial settings with the explicit aim of targeting their spending. This is big business.

Privacy in Pokémon GO

I think we are yet to see a really transparent discussion of the broader privacy implications of the game because the combination of multiple privacy policies involved is less than transparent. They are long, they seem complete, but are they meaningful?

We can’t see how they interact.

Google has crowd-sourced the collection of real-time traffic data via mobile phones. Geolocation data from Google Maps using GPS, as well as network provider data, seem necessary to display the street data to players. Apparently you can download and use the maps offline, since Pokémon GO uses the Google Maps API. Google goes to “great lengths to make sure that imagery is useful, and reflects the world our users explore.” In building a Google virtual-reality copy of the real world, how data about all of us who live in it are also collected, and how they will be used, is a little woolly to the public.

U.S. Senator Al Franken is apparently already asking Niantic these questions. He points out that Pokémon GO has indicated it shares de-identified and aggregate data with other third parties for a multitude of purposes but does not describe the purposes for which Pokémon GO would share or sell those data [c].

It’s widely recognised that anonymisation in many cases fails, so passing only anonymised data may be reassuring on paper but fail in reality. Stripping out what data protection law considers individual personal identifiers can still leave individuals with unique characteristics, or leave people profiled as groups.
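To make the point concrete, here is a minimal, hypothetical sketch in Python (the records, attributes and values are invented for illustration, not drawn from any real dataset) showing how rows stripped of names can still single people out once a few ordinary attributes are combined:

from collections import Counter

# Hypothetical "anonymised" records: no names, no addresses,
# just a few attributes commonly left in released datasets.
records = [
    {"age_band": "30-34", "postcode_area": "SW1", "team": "Red"},
    {"age_band": "30-34", "postcode_area": "SW1", "team": "Blue"},
    {"age_band": "30-34", "postcode_area": "SW1", "team": "Red"},
    {"age_band": "60-64", "postcode_area": "SW1", "team": "Yellow"},
]

# Count how many people share each combination of quasi-identifiers.
combo_counts = Counter(
    (r["age_band"], r["postcode_area"], r["team"]) for r in records
)

# Anyone whose combination appears only once is effectively re-identifiable
# by someone who already knows those few facts about them.
unique_people = [combo for combo, n in combo_counts.items() if n == 1]
print(f"{len(unique_people)} of {len(records)} records are unique on just three attributes")

Anyone who knows a player is in their early sixties, lives around SW1 and plays on Team Yellow can pick that ‘anonymous’ row out immediately.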

Opt-out, he feels, is inadequate as a consent model for the personal and geolocation data that the app is collecting and passing to others in the U.S.

While the app provider would, I’m sure, argue that the UK privacy model respects the European opt-in requirement, I would be surprised if many have read it. Privacy policies fail.

Poor practices must be challenged if we are to preserve the integrity of controlling the use of our data and knowledge about ourselves. Being aware of to whom we have ceded control of marketing to us, or of influencing how we interact with our environment, is at least a step towards not blindly giving up control of free choice.

The Pokémon GO permissions “for the purpose of performing services on our behalf“, “third party service providers to work with us to administer and provide the Services” and “also use location information to improve and personalize our Services for you (or your authorized child)” are so broad that they could mean almost anything. They can also be changed without any notice period. That makes them pretty meaningless. But it’s the third parties’ connection, the data collection in passing, that is completely hidden from players.

If we are ever to use privacy policies as meaningful tools to enable consent, then they must be transparent to show how a chain of permissions between companies connect their services.

Otherwise they are no more than get-out-of-jail-free cards for the companies that trade our data behind the scenes, if we were ever to claim for its misuse. Data collectors must improve transparency.

Behavioural tracking and trust

Covert data collection and interaction is not conducive to user trust, whether through a failure to communicate by design or not.

Measuring footfall, by combining location data and behavioural data, is described as “the holy grail for retailers and landlords alike”, and it is valuable. “Pavement opportunity” data may be sent anonymously, but if its analysis and storage provide ways to pitch to people, even without knowing who they are individually, or to pitch to groups of people, it is discriminatory and potentially invisibly predatory. The pedestrian, or the player, Jo Public, is a commercial opportunity.
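As a rough, hypothetical illustration of why ‘anonymous’ footfall data still has commercial bite, the Python sketch below (device identifiers, places and times are all invented) counts distinct hashed devices seen near each venue by hour. Nobody is named, yet the output is exactly the kind of pattern a retailer can use to time a pitch:

import hashlib
from collections import defaultdict

# Invented location pings: (device_id, hour_of_day, place)
pings = [
    ("phone-A", 8, "CoffeeShop"), ("phone-B", 8, "CoffeeShop"),
    ("phone-A", 18, "BurgerBar"), ("phone-B", 18, "BurgerBar"),
    ("phone-C", 18, "BurgerBar"),
]

def pseudonymise(device_id: str) -> str:
    # Replace the raw identifier with a hash: 'anonymous', but still linkable over time.
    return hashlib.sha256(device_id.encode()).hexdigest()[:8]

# Footfall per place per hour, counting distinct (pseudonymous) devices.
footfall = defaultdict(set)
for device, hour, place in pings:
    footfall[(place, hour)].add(pseudonymise(device))

for (place, hour), devices in sorted(footfall.items()):
    print(f"{place} at {hour}:00 -> {len(devices)} visitors")

No individual is named, but ‘three devices at the burger bar at 18:00’ is precisely the group-level insight that makes footfall data worth paying for.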

Pokémon GO has potential to connect the opportunity for profit makers with our pockets like never before. But they’re not alone.

Who else is getting our location data that we didn’t sign up to share, “in 81 towns and cities across Great Britain”?

Whether footfall outside the shops or packaged as a game that gets us inside them, public interest researchers and commercial companies alike risk losing our trust if we feel used as pieces in a game that we didn’t knowingly sign up to. It’s creepy.

For children the ethical implications are even greater.

There are obligations to meet higher legal and ethical standards when processing children’s data and presenting them with marketing. Parental consent requirements fail children for a range of reasons.

So far, the UK has said it will implement the EU GDPR. Clear and affirmative consent is needed. Parental consent will be required for the processing of personal data of children under age 16. EU Member States may lower the age requiring parental consent to 13, so what that will mean for children here in the UK is unknown.

The ethics of product placement, and marketing rules for children of all ages, go out the window, however, when the whole game or programme is one long animated advert. On children’s television and YouTube, content producers have turned brand product placement into programmes: My Little Pony, Barbie, Playmobil and many more.

Alice Webb, Director of BBC Children’s and BBC North, looked at some of these challenges, as the BBC considers how to deliver content for children whilst adapting to technological advances, in this LSE blog, alongside the publication of a new policy brief about families and ‘screen time’ by Alicia Blum-Ross and Sonia Livingstone.

So is this augmented reality any different from other platforms?

Yes, because you can’t play the game without accepting the use of the maps, and by default some sacrifice of your privacy settings.

Yes, because the ethics and implications of putting kids not simply in front of a screen that pitches products to them, but physically into the place where they can consume those products – if the McDonald’s story is correct and a taster of what will follow – are huge.

Boundaries between platforms and people

Blum-Ross says, “To young people, the boundaries and distinctions that have traditionally been established between genres, platforms and devices mean nothing; ditto the reasoning behind the watershed system with its roots in decisions about suitability of content. “

She’s right. And if those boundaries and distinctions mean nothing to providers, then we must have that honest conversation with urgency. With our contrived consent, walking and running and driving without coercion, we are being packaged up and delivered right to the door of for-profit firms, paying for the game with our privacy. Smart cities are exploiting street sensors to do the same.

Free will is at the very heart of who we are: “The ability to choose between different possible courses of action. It is closely linked to the concepts of responsibility, praise, guilt, sin, and other judgments which apply only to actions that are freely chosen.” Free choice of where we shop, what we buy and who we interact with is open to influence. Influence that is not entirely transparent presents opportunity for hidden manipulation. While the NSPCC might be worried about the risk of rare physical threats, the potential for influencing all children’s behaviour, both positive and negative, reaches everyone.

Some stories of how behaviour is affected are heartbreakingly positive. And I met and chatted with complete strangers who shared the joy of something new and a mutual curiosity about the game. Pokémon GO is clearly a lot of fun. It’s also unclear on much more.

I would like to explicitly understand if Pokémon GO is gift packaging behavioural research by piggybacking on the Google platforms that underpin it, and providing linked data to Google or third parties.

Fishing for frequent Pokémon encourages players to ‘check in’ and keep that behaviour tracking live. 4pm: caught a Krabby in the closet at work. 6pm: another Krabby. Yup, still at work. 6.32pm: a Pidgey on the street outside ThatGreenCoffeeShop. Monday to Friday.
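A hypothetical sketch of what that ‘check-in’ trail looks like as data (the events below simply mirror the made-up diary above; this is not Niantic’s actual schema) shows how quickly a week of catches collapses into a routine:

from collections import Counter

# Invented catch events: (weekday, hour, rounded location)
catches = [
    ("Mon", 16, "office"), ("Mon", 18, "office"), ("Mon", 18, "coffee_shop"),
    ("Tue", 16, "office"), ("Tue", 18, "coffee_shop"),
    ("Wed", 16, "office"), ("Wed", 18, "coffee_shop"),
]

# Where is this player usually found at each hour of a working day?
routine = Counter((hour, place) for _, hour, place in catches)

for (hour, place), count in routine.most_common(3):
    print(f"Around {hour}:00 the player is typically at {place} ({count} of {len(catches)} events)")

A few days of timestamped sightings is enough to say where the player works, when they leave, and which coffee shop they pass on the way.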

The Google privacy policies, changed in the last year, require ten clicks to opt out and, in part, the download of an add-on. Google has our contacts, calendar events, web searches and health data, has invested in our genetics, and holds all the ‘things that make you “you”’. They have our history, and are collecting our present. Machine intelligence work on prediction is the future. For now, perhaps that will be pinging you with a ‘buy one get one free’ voucher at 6.20, or LCD adverts shifting as you drive back home.

Pokémon GO doesn’t have to include what data Google collects in its privacy policy. It’s in Google’s privacy policy. And who really read that when it came out months ago, or knows what it means in combination with the new apps and games we connect it with today? It means tracking and linking data on geolocation, behavioural patterns, footfall, whose other phones are close by, who we contact, and potentially even our spend from Google Wallet.

Have Google and friends of Niantic gotta know it all?

The illusion that might cheat us: ethical data science vision and practice

This blog post is also available as an audio file on soundcloud.


Anaïs Nin wrote in her 1946 diary of the dangers she saw in the growth of technology: expanding our potential for connectivity through machines, but diminishing our genuine connectedness as people. She could hardly be more contemporary today:

“This is the illusion that might cheat us of being in touch deeply with the one breathing next to us. The dangerous time when mechanical voices, radios, telephone, take the place of human intimacies, and the concept of being in touch with millions brings a greater and greater poverty in intimacy and human vision.”
[Extract from volume IV 1944-1947]

Echoes from over 70 years ago can be heard in the more recent comments of entrepreneur Elon Musk. Both are concerned with simulation, a lack of connection between the perceived and reality, and the jeopardy this presents for humanity. But both also have a dream. A dream based on the positive potential society has.

How will we use our potential?

Data is the connection between us as humans and what machines and their masters know about us. The values with which those masters underpin their machine design will determine the effect that the machines, and the knowledge they deliver, have on society.

In seeking ever greater personalisation, a wider dragnet of data is putting together ever more detailed pieces of information about an individual person. At the same time data science is becoming ever more impersonal in how we treat people as individuals. We risk losing sight of how we respect and treat the very people whom the work should benefit.

Nin grasped the risk that a wider reach can mean more superficial depth. Facebook might be a model today for the large circle of friends you might gather, but how few do you trust with confidences, with personal knowledge about your own personal life, and what a privilege it is when someone chooses to entrust that knowledge to you. Machine data mining increasingly tries to get at that understanding of depth, and may also add new layers of meaning through profiling, comparing our characteristics with others in risk stratification.
Data science, research using data, is often talked about as if it is something separate from using information from individual people. Yet it is all about exploiting those confidences.
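As a toy illustration of what risk stratification means in practice, this hypothetical Python sketch (the attributes, weights and threshold are all invented, not taken from any real model) scores people not on anything they disclosed in confidence, but on how closely they resemble a group already labelled ‘high risk’:

# Invented attribute weights a model might have learned from past data.
weights = {"visits_fast_food_often": 0.4, "low_activity": 0.3, "age_over_50": 0.3}

people = {
    "person_1": {"visits_fast_food_often": 1, "low_activity": 1, "age_over_50": 0},
    "person_2": {"visits_fast_food_often": 0, "low_activity": 0, "age_over_50": 1},
}

def risk_band(profile: dict) -> str:
    # Stratify by similarity to the 'high risk' pattern, not by anything the person said.
    score = sum(weights[attr] * value for attr, value in profile.items())
    return "high" if score >= 0.6 else "low"

for name, profile in people.items():
    print(name, "->", risk_band(profile), "risk band")

The person never confided anything; the label is inferred from resemblance to others, and it travels with them.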

Today, as the reach of what a few people in institutions can gather about most people in the public has grown, whether in scientific research or in surveillance of different kinds, we hear experts repeatedly talk of the risk of losing the valuable part: the knowledge, the insights that benefit us as a society if we can act upon them.

We might know more, but do we know any better? To use a well-known quote from her contemporary, T. S. Eliot: ‘Where is the wisdom we have lost in knowledge? Where is the knowledge we have lost in information?’

What can humans achieve? We don’t yet know our own limits. What don’t we yet know?  We have future priorities we aren’t yet aware of.

To be able to explore the best of what Nin saw as ‘human vision’ and Musk sees in technology, the benefits we have from our connectivity – our collaboration and shared learning – need to be driven with an element of humility, accepting values that shape the boundaries of what we should do, while constantly evolving with what we could do.

The essence of this applied risk is that technology could harm you, more than it helps you. How do we avoid this and develop instead the best of what human vision makes possible? Can we also exceed our own expectations of today, to advance in moral progress?

Continue reading The illusion that might cheat us: ethical data science vision and practice

A data sharing fairytale (3): transformation and impact

Part three: It is vital that the data sharing consultation is not seen in a silo, or even a set of silos each particular to its own stakeholder. To do it justice and ensure the questions that should be asked are answered, we must look instead at the whole story and the background setting. And we must ask each stakeholder, what does your happy ending look like?

Parts one and two, to follow, address public engagement and ethics; this post focuses on current national data practice, tailored public services, and the local impact of the change and transformation that will result.

What is your happy ending?

This data sharing consultation is gradually revealing to me how disjointed government appears in practice and strategy. Our digital future, a society that is more inclusive and more just, supported by better uses of technology and data in ‘dot everyone’, will not happen if they cannot first join the dots across all of Cabinet thinking and good practice, and align policies that are out of step with each other.

Last Thursday night’s “Government as a Platform Future” panel discussion (#GaaPFuture) took me back to memories of my old job, working on business implementations of process and cutting-edge systems. Our finest hour was showing leadership why success would depend on neither. Success was down to local change management and communications, because change is about people, not the tech.

‘People’ in this data sharing consultation means the public, and the staff of local government public bodies, as well as the people working at the national stakeholders: the UKSA (statistics strand), ADRN (de-identified research strand), Home Office (GRO strand), DWP (fraud and debt strands) and DECC (energy), and staff at the national driver, the Cabinet Office.

I’ve attended two of the 2016 datasharing meetings, and am most interested from three points of view: I am directly involved in the de-identified data strand, I campaign for privacy, and I believe in public engagement.

As for engagement with civil society: after almost two years of involvement on three projects, and an almost ten-month pause in between, the projects suddenly became six in 2016, so the most sensitive strands of the datasharing legislation have been the least openly discussed.

At the end of the first 2016 meeting, I asked one question.

How will local change management be handled and the consultation tailored to local organisations’ understanding and expectations of its outcome?

Why? Because a top-down data extraction programme from all public services opens up the extraction of personal data as business intelligence at national level, from all local services’ interactions with citizens. Or at least, from those parts they have collected or may collect in future.

That means a change in how the process works today. Global business intelligence and data extractions are designed to make processes more efficient, through reductions in current delivery, yet concrete public benefits for citizens that would be different from today are hard to see, so why make this change in practice?

What it might mean, for example, is enabling the collection of all citizens’ debt information into one place, which would allow the service to centralise chasing debt and enforcing its collection, outsourced to a single national commercial provider.

So what does the future look like from the top? What is the happy ending for each strand, that will be achieved should this legislation be passed?  What will success for each set of plans look like?

What will we stop doing, what will we start doing differently and how will services concretely change from today, the current state, to the future?

Most importantly, to understand its implications for citizens and staff, we should ask: how will this transformation be managed well, so that we see the benefits we are told it will deliver?

Can we avoid being left holding a pumpkin, after the glitter of ‘use more shiny tech’ and the government’s love affair with the promises of Big Data wear off?

Look into the local future

Those with a vision of the future on the panel at the GDS meeting this week, of a new local government model enabled by GaaP, also identified that there are implications for the potential loss of local jobs, and that “turkeys won’t vote for Christmas”. So who is packaging this change to make it successfully deliverable?

If we can’t be told easily in consultation, then it is not a clear enough policy to deliver. If there is a clear end-state, then we should ask what the applied implications in practice are going to be?

It is vital that the data sharing consultation is not seen in a silo, or even a set of silos each particular to its own stakeholder, simply about copying datasets to share them more widely, but that we look instead at the whole story and the background setting.

The Tailored Reviews: public bodies guidance suggests massive reform of local government, looking for additional savings, cuts to back-office functions, and commercial plans. It asks: “What workforce reductions have already been agreed for the body? Is there potential to go further? Are these linked to digital savings referenced earlier?”

Options include ‘abolish, move out of central government, commercial model, bring in-house, merge with another body.’

So where is the local government public bodies engagement with change management plans in the datasharing consultation as a change process? Does it not exist?

I asked at the end of the first datasharing meeting in January and everyone looked a bit blank. A question ‘to take away’ turned into nothing.

Yet to make this work, the buy-in of local public bodies is vital. So why skirt round this issue in local government, if there are plans to address it properly?

If there are none, then with all the data in the world, public services delivery will not be improved, because the friction is not about interference by consent, or privacy issues, but working practices.

If the idea is to avoid this ‘friction’ by removing it, then where is the change management plan for public services and our public staff?

Trust depends on transparency

John Pullinger, our National Statistician, also said this week that on datasharing we need a social charter on data to develop trust.

Trust can only be built between public and state if the organisations, and all the people in them, are trustworthy.

To implement process change successfully, the people involved in these affected organisations, the staff, must trust that change will mean positive improvement and that risks will be explained.

For the public, the levels of data access, privacy protection, and scope limitation that this new consultation will permit in practice are clearly going to be vital to define if the public is to trust its purposes.

The consultation does not do this, and there is no draft code of conduct yet, and no one is willing to define ‘research’ or ‘public interest’.

Public interest models, or a ‘charter’ for collection and use of research data in health, concluded that for ethical purposes, time also mattered. Benefits must be specific, measurable, attainable, relevant and time-bound. So let’s talk about the intended end state that is to be achieved from these changes, and identify how its benefits are to meet those objectives. Change without an intended end state will almost never be successful, if you don’t start by knowing what it looks like.

For public trust, that means scope boundaries. Sharing now, with today’s laws and ethics, is only fully meaningful if we trust that today’s governance, ethics and safeguards will only be changed in future to the benefit of the citizen, not towards ever greater powers for the state at the expense of the individual. Where is scope defined?

There is very little information about where the limits would be on what data could not be shared, or when it would not be possible to do so without explicit consent. Permissive powers put the onus onto the data controller to share, and given that ‘a new law says you should share’ would become the mantra, it is likely to mean less individual accountability. Where are those lines to be drawn to support the staff and public, the data user and the data subject?

So to summarise, so far I have six key questions:

  • What does your happy ending look like for each data strand?
  • How will bad practices which conflict with the current consultation proposals be stopped?
  • How will the ongoing balance of use of data for government purposes, privacy and information rights be decided and by whom?
  • In what context will the ethical principles be shaped today?
  • How will the transformation from the current to that future end state be supported, paid for and delivered?
  • Who will oversee new policies and ensure good data science practices, protection and ethics are applied in practice?

This datasharing consultation is not entirely for something new, but an expansion of what is done already. And in some places that is done very badly.

How will the old stories and new be reconciled?

Wearing my privacy and public engagement hats, here’s an idea.

Perhaps before the central State starts collecting more, sharing more, and using more of our personal data for ‘tailored public services’ and more, the government should ask for a data amnesty?

It’s time to draw a line under bad practice.  Clear out the ethics drawers of bad historical practice, and start again, with a fresh chapter. Because current practices are not future-proofed and covering them up in the language of ‘better data ethics’ will fail.

The consultation assures us that: “These proposals are not about selling public or personal data, collecting new data from citizens or weakening the Data Protection Act 1998.”

However, the government does already sell personal data from at least BIS. How will these contradictory positions across all Departments be resolved?

The left hand gives out de-identified data in safe settings for public benefit research while the right hands out over 10 million records to the Telegraph and The Times without parental or schools’ consent. Only in la-la land are these both considered ethical.

Will somebody at the data sharing meeting please ask, “when will this stop?” It is wrong. These are our individual children’s identifiable personal data. Stop giving them away to press and charities and commercial users without informed consent. It’s ludicrous. Yet it is real.

Policy makers should provide an assurance there are plans for this to change as part of this consultation.

Without it, the consultation line about commercial use is at best disingenuous, at worst a bare-cheeked lie.

“These powers will also ensure we can improve the safe handling of citizen data by bringing consistency and improved safeguards to the way it is handled.”

Will it? Show me how and I might believe it.

Privacy, it was said at the RSS event, is the biggest concern in this consultation:

“includes proposals to expand the use of appropriate and ethical data science techniques to help tailor interventions to the public”

“also to start fixing government’s data infrastructure to better support public services.”

The techniques need to be outlined, what they mean explained, and practices fixed now, because many stand on shaky legal ground. These privacy issues have come about under successive governments of different parties over the last ten years, so the problems are non-partisan, but they need practical fixes.

Today, less than transparent international agreements push ‘very far-reaching chapters on the liberalisation of data trading’ while according to the European Court of Justice these practices lack a solid legal basis.

Today our government already gives our children’s personal data to commercial third parties and sells our higher education data without informed consent, while the DfE and BIS both know they fail fair processing, and know its potential consequences: the European Court reaffirmed in 2015 that “persons whose personal data are subject to transfer and processing between two public administrative bodies must be informed in advance”, in its Judgment in Case C-201/14.

In a time that actively cultivates universal public fear, it is time for individuals to be brave and ask the awkward questions, because you either solve them up front, or hit the problems later. The child who stood up and said the Emperor had no clothes on was right.

What’s missing?

The consultation conversation will only be genuine once the policy makers acknowledge and address solutions regarding:

  1. those data practices that are currently unethical and must change
  2. how the tailored public services datasharing legislation will shape the delivery of government services’ infrastructure and staff, as well as the service to individual members of the public.

If we start by understanding what the happy ending looks like, we are much more likely to arrive there, and to know how to measure success.

The datasharing consultation engagement, the ethics of data science, and the impact on data infrastructures as part of ‘government as a platform’ need to be seen as a whole, joined-up story if we are each to consider what success looks like for us as stakeholders.

We need to call out current data failings and things that are missing, to get them fixed.

Without a strong, consistent ethical framework you risk 3 things:

  1. data misuse and loss of public trust
  2. data non-use because your staff don’t trust they’re doing it right
  3. data becoming a toxic asset

The upcoming meetings should address this and ask practically:

  1. How the codes of conduct, and ethics, are to be shaped, and by whom, if outwith the consultation?
  2. What is planned to manage and pay for the future changes in our data infrastructures, i.e. the models of local government delivery?
  3. What is the happy ending that each data strand wants to achieve through this and how will the success criteria be measured?

Public benefit is supposed to be at the heart of this change. For UK statistics, and for academic public benefit research, the benefits are clear.

For some of the other strands, local public benefits that outweigh the privacy risks and do not jeopardise public trust seem like magical unicorns dancing in the far, far away land of centralised government: hard to imagine, and even harder to capture.

*****

Part one: A data sharing fairytale: Engagement
Part two: A data sharing fairytale: Ethics
Part three: A data sharing fairytale: Impact (this post)

Tailored public bodies review: Feb 2016

img credit: Hermann Vogel illustration ‘Cinderella’

On the Boundaries of Being Human and Big Data

Atlas, the robot created by Boston Dynamics, won hearts and minds this week as it stoically survived man being mean. Our collective human response was an emotional defence of the machine, and criticism of its unfair treatment by its tester.

Some on Twitter recalled the incident of Lord of the Flies-style bullying of a robot by children in Japan that led the programmers to create an algorithm for ‘abuse avoidance’.

The concepts of fairness and of decision making algorithms for ‘abuse avoidance’ are interesting from perspectives of data mining, AI and the wider access to and use of tech in general, and in health specifically.

If the decision to avoid abuse can be taken out of an individual human’s hands and is based on unfathomable amounts of big data, where are its limits when applied to human behaviour and activity?

When it is decided that an individual’s decision-making capability is impaired or has been forfeited, their consent may be revoked in their best interest.

Who has oversight of the boundaries of what is acceptable for one person, or for an organisation, to decide what is in someone else’s best interest, or indeed, the public interest?

Where these boundaries overlap – personal abuse avoidance, individual best interest and the public interest – and how society manages them, with what oversight, is yet to be widely debated.

The public will shortly be given the opportunity to respond to plans for the expansion of administrative datasharing in England through consultation.

We must get involved, and it must be the start of a debate and dialogue, not simply a tick-box to a done deal, if data derived from us are to be used as a platform for the future to “achieve great results for the NHS and everyone who depends on it.”

Administering applied “abuse avoidance” and Restraining Abilities

Administrative uses and secondary research using the public’s personal data are applied not only in health, but across the board of public bodies, including big plans for tech in the justice system.

An example in the news this week of applied tech and its restraint on human behaviour was ankle monitors. One type was abandoned by the MOJ at a cost of £23m on the same day that more funding for transdermal tags was announced in London.

The use of this technology as a monitoring tool should not of itself be a punishment. It is said compliance is not intended to affect the dignity of individuals who are being monitored, but, through the collection of personal and health data, will ensure the deprivation of alcohol – avoiding its abuse for a person’s own good and in the public interest. Is it fair?

Abstinence orders might be applied to those convicted of crimes such as assault, being drunk and disorderly and drunk driving.

We’re yet to see much discussion of how these varying degrees of integration of tech with the human body, and human enhancement, will happen through robotic elements in our human lives.

How will the boundaries of what is possible and desirable be determined and by whom with what oversight?

What else might be considered as harmful as alcohol to individuals and to society? Drugs? Nicotine? Excess sugar?

As we wonder about the ethics of how humanoids will act and the aesthetics of how human they look, I wonder how humane we are being in all our ‘public’ tech design and deployment.

Umberto Eco, who died on Friday, wrote in ‘The birth of ethics’ that there are universal ideas on constraints: effectively, that people should not harm other people, through deprivation, restrictions or psychological torture. And that we should not impose anything on others that “diminishes or stifles our capacity to think.”

How will we as a society collectively agree what that should look like, how far some can impose on others, without consent?

Enhancing the Boundaries of Being Human

Technology might be used to impose bodily boundaries on some people, but tech can also be used for the enhancement of others, like the brilliant Angel Giuffria’s arm, retweeted this week.

While the technology in this case is literally hands-on in its application, increasingly it is not the technology itself but the data that it creates or captures which enables action through data-based decision making.

Robots that are tiny may be given big responsibilities to monitor and report massive amounts of data. What if we could swallow them?

Data, if analysed and understood, become knowledge.

Knowledge can be used to inform decisions and take action.

So where are the boundaries of what data may be extracted,  information collated, and applied as individual interventions?

Defining the Boundaries of “in the Public Interest”

Where are boundaries of what data may be created, stored, and linked to create a detailed picture about us as individuals, if the purpose is determined to be in the public interest?

Who decides which purposes are in the public interest? What qualifies as research purposes? Who qualifies as meeting the criteria of ‘researcher’?

How far can research and interventions go without consent?

Should security services and law enforcement agencies always be entitled to get access to individuals’ data ‘in the public interest’?

That’s something Apple is currently testing in the US.

Should research bodies always be entitled to get access to individuals’ data ‘in the public interest’?

That’s something care.data tried and failed to assume the public supported, and has yet to re-test. Impossible before the opt-out that was promised over two years ago, in March 2014, is respected.

The question how much data research bodies may be ‘entitled to’ will be tested again in the datasharing consultation in the UK.

How data already gathered are used in research may differ from what we consented to at collection. How this changes over time, and its potential for scope creep, is seen in education. Pupil data have gone from the passive collection of names, to being given out to third parties, to use in national surveys, so far.

And what of the future?

Where is the boundary between access and use of data not in enforcement of acts already committed but in their prediction and prevention?

If you believe there should be an assumption of law enforcement access to data when data are used for prediction and prevention, what about health?

Should there be any difference between researchers’ access to data when data are used for past analysis and for use in prediction?

If ethics define the boundary between what is acceptable and where actions by one person may impose something on another that “diminishes or stifles our capacity to think” – that takes away our decision making capacity – that nudges behaviour, or acts on behaviour that has not yet happened, who decides what is ethical?

How does a public that is poorly informed about current data practices, become well enough informed to participate in the debate of how data management should be designed today for their future?

How Deeply Mined should our Personal Data be?

The application of technology, non-specific but not yet AI, was also announced this week in Google DeepMind’s work with the NHS.

The co-founder of its first key launch app provided a report that established the operating framework for the Behavioural Insights Team set up by Prime Minister David Cameron.

A number of highly respected public figures have been engaged to act in the public interest as unpaid Independent Reviewers of Google DeepMind Health. It will be interesting to see what their role is and how transparent its workings and public engagement will be.

The recent consultation on the NHS gave overwhelming feedback that the public does not support the direction of current NHS change. Even having removed all responses associated with ‘lefty’ campaigns, the concerns listed on page 11 are consistent, including a request that the Government “should end further involvement of the private sector in healthcare”. It appears from the response that this engagement exercise will feed little into practice.

The strength of feeling should however be a clear message to new projects that people are passionate that equal access to healthcare for all matters and that the public wants to be informed and have their voices heard.

How will public involvement be ensured as complexity increases in these healthcare add-ons and changing technology?

Will Google DeepMind pave the way to a new approach to health research? A combination of ‘nudge’ behavioural insights, advanced neural networks, Big Data and technology is powerful. How will that power be used?

I was recently told that if new research is not pushing the boundaries of what is possible and permissible then it may not be worth doing, as it’s probably been done before.

Should anything new that becomes possible be realised?

I wonder how the balance will be weighted in requests for patient data and their application, in such a high profile project.

Will NHS Research Ethics Committees turn down in-house research proposals in hospitals that benefit the institution or advance its reputation? Will the HSCIC ever feel able to say no to data use by Google DeepMind?

Ethics committees safeguard the rights, safety, dignity and well-being of research participants, independently of research sponsors, whereas these reviewers are not all independent of commercial supporters. And the group has not claimed it is trying to be an ethics panel. But oversight is certainly needed.

The boundaries of ownership between what is seen to benefit commercial and state interests in modern health investment are perhaps more than blurred to an untrained eye. Genomics England – the government’s flagship programme giving commercial access to the genomes of 100K people – stockholding companies, data analytics companies, genome analysis companies, genome collection and human tissue research, and commercial and academic research often share directors, working partnerships and funders. That’s perhaps unsurprising given such a specialist small world.

It’s exciting to think of the possibilities if, “through a focus on patient outcomes, effective oversight, and the highest ethical principles, we can achieve great results for the NHS and everyone who depends on it.”

Where will an ageing society go, if medics can successfully treat more cancer for example? What diseases will be prioritised and others left behind in what is economically most viable to prevent? How much investment will be made in diseases of the poor or in countries where governments cannot afford to fund programmes?

What will we die from instead? What happens when some causes of ‘preventative death’ are deemed more socially acceptable than others? Where might prevention become socially enforced through nudging behaviour into new socially acceptable or ethical norms?

Don’t be Evil

Given the leading edge of the company and its curiosity-by-design to see how far “can we” will reach, “don’t be evil” may be very important. But “be good” might be better. Where is that boundary?

The boundaries of what ‘being human’ means and how Big Data will decide and influence that, are unclear and changing. How will the law and regulation keep up and society be engaged in support?

Data principles such as fairness, keeping data accurate, complete and up-to-date, and ensuring data are not excessive and are retained for no longer than necessary for the purpose, are being widely ignored or exempted under the banner of ‘research’.

Can data use retain a principled approach despite this? And if we accept commercial users profiting from public data, will those principles from academic research remain in practice?

Exempt from the obligation to give a copy of personal data to an individual on request if data are for ‘research’ purposes, data about us and our children are extracted and stored ‘without us’. Forever. That means in a future that we cannot see, but that Google DeepMind, among others, is designing.

Lay understanding, and that of many clinical professionals, is likely to be left far behind if advanced technologies and the use of big data decision-making algorithms are hidden in black boxes.

Public transparency about the use of our data and future planned purposes is needed to create trust that these purposes are wise.

Data are increasingly linked and more valuable when identifiable.

Any organisation that wants to future-proof its reputational risk will make sure data collection and use today are with consent, since future outcomes derived are likely to be interventions for individuals or society. Catching up on consent will be hard unless it is designed in now.

A Dialogue on the Boundaries of Being Human and Big Data

Where the commercial, personal, and public interests are blurred, the highest ethical principles are going to be needed to ensure ‘abuse avoidance’ in the use of new technology, in increased data linkage and resultant data use in research of many different kinds.

How we as a society achieve the benefits of tech and datasharing and where its boundaries lie in “the public interest” needs public debate to co-design the direction we collectively want to partake in.

Once that is over, change needs to be supported by a method of oversight that is responsive to new technology, data use, and its challenges.

What a channel for ongoing public dialogue, challenge and potentially recourse might look like, should be part of that debate.

Destination smart-cities: design, desire and democracy (Part two)

Smart cities: private reach in public space and personal lives

Smart-cities are growing in the UK through private investment and encroachment on public space. They are being built by design at home, and supported by UK money abroad, with enormous expansion plans in India for example, in almost 100 cities.

With this rapid expansion of “smart” technology not only within our living rooms but our living space, and indeed across all areas of life, how do we ensure equitable service delivery (what citizens generally want, as demonstrated by the strength of feeling on the NHS) continues in public ownership, when the boundary in current policy is ever more blurred between public and private corporate ownership?

How can we know and plan by design that the values we hope for are good values, and that they will be embedded in systems, in policies and planning? Values that most people really care about. How do we ensure “smart” does not ultimately mean less good? That “smart” does not, in the end, mean less human.

Economic benefits seem to be the key driver in current government thinking around technology – more efficient = costs less.

While technology progressing towards replacing repetitive work may be positive, how will we accommodate those whose skills will no longer be needed? In particular, there is its gendered aspect, and the more vulnerable in the workforce, since it is women and other minorities who work disproportionately in our part-time, low-skill jobs. Jobs that are mainly held by women, even those we think of as intrinsically human, such as carers, are being trialled for outsourcing to, or assistance by, technology. These robots monitor people in their own homes, and reduce staffing levels and care home occupancy. We’ll no doubt hear how good it is that we need fewer carers because, after all, we have a shortage of care staff. We’ll find out whether it is positive for the cared-for, or whether they find it less ‘human’[e]. How will we measure those costs?

The ideal future of us all therefore having more leisure time sounds fab, but if we can’t afford it, we won’t be spending more of our time employed in leisure. Some think we’ll simply be unemployed. And more people live in the slums of Calcutta than in Soho.

One of the greatest benefits of technology is how much more connected the world can be, but will it also be more equitable?

There are benefits in remote sensors monitoring changes in the atmosphere that dictate when cars should be taken off the roads on smog-days, or indicators when asthma risk-factors are high.

Crowd-sourced information about things which are broken, like fix-my-street, or lifts that are out of order, is invaluable in cities for wheelchair users.

Innovative thinking, and building things through technology, can create tools which solve simple problems and add value for the person using them.

But what of the people that cannot afford data, cannot be included in the skilled workforce, or will not navigate apps on a phone?

How this dis-incentivises the person using the technology affects not only their disappointment with the tool, but service delivery, and potentially wider still, even societal exclusion or stigma. These were the findings of the e-red book in Glasgow, explained at the Digital event in health held at the King’s Fund in summer 2015.

Further along the scale of systems, and their potential for negative user experience, how do we expect citizens to react to finding punishments handed out by unseen monitoring systems, finding out their behaviour was ‘nudged’, or finding decisions taken about them, without them?

And what is the oversight and system of redress for people using systems, or for people whose data are used but are inaccurate in a system, and cause injustice?

And wider still, while we encourage big money spent on big data in our part of the world, how is it contributing to solving problems for the millions for whom they will never matter? Digital and social media make our one connected world increasingly transparent, with even less excuse for closing our eyes.

Approximately 15 million girls worldwide are married each year – that’s one girl, aged under 18, married off against her will every two seconds. [Huff Post, 2015]

Tinder-type apps are luxury optional extras for many in the world.

Without embedding values and oversight into some of what we do through digital tools implemented by private corporations for profit, ‘smart’ could mean less fair, less inclusive, less kind. Less global.

If digital becomes a destination, and how much of it is implemented is seen as a measure of success, then measuring how “smart” we become risks losing sight of technology as solutions and steps towards solving real problems for real people.

We need to be both clever and sensible, in our ‘smart’.

Are public oversight and regulation built in to make ‘smart’ also be safe?

If there were public consultation on how “smart” society will look would we all agree if and how we want it?

Thinking globally, we need to ask if we are prioritising the wrong problems. Are we creating more tech for problems we have already invented solutions for, in places where governments are willing to spend on them? And will it, in those places, make society more connected across class and improve it for all, or enhance the lives of the ‘haves’ by having more, while the ‘have-nots’ are excluded?

Does it matter how smart your TV gets, or carer, or car, if you cannot afford any of these convenient add-ons to Life v1.1?

As we are ever more connected, we are a global society, and being ‘smart’ in one area may be reckless if at the expense or ignorance of another.

People need to Understand what “Smart” means

“Consistent with the wider global discourse on ‘smart’ cities, in India urban problems are constructed in specific ways to facilitate the adoption of “smart hi-tech solutions”. ‘Smart’ is thus likely to mean technocratic and centralized, undergirded by alliances between the Indian government and hi-technology corporations.”  [Saurabh Arora, Senior Lecturer in Technology and Innovation for Development at SPRU]

Those investing in both countries are often the same large corporations. Very often, venture capitalists.

Systems designed and owned by private companies provide the information technology infrastructure that is ‘the basis for providing essential services to residents. There are many technological platforms involved, including but not limited to automated sensor networks and data centres.’

What happens when the commercial and public interest conflict and who decides that they do?

Decision making, Mining and Value

Massive amounts of data generated are being mined for making predictions, decisions and influencing public policy: in effect using Big Data for research purposes.

Using population-wide datasets for social and economic research today is done in safe settings, using de-identified data, in the public interest, and has independent analysis of the risks and benefits of projects as part of the data access process.

Each project goes before an ethics committee review to assess its considerations for privacy, and not only whether the project can be done, but whether it should be done, before it comes for central review.

Similarly, our smart-cities need ethics committee review assessing the privacy impact and potential of projects before commissioning or approving smart-technology. Not only assessing whether they are feasible, and that we ‘can’ do it, but whether we ‘should’ do it. Not only assessing the use of the data generated from the projects, but assessing the ethical and privacy implications of the technology implementation itself.

The Committee recommendations on Big Data recently proposed that a ‘Council of Data Ethics’ should be created to explicitly address these consent and trust issues head on. But how?

Unseen smart-technology continues to grow unchecked often taking root in the cracks between public-private partnerships.

We keep hearing about Big Data improving public services, but that “public” data is often held by private companies. In fact, our personal data for public administration has been widely outsourced to private companies over which we have little oversight.

We’re told we paid the price in terms of skills and are catching up.

But if we simply roll forward in first gear into the connected city that sees all, we may find we arrive at a destination that was neither designed nor desired by the majority.

We may find that the “revolution, not evolution”, hoped for in digital services will be of the unwanted kind if companies keep pushing more and more for more data without the individual’s consent and our collective public buy-in to decisions made about data use.

Having written all this, I’ve now read the Royal Statistical Society’s publication which eloquently summarises their recent work and thinking. But I wonder how we tie all this into practical application?

How we do governance and regulation is tied tightly into the practicality of public-private relationships, but also into deciding what society should look like. That is what our collective and policy decisions about what smart-cities should be, and may do, are ultimately defining.

I don’t think we are addressing in depth yet the complexity of regulation and governance that will be sufficient to make Big Data and Public Spaces safe because companies say too much regulation risks choking off innovation and creativity.

But that risk must not be realised if it is managed well.

Rather we must see action to manage the application of smart-technology in a thoughtful way quickly, because if we do not, very soon, we’ll have lost any say in how our service providers deliver.

*******

I began my thoughts about this in Part one, on smart technology and data from the Sprint16 session and after this (Part two), continue to look at the design and development of smart technology making “The Best Use of Data” with a UK company case study (Part three) and “The Best Use of Data” used in predictions and the Future (Part four).

Monitoring software in schools: the Department for Education’s digital dream or nightmare? (2)

“Children do not lose their human rights by virtue of passing through the school gates” (UN Committee on the Rights of the Child, General Comment on ‘The aims of education’, 2001).

The Digital Skills in Schools inquiry [1] is examining the gap in education of our children to enable them to be citizens fit for the future.

We have an “educational gap” in digital skills, and I have suggested it should not be seen only as functional or analytical, but should also address a gap in ethical skills and frameworks, to equip our young people to understand their digital rights as well as their responsibilities.

Children must be enabled in education with opportunity to understand how they can grow “to develop physically, mentally, morally, spiritually and socially in a healthy and normal manner and in conditions of freedom and dignity”. [2]

Freedom to use the internet in privacy does not mean having to expose children to risks, but we should ask, are there ways of implementing practices which are more proportionate, and less intrusive than monitoring and logging keywords [3] for every child in the country? What problem is the DfE trying to solve and how?

Nicky Morgan’s “fantastic” GPS tracking App

The second technology tool Nicky Morgan mentioned in her BETT speech on January 22nd is an app with GPS tracking and alert creation. Her verdict on the app was “excellent” and “fantastic”:

“There are excellent examples at the moment such as the Family First app by Group Call. It uses GPS in mobile phones to help parents keep track of their children’s whereabouts, allowing them to check that they have arrived safely to school, alerting them if they stray from their usual schedule.” [4]
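For context, the “alerting them if they stray” feature in apps of this kind usually amounts to a geofence check: the phone’s GPS position is compared with where the child is expected to be at that time of day, and a notification fires if the distance exceeds a threshold. The sketch below is a hypothetical illustration of that idea, not Group Call’s actual implementation; the coordinates and threshold are made up.

```python
# Hypothetical geofence check (not Group Call's code): compare the phone's
# reported position with where the child is expected to be right now, and
# alert the parent if the straight-line distance exceeds a threshold.
from math import radians, sin, cos, asin, sqrt

def distance_km(lat1, lon1, lat2, lon2):
    """Great-circle (haversine) distance between two points, in kilometres."""
    lat1, lon1, lat2, lon2 = map(radians, (lat1, lon1, lat2, lon2))
    a = sin((lat2 - lat1) / 2) ** 2 + cos(lat1) * cos(lat2) * sin((lon2 - lon1) / 2) ** 2
    return 2 * 6371 * asin(sqrt(a))

def off_schedule(phone_pos, expected_pos, threshold_km=0.5):
    """Return True (send an alert) if the child is away from the expected spot."""
    return distance_km(*phone_pos, *expected_pos) > threshold_km

# Example: reported position roughly a kilometre from the school gate.
if off_schedule((51.507, -0.128), (51.515, -0.141)):
    print("Alert: outside the usual schedule/route")
```

The point is how little the mechanism itself can tell a parent: a dropped GPS fix, a diversion for a school trip, or a walk home with a friend all look identical to a distance check.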

I’m not convinced tracking every child’s every move is either excellent or fantastic. Primarily because it will foster a nation of young people who feel untrusted, and I see a risk it could create a lower sense of self-reliance, self-confidence and self-responsibility.

Just as with the school monitoring software [see part one], there will be a chilling effect on children’s freedom if these technologies become the norm. If you fear misusing a word in an online search, or worry over the stigma of what others think, would you not change your behaviour? Our young people need to feel both secure and trusted at school.

How we use digital in schools shapes our future society

A population that trusts one another, and trusts its government, organisations and press, is vital to a well-functioning society.

If we want the benefits of a global society, for example data sharing to contribute to medical advances, people must understand how their own data and digital footprint fit into a bigger picture to support it.

In schools today, pupils and parents are not informed that their personal confidential data are given to commercial third parties by the Department for Education at national level [5]. Preventing public engagement, hiding current practices and downplaying the risks of how data are misused also prevent fair and transparent discussion of the benefits and how to do it better. Better, like making data accessible only in a secure setting, not handing them out to Fleet Street.

For children this holds back public involvement in the discussion of the roles of technology in their own future. Fear of public backlash over poor practices must not hold back empowering our children’s understanding of digital skills and how their digital identity matters.

Digital skills are not shorthand for coding, but critical life skills

The skills our society will need must simultaneously manage the benefits to society and deal with the great risks that will come with advances in technology: artificial intelligence, genomics and autonomous robots, to select only three examples.

There is a glaring gap in their education about how their own confidential personal data and digital footprint fit into a globally connected society, and how they are used by commercial businesses and third parties.

There are concerns how apps could be misused by others too.

If we are to consider what is missing in our children’s preparation for a life in which digital will no longer be a label but a way of life, then to identify the gap we must first consider what we see as the whole.

Rather than keeping children safe in education, as regards data sharing and digital privacy, the DfE seems happy to keep them ignorant. This is no way to treat our young people and develop their digital skills, just as giving their data away is not good cyber security.

What does a Dream for a great ‘digital’ Society look like?

Had Martin Luther King lived to be 87 he would have continued to inspire hope and to challenge us to fulfill his dream for society – where everyone would have an equal opportunity for “life, liberty and the pursuit of happiness.”

Moving towards that goal, supported by technology and ethical codes of practice, my dream is that we see a more inclusive, fulfilled, sustainable and happier society. We must educate our children as fully rounded, digital and data savvy individuals, who trust themselves and the systems they use, and are well treated by others.

Sadly, introducing these types of freedom-limiting technologies for our children risks instead a society in which many people do not feel comfortable, one that has lost sight of the value of privacy.

References:

[1] Digital Skills Inquiry: http://www.parliament.uk/business/committees/committees-a-z/commons-select/science-and-technology-committee/inquiries/parliament-2015/digital-skills-inquiry-15-16/

[2] UN Convention of the Rights of the Child

[3] Consultation: Keeping Children Safe in Education – closing Feb 16th. The “opportunities to teach safeguarding” section (paras 77-78) has been updated and now says governing bodies and proprietors “should ensure” rather than “should consider” that children are taught about safeguarding, including online, through teaching and learning opportunities.

The Consultation Guidance: most relevant paragraphs 75 and 77 p 22

[4] Nicky Morgan’s full speech at BETT

[5] The defenddigitalme campaign to ask the Department for Education to change practices and policy around the National Pupil Database

 

 

Parliament’s talking about TalkTalk and Big Data like some parents talk about sex. Too little, too late.

Parliament’s talking about TalkTalk and Big Data like some parents talk about sex ed. They should be discussing prevention and protection for all our personal data, not just one company’s, after the event.

Everyone’s been talking about TalkTalk and for all the wrong reasons. Data loss and a 15-year-old combined with a reportedly reckless response to data protection, compounded by lack of care.

As Rory Cellan-Jones wrote [1], rebuilding its reputation with customers and security analysts is going to be a lengthy job.

In Parliament Chi Onwurah, Shadow Minister for Culture & the Digital Economy, summed it up in her question, asking the Minister to acknowledge “that all the innovation has come from the criminals while the Government sit on their hands, leaving it to businesses and consumers to suffer the consequences?” [Hansard 2]

MPs were concerned for the 4 million* customers’ loss of name, date of birth, email, and other sensitive data, and called for an inquiry. [It may now be fewer*.] [3] The SciTech committee got involved too.

I hope this means Parliament will talk about TalkTalk not as the problem to be solved, but as one case study in a review of contemporary policy and practices in personal data handling.

Government spends money on data protection work through the “National Cyber Security Programme” [NCSP]. [4] What is the measurable outcome, particularly for TalkTalk customers and public confidence, from its £860M budget? If you look at the breakdown of those sums, with little going towards data protection and security compared with the Home Office and Defence, we should ask whether government is spending our money in an appropriately balanced way on the different threats it perceives. Keith Vaz suggested British companies lose £34 billion every year to cybercrime. Perhaps this question will come into the inquiry.

This all comes after things have gone wrong. Again [5]. An organisation we trusted has abused that trust by not looking after data with the stringency that customers should be able to expect in the 21st century, and reportedly not making preventative changes for weaknesses apparent a year ago. Will there be consequences this time?

The government now saying it is talking about data protection and consequences, is like saying they’re talking sex education with teens, but only giving out condoms to the boys.

It could be too little too late. And they want above all to avoid talking about their own practices. Let’s change that.

Will this mean a review to end risky behaviour, bring in change, and be wiser in future?

If MPs explore what the NCSP does, then we, the public, should learn more about what government’s expectations of commercial companies are as regards modern practices.

In addition, any MPs’ inquiry should address government’s own role in its own handling of the public’s personal data. Will members of government act in a responsible manner or simply tell others how to do so?

Public discussion around both commercial and state use of our personal data should mean genuine public engagement. It should involve a discussion of consent where necessary for purposes beyond those we expect or have had explained when we submit our data, and there needs to be a change in risky behaviour in terms of physical storage and release practices, or all the talk is wasted.

Some say TalkTalk’s  practices mean they have broken their contract along with consumer trust. Government departments should also be asking whether their data handling would constitute a breach of the public’s trust and reasonable expectations.

Mr Vaizey should apply his same logic to government handling data as he does to commercial handling. He said he is open to suggestions for improvement. [6]

Let’s not just talk about TalkTalk.

    • Let’s Talk Consequences: organisations taking risk seriously and meaningful consequences if not [7]
    • Let’s Talk Education: the education of the public on personal data use by others and rights and responsibilities we have [8]
    • Let’s Talk Parliament’s Policies and Practices: about government’s own comparable lack of data understanding, and what good practice is in physical storage, good governance and transparent oversight
    • Let’s Talk Public Trust: and the question whether government can be trusted with public data it already has and whether its current handling makes it trustworthy to take more [9]

Vaizey said of the ICO now in his own department: “The Government take the UK’s cyber-security extremely seriously and we will continue to do everything in our power to protect organisations and individuals from attacks.”

“I will certainly meet the Information Commissioner to look at what further changes may be needed in the light of this data breach. [..] It has extensive powers to take action and, indeed, to levy significant fines. “

So what about consequences when data are used in ways the public would consider a loss, and not through an attack or a breach, but government policy? [10]

Let’s Talk Parliament’s Policies and Practices

Commercial companies are not alone in screwing up the use, processing [11] and management of our personal data. The civil service, under current policy, seems perfectly capable of doing so by itself. [12]

Government data policy has not kept up with 21st century practices and to me seems to work in the dark, as Chi Onwurah said,

‘illuminated by occasional flashes of incompetence.’

This incompetence can risk harm to people’s lives, to business and to public confidence.

And once given, trust would be undermined by changing the purposes or scope of use for which it was given, as care.data, for example, plans to do after the pilot. A most risky idea.

Trust in these systems, whether commercial or state, is crucial. Yet reviews which highlight this and make suggestions to support trust, such as ‘data should never be (and currently is never) released with personal identifiers‘ in the Shakespeare Review, have been ignored by government.

Where our personal data are not used well in government departments, by the departments themselves, they seem content to date to rely on public ignorance to get away with current shoddy practices.

Practices such as not knowing who all your customers are, because they pass data on to others. Practices such as giving individual level identifiable personal data to third parties without informing the public, or asking for consent. Practices such as never auditing or measuring any benefit of giving away others’ personal data.

“It is very important that all businesses, particularly those handling significant amounts of sensitive customer data, have robust procedures in place to protect those data and to inform customers when there may have been a data breach.” Ed Vaizey, Oct 26th, HOC

If government departments prove to be unfit to handle the personal data we submit in trust to the state today, would we be right to trust them with even more?

While the government is busy wagging fingers at commercial data use poor practices, the care.data debacle is evidence that not all its MPs or civil service understand how data are used in commercial business or through government departments.

MPs calling for commercial companies to sharpen up their data protection must understand how commercial use of data often piggy-backs on the public use of our personal data, or how others get access to it via government for purposes that were unintended.

Let’s Talk Education

If the public is to understand how personal data are to be kept securely with commercial organisations, why should they not equally ask to understand how the state secures their personal data? Educating the public could lead to better engagement with research, better understanding of how we can use digital services and a better educated society as a whole. It seems common sense.

At a recent public event [13], I asked civil servants talking about the big upcoming data plans they had announced, linking school data with further education and employment data, how they planned to involve the people whose data they would use. There was no public engagement to mention. Why not? Inexcusable in this climate.

Public engagement is a matter of trust and developing understanding in a relationship. Organisations must get this right.[14]

If government is discussing risky practices by commercial companies, they also need to look closer to home and fix what is broken in government data handling where it exposes us to risk through loss of control of our personal data.

The National Pupil Database, for example, stores and onwardly shares identifiable, individual, sensitive data in at least 8m children’s records, from age 2 to 19. That’s twice as big as the TalkTalk loss was first thought to be.

Prevention, not protection, is what we should champion. Rather than protection after the event, MPs and the public must demand emphasis on prevention measures in our personal data use.

This week sees more debate on how and why the government will legislate to have more powers to capture more data about all the people in the country. But are government policy, process and practices fit to handle our personal data, what they do with it and who they give it to?

Population-wide data gathering and surveillance in any of its many forms is not any less real just because you don’t see it. Children’s health, schools, increases in the volume of tax data collection. We don’t discuss enough how these policies can be used every day without the right oversight. MPs are like the conservative parents not comfortable talking to their teens about sleeping with someone. Just because you don’t know, it doesn’t mean they’re not doing it. [15] It just means you don’t want to know, because if you find out they’re not doing it safely, you’ll have to do something about it.

And it might be awkward. (Meanwhile in schools real, meaningful PHSE has been left off the curriculum.)

Mr. Vaizey asked in the Commons for suggestions for improvement.

My suggestion is this. How government manages data has many options. But the principle should be simple. Our personal data need not only to be protected, but also not to be exposed to unnecessary risk in the first place, by commercial or state bodies. Doing nothing is not an option.

Let’s Talk about more than TalkTalk

Teens will be teens. If commercial companies can’t manage their systems well enough to prevent a child from successfully hacking them, then it’s not enough to point at criminal behaviour. There is fault to learn from on all sides, in commercial and state uses of personal data.

There is talk of new, and bigger, data sharing plans. [16]

Will the government wait and keep its fingers crossed each month to see whether our data are used safely in unsecured settings by some of the unknown partners data might be onwardly shared with, hoping we won’t find out and they won’t need to talk about it, or will it have a grown-up public debate based on public education?

Will it put preventative measures in place appropriate to the sensitivity and volume of the data it is itself responsible for?

Will moving forward with new plans mean safer practices?

If government genuinely wants our administrative data at the heart of digital government fit for the 21st century, it must first understand how all government departments collect and use public data. And it must educate the public in this and commercial data use.

We need a fundamental shift in the way the government respects public opinion, and a shift towards legal and privacy compliance, both of which are lacking.

Let’s not talk about TalkTalk. Let’s have meaningful grown up debate with genuine engagement. Let’s talk about prevention measures in our data protection. Let’s talk about consent. It’s personal.

******

[1] Questions for TalkTalk: http://www.bbc.co.uk/news/technology-34636308

[2] Hansard: http://www.publications.parliament.uk/pa/cm201516/cmhansrd/cm151026/debtext/151026-0001.htm#15102612000004

[3] TalkTalk update: http://www.talktalkgroup.com/press/press-releases/2015/cyber-attack-update-tuesday-october-30-2015.aspx

[4] The Cyber Security Programme: http://www.civilserviceworld.com/articles/feature/depth-look-national-cyber-security-programme

[5] Paul reviews TalkTalk; https://paul.reviews/value-security-avoid-talktalk/

[6] https://ico.org.uk/for-organisations/guide-to-data-protection/conditions-for-processing/

[7] Let’s talk Consequences: the consequences of current failures to meet customers’ reasonable expectations of acceptable risk, are low compared with elsewhere.  As John Nicolson (East Dunbartonshire) SNP pointed out in the debate, “In the United States, AT&T was fined £17 million for failing to protect customer data. In the United Kingdom, the ICO can only place fines of up to £500,000. For a company that received an annual revenue of nearly £1.8 billion, a fine that small will clearly not be terrifying. The regulation of telecoms must be strengthened to protect consumers.”

[8] Let’s talk education: FOI request revealing samples of some individual level data released to members of the press: http://www.theyworkforyou.com/debates/?id=2015-10-26b.32.0

The CMA brought out a report in June on the use of consumer data, so the topic should be familiar in parliament, but little engagement has come about as a result. It suggested the benefit:

“will only be realised if consumers continue to provide data and this relies on them being able to trust the firms that collect and use it”, and that “consumers should know when and how their data is being collected and used and be able to decide whether and how to participate. They should have access to information from firms about how they are collecting, storing and using data.”

[9] Let’s Talk Public Trust – are the bodies involved trustworthy? Government lacks an effective data policy and is resistant to change. Yet it wants to collect ever more personal and individual level data, for unknown purposes, from the majority of 60m people, with an unprecedented PR campaign. When I heard the words ‘we want a mature debate’ it was reminiscent of HSCIC’s ‘intelligent grown up debate’ requested by Kingsley Manning, in a speech when he admitted lack of public knowledge was akin to a measure of past success, and that effectively they would rather have kept the use of population-wide health data ‘below the radar’.

Change: We need change, the old way after all, didn’t work, according to Minister Matt Hancock: “The old model of government has failed, so we will build a new one.” I’d like to see what that new one will look like. Does he mean to expand only data sharing policy, or the powers of the civil service?

[10] National Pupil Database detailed data releases to third parties https://www.whatdotheyknow.com/request/pupil_data_national_pupil_databa

[11] http://adrn.ac.uk/news-events/latest-news/adrn-rssevent

[12] https://jenpersson.com/public-trust-datasharing-nib-caredata-change/

[13] https://www.liberty-human-rights.org.uk/human-rights/privacy/state-surveillance

[14] http://www.computerweekly.com/news/4500256274/Government-will-tackle-barriers-to-sharing-and-linking-data-says-Cabinet-Office-minister-Hancock

George and the Chinese Dragon. Public spending and the cost of dignity.

In 2005 I sat early one morning in an enormous international hotel chain’s breakfast room, in Guangzhou.

Most of the men fetching two adult breakfasts from the vast buffet wore cream coloured chinos and button down shirts. They sported standardised haircuts with hints of silver. Stylish women sat at impeccable tables, cradling babies in pink hats or spoon feeding small children.

On a busy downtown street, close to the Chinese embassy, the hotel was popular with American parents-to-be.

My local colleague explained to me later, that her sadness over thousands of Chinese daughters exported from a one-child policy nation in 2005 was countered by the hope that loving foreign families were found for them.

She repeated, with dignity, party mantras and explanations drilled at school. She had a good job, but still she could not afford children. Too little land, too few schools, and healthcare too expensive. She sighed. Her eyes lit up as she looked at my bump and asked if I knew “girl or boy?” If it were a girl, she added, how beautiful she would be with large open eyes. We laughed about the contradictory artificial stereotypes of beauty, from East and West, each nation wanting what the other did not have.

Ten years later, in 2015, British Ministers have often been drawing on China as a model for us to follow; in health, education and for the economy. Seeking something they think we do not have. Seeking to instill ‘discipline, hard-working, economy-first’ spin.

At the recent ResearchEd conference, Nick Gibb, [1] Minister of State at the Department for Education, talked for several minutes about the BBC documentary “Are Our Kids Tough Enough” and the positive values of China and its education system. It supposedly triggered ‘a global debate’ when British pupils experienced “the harsh discipline of a Chinese classroom”.

The Global Times praised the Chancellor, Mr. Osborne, as “the first Western official in recent years who focused on business potential rather than raising a magnifying glass to the ‘human rights issue’” during his recent visit [2], when he put economic growth first.

Jeremy Hunt, Secretary of State for Health, was quoted at the party conference saying that he saw tax credit changes as necessary for a cultural shift. He suggested we should adopt the ‘hardworking’ character of the Chinese.

An attribute that is as artificial as it is inane.

Collective efforts over the last year or more to project ‘hard-working’ into politics as a measure of contribution to UK society have become more concentrated, especially around the election. People who are not working are undermined by statements implying that the less productive someone is for the nation, the less value they have as a person. Comments are repeated in a sustained drip feed, from Lord Freud’s remarks a year ago that disabled workers were not worth the full wage, to Hancock’s recent revelation that the decision not to apply the new minimum wage to the under 25s from 2016 “was an active policy choice.” Mr. Hunt spoke about dignity being self-earned, not dependent on richness per se, but being self-made.

“If that £16,500 is either a high proportion or entirely through the benefit system you are trapped. It matters if you are earning that yourself, because if you are earning it yourself you are independent and that is the first step towards self-respect.”

This choice to value some people’s work less than others’, and the acceptance of spin, is concerning.

What values are Ministers suggesting we adopt in the relentless drive for economic growth? [3] When our Ministers ignore human rights and laud Chinese values in a bid to be seen as an accepting trading partner, I wonder at what cost to our international integrity?

Simple things we take for granted, such as unimpeded internet access, are not available in China. In Chinese society, hard working is not seen as such a positive value. It is a tolerated norm, and sometimes an imposed one at that, where parents leave their child with grandparents in the countryside and visit twice a year on leave from their city-based jobs. Our Ministers’ version of hardworking Chinese life is idyllic spin compared with reality.

China is about to launch a scheme to measure sincerity and how each citizen compares with others in terms of compliance and dissent. Using people’s social media data to determine their ‘worth’ is an ominous prospect.

Mark Kitto’s 2012 piece on why you’ll never be Chinese is a great read. I agree, “there are hundreds of well-rounded, wise Chinese people with a modern world view, people who could, and would willingly, help their motherland face the issues that are growing into state-shaking problems.”

Despite such institutional issues, Mr. Osborne appears to have an open door for deals with the Chinese state. Few people missed the announcements he made in China that HS2 will likely be built by Chinese investors, despite home grown opposition. Ministers and EDF have reportedly agreed on a controversial £25bn development of the Hinkley Point C nuclear plant, with most of the upfront costs provided by Chinese companies, although “we are the builders.” [4]

With large parts of UK utilities’ infrastructure founded on Chinese-sourced spending in the UK, it’s hard to see who ‘we’ are meant to be. [5] And that infrastructure is a two-way trade. Just as Chinese money has bought many of our previously publicly owned utilities, we have sold a staggeringly long list of security related items to the Chinese state. [6]

In July 2014 the four House of Commons Select Committees: “repeated their previous Recommendation that the Government should apply significantly more cautious judgements when considering arms export licence applications for goods to authoritarian regimes which might be used for internal repression.” 

[image: UK to China exports]
Chris Patten, former Hong Kong Governor, criticised Osborne’s lax attitude to human rights, but individual and collective criticism appears to go unheard.

This perhaps is one measure of British economic growth at all costs. Not only is Britain supplying equipment that might be used for internal repression, but the Minister appears to have adopted a singularly authoritarian attitude, and the democratic legitimacy of the Committees has been ignored. That is concerning.

How upcoming cuts will be packaged is clear. We will find out what “hard working families” means to the Treasury. We need to work harder, like the Chinese, and through this approach we will earn our dignity. No doubt rebuilding Britain, on great British values. Welfare will continue to be labelled as benefits, and with it a value judgement equating economic productivity with human worth. Cutting welfare will be packaged as helping people to help themselves out of self-inflicted ‘bad’ situations, in which they have lost their self worth or found an easy ‘lifestyle choice’.

As welfare spending is reduced, the percentage spent with big service providers has risen after reforms, and private companies profit where money was once recycled in the state system. There is a glaring gap in evidence for some of these decisions.

What is next? If for example, universal benefits such as Universal Infant Free School Meals are cut, it will take food literally from the mouths of babes, in families who cannot afford to lose hot school dinners, living in poverty but not qualifying for welfare. The policy may be flawed because Free School Meals based on pupil premium entitlement does not cater for all who need it, but catering for none of them is not an improvement.

Ministers focus the arguments of worth and value around the individual. Doctors have been told to work harder. Schools have been told to offer more childcare to enable parents to work harder. How much harder can we really expect people to work? Is the Treasury’s vision for us all to work more to pay more taxes? It is flawed if, by adopting the political aim, the vast majority of people take home little more pay and sacrifice spare time with friends and loved ones, running their health into the ground as a result.

The Chinese have a proverb that shows a wisdom missing from Ministers’ recent comments: “Time is money, and it is difficult for one to use money to get time.”

I often remember the hotel breakfast room, and wonder how many mothers, in how many cities in China, miss their daughters, whom they could not afford to keep, through fear of the potential consequences. How many young men who would want women in their lives find the gender imbalance a barrier to meeting someone. How many are struggling to care for elderly parents.

Not all costs can be measured in money.

The grandmother I met on the station platform last Wednesday had looked after her grandchild for half the day and has him overnight weekdays, so that Mum can first sleep and then work a night shift stacking shelves. That’s her daughter’s second shift of the day. She hardly sees her son.  The husband works the shelf-stacking third shift to supplement his income as a mechanic.

That is a real British family.

Those parents can’t work any harder. Their family is already at breaking point. They take no state welfare.  They don’t qualify for any support.

Must we be so driven to become ‘hard working families’ that our children will barely know their parents? Are hungry pupils to make do as best they can at lunchtime? Are these side effects children must be prepared to accept, if their parents work ever harder to earn enough to live and earn their ‘dignity’ as defined by the Secretary of State for Health?

Dignity is surely inherent in being human. Not something you earn by what you do. At the heart of human rights is the belief that everybody should be treated equally and with dignity – no matter what their circumstances.

If we adopt the Ministers’ be-like-the-Chinese mantra, and accept human dignity is something that must be earned, we should ask now what price have they put on it?

MPs must slay the dragon of spin and demand transparency over the total welfare budget and government spend with its delivery providers. There is a high public cost to further public spending cuts. In order to justify them, it is not the public who must work harder, but the Treasury, in order to deliver a transparent business case for what the further sacrifices of ‘hard working families’ will achieve.

 

###

[1] ResearchEd conference, Nick Gibb, Minister of State at the Department for Education

[2] New Statesman

[3] https://www.opendemocracy.net/ournhs/jen-persson/why-is-government-putting-health-watchdogs-on-leash-of-%E2%80%98promoting-economic-growth

[4] The Sun: George Osborne party conference speech with 25 mentions of builders: “We are the builders”, said Mr. Osborne.

[5] The Drum: Li Ka Shing and British investment https://www.thedrum.com/opinion/2015/01/28/meet-li-ka-shing-man-o2-his-sights-has-quietly-become-one-britains-biggest

[6] Arms exports to authoritarian regimes and countries of concern worldwide The Committees http://www.publications.parliament.uk/pa/cm201415/cmselect/cmquad/608/60805.htm#a104

 

[image: Wassily Kandinsky ca 1911, George and the Dragon]

Digital revolution by design: infrastructures and the world we want

1. Digital revolution by design: building for change and people
2. Digital revolution by design: barriers by design
3. Digital revolution by design: infrastructures and the fruits of knowledge

This is Part 4.  Infrastructures and the world we want

At a high level, physical network infrastructures enable data transfer from one place to another, and average users perceive little of it.

In the wider world of Internet infrastructure today, this week might be looked back on as, to use a horrible cliché, a game changer. A two-tier Internet traffic system could be coming to Europe, which would destroy a founding principle of equality: all traffic is created equal.

In other news, Facebook announced it will open an office in the toe of Africa, a foothold on a potential market of a billion people.

Facebook’s Internet.org initiative sees a further ‘magnificent seven’ companies working together. Two of them, Ericsson and Nokia, will between them have “an effective lock down on the U.S. market,” unless another viable network competitor emerges. And massive reach worldwide.

In Africa today there is a hodge podge of operators, and I’ll be interested to see how much effect the boys ganging up under the protection of everybody’s big brother ‘Facebook’ will have on local markets.

And they’re not alone in wanting in on African action.

Whatever infrastructures China is building on and under the ground of the African continent, or whatever ludicrous showcase gifts it is donating, how it is doing so has not gone unnoticed. The Chinese ethics of working and their environmental standards can provoke local disquiet.

Will Facebook’s decision makers shape up to offer Africa an ethical package that could include not only a social network, but a physical one, managing content delivery in the inner workings of tubes and pipes?

In Europe the data connections within and connecting the continent are shifting, as TTIP, CETA and TISA shape how our data and knowledge will be shared or reserved or copyrighted by multinational corporations.

I hope we will ensure transparency is designed into these supra-national agreements on private ownership of public firms.

We don’t want to find commercial companies withholding information, such as their cyber security planning and infrastructure investments, in the name of commercial protectionism but at a public cost.

The public has opportunities now, as these agreements are being drawn up, that we may not get again soon.

Not only for the physical constructions, the often networked infrastructures, but intangible infrastructures of principles and power, co-dependencies around a physical system; the legal and ethical infrastructures of ownership, governance and accountability.

The Open Data Institute has just launched a call for the promotion of understanding around our own data infrastructures:

“A strong data infrastructure will increase interoperability and collaboration, efficiency and productivity in public and private sectors, nationally and internationally.”

Sounds like something we want to get right in, and outside, the UK.

Governance of data is physically geographical, through country-specific legislation, as well as supra-national, such as Europe-wide data protection.

In some ways these are outdated legal concepts in a borderless digital age, but they are at least one level over which there is manageable oversight, and through which citizens should be able to call companies and the State to account.

Yet that accountability is questionable when laws seem to be bypassed under the banner of surveillance.

As a result people have somewhat lost trust in national bodies to do the right thing. We want to share data for public good but not for commercial exploitation. And we’re not sure who to trust with it.

Data governance of contractual terms is part of the infrastructure needed to prevent exploitation and to enable, not restrict, sharing. And it needs to catch up with apps whose terms and conditions can change after a user has enrolled.

That comes back down to the individual, and some more ideas on those personal infrastructures are in the previous post.

Can we build lasting foundations fit for a digital future?

Before launching into haphazard steps towards a digital future in 2020, the NIB/NHS decision makers need to consider the wider infrastructures in which it is set, and understand what ethical compass they are steering by.

Can there be oversight to make national and supra-national infrastructures legally regulated and bindingly interoperable, and provider and developer Ts and Cs easily understood?

Is it possible to regulate only that which is offered or sold through UK based companies or web providers and what access should be enabled or barriers designed in?

Whose interests should data and knowledge created from data serve?

Any state paid initiative building a part of the digital future for our citizens must decide, is it to be for public good or for private profit?

NHS England’s digital health vision includes: “clinical decision support to be auto populated with existing healthcare information, to take real time feeds of biometric data, and to consider genomics data in the future.”  [NIB plans, Nov 2014]

In that 66-page document, while it talks of data and trust and cyber security, ethics is not mentioned once. The ambition is to create ‘health-as-a-platform’ and its focus is on tech, not on principles.

‘2020’ is the goal, and it’s not a far away future at all if counted as 1175 working days from now.
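As a rough check of that figure, here is a short sketch that counts weekdays between an assumed mid-2015 start date and 1 January 2020, ignoring public holidays, so the result is approximate.

```python
# Rough check of the "1175 working days to 2020" figure: count weekdays
# between an assumed start of 1 July 2015 and 1 January 2020, ignoring
# public holidays.
from datetime import date, timedelta

def working_days(start, end):
    days = 0
    current = start
    while current < end:
        if current.weekday() < 5:  # Monday=0 .. Friday=4
            days += 1
        current += timedelta(days=1)
    return days

print(working_days(date(2015, 7, 1), date(2020, 1, 1)))  # 1175 with this assumed start
```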

By 2020 we may have moved on or away in a new digital direction entirely or to other new standards of network or technology. On what can we build?

Facebook’s founder sees a futuristic role for biometric data used in communication. Will he drive it? Should we want him to?

Detail will change, but ethical principles could better define the framework for development promoting the best of innovation long term and protect citizens from commercial exploitation. We need them now.

When Tim Berners-Lee called for a Magna Carta on the world wide web he asked for help to achieve the web he wants.

I think it’s about more than the web he wants. This fight is not only for net neutrality. It’s not only about challenging the internet of things to have standards, ethics and quality that shape a fair future for all.

While we shape the web we want, we shape the world we want.

That’s pretty exciting, and we’d better try to get it right.

******

1. Digital revolution by design: building for change and people
2. Digital revolution by design: barriers by design
3. Digital revolution by design: infrastructures and the fruits of knowledge
4. Digital revolution by design: infrastructures and the world we want

 

Digital revolution by design: infrastructures and the fruits of knowledge

Since the beginning of time and the story of the Garden of Eden, man has found a way to share knowledge and its power.

Modern digital tools have become the everyday way to access knowledge for many across the world, giving quick access to information and sharing power more fairly.

In this third part of my thoughts on digital revolution by design, triggered by the #kfdigi15 event on June 16-17, I’ve been considering some of the constructs we have built; those we accept and those that could be changed, given the chance, to build a better digital future.

Not only the physical constructions, the often networked infrastructures, but intangible infrastructures of principles and power, co-dependencies around a physical system; the legal and ethical infrastructures of ownership, governance and accountability.

Our personal data flow in systems behind the screens, at the end of our fingertips. Controlled in frameworks designed by providers and manufacturers, government and commercial agencies.

Increasingly in digital discussions we hear that the data subject, the citizen, will control their own data.

But if it is on the terms and conditions set by others, how much control is real and how much is the talk of a consenting citizen only a fig leaf behind which any real control is still held by the developer or organisation providing the service?

When data are used, they are turned into knowledge, business intelligence that adds value to aid informed decision making. By human or machine.

How much knowledge is too much knowledge for the Internet of Things to build about its users? As Chris Matyszczyk wrote:

“We have all agreed to this. We click on ‘I agree’ with no thought of consequences, only of our convenience.”

Is not knowing what we have agreed to our fault, or the responsibility of the provider who’d rather we didn’t know?

Citizens’ rights are undermined in unethical interactions if we are exploited by easy one-click access and exchange our wealth of data at unseen cost. Can it be regulated to promote, not stifle innovation?

How can we get those rights back and how will ‘doing the right thing’ help shape and control the digital future we all want?

The infrastructures we live inside

As Andrew Chitty says in this HSJ article: “People live more mobile and transient lives and, as a result, expect more flexible, integrated, informed health services.”

To manage that, do we need to know how systems work, how sharing works, and trust the functionality of what we are not being told and don’t see behind the screens?

At the personal level, whether we sign up for the social network, use a platform for free email, or connect our home and ourselves in the Internet of Things, we each exchange our personal data with varying degrees of willingness. There is often no alternative if we want to use the tool.

As more social and consensual ‘let the user decide’ models are introduced, we hear it’s all about the user in control, but the reality is that users still have to know what they are signing up for.

In new models of platform identity sign on, and tools that track and mine our personal data to the nth degree that we share with the system, both the paternalistic models of the past and the new models of personal control and social sharing are merging.

Take a Fitbit as an example. It requires a named account and data sharing with the central app provider. You can choose whether or not to enable ‘social sharing’ with nominated friends whom you want to share your boasts or failures with. You can opt out of only that part.

I fear we are seeing the creation of a Leviathan-sized monster that will be impossible to control, and just as scary as today’s paternalistic data mis-management. Some data are held by the provider and invisibly shared with third parties beyond our control, some we share with friends, and some are stored only on our device.
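That three-way split can be pictured as a simple data map. The sketch below is an illustrative model only, not Fitbit’s actual schema or API; the category names and recipients are assumptions, there to show that only one tier of the data stays fully under the user’s control.

```python
# Illustrative model only, not Fitbit's actual schema: a rough map of which
# data a wearable collects and who can see each category. Only the last
# tier stays fully under the user's control.
data_map = {
    "steps, sleep, heart rate": {
        "held_by": "app provider",
        "shared_with": ["analytics partners", "advertisers"],  # often invisible to the user
        "user_can_opt_out": False,  # an account and provider sharing are required to use the tool
    },
    "activity boasts and failures": {
        "held_by": "app provider",
        "shared_with": ["nominated friends"],
        "user_can_opt_out": True,   # the 'social sharing' toggle
    },
    "raw sensor buffer": {
        "held_by": "the device itself",
        "shared_with": [],
        "user_can_opt_out": True,
    },
}

for category, terms in data_map.items():
    control = "user-controlled" if terms["user_can_opt_out"] else "provider-controlled"
    print(f"{category}: {control}, shared with {terms['shared_with'] or 'nobody'}")
```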

While data are shared with third parties without our active knowledge, the same issue threatens to derail consumer products, as well as commercial ventures at national scale, and with them the public interest. Loss of trust in what is done behind the settings.

Society has somehow seen privacy lost as the default setting. It has become something to have to demand and defend.

“If there is one persistent concern about personal technology that nearly everybody expresses, it is privacy. In eleven of the twelve countries surveyed, with India the only exception, respondents say that technology’s effect on privacy was mostly negative.” [Microsoft survey 2015, of  12,002 internet users]

There’s one part of that I disagree with. It’s not the effect of technology itself, but the designers’ or developers’ decision making that affects privacy. People have a choice in how to design and regulate how privacy is affected; technology does not.

Citizens have vastly differing knowledge bases of how data are used and how to best interact with technology. But if they are told they own it, then all the decision making framework should be theirs too.

By giving consumers the impression of control, the shock is going to be all the greater if a breach should ever reveal where fitness wearable users slept and with whom, at what address, and were active for how long. Could a divorce case demand it?

Fitbit users have already found their data used by police and in the courtroom – probably not what they expected when they signed up to a better health tool.  Others may see benefits that could harm others by default who are excluded from accessing the tool.

Some at org level still seem to find this hard to understand but it is simple:
No trust = no data = no knowledge for commercial, public or personal use and it will restrict the very innovation you want to drive.

Google Gmail users have to make 10+ clicks to restrict all ads and information sharing based on their privacy and ad account settings. The default is ad tailoring and data mining. Many don’t even know it is possible to change the settings, and it’s not intuitive how to.
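The underlying design issue is the default: whichever settings ship switched on are the settings most accounts keep. A minimal sketch of that difference follows, using hypothetical setting names rather than Google’s actual account options.

```python
# Hypothetical setting names, not Google's actual options: the point is that
# whatever ships as the default is what most accounts end up with, because
# most users never change anything.
OPT_OUT_DEFAULTS = {         # data mining on unless the user digs through settings
    "ad_personalisation": True,
    "share_activity_with_partners": True,
}

PRIVACY_BY_DEFAULT = {       # the same product with the burden reversed
    "ad_personalisation": False,
    "share_activity_with_partners": False,
}

def effective_settings(defaults, user_changes):
    """Merge the user's explicit choices over the defaults."""
    return {**defaults, **user_changes}

print(effective_settings(OPT_OUT_DEFAULTS, {}))      # what most accounts end up with today
print(effective_settings(PRIVACY_BY_DEFAULT, {}))    # what privacy by design would give them
```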

Firms need to consider their own reputational risk if users feel these policies are not explicit and are exploitation. Those caught ‘cheating’ users can get a very public slap on the wrist.

Let the data subjects rule, but on whose terms and conditions?

The question every citizen signing up to digital agreements should ask is: what is in the small print, and how will I know if it changes? Fair processing should offer data protection, but isn’t effective.

If you don’t have access to information, you make decisions based on a lack of information or misinformation. Decisions which may not be in your own best interest or that of others. Others can exploit that.

And realistically and fairly, organisations can’t expect citizens to read pages and pages of Ts&Cs. In addition, we don’t know what we don’t know. Information that is missing can be as vital to understand as that provided. ‘Third parties’ sharing – who exactly does that mean?

The concept of an informed citizenry is crucial to informed decision making but it must be within a framework of reasonable expectation.

How do we grow the fruits of knowledge in a digital future?

Real cash investment is needed now for a well-designed digital future, robust for cybersecurity, supporting enforceable governance and oversight. Collaboration on standards and thorough change plans. I’m sure there is much more, but this is a start.

Figurative investment is needed in educating citizens about the technology that should serve us, not imprison us in constructs we do not understand but cannot live without.

We must avoid the chaos and harm and wasted opportunity of designing massive state-run programmes in which people do not want to participate or cannot participate due to barriers of access to tools. Avoid a Babel of digital blasphemy in which the only wise solution might be to knock it down and start again.

Our legislators and regulators must take up their roles to get data use, and digital contract terms and conditions right for citizens, with simplicity and oversight. In doing so they will enable better protection against risks for commercial and non-profit orgs, while putting data subjects first.

To achieve greatness in a digital future we need: ‘people speaking the same language, then nothing they plan to do will be impossible for them’.

Ethics. It’s more than just a county east of London.

Let’s challenge decision makers to plant the best of what is human at the heart of the technology revolution: doing the right thing.

And from data, we will see the fruits of knowledge flourish.

******

1. Digital revolution by design: building for change and people
2. Digital revolution by design: barriers by design
3. Digital revolution by design: infrastructures and the fruits of knowledge
4. Digital revolution by design: infrastructures and the world we want