
Can new datasharing laws win social legitimacy, public trust and support without public engagement?

I’ve been struck by stories I’ve heard on the datasharing consultation, on data science, and on data infrastructures as part of ‘government as a platform’ (#GaaPFuture) in recent weeks. The audio recorded by the Royal Statistical Society on March 17th is excellent, and there were some good questions asked.

There were even questions from insurance-backed panellists about opening up more data for commercial users, calls for journalists to be treated as accredited researchers, and calls to include health data sharing. These are three things that some stakeholders, all users of data, feel are missing from the consultation, and possibly among those with the most widespread public concern and lowest levels of public trust. [1]

What I feel is missing in consultation discussions are:

  1. a representative range of independent public voices
  2. a compelling story of need – why tailored public services benefit the citizens from whom data are taken, not only the data users
  3. the impacts we expect to see in local government
  4. any cost/risk/benefit assessment of those impacts, including for citizens
  5. how the changes will be independently evaluated – as some are to be reviewed

The Royal Statistical Society and the ODI have published good summaries of their thoughts here, geared more towards the statistical and research aspects of data, infrastructure and the consultation.

I focus here on the other strands, those that use identifiable data for targeted interventions: tailored public services, debt, fraud, and energy companies' use. I think we talk too little of people, and of real needs.

Why the State wants more datasharing is not yet a compelling story, and the public need and benefit seem weak.

So far, the creation of new data intermediaries, giving copies of our personal data to other public bodies – and let's be clear that this often means via commercial representatives like G4S, Atos, management consultancies and more – has yet to convince me of true public needs for the people, versus wants from parts of the State.

What the consultation hopes to achieve is new powers in law, to give increased datasharing legal authority. However, this alone will not bring about the social legitimacy of datasharing that the consultation appears to seek through 'open policy making'.

Legitimacy is badly needed if there is to be public and professional support for change and for increased use of our personal data held by the State – support which is missing today, as care.data starkly exposed. [2]

The gap between Social Legitimacy and the Law

Almost 8 months ago now, before I knew about the datasharing consultation work-in-progress, I suggested to BIS that there was an opportunity for the UK to drive excellence in public involvement in the use of public data by getting real engagement, through pro-active consent.

The carrot for this is achieving the goal that government wants – greater legal clarity, and the use of a significant number of consented people's personal data for a complex range of secondary uses as a secondary benefit.

It was ignored.

If some feel entitled to infringe on citizens' privacy through a new legal gateway because they believe the public benefit outweighs private rights, then they must also take on the increased balance of risk in doing so, and a responsibility to do so safely. It is, in principle, a slippery slope. Any new safeguards and ethics for how this will be done are, however, unclear in those data strands which are for targeted individual interventions. Especially if predictive.

Upcoming discussions on codes of practice [which have still to be shared] should demonstrate how this is to happen in practice, but codes are not sufficient. Enabling laws will be pushed to the borderline of what is legal, and beyond what is ethical.

In England, who would have thought that the 2013 changes that permitted individual children's data to be given to third parties [3] for educational purposes would mean giving highly sensitive, identifiable data to journalists without pupil or parental consent? The wording allows it. It is legal. However, it fails the Data Protection Act's requirement of fair processing. Above all, it lacks social legitimacy and common sense.

In Scotland, there is current anger over the 'named person' laws, which lack both professional and public support and intrude on privacy. The concerns raised there should be lessons to learn from in England.

Common sense says laws must take into account social legitimacy.

We have been told at the open policy meetings that this change will not remove the need for informed consent. To be informed means creating the opportunity for proper communications, and also knowing how you can use the service without coercion, i.e. not having to consent to secondary data uses in order to get the service, and knowing you can withdraw consent at any later date. How will that be offered, along with ways of achieving the removal of data after sharing?

The stick for change is the legal duty to fair processing that the recent 2015 CJEU ruling [4] reiterated and waved about. This is not just a nice-to-have, but State bodies' responsibility to inform citizens when their personal data are used for purposes other than those for which those data had initially been consented and given. New legislation will not remove this legal duty.

How will it be achieved without public engagement?

Engagement is not PR

Failure to act on what you hear from listening to the public is costly.

Engagement is not done *to* people; don't think 'explaining why we need the data and its public benefit' will work. Policy makers must engage with fears, not seek to dismiss or diminish them, but acknowledge and mitigate them by designing technically acceptable solutions. Solutions that enable datasharing in a strong framework of privacy and ethics, not ones that see these concepts as barriers. Solutions that have social legitimacy because people support them.

Mr Hunt's promised February 2014 opt-out of anonymised data being used in health research has yet to be put in place, and has had immeasurable costs in delayed public research and in public trust.

How long before people consider suing the DH as data controller for misuse? From where does the arrogance stem that decides to ignore legal rights, moral rights and the public opinion of more people than voted for the Minister responsible for its delay?

 

This attitude is what failed care.data, and the harm to public trust and to confidence in researchers' continued access to data is ongoing.

The same failure was pointed out by the public members of the tiny Genomics England public engagement meeting two years ago, in March 2014, called to respond to concerns over the lack of engagement and the potential harm to existing research. The comms lead suggested that the new model of commercialisation of the human genome in England, to be embedded in the NHS by 2017 as standard clinical practice, was like steam trains in Victorian England opening up the country to new commercial markets. The analogy was felt by the lay attendees to be, and I quote, 'ridiculous.'

Exploiting confidential personal data for public good must have support, and good two-way engagement if it is to get that support, and what is said and agreed must be acted on if it is to be trustworthy.

Policy makers must take into account broad public opinion, and that is unlikely to be submitted to a Parliamentary consultation. (Personally, I first knew such processes existed only when care.data was brought before the Select Committee in 2014.) We already know what many in the public think about sharing their confidential data from the work on care.data, and the objections to third-party access and to lack of consent. Just because some policy makers don't like what was said doesn't make that public opinion any less valid.

We must bring to the table the public voice from past but recent public engagement work on administrative datasharing [5], the voice of the non-research community, and the voice not of the stakeholders who will use the data, but of the 'data subjects' – the public whose data are to be used.

Policy Making must be built on Public Trust

Open policy making is not open just because it says it is. Who has been invited, who has participated, and how their views actually make a difference to content and implementation is what matters.

Adding controversial ideas at the last minute is terrible engagement; it makes the process less trustworthy and diminishes its legitimacy.

This last-minute change suggests some datasharing will be dictated despite critical views in the policy making, and without any public engagement. If so, we should ask policy makers: on what mandate?

Democracy depends on social legitimacy. Once you lose public trust, it is not easy to restore.

Can new datasharing laws win social legitimacy, public trust and support without public engagement?

In my next post I'll look at some of the public engagement work done on datasharing to date, and think about ethics in how data are applied.

*************

References:

[1] The Royal Statistical Society data trust deficit

[2] “The social licence for research: why care.data ran into trouble,” by Carter et al.

[3] FAQs: Campaign for safe and ethical National Pupil Data

[4] CJEU Bara 2015 Ruling – fair processing between public bodies

[5] Public Dialogues using Administrative data (ESRC / ADRN)

img credit: flickr.com/photos/internetarchivebookimages/

Destination smart-cities: design, desire and democracy (Part four)

Who is using all this Big Data? What decisions are being made on the back of it that we never see?

In everyday life and in the press it often seems that the general public does not understand data, and can easily be told things which we misinterpret.

There are tools in social media influencing public discussions and leading conversations in a different direction from the one they had taken, and they operate without regulation.

It is perhaps meaningful that pro-reform Wellington School last week opted out of one of the greatest uses of Big Data sharing in the UK: league tables. Citing their failures. Deciding they were, in fact, "a key driver for poor educational practice."

Most often we cannot tell from the data provided whether they show what we are told those Big Data should be telling us. And we can't tell if the data are accurate, genuine and reliable.

Yet big companies are making big money selling the dream that Big Data is the key to decision making. Cumulatively, through a lack of skills to spot inaccuracy and an inability to do the necessary interpretation, we're being misled by what we find in Big Data.

Being misled is devastating for public trust, as the botched beginnings of care.data showed in 2014. Trust has come to be understood as vital for a future based on datasharing. Public involvement in how we are used in Big Data in the future needs to include how our data are used, in order to trust they are used well. And interpreting those data well is vital. Those lessons of the past and present must be learned, and not forgotten.

It’s time to invest some time in thinking about safeguarding trust in the future, in the unknown, and the unseen.

We need to be told which private companies, like Cinven and FFT, have copies of datasets like HES, the entire 62 million national hospital records, or the NPD, our entire schools database population of 20 million, or even just its current cohort of 8+ million.

If the public is to trust the government and public bodies to use our data well, we need to know exactly how those data are used today, and what future plans others have for our personal data.

When we talk about public bodies sharing data they hold for administrative purposes, do we know which private companies this may mean in reality?

The UK government has big plans for big data sharing, sharing across all public bodies, some tailored for individual interventions.

While there are interesting opportunities for public benefit from at-scale systems, the public benefit is at risk not only from a lack of trust in how systems gather and use data, but also from interoperability getting lost in market competition.

Openness and transparency can be absent in public-private partnerships until things go wrong. Given the scale of smart-cities, we must have more than hope that data management and security will not be one of those things.

But how will we know if new plans are designed well, or not?

Who exactly holds and manages those data and where is the oversight of how they are being used?

Using Big Data to be predictive and personal

How do we define "best use of data" in "public services" right across the board, in a world in which the boundaries between private and public in the provision of services have become increasingly blurred?

UK researchers and police are already analysing big data for predictive factors at postcode level for those at risk of harm, for example by combining health and education data.

What has grown across the Atlantic is now spreading here. When I lived there I could already see some of what is deeply flawed.

When a system has been as institutionally and systemically racist in its policing and equity of punishment as in the US, years of cumulative data bias translate into 'heat lists' and mean that "communities of color will be systematically penalized by any risk assessment tool that uses criminal history as a legitimate criterion."

How can we ensure British policing does not pursue flawed predictive policies and methodologies, if we cannot see them?

What transparency is there in our use of predictive prisons and justice data?

What oversight will there be of the planned increase in the use of satellite tags, and of biometric access in prisons?

What policies can we have in place to hold data-driven decision-making processes accountable?

What tools do we need to seek redress for decisions made using flawed algorithms that are apparently indisputable?

Is government truly committed to being open and talking about how far the nudge unit work is incorporated into any government predictive data use? If not, why not?

There is a need for a broad debate on the direction of big data and predictive technology and whether the public understands and wants it. If we don't understand, it's time someone explained it.

If I can't opt out of O2 picking up my travel data ad infinitum on the Tube, I will opt out of their business model and try to find a less invasive provider. If I can't opt out of EE picking up my personal data as I move around Hyde Park, it won't be them.

Most people just want to be left alone and their space is personal.

A public consultation on smart-technology, and its growth into public space and effect on privacy could be insightful.

Feed me Seymour?

With the encroachment of integrated smart technology over our cities – our roads, our parking, our shopping, our parks, our classrooms, our TV and our entertainment, even our children's toys – surveillance and the sharing of information from systems we cannot see start to define what others may view, or decide about us, behind the scenes in everything we do.

As it expands city-wide, it will need to be watched closely if data are to be open for public benefit, yet not invade privacy, given that "the data stored in this infrastructure won't be confidential."

If the destination of digital in all parts of our lives is smart-cities then we have to collectively decide, what do we want, what do we design, and how do we keep it democratic?

What price is our freedom to decide how far its growth should reach into public space and private lives?

The cost of smart cities to individuals and the public is not what it costs in investment made by private conglomerates.

Already the cost of smart technology is privacy inside our homes, our finances, and autonomy of decision making.

Facebook and social media may run algorithms we never see that influence our mood or decision making. Influencing that decision making is significant enough when it's done through advertising encouraging us to decide which sausages to buy for our kids' tea.

It is even more significant when you’re talking about influencing voting.

Whoever influences the most voters wins an election. If we can't see the technology behind the influence, have we also lost sight of how democracy is decided? The power behind the mechanics of the cogs of Whitehall may weaken inexplicably as computer-driven decision making from the tech companies' hidden tools takes hold.

What opportunity and risk to “every part of government” does ever expanding digital bring?

The design and development of smart technology that makes decisions for us and about us lies in the hands of large private corporations, not government.

This means the public-interest values that could be built in by design, and their protection and oversight, are currently outside our control.

There is nothing to discourage companies – which have taken private information that is none of their business and, quite literally, made it their business – from collecting ever more data about us. It is outside our control.

We must plan by design for the values we hope for, for ethics, to be embedded in systems, in policies, in public planning and in oversight of service provision by all providers. And we must ensure that a fair framework of values is used when giving permission to private providers who operate in public spaces.

We must plan for transparency and interoperability.

We must plan by design for the safe use of data that does not choke creativity and innovation, but both protects and champions privacy as a fundamental building block of trust for these new relationships between providers of private and public services, private and public things, in private and public space.

If “digital is changing how we deliver every part of government,” and we want to “harness the best of digital and technology, and the best use of data to improve public services right across the board” then we must see integration in the planning of policy and its application.

Across the board “the best use of data” must truly value privacy, and enable us to keep our autonomy as individuals.

Without this, the cost of smart cities growing unchecked will be an ever-growing transfer of power to the funders behind corporations and campaign politics.

The ultimate price of this loss of privacy, will be democracy itself.

****

This is the conclusion to a four-part set of thoughts: on smart technology and data from the Sprint16 session (part one); a more in-depth look at "Smart systems and Public Services" (part two); the design and development of smart technology making "The Best Use of Data", looking at today in a UK company case study (part three); and this part four, on "The Best Use of Data" in predictions and the future.

The front door to our children’s personal data in schools

“EdTech UK will be a pro-active organisation building and accelerating a vibrant education and learning technology sector and leading new developments with our founding partners. It will also be a front door to government, educators, companies and investors from Britain and globally.”

Ian Fordham, CEO, EdTech UK

This front door is a gateway to our children's personal data, and through it some companies are coming into our schools and homes and taking our data without asking. And with that, our children lose control over their safeguarded digital identity. Forever.

Companies are all "committed to customer privacy" – in those privacy policies which exist at all. However, typically this means they also share your information with 'our affiliates, our licensors, our agents, our distributors and our suppliers', and their circles are wide and often in perpetuity. Many simply don't have a published policy.

Where do they store any data produced in the web session? Who may access it and use it for what purposes? Or how may they use the personal data associated with staff signing up with payment details?

According to research from London & Partners, championed by Boris Johnson, Martha Lane-Fox and others in EdTech, education is one of the fastest growing tech sectors in Britain and is worth £45bn globally; a number set to reach a staggering £129bn by 2020. And perhaps the EdTech diagrams in US dollars show where the UK plans to draw companies from. If you build it, they will come.

The enthusiasm of some US EdTech-type entrepreneurs I have met or listened to speak is akin to religious fervour. Such is their drive for tech, however, that they appear to forget that education is all about the child. Individual children. Not cohorts, or workforces. And even when they do remember, it can be sincerely said, but lacks substance when you examine policies in practice.

How is the DfE measuring the cost and benefit of tech and its applications in education?

Is anyone willing to say that not all tech is good tech, and not every application is a wise application? That because every child is unique, not every app is one-size-fits-all?

My 7-year-old got so caught up in the game and in mastery of the app her class was prescribed for homework in the past that she couldn't master the maths, and it harmed her confidence. (Imagine something like this: clicking on the two correct sheep with numbers stamped on them that together add up to 12, for example, before they fall off and die.)

She has no problem with maths. Nor doing sums under pressure. She told me happily today she’d come joint second in a speed tables test. That particular app style simply doesn’t suit her.

I wonder if other children and parents find the same and if so, how would we know if these apps do more harm than good?

Nearly 300,000 young people in Britain have an anxiety disorder according to the Royal College of Psychiatrists. Feeling watched all the time on- and offline is unlikely to make anxiety any better.

How can the public and parents know that edTech which comes into the home with their children, is behaviourally sound?

How can the public and parents know that edTech which affects their children, is ethically sound in both security and application?

Where is the measured realism in the providers' and policy makers' fervour, when both seek to marketise edTech and our personal data for the good of the economy, and 'in the public interest'?

Just because we can does not always mean we should. Simply because data linkage is feasible, even if it brings public benefit, does not mean point-blank that it will always be in our best interest.

In whose best interest is it anyway?

Right now, I'm not convinced that the digital policies at the heart of the Department for Education, the EdTech drivers or many providers have our children's best interests at heart at all. It's all about the economy; when talking, if at all, about children using the technology, many talk only of 'preparing the workforce'.

Are children and parents asked to consent at individual level to the terms and conditions of the company and told what data will be extracted from the school systems about their child? Or do schools simply sign up their children and parents en masse, seeing it as part of their homework management system?

How much 'real' personal data they use varies. Some use only pseudo-IDs assigned by the teacher. Others log, store and share everything pupils do, linked to their ID or real email address, store performance over time and provide personalised reports of results.
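To make that difference concrete, here is a minimal sketch of the two approaches. The field names and record shapes are invented for illustration, not taken from any real provider's schema:

```python
# Hypothetical illustration of two ways an app might identify a pupil.
# Field names are invented for the example, not from any real provider.

def pseudonymous_record(class_code: str, seat_number: int, score: int) -> dict:
    """Activity stored against a teacher-assigned pseudo-ID: the app never
    holds the pupil's name or email, only a label meaningful within the class."""
    pseudo_id = f"{class_code}-pupil-{seat_number:02d}"
    return {"pupil": pseudo_id, "score": score}

def identifying_record(real_email: str, score: int, session_log: list) -> dict:
    """Activity stored against a real identifier plus a full activity log:
    far more useful for profiling over time, and far harder ever to anonymise."""
    return {"pupil": real_email, "score": score, "activity": session_log}

print(pseudonymous_record("7B", 14, 9))
print(identifying_record("pupil@example.com", 9, ["login", "q1_wrong", "q1_right"]))
```

The educational value delivered can be identical in both cases; the difference is how much of a child's identity leaves the classroom along with it.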

Teachers and schools have a vital role to play in understanding data ethics and privacy to get this right, and speaking to many, it doesn't seem something they feel well equipped to do. Parents aren't always asked. But should schools not always have to ask before giving data to a commercial third party, other than in an 'emergency' situation?

I love tech. My children love making lego robots move with code. Or driving drones with bananas. Or animation. Technology offers opportunities for applications in and outside schools that are fascinating, and worthy, and of benefit to children.

If, however, parents are to protect children's digital identity for the future, and to be able to hand over control of, and integrity over, their personal data to them as adults, we must better accommodate children's data privacy in this 2016 gold rush for EdTech.

Pupils and parents need to be assured their software is both educationally and ethically sound.  Who defines those standards?

Who is in charge of Driving, Miss Morgan?

Microsoft's vice-president of worldwide education recently opened the BETT exhibition, praised teachers for using technology to achieve amazing things in the classroom, and urged innovators to "join hands as a global community in driving this change".

While there is a case to say no exposure to technology in today’s teaching would be neglectful, there is a stronger duty to ensure exposure to technology is positive and inclusive, not harmful.

Who regulates that?

We are on the edge of an explosion of tech and children’s personal data ‘sharing’ with third parties in education.

Where is its oversight?

The community of parents and children is at real risk of being completely left out of these decisions, and exploited.

The upcoming “safeguarding” policies online are a joke if the DfE tells us loudly to safeguard children’s identity out front, and quietly gives their personal data away for cash round the back.

The front door to our children’s data “for government, educators, companies and investors from Britain and globally” is wide open.

Behind the scenes in pupil data privacy, it's a bit of a mess. And these policy makers and providers forgot to ask first if they could come in.

If we build it, would you come?

My question now is, if we could build something better on pupil data privacy AND better data use, what would it look like?

Could we build an assessment model of the collection, use and release of data in schools that could benefit pupils and parents, AND educational establishments and providers?

This could be a step towards future-proofing public trust, which will be vital for companies who want a foot in the door of EdTech: design an ethical framework for digital decision making and a practical data model for use in education.

Educationally and ethically sound.

If providers, policy makers, and schools at group Trust level could meet with Data Protection and Privacy civil society experts to shape a toolkit for assessing privacy impact, to ensure safeguarding and freedoms, enable safe data flows, and help design the cybersecurity that works for them and protects children's privacy – which is lacking today – designing for tomorrow, would you come?

Which door will we choose?

*******

image credit: @ Ben Buschfeld Wikipedia

*added February 13th: Ofsted Chair sought from US

Monitoring software in schools: the Department for Education’s digital dream or nightmare? (2)

“Children do not lose their human rights by virtue of passing through the school gates” (UN Committee on the Rights of the Child, General Comment on ‘The aims of education’, 2001).

The Digital Skills in Schools inquiry [1] is examining the gap in education of our children to enable them to be citizens fit for the future.

We have an "educational gap" in digital skills, and I have suggested it should not be seen only as functional or analytical, but that it should also address a gap in ethical skills and frameworks, to equip our young people to understand their digital rights as well as their responsibilities.

Children must be enabled in education with opportunity to understand how they can grow “to develop physically, mentally, morally, spiritually and socially in a healthy and normal manner and in conditions of freedom and dignity”. [2]

Freedom to use the internet in privacy does not mean having to expose children to risks, but we should ask, are there ways of implementing practices which are more proportionate, and less intrusive than monitoring and logging keywords [3] for every child in the country? What problem is the DfE trying to solve and how?

Nicky Morgan’s “fantastic” GPS tracking App

The second technology tool Nicky Morgan mentioned in her BETT speech on January 22nd is an app with GPS tracking and alert creation. Her app verdict was "excellent" and "fantastic":

“There are excellent examples at the moment such as the Family First app by Group Call. It uses GPS in mobile phones to help parents keep track of their children’s whereabouts, allowing them to check that they have arrived safely to school, alerting them if they stray from their usual schedule.” [4]

I’m not convinced tracking every child’s every move is either excellent or fantastic. Primarily because it will foster a nation of young people who feel untrusted, and I see a risk it could create a lower sense of self-reliance, self-confidence and self-responsibility.

Just as with the school software monitoring [see part one], there will be a chilling effect on children's freedom if these technologies become the norm. If you fear misusing a word in an online search, or worry over stigma and what others think, would you not change your behaviour? Our young people need to feel both secure and trusted at school.
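For concreteness, "alerting them if they stray from their usual schedule" reduces to a simple rule over location and time. The sketch below is hypothetical – invented names and thresholds, not Group Call's actual product – but it shows how little logic is needed to put a child under continuous, automated watch:

```python
from dataclasses import dataclass
from datetime import datetime, time
from math import cos, radians, sqrt

@dataclass
class Expectation:
    place: str        # e.g. "school"
    lat: float
    lon: float
    radius_m: float   # how close counts as "arrived"
    by: time          # expected arrival time

def should_alert(expected: Expectation, lat: float, lon: float, now: datetime) -> bool:
    """Alert the parent if the child is not within the expected radius
    of the expected place by the expected time."""
    # Equirectangular approximation: accurate enough at school-run distances.
    dx = (lon - expected.lon) * cos(radians(expected.lat)) * 111_320
    dy = (lat - expected.lat) * 111_320
    return now.time() > expected.by and sqrt(dx * dx + dy * dy) > expected.radius_m

school = Expectation("school", 51.5007, -0.1246, 150, time(8, 55))
# 9:10 and roughly a kilometre from the expected location: the alert fires.
print(should_alert(school, 51.5102, -0.1337, datetime(2016, 2, 1, 9, 10)))  # True
```

What matters about such an app therefore lies less in the code than in the data trail it can leave: each location check is potentially a stored record about a named child, held by a third party.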

How we use digital in schools shapes our future society

A population that trusts one another, and trusts its government, organisations and press, is vital to a well-functioning society.

If we want the benefits of a global society – datasharing to contribute to medical advances, for example – people must understand how their own data and digital footprint fit into a bigger picture to support it.

In schools today, pupils and parents are not informed that their personal confidential data are given to commercial third parties by the Department for Education at national level [5]. Preventing public engagement, hiding current practices and downplaying the risks of how data are misused also prevent fair and transparent discussion of the benefits and of how to do it better. Better, like making data accessible only in a secure setting, not handing them out to Fleet Street.

For children this holds back public involvement in the discussion of the roles of technology in their own future. Fear of public backlash over poor practices must not hold back empowering our children’s understanding of digital skills and how their digital identity matters.

Digital skills are not shorthand for coding, but critical life skills

Our society will need skills that can simultaneously manage the benefits and deal with the great risks that will come with these advances in technology: advances in artificial intelligence, genomics, and autonomous robots, to select only three examples.

There is a glaring gap in their education about how their own confidential personal data and digital footprint fit into a globally connected society, and how they are used by commercial businesses and third parties.

There are concerns about how apps could be misused by others, too.

If we are to consider what is missing in our children's preparation for a life in which digital will no longer be a label but a way of life, then to identify the gap we must first consider what we see as the whole.

Rather than keeping children safe in education, as regards data sharing and digital privacy, the DfE seems happy to keep them ignorant. This is no way to treat our young people and develop their digital skills, just as giving their data away is not good cyber security.

What does a Dream for a great 'digital' Society look like?

Had Martin Luther King lived to be 87 he would have continued to inspire hope and to challenge us to fulfill his dream for society – where everyone would have an equal opportunity for “life, liberty and the pursuit of happiness.”

Moving towards that goal, supported with technology and with ethical codes of practice, my dream is that we see a more inclusive, fulfilled, sustainable and happier society. We must educate our children as fully rounded, digital- and data-savvy individuals, who trust themselves and the systems they use, and are well treated by others.

Sadly, introducing these types of freedom-limiting technologies for our children risks instead a society in which many people do not feel comfortable, one that has lost sight of the value of privacy.

References:

[1] Digital Skills Inquiry: http://www.parliament.uk/business/committees/committees-a-z/commons-select/science-and-technology-committee/inquiries/parliament-2015/digital-skills-inquiry-15-16/

[2] UN Convention of the Rights of the Child

[3] Consultation: Keeping Children Safe in Education – closing Feb 16th. The "opportunities to teach safeguarding" section (para 77-78) has been updated and now says governing bodies and proprietors "should ensure" rather than "should consider" that children are taught about safeguarding, including online, through teaching and learning opportunities.

The Consultation Guidance: most relevant paragraphs 75 and 77 p 22

[4] Nicky Morgan’s full speech at BETT

[5] The defenddigitalme campaign to ask the Department for Education to change practices and policy around the National Pupil Database

 

 

Monitoring software in schools: the Department for Education’s digital dream or nightmare? (1)

Nicky Morgan, the Education Secretary, gave a speech [1] this week and shared her dream of the benefits of technology for pupils.

It mentioned two initiatives to log children's individual actions: one is included in a consultation on new statutory guidance, and the other, which she praised, is a GPS-based mobile monitoring app.

As with many new applications of technology, how the concept is to be implemented in practice is important to help understand how intrusive any new use of data is going to be.

Unfortunately, for this consultation there is no supporting code of practice setting out what the change will mean, and questions need to be asked.

The most significant aspects in terms of changes to data collection through required monitoring are in the areas of statutory monitoring, systems, and mandatory teaching of ‘safeguarding’:

Consultation p11/14: “We believe including the requirement to ensure appropriate filtering and monitoring are in place, in statutory guidance, is proportional and reasonable in order to ensure all schools and colleges are meeting this requirement. We don’t think including this requirement will create addition burdens for the vast majority of schools, as they are already doing this, but we are keen to test this assumption.”

Consultation: paragraph 75 on page 22 introduces this guidance section and ends with a link to "Buying advice for schools." "Governing bodies and proprietors should be confident that systems are in place that will identify children accessing or trying to access harmful and inappropriate content online. Guidance on e-security is available from the National Education Network."

Guidance: para 78 "Whilst it is essential that governing bodies and proprietors ensure that appropriate filters and monitoring systems are in place they should be careful that "over blocking" does not lead to unreasonable restrictions as to what children can be taught with regards to online teaching and safeguarding."

Consultation: "The Opportunities to teach safeguarding" section (para 77-78) has been updated and now says governing bodies and proprietors "should ensure" rather than "should consider" that children are taught about safeguarding, including online, through teaching and learning opportunities. "This is an important topic and the assumption is the vast majority of governing bodies and proprietors will already be ensuring the children in their school are suitably equipped with regards to safeguarding. But we are keen to hear views as to the change in emphasis."

Paragraph 88 on p24  is oddly phrased: “Governing bodies and proprietors should ensure that staff members do not agree confidentiality and always act in the best interests of the child.”

What if confidentiality may sometimes be in the best interests of the child? What would that mean in practice?

 

Keeping Children Safe in Education – Questions on the Consultation and Use in practice

The consultation [2] is open until February 16th, and includes a new requirement to have web filtering and monitoring systems.

Remembering that 85% of children's waking hours are spent outside school, that a wide range of schools cover our children aged 2 to 19, and that not every moment is spent unsupervised and on-screen, is it appropriate that this 24/7 monitoring would be applied to all types of school?

This provider software is reportedly being used in nearly 1,400 secondary schools in the UK. We hear little about its use in practice.

The cases of cyberbullying or sexting in schools I hear of locally, or read of in the press, happen through smartphones. Unless the school snoops on individual devices, I wonder therefore whether the cost, implementation and impact are proportionate to the benefit.

  1. Necessary and proportionate? How does this type of monitoring compare with other alternatives?
  2. Privacy impact assessment? Has any been done – surely required as a minimum measure?
  3. Cost benefit risk assessment of the new guidance in practice?
  4. Problem vs Solution: What problem is it trying to solve and how will they measure if it is successful, or stop its use if it is not?  Are other methods on offer?
  5. Due diligence: how do parents know that the providers have undergone thorough vetting and understand who they are? After all, these providers have access to millions of our children’s  online interactions.
  6. Evidence: If it has been used for years in school, how has it been assessed against other methods of supervision?
  7. The national cash cost: this must be enormous when added up for every school in the country, how is cost balanced against risk?
  8. Intangible costs – has anyone asked our children's feelings on this? Where is the boundary between what is constructive and creepy? Is scope change documented if they decide to collect more data?

Are we Creating a Solution that Solves, or Creates, a Problem?

The private providers have no incentive to say their reports don't work, and schools, legally required to be risk averse, would be unlikely to say stop even if there is no outcome at all.

Some providers  include “review of all incidents by child protection and forensic experts; freeing up time for teachers to focus on intervention” and “trends such as top users can be viewed.” How involved are staff who know the child as a first point of information sharing?

Most tools are multi-purpose and I understand the reasons given for them, but how they are implemented concerns me.

If the extent of these issues really justifies this mass monitoring in every school, what are we doing to fix the causes, rather than simply spying on every child's every online action in school? (I look at how it extends outside school in part two.)

Questions on Public engagement: How are children and families involved in the implementation and with what oversight?

  1. Privacy and consent: Has anyone asked pupils and parents what they think and what rights they have to say no to sharing data?
  2. Involvement: Are parents to be involved and informed in software purchasing and in all data sharing decisions at local or regional level? Is there consistency of message if providers vary?
  3. Transparency: Where are the data created through the child’s actions stored, and for how long? Who has access to the data? What actions may result from it? And with what oversight?
  4. Understanding: How will children and parents be told what is “harmful and inappropriate content” as loosely defined by the consultation, and what they may or may not research? Children’s slang changes often, and “safeguarding” terms are subjective.
  5. Recourse: Will it include assessment of unintended consequences from misinterpretation of information gathered?
  6. Consent: And can I opt my child out from data collection by these unknown and ‘faceless’ third parties?

If children and parents are told their web use is monitored, what chilling effect may that have on their trust of the system and of teaching staff, and on their ability to look for content to support their own sensitive concerns or development that they may not feel safe to look for at home? What limitation will that put on their creativity?

These are all questions that should be asked in order to thoroughly understand the consultation, and they require wide public examination.

Since key logging is already common practice (according to provider websites) and will effectively become statutory in practice, where is the public discussion? If it's not explicitly statutory, should pupils be subject to spying on their web searches in a postcode lottery?
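To see why keyword-based monitoring is such a blunt instrument, here is a toy sketch. The watchlist and function are invented for illustration, not any provider's actual "keyword detection library":

```python
import re
from typing import Optional

# Toy illustration only, not a real provider's detection library.
# It shows how naive keyword matching logs innocent activity and misses context.
WATCHLIST = {"bomb", "suicide"}  # hypothetical safeguarding terms

def flag_activity(search_query: str, pupil_id: str) -> Optional[dict]:
    """Return a log entry if the query contains a watched keyword, else None."""
    hits = [term for term in WATCHLIST
            if re.search(rf"\b{re.escape(term)}\b", search_query, re.IGNORECASE)]
    if hits:
        return {"pupil": pupil_id, "query": search_query, "matched": hits}
    return None

# A history homework query is flagged and stored against the pupil's ID...
print(flag_activity("why was the atomic bomb dropped on hiroshima", "pupil-017"))
# ...while new slang for a genuine safeguarding risk slips straight past the list.
print(flag_activity("help for a friend who says they want to unalive themselves", "pupil-021"))
```

Real products are more sophisticated than this, but the underlying trade-off is the same: broad lists generate records about innocent searches, narrow lists miss changing slang, and every match becomes a stored entry about an identifiable child.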

What exactly might this part of the new guidance mean for pupils?

In part two, I look at the other part of her speech, the GPS app, and ask: if we track every child in and outside school, are we moving closer to the digital dream, or the nightmare, in the search to close the digital skills gap?

****

References:

[1] Nicky Morgan’s full speech at BETT

[2] Consultation: Keeping Children Safe in Education – closing Feb 16th. The "opportunities to teach safeguarding" section (para 77-78) has been updated and now says governing bodies and proprietors "should ensure" rather than "should consider" that children are taught about safeguarding, including online, through teaching and learning opportunities.

The Consultation Guidance: most relevant paragraphs 75 and 77 p 22

“Governing bodies and proprietors should be confident that systems are in place that will identify children accessing or trying to access harmful and inappropriate content online. [Proposed statutory guidance]

Since "guidance on procuring appropriate ICT" from the National Education Network (NEN*) is offered, it is clearly intended that this 'system' to be 'in place' should be computer-based. How will it be applied in practice? A number of the software providers for schools already provide services that include key logging, using "keyword detection libraries" that "provide a complete log of all online activity".

(*It's hard to read more about this, as many of NEN's links are dead.)

Act now: Stand up and speak out for your right to find out the facts #saveFOI

The Freedom of Information Act has enabled me to stand up for my children’s rights. It really matters to me. And we might lose it.

For every member of the public who has ever, or has never, used their rights under the Freedom of Information Act, the government consultation on changing the law, which closes today, is worth caring about. If you haven't yet had your say, go and take action here now. If that is all you have time for before the end of today, you can sign the 38 Degrees petition or write an email to your MP.

Or by the end of today you can reply to the call for evidence. There is further guidance on the Campaign For Freedom of Information’s website. 38 Degrees have also got this plain English version.

Please do. Now. It closes today, on November 20th.

If you need convincing why it matters to me, and why it should matter to you, read on.

What will happen

If the proposed changes come to pass, information about public accountability will be lost. Political engagement will not be open to all equally. It will promote an unfair society in which individuals are not only prevented from taking part in full public life, but prevented from understanding decisions made about them or that affect them. Campaign groups will be constrained by cost from standing up for human rights. The press will be restricted in what it can ask.

MySociety has a brilliant summary. Michael Sheen spoke up, calling it "nothing short of a full frontal attack" on the principles of democratic government. And Tom Watson spoke of three serious instances where facts would have stayed hidden, were it not for access made under the Freedom of Information law:

1. death rates in cardiac patient care
2. cases when the police use Tasers on children
3. the existence of cracks in the nuclear power station at Hinkley

Why does FOI matter to me personally? In Education.

Because it has enabled me to start a conversation to get the Department for Education to start to improve its handling of the personal and sensitive data on 8 million children that it holds in the National Pupil Database for England and Wales. Through FOI I asked for unpublished facts about how many releases of identifiable personal data of school pupils have been fast-tracked at the Department for Education without panel oversight, and to see the panel's terms of reference, which are still not on their website.

The request: whatdotheyknow.com
The outcome:
National Pupil Database FOI case study summary here.

I'm now coordinating calls for change on behalf of the 8 million children whose records they hold, and of parents across the country.

******

Why does FOI matter to me personally? In Health.

Because Freedom of Information law has enabled public transparency and accountability of the care.data programme board's decision making, which was kept secret for over a year. NHS England refused to publish the minutes. Their internal review declined my appeal. The Information Commissioner's Office upheld it.

The current protection afforded to the internal deliberations of public bodies is sufficient, given the section 35 and 36 exemptions. In fact my case study, while highlighting that NHS England refused to release information, also shows that when the minutes were finally released only a handful of genuine redactions were necessary, using section 36 to keep them hidden.

In October 2014 I simply wanted to see the meeting minutes form part of the public record of care.data planning. I wanted to see the cost-benefit business case and scrutinise it against the benefits case that the public were told of at every public engagement event I had been to. When at every turn the public is told how little money the NHS can afford to spend, I wanted scrutiny of what the programme would cost at national and local levels. It was in the public interest to better inform public debate about the merits of the national programme. And I strongly believe that it is in the public interest to be informed about, and to fully understand, the intention of a programme that demands the use of sensitive personal data.

The request: whatdotheyknow.com
The outcome: care.data FOI case study summary here.

Others, I hoped, could use this information to ask the right questions about missing meeting minutes and transparency, and everyone could question why there was no cost-benefit business plan at all, even in private, while the public kept being told of the benefits. And it shows that data collection is set to expand further, without public debate.

Since then the programme has been postponed again, and work is in progress on improved public engagement to enable public and professional confidence.

What has Freedom of Information achieved?

One of the most memorable results of Freedom of Information was the MPs' expenses scandal. Who knows how much that Freedom of Information request has saved taxpayers in future spending on duck houses, since MPs have been required to publish their expenses from 2010? Four MPs were jailed for false accounting. Peers were expelled. Second homes and what appeared to the public as silly spending on sundries were revealed. Mr Cameron apologised in 2009, saying he was "appalled" by the expenses. The majority of MPs had done nothing illegal, but the Freedom of Information request enabled the start of a process of increased transparency to the public, which showed where activities, while permitted by law, were simply unethical or unreasonable.

Historical record

Information published under the Freedom of Information Act can help to ensure that important records of decision-making processes are retained as part of the historic background to government.

Increased trust

The right information at the right time helps make better decisions, make spending more transparent and makes policies and practices more trustworthy.

Access to official information can also improve public confidence where public sector bodies are seen as being open. In a 2011 survey carried out on behalf of the Information Commissioner’s Office, 81% of public bodies questioned agreed that the Act had increased the public’s trust in their organisation.

A key argument made by the commission is that those in public office need private space for decision making. The Information Commissioner’s Office countered this in their submission to the consultation saying,

“there is a distinction between a need for a private space, depending on the circumstances and a desire for secrecy across a broad area of public sector activity. It was the latter tendency that FOIA was intended to correct.”

So how much more “private space” do public servants need?

Holding back information

Where it is judged that information should not be released in the public interest, there are already exemptions that can be applied to prevent disclosure under the Freedom of Information Act. [1]

The exemptions include:

  • if the information can easily be accessed by other means – e.g. the internet or published documents
  • if the information is personal information
  • if the information is provided in confidence (but only if legally enforceable)
  • when there is a legal reason not to disclose
  • if the information is about national security, defence, the economy, law enforcement, formulation of Government policy, health and safety, communications with Her Majesty or other royalty, international relations, intended for future publication and commercial interests. (All exemptions in this group must be tested to see if disclosure is in the public interest.)

In addition to these exemptions, organisations can withhold information if it will take more than two-and-a-half days to provide it, or they cannot identify what information is needed (although they have to work with the requester to clarify what is being requested).

They can also withhold information if they decide the request is vexatious.

Does it cost us too much to administer?

Some people who are supportive of these changes say they are concerned about costs in answering requests but have perhaps not considered the savings in exceptional cases (like the Expenses Scandal outcome). And as mySociety has reported [2], money spent responding to Freedom of Information requests also needs to be considered fairly in the context of wider public spending. In 2012 it was reported that Staffordshire County Council had spent £38,000 in a year responding to Freedom of Information requests. The then Director of mySociety, Tom Steinberg, commented:

“From this I can see that oversight by citizens and journalists cost only £38,000 from a yearly total budget of £1.3bn. I think it is fantastic that Staffordshire County Council can provide such information for only 0.002 per cent of its operating budget.”

Why does the government want to make itself less transparent? Even the Information Commissioner's Office has replied to the consultation to say that the Commissioner does not consider that significant changes to the core principles of the legislation are needed. This is a good law, which gives the public rights in our favour and transparency over how we are governed and how tax money is spent.

How will the value of FOI, and of what would be lost if the changes are made, be measured?

What can you do?

The call for evidence is here and there is further guidance on the Campaign For Freedom of Information’s website. 38 Degrees have also put together this super-easy Plain English version.

To have your say in the consultation closing on November 20th go online.

Or simply call or write to your MP.  Today. This really matters.


References:

[1] Requests can be refused https://ico.org.uk/for-organisations/guide-to-freedom-of-information/refusing-a-request/

[2] MySociety opposes restrictions https://www.mysociety.org/2015/11/11/voices-from-whatdotheyknow-why-we-oppose-foi-act-restrictions/

[3] National Pupil Database FOI case study summary here

[4] My care.data programme board FOI case study summary here

Parliament’s talking about Talk Talk and Big Data, like some parents talk about sex. Too little, too late.

Parliament's talking about TalkTalk and Big Data like some parents talk about sex ed. They should be discussing prevention, and data protection for all our personal data, not just one company after the event.

Everyone's been talking about TalkTalk, and for all the wrong reasons. Data loss and a 15-year-old, combined with a reportedly reckless approach to data protection, compounded by a lack of care.

As Rory Cellan-Jones wrote [1], rebuilding its reputation with customers and security analysts is going to be a lengthy job.

In Parliament Chi Onwurah, Shadow Minister for Culture & the Digital Economy, summed it up in her question, asking the Minister to acknowledge "that all the innovation has come from the criminals while the Government sit on their hands, leaving it to businesses and consumers to suffer the consequences?" [Hansard 2]

MPs were concerned for the 4 million* customers’ loss of name, date of birth, email, and other sensitive data, and called for an inquiry. [It may now be fewer*.] [3] The SciTech committee got involved too.

I hope this means Parliament will talk about TalkTalk not as the problem to be solved, but as one case study in a review of contemporary policy and practices in personal data handling.

Government spends money on data protection work in the "National Cyber Security Programme" (NCSP) [4]. What is the measurable outcome – particularly for TalkTalk customers and public confidence – from its £860M budget? If you look at the breakdown of those sums, with little going towards data protection and security compared with the Home Office and Defence, we should ask if government is spending our money in an appropriately balanced way on the different threats it perceives. Keith Vaz suggested British companies lose £34 billion every year to cybercrime. Perhaps this question will come into the inquiry.

This all comes after things have gone wrong. Again [5]. An organisation we trusted has abused that trust by not looking after data with the stringency that customers should be able to expect in the 21st century, and reportedly not making preventative changes that were apparently needed a year ago. Will there be consequences this time?

The government now saying it is talking about data protection and consequences, is like saying they’re talking sex education with teens, but only giving out condoms to the boys.

It could be too little too late. And they want above all to avoid talking about their own practices. Let’s change that.

Will this mean a review to end risky behaviour, bring in change, and be wiser in future?

If MPs explore what the NCSP does, then we, the public, should learn more about what government's expectations of commercial companies are as regards modern practices.

In addition, any MPs' inquiry should address government's own role in handling the public's personal data. Will members of government act in a responsible manner, or simply tell others how to do so?

Public discussion around both commercial and state use of our personal data should mean genuine public engagement. It should involve a discussion of consent, where necessary, for purposes beyond those we expect or have had explained when we submit our data, and there needs to be a change in risky behaviour in physical storage and release practices, or all the talk is wasted.

Some say TalkTalk’s  practices mean they have broken their contract along with consumer trust. Government departments should also be asking whether their data handling would constitute a breach of the public’s trust and reasonable expectations.

Mr Vaizey should apply his same logic to government handling data as he does to commercial handling. He said he is open to suggestions for improvement. [6]

Let’s not just talk about TalkTalk.

    • Let’s Talk Consequences: organisations taking risk seriously and meaningful consequences if not [7]
    • Let’s Talk Education: the education of the public on personal data use by others and rights and responsibilities we have [8]
    • Let's Talk Parliament's Policies and Practices: about government's own complementary lack of data understanding, and what good practice is in physical storage, good governance and transparent oversight
    • Let’s Talk Public Trust: and the question whether government can be trusted with public data it already has and whether its current handling makes it trustworthy to take more [9]

Vaizey said of the ICO now in his own department: “The Government take the UK’s cyber-security extremely seriously and we will continue to do everything in our power to protect organisations and individuals from attacks.”

“I will certainly meet the Information Commissioner to look at what further changes may be needed in the light of this data breach. [..] It has extensive powers to take action and, indeed, to levy significant fines. “

So what about consequences when data are used in ways the public would consider a loss, and not through an attack or a breach, but government policy? [10]

Let’s Talk Parliament’s Policies and Practices

Commercial companies are not alone in screwing up the use, processing and management [11] of our personal data. The civil service under current policy seems perfectly capable of doing so by itself. [12]

Government data policy has not kept up with 21st century practices and to me seems to work in the dark, as Chi Onwurah said,

‘illuminated by occasional flashes of incompetence.’

This incompetence can risk harm to people’s lives, to business and to public confidence.

And once given, trust would be undermined by changing the purposes or scope of use for which it was given, for example as care.data plans to do after the pilot. A most risky idea.

Trust in these systems, whether commercial or state, is crucial. Yet reviews which highlight this, and make suggestions to support trust such as ‘data should never be (and currently is never) released with personal identifiers‘ in The Shakespeare Review have been ignored by government.

Where our personal data are not used well in government departments, by the departments themselves, they seem content to date to rely on public ignorance to get away with current shoddy practices.

Practices such as not knowing who all your customers are, because they pass data on to others. Practices such as giving individual level identifiable personal data to third parties without informing the public, or asking for consent. Practices such as never auditing or measuring any benefit of giving away others’ personal data.

“It is very important that all businesses, particularly those handling significant amounts of sensitive customer data, have robust procedures in place to protect those data and to inform customers when there may have been a data breach.” Ed Vaizey, Oct 26th, HOC

If government departments prove to be unfit to handle the personal data we submit in trust to the state today, would we be right to trust them with even more?

While the government is busy wagging fingers at poor commercial data use practices, the care.data debacle is evidence that not all its MPs or civil service understand how data are used in commercial business or through government departments.

MPs calling for commercial companies to sharpen up their data protection must understand how commercial use of data often piggy-backs on the public use of our personal data, or how others get access to it via government for purposes that were unintended.

Let’s Talk Education

If the public is to understand how personal data are to be kept securely with commercial organisations, why should they not equally ask to understand how the state secures their personal data? Educating the public could lead to better engagement with research, better understanding of how we can use digital services and a better educated society as a whole. It seems common sense.

At a recent public event [13], civil servants announced big upcoming data plans, linking school data with further education and employment data. I asked how they planned to involve the people whose data they would use. There was no public engagement to mention. Why not? Inexcusable in this climate.

Public engagement is a matter of trust and developing understanding in a relationship. Organisations must get this right.[14]

If government is discussing risky practices by commercial companies, they also need to look closer to home and fix what is broken in government data handling where it exposes us to risk through loss of control of our personal data.

The National Pupil Database, for example, stores and onwardly shares identifiable, sensitive, individual level data from at least 8m children’s records, from age 2 to 19. That’s twice as big as the TalkTalk loss was first thought to be.

Prevention not protection is what we should champion. Rather than protection after the event, MPs and public must demand emphasis on prevention measures in our personal data use.

This week sees more debate on how and why the government will legislate to have more powers to capture more data about all the people in the country. But are government policy, process and practices fit to handle our personal data, what they do with it and who they give it to?

Population-wide data gathering, surveillance in any of its many forms, is not any less real just because you don’t see it. Children’s health, schools, increases in the volume of tax data collection. We don’t discuss enough how these policies can be used every day without the right oversight. MPs are like the conservative parents not comfortable talking to their teens about sleeping with someone. Just because you don’t know, it doesn’t mean they’re not doing it. [15] It just means you don’t want to know, because if you find out they’re not doing it safely, you’ll have to do something about it.

And it might be awkward. (Meanwhile in schools real, meaningful PSHE has been left off the curriculum.)

Mr. Vaizey asked in the Commons for suggestions for improvement.

My suggestion is this. How government manages data has many options. But the principle should be simple. Our personal data need not only to be protected, but not to be exposed to unnecessary risk in the first place, by commercial or state bodies. Doing nothing is not an option.

Let’s Talk about more than TalkTalk

Teens will be teens. If commercial companies can’t manage their systems better to prevent a child successfully hacking them, then it’s not enough to point at criminal behaviour. There is fault to learn from on all sides. In commercial and state uses of personal data.

There is talk of new, and bigger, data sharing plans. [16]

Will the government wait and keep its fingers crossed each month, hoping our data are used safely in unsecured settings by the unknown partners data might be onwardly shared with, hoping we won’t find out and it won’t need to talk about it, rather than have a grown-up public debate based on public education?

Will it put preventative measures in place appropriate to the sensitivity and volume of the data it is itself responsible for?

Will moving forward with new plans mean safer practices?

If government genuinely wants our administrative data at the heart of digital government fit for the 21st century, it must first understand how all government departments collect and use public data. And it must educate the public in this and commercial data use.

We need a fundamental shift in the way the government respects public opinion, and a shift towards legal and privacy compliance – both of which are lacking.

Let’s not talk about TalkTalk. Let’s have meaningful grown up debate with genuine engagement. Let’s talk about prevention measures in our data protection. Let’s talk about consent. It’s personal.

******

[1] Questions for TalkTalk: http://www.bbc.co.uk/news/technology-34636308

[2] Hansard: http://www.publications.parliament.uk/pa/cm201516/cmhansrd/cm151026/debtext/151026-0001.htm#15102612000004

[3] TalkTalk update: http://www.talktalkgroup.com/press/press-releases/2015/cyber-attack-update-tuesday-october-30-2015.aspx

[4] The Cyber Security Programme: http://www.civilserviceworld.com/articles/feature/depth-look-national-cyber-security-programme

[5] Paul reviews TalkTalk; https://paul.reviews/value-security-avoid-talktalk/

[6] https://ico.org.uk/for-organisations/guide-to-data-protection/conditions-for-processing/

[7] Let’s talk Consequences: the consequences of current failures to meet customers’ reasonable expectations of acceptable risk, are low compared with elsewhere.  As John Nicolson (East Dunbartonshire) SNP pointed out in the debate, “In the United States, AT&T was fined £17 million for failing to protect customer data. In the United Kingdom, the ICO can only place fines of up to £500,000. For a company that received an annual revenue of nearly £1.8 billion, a fine that small will clearly not be terrifying. The regulation of telecoms must be strengthened to protect consumers.”

[8] Let’s talk education: FOI request revealing a samples of some individual level data released to members of the press: http://www.theyworkforyou.com/debates/?id=2015-10-26b.32.0

The CMA brought out a report in June on the use of consumer data; the topic should be familiar in parliament, but little engagement has come about as a result. It suggested the benefit:

“will only be realised if consumers continue to provide data and this relies on them being able to trust the firms that collect and use it”, and that “consumers should know when and how their data is being collected and used and be able to decide whether and how to participate. They should have access to information from firms about how they are collecting, storing and using data.”

[9] Let’s Talk Public Trust – are the bodies involved trustworthy? Government lacks an effective data policy and is resistant to change. Yet it wants to collect ever more personal and individual level data for unknown purposes from the majority of 60m people, with an unprecedented PR campaign. When I heard the words ‘we want a mature debate’ it was reminiscent of HSCIC’s ‘intelligent grown up debate’ requested by Kingsley Manning, in a speech in which he admitted lack of public knowledge was akin to a measure of past success, and that effectively they would rather have kept the use of population wide health data ‘below the radar’.

Change: We need change; the old way, after all, didn’t work, according to Minister Matt Hancock: “The old model of government has failed, so we will build a new one.” I’d like to see what that new one will look like. Does he mean to expand only data sharing policy, or the powers of the civil service?

[10] National Pupil Database detailed data releases to third parties https://www.whatdotheyknow.com/request/pupil_data_national_pupil_databa

[11] http://adrn.ac.uk/news-events/latest-news/adrn-rssevent

[12] https://jenpersson.com/public-trust-datasharing-nib-caredata-change/

[13] https://www.liberty-human-rights.org.uk/human-rights/privacy/state-surveillance

[14] http://www.computerweekly.com/news/4500256274/Government-will-tackle-barriers-to-sharing-and-linking-data-says-Cabinet-Office-minister-Hancock

care.data: delayed or not delayed? The train wreck that is always on time

If you cancel a train does it still show up in the delayed trains statistics?

care.data plans are not delayed (just don’t ask Healthwatch)

Somerset CCG’s announcement [1] of the delay in their care.data plans came as no surprise, except perhaps to NHS England who effectively denied it, reportedly saying work continues. [2] Both public statements may be true but it would have been good professional practice to publicly recognise that a top down delay affects others who are working hard on the ground to contribute to the effective rollout of the project. Causing confusion and delay is hard to work with. Change and technology projects run on timelines. Deadlines mean that different teams can each do their part and the whole gets done. Or not.

Healthwatch [3] has cancelled its planned public meetings. Given that one of the reasons stated in the care.data CCG selection process was support from local patient groups including Healthwatch, this appears to be poor public relations. It almost wouldn’t matter, but in addition to the practicalities, the organisation and leadership are trying to prove they are trustworthy. [4]


[image: Healthwatch notice cancelling care.data public meetings]


Somerset’s statement is straightforward and says it applies to all pathfinders:

“Following a speech by Jeremy Hunt, the Secretary of State for Health this week (3-9-15), in which he outlined his vision for the future use of technology across NHS, NHS England has asked the four care.data pathfinder pilots areas in England (Leeds, Blackburn and Derwent, West Hampshire and Somerset) to temporarily pause their activities.” [Sept 4, Somerset statement]


[image: Somerset CCG statement]


Since I first read of the GPES IAG concerns [5] I have seen the care.data programme hurtle from one crisis to another. But this is now a train wreck. A very quiet train wreck. No one has cried out much. [6] And yet I think the project, professionals, and the public should be shouting from the top of the carriages that this programme needs help if it is ever to reach its destination.

care.data plans are not late against its business plan (there is none)

Where’s the business case? Why can’t it define deadlines that it can achieve?  In February 2015, I suggested the mentality that allows these unaccountable monster programmes to grow unchecked must die out.

I can’t even buy an Oyster card if I don’t know if there is money in my pocket. How can a programme which has already spent many millions of pounds keep driving on without a budget? There is no transparency about what financial and non-financial benefits are expected to justify the cost. There is no accountable public measure of success checking it stays on track.

While it may be more comfortable for the organisation to deny problems, I do not believe it serves the public interest to hide information. This is supported by the very reason for being of the MPA process and its ‘challenge to Whitehall secrecy‘ [7], which rated the care.data rollout red [8] in last year’s audit. This requires scrutiny to find solutions.

care.data plans do not need to use lessons learned (do they?)

I hope at least there are lessons learned here in the pathfinder on what not to do before the communications rollout to 60m people.  In the words of Richard Feynman, “For successful technology, reality must take precedence over public relations.”

NHS England is using the public interest test to withhold information: “the particular public interest in preserving confidential communications between NHS England and its sponsoring department [the DH].”  I do not believe this serves the public interest if it is used to hide issues and critical external opinion. The argument made is that there is “stronger public interest in maintaining the exemption where it allows the effective development of policy and operational matters on an ongoing basis.”  The Public Accounts Committee in 2013 called for early transparency and intervention which prevents the ongoing waste of “billions of pounds of taxpayers’ money” in their report into the NPfIT. [9] It showed that a lack of transparency and oversight contributed to public harm, not benefit, in that project, under the watch of the Department of Health. The report said:

“Parliament needs to be kept informed not only of what additional costs are being incurred, but also of exactly what has been delivered so far for the billions of pounds spent on the National Programme. The benefits flowing from the National Programme to date are extremely disappointing. The Department estimates £3.7 billion of benefits to March 2012, just half of the costs incurred. This saga [NPfIT] is one of the worst and most expensive contracting fiascos in the history of the public sector.”

And the Public Accounts Committee made a recommendation in 2013:

“If the Department is to deliver a paperless NHS, it needs to draw on the lessons from the National Programme and develop a clear plan, including estimates of costs and benefits and a realistic timetable.” [PAC 2013][9]

Can we see any lessons drawn on today in care.data? Or any in Jeremy Hunt’s speech or his refusal to comment on costs for the paperless NHS plans reported by HSJ journal at NHSExpo15?

While history repeats itself and “estimates of costs and benefits and a realistic timetable” continue to be absent in the care.data programme, the only reason given by Somerset for delay is to fix the specific issue of opt out:

“The National Data Guardian for health and care, Dame Fiona Caldicott, will… provide advice on the wording for a new model of consents and opt-outs to be used by the care.data programme that is so vital for the future of the NHS. The work will be completed by January [2016]…”

Perhaps delay will buy NHS England some time to get itself on track and not only respect public choice on consent, but also deliver a data usage report to shore up trust, and tell us what benefits the programme will deliver that cannot already be delivered today (through existing means, like the CPRD for research [10]).

Perhaps.

care.data plans will only deliver benefits (if you don’t measure costs)

I’ve been told “the realisation of the benefits, which serve the public interest, is dependent on the care.data programme going ahead.” We should be able to see this programme’s costs AND benefits. It is we collectively after all who are paying for it, and for whom we are told the benefits are to be delivered. DH should release the business plan and all cost/benefit/savings  plans. This is a reasonable thing to ask. What is there to hide?

The risk has been repeatedly documented in 2014-15 board meetings that “the project continues without an approved business case”.

The public and medical profession are directly affected by the lack of money, given by the Department of Health as the reason for the reductions in service in health and social care. What are we missing out on, to deliver what benefit, that we do not already get elsewhere today?

On the pilot work continuing, the statement from NHS England reads: “The public interest is best served by a proper debate about the nature of a person’s right to opt out of data sharing and we will now have clarity on the wording for the next steps in the programme,” 

I’d like to see that ‘proper debate’ at public events. The NIB leadership avoids answering hard questions even if asked in advance, as requested. Questions such as mine go unanswered:

“How does NHS England plan to future proof trust and deliver a process of communications for the planned future changes in scope, users or uses?”

We’re expected to jump on for the benefits, but not ask about the cost.

care.data plans have no future costs (just as long as they’re unknown)

care.data isn’t only an IT infrastructure enhancement and the world’s first population wide database of 60m primary care records. It’s a massive change platform through which the NHS England Commissioning Board will use individual level business intelligence to reshape the health service. A massive change programme that commodifies patient confidentiality as a kick-starter for economic growth. This is often packaged together with improvements for patients and requirements for patient safety, so that explanations of the use of records in direct care become conflated with secondary uses.

“Without interoperable digital data, high quality effective local services cannot be delivered; nor can we achieve a transformation in patient access to new online services and ‘apps’; nor will the NHS maximise its opportunity to be a world centre in medical science and research.” [NHS England, September 1 2015] 

So who will this transformation benefit? Who and what are all its drivers? Change is expensive. It costs time and effort and needs investment.

Blackburn with Darwen’s Healthwatch appears to have received £10K for care.data engagement, as stated in its annual report. Somerset’s is less clear. We can only assume that Hampshire, expecting a go-live ‘later in 2015’, has also had costs. Were any of their patient-facing materials already printed for distribution, their ‘allocated-under-austerity’ budgets spent?

care.data is not a single destination but a long journey with a roadmap of plans for incremental new datasets and expansion of new users.

The programme should already know and be able to communicate the process behind informing the public of future changes to ensure future use will meet public expectations in advance of any change taking place. And we should know who is going to pay for that project lifetime process, and ongoing change management. I keep asking what that process will be and how it will be managed:

June 17 2015, NIB meeting at the King’s Fund Digital Conference on Health & Social Care:

[image: question asked at the NIB meeting, King’s Fund Digital Conference, 17 June 2015]

September 2 2015, NIB Meeting at NHS Expo 15:

[image: question asked at the NIB meeting, NHS Expo, 2 September 2015]

It goes unanswered time and time again, despite all the roadmaps and plans for change.

These projects are too costly to fail. They are too costly to justify only having transparency applied after the event, when forced to do so.

care.data plans are never late (just as long as there is no artificial deadline)

So back to my original question. If you cancel a train does it still show up in the delayed trains statistics? I suppose if the care.data programme claims there is no artificial deadline, it can never be late. If you stop setting measurable deadlines to deliver against, the programme can never be delayed. If there is no budget set, it can never be over it. The programme will only deliver benefits, if you never measure costs.

The programme can claim it is in the public interest for as long as we are prepared to pay with an open public purse and wait for it to be on track.  Wait until data are ready to be extracted, which the notice said:

…” is thought to remain a long way off.” 

All I can say to that, is I sure hope so. Right now, it’s not fit for purpose. There must be decisions on content and process arrived at first. But we also deserve to know what we are expecting of the long journey ahead.

On time, under budget, and in the public interest?

As long as NHS England is the body both applying and measuring the criteria, it fulfils them all.

*******

[1] Somerset CCG announces delay to care.data plans https://www.somersetlmc.co.uk/caredatapaused

[2] NHS England reply to Somerset announcement reported in Government Computing http://healthcare.governmentcomputing.com/news/ccg-caredata-pilot-work-continues-4668290

[3] Healthwatch bulletin: care.data meetings cancelled http://us7.campaign-archive1.com/?u=16b067dc44422096602892350&id=5dbdfc924c

[4] Building public trust: after the NIB public engagement in Bristol https://jenpersson.com/public-trust-datasharing-nib-caredata-change/

[5] GPES IAG http://www.hscic.gov.uk/media/12911/GPES-IAG-Minutes-12-September-2013/pdf/GPES_IAG_Minutes_12.09.13.pdf

[6] The Register – Right, opt out everybody! hated care.data programme paused again http://www.theregister.co.uk/2015/09/08/hated_caredata_paused_again_opt_out/

[7] Pulse Today care.data MPA rating http://www.pulsetoday.co.uk/your-practice/practice-topics/it/caredata-looks-unachievable-says-whitehall-watchdog/20010381.article#.VfMXYlbtiyM

[8] Major Projects Authority https://engage.cabinetoffice.gov.uk/major-projects-authority/

[9] The PAC 2013 http://www.parliament.uk/business/committees/committees-a-z/commons-select/public-accounts-committee/news/npfit-report/

[10] Clinical Practice Research Datalink (CPRD)

***

image source: http://glaconservatives.co.uk/news/london-commuters-owed-56million-in-unclaimed-refunds-by-rail-operators/

 

Building Public Trust [2]: a detailed approach to understanding Public Trust in data sharing

Enabling public trust in data sharing is not about ‘communicating benefits’. For those interested in the nitty gritty, some practical suggestions for progress in Building Public Trust in data sharing follow on from my summary after the NIB Bristol event on 24/7/15.

Trust is an important if invisible currency used in the two-way transactions between an organisation and people.

So far, there have been many interactions and listening events but much of what professionals and the public called for, remains undone and public trust in the programme remains unchanged since 2014.

If you accept that it is not public trust that needs to be built, but the tangible trustworthiness of an organisation, then you should also ask what the organisation needs to do to make that demonstrable change.

What’s today’s position on Public Trust of data storage and use

Trust in the data sharing process is layered and dependent on a number of factors. Mostly [based on polls and public event feedback from 2014] “who will access my data and what will they use it for?”

I’m going to look more closely below at planned purposes: research and commissioning.

It’s also important to remember that trust is not homogeneous. Trust  is nuanced even within the single relationship between one individual and another. Trust, like consent, is stratified – you may trust the Post Office to deliver a letter or postcard, but sign up to recorded delivery for something valuable.

So, for example, when it comes to data sharing for secondary uses, I might trust HSCIC with storing and using my health records for anonymous statistics, for analysis of immunisation and illness patterns. But as long as they continue to share with the Home Office, police or other loosely defined third parties [5], do I want them to have fully identifiable data at all?

Those bodies have their own public trust issues at an all time low.

Mixing the legitimate users of health data with these Back Office punitive uses will invite opt-outs from people who would otherwise not opt out. Some of the very groups who need the most health and social care understanding, research and care will be the very groups who opt out if there is a possibility of police and Home Office access by the back door. Telling traveller communities what benefits care.data will bring them is wasted effort when they see NHS health data is a police-accessible register. I know. I’ve talked to some about it.

That position on data storage and use should be reconsidered if NHS England is serious that this is about health and for the benefit of individuals and communities’ well being.

What has HSCIC changed to demonstrate that  it is trustworthy?

A new physical secure setting is being built that will enable researchers to view research data but not take raw data away.

That is something they can control, and have changed, and it demonstrates they take the public seriously, inviting us to reciprocate.

That is great – demonstrable change by the organisation, inviting change in the public.

That’s practical, so what can be done on policy by NHS England/DH?

What else should be done to demonstrate policy is trustworthy?

Act on what the public and professionals asked for in 2014. [8]

Right now it feels, in public communications, as though the only kind of relationship the leadership wants is a one night stand.

It’s all about what the programme wants. Minimise the objections, get the data, and sneak out. Even when its leaders talk about some sort of ongoing consent model, the focus is still on ‘how to enable sharing data.’

This focus is the wrong one. If you want to encourage people to share, they need to know why, what’s in it for them, and why you want it. What collecting the data is for is still important to explain, and specifically each time the scope changes, if you are doing it fairly.

Remember. Data-sharing is not vital to future-proof the NHS. Using knowledge wisely is. 

What is the policy for the future of primary care research?

The CPRD already enables primary care GP data to be linked with secondary data for research. In fact it already links more items from GP-held data than care.data currently plans to extract. So what benefit will care.data offer to research that is not already available today?

Simply having ever more data, stored in more places will not make us wiser. Before it’s collected repeatedly, it is right to question why.

What do we have collected already? How is it used? Where are the gaps in what we want to achieve through the knowledge we could gain? It’s NOT simply about filling in what gaps exist in what data we could gather. Understand the purposes and what will be gained, to see if it’s worth the effort. Prioritise. ‘Collect it all’ is not a solution.

I had thought that the types of data to be collected in care.data were clear, and how it differs from direct care was clear. But the Bristol NIB meeting demonstrated a wide range of understanding in NHS and CCG staff, Local Authority staff, IT staff, IG professionals, data providers and other third parties.  Data for secondary purposes are not to be conflated with direct care.

But that’s not what care.data sharing is about. So where to start with public trust, asked the NIB Bristol #health2020 meeting?

Do you ignore the starting point or tailor your approach to it?

“The NHS is at a crossroads and needs to change and improve as it moves forward. That was the message from NHS England’s Chief Executive Simon Stevens as a Five Year Forward View for the NHS was launched.”  [1] [NHS England, Oct 2014]

As the public is told over and over again that change is vital to the health of a sustainable NHS, a parallel public debate rages, whether the policy-making organisations behind the NHS – the commissioning body NHS England, the Department of Health and Cabinet Office – are serious about the survival of universal health and care provision, and about supporting its clinicians.

It is against this backdrop, and under the premise that obtaining patient data for centralised secondary uses is do or die for the NHS, that the NIB #health2020 has set out [2] work stream 4: “Build and sustain public trust: Deliver roadmap to consent based information sharing and assurance of safeguards”

“Without the care.data programme, the health service will not have a future, said Tim Kelsey, national director for patients and information, NHS England.” [3]

 

Polls say [A] nearly all institutions suffer from a ‘trust in data deficit’. Trust in them to use data appropriately is lower than trust in the organisation generally.

Public trust in what the Prime Minister says on health is low.

Trust in the Secretary of State for Health is possibly at an all time low, with: “a bitter divide, a growing rift between the Secretary of State for Health and the medical profession.” [New Statesman, July 2015]

This matters. care.data needs the support of professionals and public.

ADRN research showed multiple contributing factors: “Participants were also worried about personal data being leaked, lost, shared or sold by government departments to third parties, particularly commercial companies. Low trust in government more generally seemed to be driving these views.” [Dialogue on data]

It was interesting to see all the same issues as reflected by the public in care.data listening events, asked from the opposite perspective, that of data users.

But it was frustrating to sit at the Bristol NIB #health2020 event and discuss questions around the same issues on data sharing already discussed at care.data events through the last 18 months.

Nothing substantial has changed other than HSCIC’s physical security for data storage.

It is frustrating knowing that these change and communications issues will keep coming back again and again if not addressed.

Personally, I’m starting to lose trust there is any real intention for change, if senior leadership is unwilling to address this properly and change themselves.

To see a change in Public Trust do what the public asked to see: On Choice

At every care.data meeting I attended in 2014, people asked for choice.

They asked for boundaries between the purposes of data uses, real choice.

Willingness for their information to be used by academic researchers in the public interest does not equate to being willing for it to be used by a pharmaceutical company for their own market research and profit.

The public understand these separations well. To say they do not underestimates people and does not reflect public feeling. Anyone attending 2014 care.data events has heard many people discuss this. They want a granular consent model.

This would offer a red line between how data are used for what purposes.

Of the data-sharing organisations today, some are trusted and others are not. A granular consent approach would offer a choice: a red line around who gets access to data.

This choice of selective use would encourage fewer people to opt out from all purposes, allowing more data to be available for research, for example.
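As a minimal sketch of what that could mean in practice – assuming nothing about NHS England’s systems, and using purpose categories and field names that are entirely my own for illustration – a granular consent model is simply a record of a separate yes or no per purpose, rather than a single all-or-nothing opt out:

    # Illustrative only: an assumed, hypothetical consent record, not anything
    # specified by NHS England or the care.data programme.
    from dataclasses import dataclass, field
    from enum import Enum, auto

    class Purpose(Enum):
        DIRECT_CARE = auto()       # use by the clinicians treating you
        PUBLIC_RESEARCH = auto()   # accredited, public-interest research
        COMMISSIONING = auto()     # service planning by commissioners
        COMMERCIAL = auto()        # market research and other commercial re-use

    @dataclass
    class ConsentRecord:
        """One person's choices, recorded per purpose rather than all-or-nothing."""
        patient_ref: str
        choices: dict = field(default_factory=dict)  # Purpose -> bool

        def permits(self, purpose: Purpose) -> bool:
            # No sharing unless the person has actively said yes to that purpose.
            return self.choices.get(purpose, False)

    # Willing to support public-interest research, but drawing a red line
    # at commissioning and commercial re-use.
    consent = ConsentRecord(
        patient_ref="pseudonym-123",
        choices={
            Purpose.PUBLIC_RESEARCH: True,
            Purpose.COMMISSIONING: False,
            Purpose.COMMERCIAL: False,
        },
    )

    assert consent.permits(Purpose.PUBLIC_RESEARCH)
    assert not consent.permits(Purpose.COMMERCIAL)

The point is not the code but the red line it encodes: a ‘no’ to one purpose need not mean a ‘no’ to all of them.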

To see a change in Public Trust do what the public asked to see: Explain your purposes more robustly

Primarily these data are to be used and kept indefinitely for commissioning purposes. Research wasn’t included as a purpose for care.data gathering in the planned specifications for well over a year, and was added only after the research outcry.

Yet specifically on commissioning, the Caldicott recommendations [3] were very clear: commissioning purposes were insufficient and illegal grounds for sharing fully identifiable data, a position NHS England’s Commissioning Board opposed:

“The NHS Commissioning Board suggested that the use of personal confidential data for commissioning purposes would be legitimate because it would form part of a ‘consent deal’ between the NHS and service users. The Review Panel does not support such a proposition. There is no evidence that the public is more likely to trust commissioners to handle personal confidential data than other groups of professionals who have learned how to work within the existing law.”

NHS England seems unwilling to change this position, despite the professional bodies’ and the public’s opposition to sharing fully identifiable data for commissioning purposes [care.data listening events 2014]. Is it any wonder that they keep hitting the same barrier? More people don’t want that to happen than want it. Something’s gotta give.

See the GPES Customer Requirements specification from March 2013, v2.1, which states on page 11: “…for commissioning purposes, it is important to understand activity undertaken (or not undertaken) in all care settings. The “delta load” approach (by which only new events are uploaded) requires such data to be retained, to enable subsequent linkage.”
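For readers unfamiliar with the term, here is a minimal sketch – assuming nothing beyond the sentence quoted above, with entirely invented field names – of why a ‘delta load’, in which only new events are uploaded, depends on data already uploaded being retained for linkage:

    # A minimal, illustrative sketch of a "delta load": only events not previously
    # uploaded are sent, but the records already uploaded must be retained so that
    # new events can be linked back to the same person over time.
    # Field names and structures here are my own assumptions, not the GPES spec.

    retained_store = {}       # patient_ref -> list of previously uploaded events
    already_uploaded = set()  # (patient_ref, event_id) pairs already sent

    def delta_upload(new_events):
        """Send only events not seen before, appending them to the retained history."""
        delta = []
        for event in new_events:
            key = (event["patient_ref"], event["event_id"])
            if key in already_uploaded:
                continue  # already held centrally, skip re-uploading
            already_uploaded.add(key)
            retained_store.setdefault(event["patient_ref"], []).append(event)
            delta.append(event)
        return delta

    # Month 1: two events for one patient are uploaded and retained.
    delta_upload([
        {"patient_ref": "P1", "event_id": "e1", "code": "immunisation"},
        {"patient_ref": "P1", "event_id": "e2", "code": "gp_visit"},
    ])

    # Month 2: only the genuinely new event is sent, and it is linked to the
    # retained history for P1, which is why the approach depends on retention.
    delta_upload([
        {"patient_ref": "P1", "event_id": "e2", "code": "gp_visit"},      # duplicate, skipped
        {"patient_ref": "P1", "event_id": "e3", "code": "prescription"},  # new, uploaded
    ])

Deleting the earlier records would break the linkage that makes each month’s delta meaningful, which is why the specification ties the approach to retention.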

The public has asked for red lines to differentiate between the purposes of data uses. NHS England and Department of Health policy seem unwilling to draw them. Why?

To see a change in Public Trust do what the public asked to see: Red lines on policy of commercial use – and its impact on opt out

The public has asked for red lines outlawing commercial exploitation of their data. Though it was said this had changed, in practice the change is hard to see. Department of Health policy seems unwilling to be clear, because the Care Act 2012 purposes remained loose. Why?

As second best, the public has asked for the choice not to have their data used at all for secondary purposes, and was offered an opt out.

The NHS England leaflet and the Department of Health’s Secretary of State publicly promised this, but it has not been implemented, and to date no public announcement has been made on when it will be respected. Why?

Trust does not exist in a vacuum.  What you say and what you actually do, matter. Policy and practice are co-dependent. Public trust depends on your organisations being trustworthy.

Creating public trust is not the government, the DH or NIB’s task ahead. They must instead focus on improving their own competency, honesty and reliability and through those, they will demonstrate that they can be trusted.

That the secondary purposes opt out has not been respected does not demonstrate those qualities.

“Trust is not about the public. Public trust is about the organisation being trustworthy.”

How will they do that?

Let the DH/NHS England and organisations in policy and practice address what they themselves will stop and start doing to bring about change in their own actions and behaviours.

Communications change request: Start by addressing the current position, NOT what the change will bring. You must move people along the curve, not dump them with a fait accompli and wonder why the reaction is so dire.

[image: the change curve]

Vital for this is the current opt out; what was promised and what was done.

The secondary uses opt out must be implemented with urgency.

To see a change in Public Trust you need to take action. The programme needs to do what the public asked to see: change on granular consent, on commercial use and on defined purposes.

And to gather suggested actions, start asking the right questions.

Not ‘how do we rebuild public trust?’ but “how can we demonstrate that we are trustworthy to the public?”

1. How can a [data-sharing] org demonstrate it is trustworthy?
2. Identify: why people feel confident their trust is well placed?
3. Why do clinical professionals feel confident in any org?
4. What would harm the organisational-trust-chain in future?
5. How will the org-trust-chain be positively maintained in future?
6. What opportunities will be missed if that does not happen?
(identify value)

Yes the concepts are close,  but how it is worded defines what is done.

These apparently small differences make all the difference in how people provide you with ideas, and how you harness them into real change and improvement.

Only then can you start understanding why “communicating the benefits” has not worked and how it should affect future communications  materials.

From this you will find it much easier to target actual tasks, and short and long term do-able solutions, than an open discussion will deliver. Doing should include thinking and attitudes as well as actions.

This will lead to communications messages that are concrete, not woolly. More about that in the next posts.

####

To follow, for those interested in nitty gritty, some practical suggestions for progress in Building Public Trust in data sharing:

Part one: A seven step top line summary – What I’d like to see change addressing public trust in health data sharing for secondary purposes.

This is Part two: a New Approach is needed to understanding Public Trust. For those interested in a detailed approach on Trust: what practical and policy steps influence trust, on Research and Commissioning. Trust is not homogeneous. Trust is nuanced even within the single relationship between one individual and another. It doesn’t exist in a vacuum.

Part three: Know where you’re starting from. What behaviours influence trust and how can we begin to see them demonstrated? Mr. Kelsey discusses consent and opt out. Fixing what has already been communicated is vital before new communications get rolled out. To tailor the content of public communications, for public trust and credibility, the programme must be clear what is missing and what needs to be filled in. #Health2020 Bristol NIB meeting.

Part four: “Communicate the Benefits” won’t work – How Communications influence trust. For those interested in more in-depth reasons, I outline in part two why the communications approach is not working, why the focus on ‘benefits’ is wrong, and fixes.

Part five: Future solutions – why a new approach may work better for future trust – not to attempt to rebuild trust where there is now none, but to strengthen what is already trusted and fix today’s flawed behaviours: the honesty and reliability that are vital to future-proofing public trust.

 

####

References:

[1] NHS England October 2014 http://www.england.nhs.uk/2014/10/23/nhs-leaders-vision/

[2] Workstream 4: https://www.gov.uk/government/uploads/system/uploads/attachment_data/file/442829/Work_Stream_4.pdf

[3] Caldicott Review 2: https://www.gov.uk/government/uploads/system/uploads/attachment_data/file/192572/2900774_InfoGovernance_accv2.pdf

[4] Missing Programme Board documents: 2015 and June-October 2014

[5] HSCIC Data release register

[6] Telegraph article on Type 2 opt out http://www.telegraph.co.uk/news/health/news/11655777/Nearly-1million-patients-could-be-having-confidential-data-shared-against-their-wishes.html

[7] Why Wanting a Better Care.Data is not Luddite: http://davidg-flatout.blogspot.co.uk/2014/04/why-wanting-better-caredata-is-not.html

[8] Talking to the public about using their data is crucial- David Walker, StatsLife http://www.statslife.org.uk/opinion/1316-talking-to-the-public-about-using-their-data-is-crucial

[9] Dame Fiona Caldicott appointed in new role as National Data Guardian

[10] Without care.data health service has no future says director http://www.computerweekly.com/news/2240216402/Without-Caredata-we-wont-have-a-health-service-for-much-longer-says-NHS

[11] Coin Street, care.data advisory meeting, September 6th 2014: https://storify.com/ruth_beattie/care-data-advisory-group-open-meeting-6th-septembe

[12] Public questions unanswered: https://jenpersson.com/pathfinder/

Digital revolution by design: infrastructures and the fruits of knowledge

Since the beginning of time and the story of the Garden of Eden, man has found a way to share knowledge and its power.

Modern digital tools have become the everyday way to access knowledge for many across the world, giving quick access to information and sharing power more fairly.

In this third part of my thoughts on digital revolution by design, triggered by the #kfdigi15 event on June 16-17, I’ve been considering some of the constructs we have built; those we accept and those that could be changed, given the chance, to build a better digital future.

Not only the physical constructions, the often networked infrastructures, but intangible infrastructures of principles and power, co-dependencies around a physical system; the legal and ethical infrastructures of ownership, governance and accountability.

Our personal data flow in systems behind the screens, at the end of our fingertips. Controlled in frameworks designed by providers and manufacturers, government and commercial agencies.

Increasingly in digital discussions we hear that the data subject, the citizen, will control their own data.

But if it is on the terms and conditions set by others, how much control is real and how much is the talk of a consenting citizen only a fig leaf behind which any real control is still held by the developer or organisation providing the service?

When data are used, they are turned into knowledge, business intelligence that adds value to aid informed decision making. By human or machine.

How much knowledge is too much knowledge for the Internet of Things to build about its users? As Chris Matyszczyk wrote:

“We have all agreed to this. We click on ‘I agree’ with no thought of consequences, only of our convenience.”

Is not knowing what we have agreed to our fault, or the responsibility of the provider who’d rather we didn’t know?

Citizens’ rights are undermined in unethical interactions if we are exploited by easy one-click access and exchange our wealth of data at unseen cost. Can it be regulated to promote, not stifle innovation?

How can we get those rights back and how will ‘doing the right thing’ help shape and control the digital future we all want?

The infrastructures we live inside

As Andrew Chitty says in this HSJ article: “People live more mobile and transient lives and, as a result, expect more flexible, integrated, informed health services.”

To manage that, do we need to know how systems work, how sharing works, and trust the functionality of what we are not being told and don’t see behind the screens?

At the personal level, whether we sign up for the social network, use a platform for free email, or connect our home and ourselves in the Internet of Things, we each exchange our personal data with varying degrees of willingness. There is often no alternative if we want to use the tool.

As more social and consensual ‘let the user decide’ models are being introduced, we hear it’s all about the user in control, but reality is that users still have to know what they sign up for.

In new models of platform identity sign on, and tools that track and mine our personal data to the nth degree that we share with the system, both the paternalistic models of the past and the new models of personal control and social sharing are merging.

Take a Fitbit as an example. It requires a named account and data sharing with the central app provider. You can choose whether or not to enable ‘social sharing’ with nominated friends whom you want to share your boasts or failures with. You can opt out of only that part.

I fear we are seeing the creation of a Leviathan-sized monster that will be impossible to control and just as scary as today’s paternalistic data mismanagement. Some data are held by the provider and invisibly shared with third parties beyond our control, some we share with friends, and some are stored only on our device.
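To make that concrete, a purely hypothetical sketch – not Fitbit’s or any provider’s actual data model, with tier names and examples of my own invention – of those three tiers might look like this, with the opt-out reaching only some of them:

    # A hypothetical sketch only, not any provider's actual data model, of the
    # three tiers described above and which of them the wearer can opt out of.
    from dataclasses import dataclass

    @dataclass
    class DataTier:
        name: str
        examples: list
        user_can_opt_out: bool

    tiers = [
        DataTier("device_only",   ["raw sensor trace"],                  user_can_opt_out=True),
        DataTier("friend_shared", ["step counts shared with friends"],   user_can_opt_out=True),
        DataTier("provider_held", ["activity and sleep history",
                                   "onward sharing with third parties"], user_can_opt_out=False),
    ]

    # The in-app choices reach the social tier, and device-only data never
    # leaves the wearer's hands; the provider-held tier, governed by the
    # terms and conditions, is the one the wearer cannot opt out of.
    beyond_user_control = [t.name for t in tiers if not t.user_can_opt_out]
    print(beyond_user_control)  # ['provider_held']

The tier a user is offered a switch for is rarely the tier where control matters most.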

While data are shared with third parties without our active knowledge, the same issue threatens to derail consumer products, as well as commercial ventures at national scale, and with them the public interest. Loss of trust in what is done behind the settings.

Society has somehow seen privacy lost as the default setting. It has become something to have to demand and defend.

“If there is one persistent concern about personal technology that nearly everybody expresses, it is privacy. In eleven of the twelve countries surveyed, with India the only exception, respondents say that technology’s effect on privacy was mostly negative.” [Microsoft survey 2015, of  12,002 internet users]

There’s one part of that I disagree with. It’s not the effect of technology itself, but the designer or developers’ decision making that affects privacy. People have a choice how to design and regulate how privacy is affected, not technology.

Citizens have vastly differing knowledge bases of how data are used and how to best interact with technology. But if they are told they own it, then all the decision making framework should be theirs too.

By giving consumers the impression of control, the shock is going to be all the greater if a breach should ever reveal where fitness wearable users slept and with whom, at what address, and were active for how long. Could a divorce case demand it?

Fitbit users have already found their data used by police and in the courtroom – probably not what they expected when they signed up to a better health tool. Others may see benefits that could, by default, harm those who are excluded from accessing the tool.

Some at org level still seem to find this hard to understand but it is simple:
No trust = no data = no knowledge for commercial, public or personal use and it will restrict the very innovation you want to drive.

Google Gmail users have to make 10+ clicks to restrict all ads and information sharing based on their privacy and ad account settings. The default is ad tailoring and data mining. Many don’t even know it is possible to change the settings, and it’s not intuitive how to.

Firms need to consider their own reputational risk if users feel these policies are not explicit and amount to exploitation. Those caught ‘cheating’ users can get a very public slap on the wrist.

Let the data subjects rule, but on whose terms and conditions?

The question every citizen signing up to digital agreements should ask is: what is in the small print, and how will I know if it changes? Fair processing should offer data protection, but isn’t effective.

If you don’t have access to information, you make decisions based on a lack of information or misinformation. Decisions which may not be in your own best interest or that of others. Others can exploit that.

And realistically and fairly, organisations can’t expect citizens to read pages and pages of Ts&Cs. In addition, we don’t know what we don’t know. Information that is missing can be as vital to understand as that provided. ‘Third parties’ sharing – who exactly does that mean?

The concept of an informed citizenry is crucial to informed decision making but it must be within a framework of reasonable expectation.

How do we grow the fruits of knowledge in a digital future?

Real cash investment is needed now for a well-designed digital future, robust for cybersecurity, supporting enforceable governance and oversight. Collaboration on standards and thorough change plans. I’m sure there is much more, but this is a start.

Figurative investment is needed in educating citizens about the technology that should serve us, not imprison us in constructs we do not understand but cannot live without.

We must avoid the chaos and harm and wasted opportunity of designing massive state-run programmes in which people do not want to participate or cannot participate due to barriers of access to tools. Avoid a Babel of digital blasphemy in which the only wise solution might be to knock it down and start again.

Our legislators and regulators must take up their roles to get data use, and digital contract terms and conditions right for citizens, with simplicity and oversight. In doing so they will enable better protection against risks for commercial and non-profit orgs, while putting data subjects first.

To achieve greatness in a digital future we need: ‘people speaking the same language, then nothing they plan to do will be impossible for them’.

Ethics. It’s more than just a county east of London.

Let’s challenge decision makers to plant the best of what is human at the heart of the technology revolution: doing the right thing.

And from data, we will see the fruits of knowledge flourish.

******

1. Digital revolution by design: building for change and people
2. Digital revolution by design: barriers by design
3. Digital revolution by design: infrastructures and the fruits of knowledge
4. Digital revolution by design: infrastructures and the world we want