Category Archives: confidentiality

Thoughts on the Online Harms White Paper (I)

“Whatever the social issue we want to grasp – the answer should always begin with family.”

Not my words, but David Cameron’s. Just five years ago, Conservative policy was all about “putting families at the centre of domestic policy-making.”

Debate on the Online Harms White Paper, thanks in part to media framing of its own departmental making, is almost all about children. But I struggle with a debate that leaves out our role as parents almost entirely, other than as bereft or helpless victims ourselves.

I am conscious, wearing my other hat at defenddigitalme, that not all families are the same, and not all children have families. Yet it seems counter to Conservative values, for a party that traditionally places the family at the centre of policy, to leave parents out or to absolve them of responsibility for their children's actions and care online.

Parental responsibility cannot be outsourced to tech companies, nor can we simply accept that it is too hard to police our children's phones. If we as parents are concerned about harms, it is our responsibility to enable access to what is not harmful, and to be aware of, and educate ourselves and our children about, what is. We are aware of what they read in books. I cast an eye over what they borrow or buy. I play a supervisory role.

Brutal as it may be, the Internet is not responsible for suicide. It's just not that simple. We cannot bring children back from the dead. We certainly can, as society and policy makers, try to create the conditions in which harms are not normalised and do not become more common, and seek to reduce risk. But few would suggest social media is the single source of children's mental health issues.

What policy makers are trying to regulate is, in essence, not a single source of online harms but 2.1 billion users' online behaviours.

It follows that to see social media as a single source of attributable fault is equally misplaced. A one-size-fits-all solution is going to be flawed, but everyone seems to have accepted its inevitability.

So how will we make the least bad law?

If we are to have sound law about what is and is not lawful, we must narrow the substance of the debate by setting aside what is already unlawful and has appropriate remedy and enforcement.

Debate must also try to be free from emotive content and language.

I strongly suspect the language around ‘our way of life’ and ‘values’ in the White Paper comes from the Home Office. So while it sounds fair and just, we must remember reality in the background of TOEIC, of Windrush, of children removed from school because their national records are being misused beyond educational purposes. The Home Office is no friend of child rights, and does not foster the societal values that break down discrimination and harm. It instead creates harms of its own making, and division by design.

I’m going to quote Graham Smith, for I cannot word it better.

“Harms to society, feature heavily in the White Paper, for example: content or activity that:

“threatens our way of life in the UK, either by undermining national security, or by reducing trust and undermining our shared rights, responsibilities and opportunities to foster integration.”

Similarly:

“undermine our democratic values and debate”;

“encouraging us to make decisions that could damage our health, undermining our respect and tolerance for each other and confusing our understanding of what is happening in the wider world.”

This kind of prose may befit the soapbox or an election manifesto, but has no place in or near legislation.”

[Cyberleagle, April 18, 2019, Users Behaving Badly – the Online Harms White Paper]

My key concern in this area is that from a feeling of 'it is all awful' stems the sense that 'any regulation will be better than now', which carries a real risk of entrenching current practices that would not be better than now, and in fact need fixing.

More monitoring

The first is today's general monitoring of schoolchildren's Internet content for risks and harms, which creates unintended consequences and very real harms of its own — at the moment, without oversight.

In yesterday's House of Lords debate, Lord Haskel said,

"This is the practicality of monitoring the internet. When the duty of care required by the White Paper becomes law, companies and regulators will have to do a lot more of it." [April 30, HOL]

The Brennan Center for Justice yesterday published its research on spending by US schools purchasing social media monitoring software from 2013-18, and highlighted some of the issues:

"Aside from anecdotes promoted by the companies that sell this software, there is no proof that these surveillance tools work [compared with other practices]. But there are plenty of risks. In any context, social media is ripe for misinterpretation and misuse." [Brennan Center for Justice, April 30, 2019]

That monitoring software focuses on two things —

a) seeing children through the lens of terrorism and extremism, and b) harms caused by them to others, or as victims of harms by others, or self-harm.

It is nearly the same list of 'harms' topics that the White Paper covers, co-driven by the same department interested in it in schools — the Home Office.
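To make concrete why such tools are 'ripe for misinterpretation', here is a minimal sketch of the naive keyword matching this class of software relies on. It is illustrative only, not any vendor's product; the categories, keyword lists and messages are all invented.

```python
# Illustrative sketch only: a toy keyword flagger of the kind school monitoring
# tools are built around. Categories, keywords and messages are invented.

HARM_CATEGORIES = {
    "extremism": {"bomb", "attack", "radical"},
    "self_harm": {"cutting", "overdose"},
}

def flag_message(text: str) -> list:
    """Return the harm categories whose keywords appear anywhere in the text."""
    lowered = text.lower()
    return [
        category
        for category, keywords in HARM_CATEGORIES.items()
        if any(keyword in lowered for keyword in keywords)
    ]

# Innocent schoolwork trips the same rules as genuinely worrying content:
examples = [
    "My history essay is on the radical reformers of 1832.",      # flagged: extremism
    "I'm cutting the last scene from my drama piece.",            # flagged: self_harm
    "The project deadline is a heart attack waiting to happen.",  # flagged: extremism
]

for message in examples:
    print(flag_message(message), "<-", message)
```

A human reviewer or a more sophisticated tool may catch such false positives, but every flag, right or wrong, still becomes a record created about a child.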

These concerns sit in the context of the direction of travel of law and policy making, and its loosening of accountability and process.

It was preceded by a House of Commons discussion on Social Media and Health, led by the former Minister for Digital, Culture, Media and Sport, who seems to feel more at home in that sphere than in health.

His unilateral award of funds to the Samaritans for work with Google and Facebook on a duty of care, while the very same is still under public consultation, is surprising to say the least.

But it was his response to this question which points to the slippery slope such regulation may lead to. Freedom of speech champions should be most concerned not by what is potentially in any legislation ahead, but by the direction of travel and the debate around it.

“Will he look at whether tech giants such as Amazon can be brought into the remit of the Online Harms White Paper?”

He replied that “Amazon sells physical goods for the most part and surely has a duty of care to those who buy them, in the same way that a shop has a responsibility for what it sells. My hon. Friend makes an important point, which I will follow up.”

Mixed messages

The Center for Democracy and Technology recommended in its 2017 report, Mixed Messages? The Limits of Automated Social Media Content Analysis, that the use of automated content analysis tools to detect or remove illegal content should never be mandated in law.

Debate so far has demonstrated broad gaps between what is wanted, what is known, and what is possible. If behaviours are to be stopped because they are undesirable rather than unlawful, we open up a whole can of worms unless it is done with the greatest attention to detail.

Lord Stevenson and Lord McNally both suggested that pre-legislative scrutiny of the Bill, and more discussion would be positive. Let’s hope it happens.

Here are my personal first reflections on the Online Harms White Paper discussion so far.

Six suggestions:

Suggestion one: 

The Law Commission Review, mentioned in the House of Lords debate, may provide what I had been thinking of crowdsourcing, and now may not need to: a list of the laws that the discussion around the Online Harms White Paper reaches into, so that we can compare what is needed in debate against what is being sucked in. We should aim to curtail emotive discussion of the broad risks and threats people experience online. This would allow themes already covered in law to be set aside, and the focus placed on the gaps. It would make for much tighter and more effective legislation. For example, the Crown Prosecution Service offers guidelines on prosecuting cases involving communications sent via social media, but a wider list of law is needed.

Suggestion two:
After (1) defining what legislation is lacking, definitions must be very clear, narrow, and consistent with other legislation. They should not be for the regulator to determine ad hoc and alone.

Suggestion three:
If children's rights are to be so central to discussion of this paper, then their wider rights, including privacy and participation, access to information and freedom of speech, must be included in the debate. Making the regulations should also draw on academic, research-based evidence of children's experience online.

Suggestion four:
Internet surveillance software in schools should be publicly scrutinised. A review should establish the efficacy, boundaries and oversight of policy and practice as regards Internet monitoring for harms, and such monitoring should not be embedded even more widely without it. Boundaries should be put into legislation for clarity and consistency.

Suggestion five:
Terrorist activity or child sexual exploitation and abuse (CSEA) online are already unlawful and should not need additional Home Office powers. Great caution must be exercised here.

Suggestion six: 
Legislation could and should encapsulate accountability and oversight for micro-targeting and algorithmic abuse.


More detail behind my thinking, follows below, after the break. [Structure rearranged on May 14, 2019]



Can Data Trusts be trustworthy?

The Lords Select Committee report on AI in the UK, in March 2018, suggested that "the Government plans to adopt the Hall-Pesenti Review recommendation that 'data trusts' be established to facilitate the ethical sharing of data between organisations."

Since data distribution already happens, what difference would a Data Trust model make to ‘ethical sharing‘?

A ‘set of relationships underpinned by a repeatable framework, compliant with parties’ obligations’ seems little better than what we have today, with all its problems including deeply unethical policy and practice.

The ODI set out some of the characteristics Data Trusts might have or share. As importantly, we should define what Data Trusts are not. They should not simply be a new name for pooling content and a new single distribution point. Click and collect.

But is a Data Trust little more than a new description for what goes on already? Either a physical space or legal agreements for data users to pass around the personal data from the unsuspecting, and sometimes unwilling, public. Friends-with-benefits who each bring something to the party to share with the others?

As with any communal risk, it is the standards of the weakest link, the least ethical, the one that pees in the pool, that will increase reputational risk for all who take part, and spoil it for everyone.

Importantly, the Lords AI Committee report recognised that there is an inherent risk in how the public would react to Data Trusts, because there is no social license for this new data sharing.

“Under the current proposals, individuals who have their personal data contained within these trusts would have no means by which they could make their views heard, or shape the decisions of these trusts.”

These are views that those keen on Data Trusts seem keen to ignore.

When the Administrative Data Research Network, a new infrastructure for "deidentified" data linkage, was set up in 2013, extensive public dialogue was carried out across the UK. It concluded in a report with very similar findings to those apparent at dozens of care.data engagement events in 2014-15:

There is no public support for:

  • “Creating large databases containing many variables/data from a large number of public sector sources,
  • Establishing greater permanency of datasets,
  • Allowing administrative data to be linked with business data, or
  • Linking of passively collected administrative data, in particular geo-location data”

The other ‘red-line’ for some participants was allowing “researchers for private companies to access data, either to deliver a public service or in order to make profit. Trust in private companies’ motivations were low.”

All of the above could be central to Data Trusts. All of the above highlight that in any new push to exploit personal data, the public must not be the last to know. And until all of the above are resolved, that social-license underpinning the work will always be missing.

Take the National Pupil Database (NPD) as a case study in a Data Trust done wrong.

It is a mega-database of over 20 other datasets. Raw data has been farmed out for years, under terms and conditions, to third parties, including users who hold an entire copy of the database, such as the somewhat secretive and unaccountable Fischer Family Trust, and others who don't answer to Freedom of Information requests and whose terms are hidden under commercial confidentiality. Buying and benchmarking data from schools and selling it back to some, the profiling is hidden from parents and pupils, yet FFT predictive risk scoring can shape a child's school experience from age 2. They don't really want to answer how staff can tell whether a child's FFT profile and risk score predictions are accurate, or whether they can spot errors or a wrong data input somewhere.

Even as the NPD moves towards risk reduction, its issues remain. When will children be told how data about them are used?

Is it any wonder that many people in the UK feel a resentment of institutions and orgs who feel entitled to exploit them, or nudge their behaviour, and a need to ‘take back control’?

It is naïve for those working in data policy and research to think that it does not apply to them.

We already have safe infrastructures in the UK for excellent data access. What users are missing is the social license to use them.

Some of today’s data uses are ethically problematic.

No one should be talking about increasing access to public data, before delivering increased public understanding. Data users must get over their fear of what if the public found out.

If your data use being on the front pages would make you nervous, maybe it’s a clue you should be doing something differently. If you don’t trust the public would support it, then perhaps it doesn’t deserve to be trusted. Respect individuals’ dignity and human rights. Stop doing stupid things that undermine everything.

Build the social license that care.data was missing. Be honest. Respect our right to know, and right to object. Build them into a public UK data strategy to be understood and be proud of.


Part 1. Ethically problematic
Ethics is dissolving into little more than a buzzword. Can we find solutions underpinned by law, and ethics, and put the person first?

Part 2. Can Data Trusts be trustworthy?
As long as data users ignore data subjects rights, Data Trusts have no social license.



Datasharing, lawmaking and ethics: power, practice and public policy

“Lawmaking is the Wire, not Schoolhouse Rock. It’s about blood and war and power, not evidence and argument and policy.”

"We can't trust the regulators," they say. "We need to be able to investigate the data for ourselves." Technology seems to provide the perfect solution. Just put it all online - people can go through the data while trusting no one.  There's just one problem. If you can't trust the regulators, what makes you think you can trust the data?" 

Extracts from The Boy Who Could Change the World: The Writings of Aaron Swartz. Chapter: ‘When is Technology Useful? ‘ June 2009.

The question keeps getting asked: is the concept of ethics obsolete in Big Data?

I've come to some conclusions about why 'Big Data' use keeps pushing the boundaries of what many people find acceptable, and why the people doing the research, the regulators and lawmakers so often express surprise at negative reactions. Some even express disdain for public opinion, dismissing it as ignorant, not 'understanding the benefits', or yet to be convinced. I've decided why I think what is considered 'ethical' in data science does not meet public expectation.

It’s not about people.

Researchers using large datasets often have a foundation in data science, applied computing or maths, and don't see data as people. It's only data. Finding patterns and correlations, and analysing individual-level data, are not seen as research involving human subjects.

This is embodied in the many research ethics reviews I have read in the last year in which the question is asked: does the research involve people? The answer given is invariably 'no'.

And these data analysts using, let's say, health data are not working in a subject that is founded on any ethical principle, in contrast with the medical world the data come from.

The public feels differently about information that is about them, and may be known only to them or to select professionals. The values that we as the public attach to our data, and our expectations of its handling, may reflect the expectations we have of how we are handled as the people connected to it. We see our data as all about us.

The values that are therefore put on data, and on how it can and should be used, can be at odds with one another; the public's perception is not shared by the researchers. This may be especially true if researchers are using data which has been de-identified, although it may not be anonymous.
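As a minimal sketch of why 'de-identified' is not the same as anonymous (the records below are invented for illustration), stripping names still leaves quasi-identifiers that can be joined to another source that does carry names:

```python
# Illustrative sketch with invented records: removing names does not make data
# anonymous if quasi-identifiers remain and can be joined to a named dataset.

deidentified_health = [
    {"postcode_sector": "BS1 4", "birth_year": 1972, "sex": "F", "diagnosis": "asthma"},
]

public_register = [
    {"name": "A. Example", "postcode_sector": "BS1 4", "birth_year": 1972, "sex": "F"},
]

QUASI_IDENTIFIERS = ("postcode_sector", "birth_year", "sex")

def reidentify(record, register):
    """Return register entries that match the record on every quasi-identifier."""
    return [
        person for person in register
        if all(person[key] == record[key] for key in QUASI_IDENTIFIERS)
    ]

for record in deidentified_health:
    matches = reidentify(record, public_register)
    if len(matches) == 1:  # a unique match means the 'de-identified' record is re-identified
        print(matches[0]["name"], "links to diagnosis:", record["diagnosis"])
```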

New legislation on the horizon, the Better Use of Data in Government measures, intends to fill the [loop]hole between what was legal to share in the past and what some want to exploit today, and it highlights a gap between the uses of data by public-interest, academic researchers and uses by government actors. The first by and large incorporate privacy and anonymisation techniques by design; the second are designed for the applied use of identifiable data.

Government departments and public bodies want to identify and track people who are somehow misaligned with the values of the system; either through fraud, debt, Troubled Families, or owing Student Loans. All highly sensitive subjects. But their ethical data science framework will not treat them as individuals, but only as data subjects. Or as groups who share certain characteristics.

The system again intrinsically fails to see these uses of data as being about individuals, but sees them as categories of people – "fraud", "debt", "Troubled Families". It is designed to profile people.

Services that were built not for people but for government processes result in datasets, used in research, that aren't well designed for research. So we now see attempts to shoehorn historical practices into data use by modern data science practitioners, with policy that is shortsighted.

We can’t afford for these things to be so off axis, if civil service thinking is exploring “potential game-changers such as virtual reality for citizens in the autism spectrum, biometrics to reduce fraud, and data science and machine-learning to automate decisions.”

In an organisation such as DWP this must be really well designed since “the scale at which we operate is unprecedented: with 800 locations and 85,000  colleagues, we’re larger than most retail operations.”

The power to affect individual lives through poor technology is vast, and some impacts seem to be being badly ignored. The 'real time earnings' database, which improved the accuracy of benefit payments, was widely agreed to have been harmful to some individuals through the Universal Credit scheme, with delayed payments meaning families at foodbanks, and contributing to worse.

“We believe execution is the major job of every business leader” is perhaps not the best wording in the context of DWP data uses.

What accountability will be built in, by design?

I’ve been thinking recently about drawing a social ecological model of personal data empowerment or control. Thinking about visualisation of wants, gaps and consent models, to show rather than tell policy makers where these gaps exist in public perception and expectations, policy and practice. If anyone knows of one on data, please shout. I think it might be helpful.

But the data *is* all about people

Regardless of whether they are in front of you or numbers on a screen, big or small datasets using data about real lives are data about people. And that triggers a need to treat the data with an ethical approach, as you would people involved face-to-face.

Researchers need to stop treating data about people as meaningless data, because that's not how people think about their own data being used. Not only that, but if the whole point of your big data research is to have impact, your data outcomes will change lives.

Tosh, I know some say. But, I have argued, the reason is that the applications of the data science / research / policy findings / impact of immigration in education review / [insert purposes of the data user's choosing] are designed to have an impact on people. Often on the people about whom the research is done without their knowledge or consent. And while most people say that is OK where it's public interest research, the possibilities are outstripping what the public has expressed as acceptable, and few seem to care.

Evidence from public engagement and ethics all says that hidden pigeon-holing, profiling, is unacceptable. Data Protection law has special requirements for it, on automated decisions. 'Profiling' is now clearly defined under Article 4 of the GDPR as "any form of automated processing of personal data consisting of using those data to evaluate certain personal aspects relating to a natural person, in particular to analyse or predict aspects concerning that natural person's performance at work, economic situation, health, personal preferences, interests, reliability, behaviour, location or movements."

Research using big datasets that 'isn't interested in individuals' may still intend to create results that profile groups, for applied policing, or to discriminate, or to make knowledge available by location. The data may have been de-identified, but in application it is no longer anonymous.

Big Data research that results in profiling groups may do so with the intent of applied health policy impacts for good; the very point of the research may be, for example, to improve a particular ethnic minority's access to services.

Then look at the voting process changes in North Carolina and see how that same data, the same research knowledge might be applied to exclude, to restrict rights, and to disempower.

Is it possible to have ethical oversight that can protect good data use and protect people’s rights if they conflict with the policy purposes?

The “clear legal basis” is not enough for public trust

Data use can be legal and can still be unethical, harmful and shortsighted in many ways, both for the impacts on research – in terms of people withholding data, falsifying data, and avoiding the system so as not to give data in at all – and for the lives it will touch.

What education has to learn from health is whether it will permit uses by 'others' outside education to jeopardise the collection of school data intended to serve the best interests of children, not the system. In England it must start to analyse what is needed versus what is wanted; what is necessary and proportionate, and what justifies maintaining named data indefinitely, exposed to changing scope.

In health, the most recent Caldicott review suggests scope change by design – that is a red line for many: “For that reason the Review recommends that, in due course, the opt-out should not apply to all flows of information into the HSCIC. This requires careful consideration with the primary care community.”

The community already spoke out, and strongly, in spring and summer 2014: there must be an absolute right to confidentiality to protect patients' trust in the system. Scope that 'sounds' like it might sneakily change in future will be a death knell for public interest research, because repeated trust erosion will be fatal.

Laws change to allow scope change without informing people whose data are being used for different purposes

Regulators must be seen to be trusted, if the data they regulate is to be trustworthy. Laws and regulators that plan scope for the future watering down of public protection, water down public trust from today. Unethical policy and practice, will not be saved by pseudo-data-science ethics.

Will those decisions in private political rooms be worth the public cost to research, to policy, and to the lives it will ultimately affect?

What happens when the ethical black holes in policy, lawmaking and practice collide?

At the last UK HealthCamp towards the end of the day, when we discussed the hard things, the topic inevitably moved swiftly to consent, to building big databases, public perception, and why anyone would think there is potential for abuse, when clearly the intended use is good.

The answer came back from one of the participants, “OK now it’s the time to say. Because, Nazis.” Meaning, let’s learn from history.

Given the state of UK politics, Go Home van policies, restaurant raids, the possibility of Trump getting access to UK sensitive data of all sorts from across the Atlantic, given recent policy effects on the rights of the disabled and others, I wonder if we would hear the gentle laughter in the room in answer to the same question today.

With what is reported as a sharp change in Whitehall's digital leadership today, the future of digital in government services, policy and lawmaking does indeed seem to be more “about blood and war and power” than “evidence and argument and policy“.

The concept of ethics in datasharing using public data in the UK is far from becoming obsolete. It has yet to begin.

We have ethical black holes in big data research, in big data policy, and in big data practices in England. The conflicts between public interest research and government uses of population-wide datasets, between how the public perceives the use of our data and how the data are actually used, and the gaps and tensions in policy and practice, are all there.

We are simply waiting for the Big Bang. Whether it will be creative or destructive, we are yet to feel.

*****

image credit: LIGO – graphical visualisation of black holes on the discovery of gravitational waves

References:

Report: Caldicott review – National Data Guardian for Health and Care Review of Data Security, Consent and Opt-Outs 2016

Report: The One-Way Mirror: Public attitudes to commercial access to health data

Royal Statistical Society Survey carried out by Ipsos MORI: The Data Trust Deficit

The illusion that might cheat us: ethical data science vision and practice

This blog post is also available as an audio file on soundcloud.


Anaïs Nin wrote in her 1946 diary of the dangers she saw in the growth of technology: expanding our potential for connectivity through machines, but diminishing our genuine connectedness as people. She could hardly be more contemporary today:

“This is the illusion that might cheat us of being in touch deeply with the one breathing next to us. The dangerous time when mechanical voices, radios, telephone, take the place of human intimacies, and the concept of being in touch with millions brings a greater and greater poverty in intimacy and human vision.”
[Extract from volume IV 1944-1947]

Echoes from over 70 years ago, can be heard in the more recent comments of entrepreneur Elon Musk. Both are concerned with simulation, a lack of connection between the perceived, and reality, and the jeopardy this presents for humanity. But both also have a dream. A dream based on the positive potential society has.

How will we use our potential?

Data is the connection between us as humans and what machines and their masters know about us. The values with which those masters underpin their machine design will determine the effect that the machines, and the knowledge they deliver, have on society.

In seeking ever greater personalisation, a wider dragnet of data is putting together ever more detailed pieces of information about an individual person. At the same time data science is becoming ever more impersonal in how we treat people as individuals. We risk losing sight of how we respect and treat the very people whom the work should benefit.

Nin grasped the risk that a wider reach can mean more superficial depth. Facebook might be a model today for the large circle of friends you might gather, but how few you trust with confidences, with personal knowledge about your own personal life, and the privilege it is when someone chooses to entrust that knowledge to you. Machine data mining increasingly tries to get an understanding of depth, and may also add new layers of meaning through profiling, comparing our characteristics with others in risk stratification.

Data science, research using data, is often talked about as if it is something separate from using information from individual people. Yet it is all about exploiting those confidences.

Today, as the reach of what it is possible for a few people in institutions to gather about most of the public has grown, whether in scientific research or in surveillance of different kinds, we hear experts repeatedly talk of the risk of losing the valuable part: the knowledge, the insights that benefit us as a society if we can act upon them.

We might know more, but do we know any better? To use a well known quote from her contemporary, T S Eliot, ‘Where is the wisdom we have lost in knowledge? Where is the knowledge we have lost in information?’

What can humans achieve? We don’t yet know our own limits. What don’t we yet know?  We have future priorities we aren’t yet aware of.

To be able to explore the best of what Nin saw as 'human vision' and Musk sees in technology, the benefits we have from our connectivity, our collaboration and shared learning need to be driven with an element of humility: accepting values that shape the boundaries of what we should do, while constantly evolving what we could do.

The essence of this applied risk is that technology could harm you, more than it helps you. How do we avoid this and develop instead the best of what human vision makes possible? Can we also exceed our own expectations of today, to advance in moral progress?


Destination smart-cities: design, desire and democracy (Part three)

Smart Technology we have now: A UK Case Study

In places today where climate surveillance sensors are used to predict and decide on which smog days cars should be banned from cities, automatic number-plate recognition (ANPR) can identify cars driving on the wrong days and send automatic penalties.

Similarly, ANPR technology is used in UK tunnels and congestion charging systems. One British company encouraging the installation of ANPR in India is the same provider of a most significant part of British public administrative data and surveillance software in a range of sectors.

About itself, that company says:

“Northgate Public Services has a unique experience of delivering ANPR software to all Home Office police forces. We developed and managed the NADC, the mission critical solution providing continuous surveillance of the UK’s road network.  The NADC is integrated with other databases, including the Police National Computer, and supports more than 30 million reads a day across the country.”

30 million snapshots from 'continuous surveillance of the UK's road network'. That surprised me. That's half the population of England, not all of whom drive. 30 million every day. It's massive, unreasonable, and risks backlash.

Northgate Public Services’ clients also include 80% of UK water companies, as well as many other energy and utility suppliers.

And in the social housing market they stretch to debt collection, or ‘income management’.

So who, I wondered, is this company that owns all this data-driven access to our homes, our roads, our utilities, life insurance, hospital records and registries, and half of all UK calls to emergency services?

Northgate Information Solutions announced the sale of its Public Services division in December 2014 to the private equity firm Cinven. Cinven also owns a 62% shareholding in the UK private healthcare provider Spire, with all sorts of influence given its active share of services and markets.

Not only does this private equity firm hold this vast range of data systems across a wide range of sectors, but it is making decisions about how our public policies and money are being driven.

Using health screening data, they're even making decisions that affect our future, our behaviour and our private lives: "software provides the information and tools that housing officers need to proactively support residents, such as sending emails, letters or rent reminders by SMS and freeing up time for face-to-face support."

Of their ANPR systems, Northgate says the data should be even more widely used “to turn CONNECT: ANPR into a critical source of intelligence for proactive policing.”

If the company were to start to ‘proactively’ use all the data it owns across the sectors we should be asking, is ‘smart’ sensible and safe?

Where is the boundary between proactive and predictive? Or public and private?

Where do companies draw the line between public and personal space?

The public services provided by the company seem to encroach into our private lives in many ways. In Northgate's own words, "It's also deeply personal."

Who’s driving decision making is clear. The source of their decision making is data. And it’s data about us.

Already today, whether data are collected by companies proactively, as with ANPR, or through companies managing data we give them with consent for a direct administrative purpose, private companies are the guardians of massive amounts of our personal and public data.

What is shocking to me is how data collected in one area of public services are also used for entirely different secondary purposes, without informed consent or even an FYI, for example in schools.

If we don’t know which companies manage our data, how can we trust that it is looked after well and that we are told if things go wrong?

Steps must be taken in administrative personal data security, transparency and public engagement to shore up public trust, as the foundation for future data sharing within the critical infrastructure of any future strategy, for public or commercial application. Strategy must include more transparency of the processing of our data, and public involvement beyond the minimum, if 'digital citizenship' is to be meaningful.

How would our understanding of data improve if anyone using personal data were required to put in place clear public statements about their collection, use and analysis of data? If the principles of data protection were actually upheld, in particular that individuals should be informed? How would our understanding improve especially as regards automated decision-making and monitoring technology? Not ninety-page privacy policies. Plain English. If you need ninety pages, you're doing too much with my data.
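As a sketch of what such a short, plain statement could look like in practice, the fields, values and contact address below are my own invention, not a proposed standard; the point is that a statement a person can read in a minute is also one that can be compared across organisations:

```python
# Illustrative sketch only: the fields, values and contact address are invented,
# not a standard. A data-use statement short enough to read, structured enough
# to compare across organisations.

data_use_statement = {
    "who_we_are": "Example Council",
    "what_we_collect": ["name", "address", "school attended"],
    "why_we_collect_it": "to plan school places",
    "who_we_share_it_with": ["Department for Education"],
    "automated_decisions_made": "none",
    "how_long_we_keep_it": "until the child leaves school, plus one year",
    "how_to_object": "privacy@example-council.gov.uk",
}

# Printed as plain English, it fits on a postcard rather than in ninety pages.
for field, value in data_use_statement.items():
    if isinstance(value, list):
        value = ", ".join(value)
    print(f"{field.replace('_', ' ').capitalize()}: {value}")
```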

Independent privacy impact assessments should be mandatory and published before data are collected and shared with any party other than that to which it was given for a specific purpose. Extensions broadening that purpose should require consultation and consent. If that’s a street, then make it public in plain sight.

Above all, planning committees in local government, in policy making and practical application, need to think about data, and its ethical implications, in every public decision they make. We need more robust decision-making in the face of corporate data grabs, to keep data collected in public space safe, and to keep some of it private.

How much less fun is a summer’s picnic spent smooching, if you feel watched? How much more anxious will we make our children if they’re not allowed to ever have their own time to themselves, and every word they type in a school computer is monitored?

How much individual creativity and innovation does that stifle? We are effectively censoring children before they have written a word.

Large corporations have played historically significant and often shadowy roles in surveillance that retrospectively were seen as unethical.

We should consider, sooner rather than later, whether corporations such as BAE Systems, Siemens and the IMSs of the world act in ways worthy of our trust, given such massive reach into our lives with so little transparency and oversight.

“Big data is big opportunity but Government should tackle misuse”

The Select Committee warned in its recent report on Big Data that distrust arising from concerns about privacy and security is often well-founded and must be resolved by industry and Government.

If 'digital' means smart technology will in future be used in "every part of government", as announced at #Sprint16, what will be the effect on the involvement and influence of these massive corporations in democracy itself?

******

I thought about this in more depth in part one, in "Smart systems and Public Services" (part two), and continue after this by looking at "The Best Use of Data" in predictions and the future (part four).

Parliament’s talking about Talk Talk and Big Data, like some parents talk about sex. Too little, too late.

Parliament’s talking about Talk Talk and Big Data, like some parents talk about sex ed. They should be discussing prevention and personal data protection for all our personal data, not just one company, after the event.

Everyone's been talking about TalkTalk, and for all the wrong reasons: data loss and a 15-year-old, combined with a reportedly reckless approach to data protection, compounded by a lack of care.

As Rory Cellan-Jones wrote [1], rebuilding its reputation with customers and security analysts is going to be a lengthy job.

In Parliament, Chi Onwurah, Shadow Minister for Culture & the Digital Economy, summed it up in her question, asking the Minister to acknowledge "that all the innovation has come from the criminals while the Government sit on their hands, leaving it to businesses and consumers to suffer the consequences?" [Hansard 2]

MPs were concerned for the 4 million* customers’ loss of name, date of birth, email, and other sensitive data, and called for an inquiry. [It may now be fewer*.] [3] The SciTech committee got involved too.

I hope this means Parliament will talk about TalkTalk not as the problem to be solved, but as one case study in a review of contemporary policy and practices in personal data handling.

Government spends money on data protection work in the [4] "National Cyber Security Programme" [NCSP]. What is the measurable outcome – particularly for TalkTalk customers and public confidence – from its £860M budget? If you look at the breakdown of those sums, with little going towards data protection and security compared with the Home Office and Defence, we should ask whether government is spending our money in an appropriately balanced way on the different threats it perceives. Keith Vaz suggested that British companies lose £34 billion every year to cybercrime. Perhaps this question will come into the inquiry.

This all comes after things have gone wrong. Again [5]. An organisation we trusted has abused that trust by not looking after data with the stringency that customers should be able to expect in the 21st century, and reportedly not making preventative changes to issues apparent a year ago. Will there be consequences this time?

The government now saying it is talking about data protection and consequences, is like saying they’re talking sex education with teens, but only giving out condoms to the boys.

It could be too little too late. And they want above all to avoid talking about their own practices. Let’s change that.

Will this mean a review to end risky behaviour, bring in change, and be wiser in future?

If MPs explore what the NCSP does, then we, the public, should learn more about what government's expectations of commercial companies are as regards modern practices.

In addition, any MPs' inquiry should address government's own role in handling the public's personal data. Will members of government act in a responsible manner, or simply tell others how to do so?

Public discussion around both commercial and state use of our personal data should mean genuine public engagement. It should involve a discussion of consent, where necessary, for purposes beyond those we expect or have had explained when we submit our data, and there needs to be a change in risky behaviour in physical storage and release practices, or all the talk is wasted.

Some say TalkTalk's practices mean they have broken their contract, along with consumer trust. Government departments should also be asking whether their data handling would constitute a breach of the public's trust and reasonable expectations.

Mr Vaizey should apply the same logic to government's handling of data as he does to commercial handling. He said he is open to suggestions for improvement. [6]

Let’s not just talk about TalkTalk.

    • Let’s Talk Consequences: organisations taking risk seriously and meaningful consequences if not [7]
    • Let’s Talk Education: the education of the public on personal data use by others and rights and responsibilities we have [8]
    • Let's Talk Parliament's Policies and Practices: about the complementary lack of data understanding in government, and about understanding what good practice is in physical storage, good governance and transparent oversight
    • Let’s Talk Public Trust: and the question whether government can be trusted with public data it already has and whether its current handling makes it trustworthy to take more [9]

Vaizey said of the ICO now in his own department: “The Government take the UK’s cyber-security extremely seriously and we will continue to do everything in our power to protect organisations and individuals from attacks.”

“I will certainly meet the Information Commissioner to look at what further changes may be needed in the light of this data breach. [..] It has extensive powers to take action and, indeed, to levy significant fines. “

So what about consequences when data are used in ways the public would consider a loss, and not through an attack or a breach, but government policy? [10]

Let’s Talk Parliament’s Policies and Practices

Commercial companies are not alone in screwing up the use, processing [11] and management of our personal data. The civil service under current policy seems perfectly capable of doing so by itself. [12]

Government data policy has not kept up with 21st-century practices and to me seems to work in the dark, as Chi Onwurah said,

‘illuminated by occasional flashes of incompetence.’

This incompetence can risk harm to people’s lives, to business and to public confidence.

And once given, trust would be undermined by changing the purposes or scope of use for which it was given, for example as care.data plans to do after the pilot. A most risky idea.

Trust in these systems, whether commercial or state, is crucial. Yet reviews which highlight this, and make suggestions to support trust, such as 'data should never be (and currently is never) released with personal identifiers' in the Shakespeare Review, have been ignored by government.

Where our personal data are not used well in government departments by the departments themselves, they seem content to date to rely on public ignorance to get away with current shoddy practices.

Practices such as not knowing who all your customers are, because you pass data on to others. Practices such as giving individual-level, identifiable personal data to third parties without informing the public, or asking for consent. Practices such as never auditing or measuring any benefit of giving away others' personal data.

“It is very important that all businesses, particularly those handling significant amounts of sensitive customer data, have robust procedures in place to protect those data and to inform customers when there may have been a data breach.” Ed Vaizey, Oct 26th, HOC

If government departments prove to be unfit to handle the personal data we submit in trust to the state today, would we be right to trust them with even more?

While the government is busy wagging fingers at commercial data use poor practices, the care.data debacle is evidence that not all its MPs or civil service understand how data are used in commercial business or through government departments.

MPs calling for commercial companies to sharpen up their data protection must understand how commercial use of data often piggy-backs on the public use of our personal data, or on others getting access to it via government for purposes that were unintended.

Let’s Talk Education

If the public is to understand how personal data are to be kept securely with commercial organisations, why should they not equally ask to understand how the state secures their personal data? Educating the public could lead to better engagement with research, better understanding of how we can use digital services and a better educated society as a whole. It seems common sense.

At a recent public event [13], I asked civil servants talking about the big upcoming data plans they had announced, linking school data with further education and employment data, how they planned to involve the people whose data they would use. There was no public engagement to mention. Why not? Inexcusable in this climate.

Public engagement is a matter of trust and developing understanding in a relationship. Organisations must get this right.[14]

If government is discussing risky practices by commercial companies, they also need to look closer to home and fix what is broken in government data handling where it exposes us to risk through loss of control of our personal data.

The National Pupil Database, for example, stores and onwardly shares identifiable, individual, sensitive data in at least 8m children's records, from ages 2 to 19. That's twice as big as the TalkTalk loss was first thought to be.

Prevention, not protection, is what we should champion. Rather than protection after the event, MPs and the public must demand an emphasis on prevention measures in our personal data use.

This week sees more debate on how and why the government will legislate to have more powers to capture more data about all the people in the country. But are government policy, process and practices fit to handle our personal data, what they do with it and who they give it to?

Population-wide gathering of data and surveillance in any of its many forms is not any less real just because you don't see it. Children's health, schools, increases in the volume of tax data collection. We don't discuss enough how these policies can be used every day without the right oversight. MPs are like the conservative parents not comfortable talking to their teens about sleeping with someone. Just because you don't know, it doesn't mean they're not doing it. [15] It just means you don't want to know, because if you find out they're not doing it safely, you'll have to do something about it.

And it might be awkward. (Meanwhile in schools, real, meaningful PSHE has been left off the curriculum.)

Mr. Vaizey asked in the Commons for suggestions for improvement.

My suggestion is this. There are many options for how government manages data. But the principle should be simple: our personal data need not only to be protected, but not to be exposed to unnecessary risk in the first place, by commercial or state bodies. Doing nothing is not an option.

Let’s Talk about more than TalkTalk

Teens will be teens. If commercial companies can't manage their systems better to prevent a child successfully hacking them, then it's not enough to point at criminal behaviour. There is fault to learn from on all sides, in commercial and state uses of personal data.

There is talk of new, and bigger, data sharing plans. [16]

Will the government wait and keep its fingers crossed each month to see whether our data are used safely in unsecured settings by some of the unknown partners the data might be onwardly shared with, hoping we won't find out and they won't need to talk about it, or have a grown-up public debate based on public education?

Will it put preventative measures in place appropriate to the sensitivity and volume of the data it is itself responsible for?

Will moving forward with new plans mean safer practices?

If government genuinely wants our administrative data at the heart of digital government fit for the 21st century, it must first understand how all government departments collect and use public data. And it must educate the public in this and commercial data use.

We need a fundamental shift in the way the government respects public opinion, and a shift towards legal and privacy compliance – both of which are lacking.

Let’s not talk about TalkTalk. Let’s have meaningful grown up debate with genuine engagement. Let’s talk about prevention measures in our data protection. Let’s talk about consent. It’s personal.

******

[1] Questions for TalkTalk: http://www.bbc.co.uk/news/technology-34636308

[2] Hansard: http://www.publications.parliament.uk/pa/cm201516/cmhansrd/cm151026/debtext/151026-0001.htm#15102612000004

[3] TalkTalk update: http://www.talktalkgroup.com/press/press-releases/2015/cyber-attack-update-tuesday-october-30-2015.aspx

[4] The Cyber Security Programme: http://www.civilserviceworld.com/articles/feature/depth-look-national-cyber-security-programme

[5] Paul reviews TalkTalk: https://paul.reviews/value-security-avoid-talktalk/

[6] https://ico.org.uk/for-organisations/guide-to-data-protection/conditions-for-processing/

[7] Let’s talk Consequences: the consequences of current failures to meet customers’ reasonable expectations of acceptable risk, are low compared with elsewhere.  As John Nicolson (East Dunbartonshire) SNP pointed out in the debate, “In the United States, AT&T was fined £17 million for failing to protect customer data. In the United Kingdom, the ICO can only place fines of up to £500,000. For a company that received an annual revenue of nearly £1.8 billion, a fine that small will clearly not be terrifying. The regulation of telecoms must be strengthened to protect consumers.”

[8] Let’s talk education: FOI request revealing a samples of some individual level data released to members of the press: http://www.theyworkforyou.com/debates/?id=2015-10-26b.32.0

The CMA brought out a report in June on the use of consumer data; the topic should be familiar in parliament, but little engagement has come about as a result. It suggested the benefit:

“will only be realised if consumers continue to provide data and this relies on them being able to trust the firms that collect and use it”, and that “consumers should know when and how their data is being collected and used and be able to decide whether and how to participate. They should have access to information from firms about how they are collecting, storing and using data.”

[9] Let's Talk Public Trust – are the bodies involved trustworthy? Government lacks an effective data policy and is resistant to change. Yet it wants to collect ever more personal and individual-level data, for unknown purposes, from the majority of 60m people, with an unprecedented PR campaign. When I heard the words 'we want a mature debate' it was reminiscent of HSCIC's 'intelligent grown up debate' requested by Kingsley Manning, in a speech in which he admitted that lack of public knowledge was akin to a measure of past success, and that effectively they would rather have kept the use of population-wide health data 'below the radar'.

Change: We need change; the old way, after all, didn't work, according to Minister Matt Hancock: "The old model of government has failed, so we will build a new one." I'd like to see what that new one will look like. Does he mean to expand only data sharing policy, or the powers of the civil service?

[10] National Pupil Database detailed data releases to third parties https://www.whatdotheyknow.com/request/pupil_data_national_pupil_databa

[11] http://adrn.ac.uk/news-events/latest-news/adrn-rssevent

[12] https://jenpersson.com/public-trust-datasharing-nib-caredata-change/

[13] https://www.liberty-human-rights.org.uk/human-rights/privacy/state-surveillance

[14] http://www.computerweekly.com/news/4500256274/Government-will-tackle-barriers-to-sharing-and-linking-data-says-Cabinet-Office-minister-Hancock

Building Public Trust in care.data sharing [1]: Seven step summary to a new approach

Here's my opinion after taking part in the NIB #health2020 Bristol event on 24/7/2015, and the presentation of plans at the King's Fund hosted event in June. Data sharing includes plans for extraction and uses of primary care data by third parties, charging ahead under the care.data banner.

Wearing my hat from a previous role in change management and communications, I share my thoughts in the hope the current approach can adapt and benefit from outside perspectives.

The aim of "Rebuilding and sustaining Public trust" [1] needs to be refocused to treat the cause, not only the symptoms, of the damage done in 2014. Here's why:

A Seven Step Top Line Summary

1. Abstract ‘public trust’ is not vital to the future of data sharing. Being demonstrably worthy of public trust is.

2. Data-sharing is not vital to future-proof the NHS. Using knowledge wisely is.

3. A timed target to ‘get the public’s data’, is not what is needed. Having a stable, long term future-proofed and governable model is.

4. Tech solutions do not create trust. Enable the positive human response to what the organisation wants from people, enabling their confident 'yes' to data-sharing. [It might be supported by technology-based tools.]

5. Communications that tell the public 'we know best, trust us' fail. While professional bodies [BMA [2], GPES advisory group, the APPG report calling for a public benefits plan, ICO, and expert advice such as Caldicott] are ignored or remain to be acted upon, it remains challenging for the public to see how the programme's needs, motives and methods are trustworthy. The [Caldicott 2] Review Panel found that "commissioners do not need dispensation from confidentiality, human rights & data protection law." [3] Something's gotta give. What will it be?

6. care.data consistency. Relationships must be reliable and have integrity.
“Trust us – see the benefits” [But we won’t share the business cost/benefit plan.]
“Trust us – we’re transparent” [But there is nothing published in 2015 at all from the programme board minutes] [4]
“Trust us – we’ll only use your data wisely, with the patient in control” [Ignore that we didn’t before [5] and that we still share your data for secondary uses even if you opted out [6] and no, we can’t tell you when it will be fixed…]

7. Voices do not exist in a vacuum. Being trustworthy on care.data does not stand alone but is part of the NHS 'big picture'.
Department of Health to GPs: "Trust us about data sharing." [And ignore that we haven't respected many of your judgements or opinions.]
NHS England to GPs: "Trust us about data sharing." [And ignore our lack of general GP support: MPIG withdrawal, misrepresentation in CQC reports.]
NHS England and Department of Health to professionals and public: "The NHS is safe in our hands." Everyone: "We see no evidence that plans for cost savings, 7 day working, closures and the 5YFV integration will bring the promised benefits. Let us 'see the holes', so that we can trust you based on evidence."

See the differences?

Target the cause, not the symptom:

The focus in the first half, the language used by NHS England/NIB/DH, sets out their expectations of the public: "You must trust us, and how you give us your data."

The focus should instead be on the second half: a shift to the organisation, NHS England/NIB/DH, setting out expectations from the public point of view. Enable the public to trust the organisation. Enable individual citizens to trust what is said by individual leaders. This will enable citizens to be consensual sharers in the activity your organisation imposes – the demand for care.data through a statutory gateway, obliging GPs to disclose patient data.

The fact that trust is broken, and specifically on data-sharing that there is a deficit [A] between how much the public trusts the organisation and how the organisation handles data, is not the fault of the public, or of "1.4 M NHS staff", or the media, or patient groups' pressure. It's based on proven experience.

It’s based on how organisations have handled data in the past. [5] Specifically on the decisions made by DH, and the Information Centre and leaders in between. Those who chose to sell patient data without asking the public.

The fact that trust is broken is based on how leadership individuals in those organisations have responded to that. Often taking no responsibility for loss.

No matter how often we hear “commissioners will get a better joined up picture of care needs and benefit you”, it does not compensate for past failings.

Only demonstrable actions to show why it will not happen in future can start that healing process.

Target the timing to the solution, not a shipping deadline

“Building trust to enable data sharing” aims at quick fixes, when what is needed is a healing process and ongoing relationship maintenance.

Timing has to be tailored to what needs done, not to an 'artificial deadline'. That has been said, but it does not seem to match reality.

Addressing the Symptoms and not the Cause, will not find a Cure

What needs done?

Lack of public trust and the data trust deficit [A] are symptoms in the public, to be understood. But it is the causes in the organisations that must be treated.

So far, many of the NHS England staff I have met in relation to care.data appear to have a "them and us" mentality. It is almost tangible in the language used at these meetings and in the defensive derision of public concerns: "tin foil hat wearers", "Luddites" [7] and, my personal favourite, "consent fetishists". [8] It is counterproductive and seems born of either a lack of understanding or frustration.

The NIB/DH/NHS England/ P&I Directorate must accept they cannot force any consensual change in an emotion-based belief based on past experiences, held by the public.

Those people each have different starting points of knowledge and beliefs.  As one attendee said, “There is no single patient replicated 60 million times.”

The NIB/DH/NHS England/ P&I Directorate can only change what they themselves can control. They have to model and be seen to model change that is trustworthy.

How can an organisation demonstrate it is trustworthy?

This means shifting the focus of the responsibility for change from the public and professionals to the leadership organisation.

There is a start in this work stream, but there is little new that is concrete.

The National Data Guardian (NDG) role has been due to be put on a legal footing "at the earliest opportunity" since November 2014. [9] That was nine months ago.

Updated information governance guidance is on the way.

Then there are two really strong new items that would underpin public trust, to be planned in a 'roadmap': first, a system that can record and share consent decisions, and second, a means to provide information on the use to which an individual's data has been put.

How and when those two keystones of public trust will actually be offered appears unknown. They would encourage public trust by enabling choice and control over our data. So I would ask: if we're not there yet on the roadmap, how can consent options be explained to the public in care.data communications, when there is as yet no mechanism to record and effect them? More on that later.

Secondly, when will a usage report be available? That will be the proof that what was offered was honoured. It is one of the few tools the organisation(s) can offer to demonstrate they are trustworthy: you said, we did. So again, why jeopardise public trust by rolling out data extractions into the existing, less trustworthy environment?
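To make those two keystones concrete, here is a minimal, purely hypothetical sketch in Python. The names, fields and rules are my assumptions for illustration, not any real NHS or HSCIC design; it only shows the principle of recording consent decisions and producing a "you said, we did" usage report:

from dataclasses import dataclass
from datetime import date

@dataclass
class ConsentDecision:
    patient_id: str      # pseudonymous identifier, illustrative only
    purpose: str         # e.g. "commissioning", "research"
    allowed: bool
    recorded_on: date

@dataclass
class DataUse:
    patient_id: str
    purpose: str
    recipient: str       # organisation that received the data
    used_on: date

def usage_report(decisions, uses):
    """For each recorded use, report whether it matched the patient's latest
    recorded consent decision for that purpose: the 'you said, we did' evidence."""
    report = []
    for use in uses:
        relevant = [d for d in decisions
                    if d.patient_id == use.patient_id and d.purpose == use.purpose]
        latest = max(relevant, key=lambda d: d.recorded_on, default=None)
        honoured = latest.allowed if latest else False  # no recorded choice: treat as not honoured
        report.append(f"{use.used_on}: {use.purpose} data sent to {use.recipient} - "
                      f"{'consistent with' if honoured else 'AGAINST'} your recorded choice")
    return report

The point of the sketch is only that neither item is exotic technology; what has been missing is the commitment to build and publish them before extraction, not the means.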

How well this is done will determine whether the programme can realise its hoped-for benefits. How the driving leadership influences that outcome will come down to the organisational approach to opt out, to decisions on care.data communications content, to the ways and channels in which they are communicated, to accepting what has not worked to date, and to planning long-term approaches to communicating change before the pathfinders start. [Detailed steps on this follow.]

Considering the importance we have been told the programme holds, it is vital to get it right. [10]

I believe that changing the approach, from explaining benefits and focusing on public trust to demonstrating the changes that make the organisation worth trusting, will make all the difference.

So before rolling out the next data-sharing steps, think hard about what the possible benefits and risks of doing so now will be, versus waiting for a better environment in which to do it.

Conclusion: Trust is not about the public. Public trust is about the organisation being trustworthy. Over to you, orgs.

####

To follow, for those interested in nitty gritty, some practical suggestions for progress in Building Public Trust in data sharing:

This is Part one: A seven step top line summary – What I’d like to see change addressing public trust in health data sharing for secondary purposes.

Part two: a new approach is needed to understanding public trust. For those interested in a detailed approach to trust: what practical and policy steps influence trust, and how this plays out in research and commissioning. Trust is not homogeneous. Trust is nuanced even within the single relationship between one individual and another. It doesn't exist in a vacuum.

Part three: Know where you're starting from. What behaviours influence trust. Fixing what has already been communicated is vital before new communications get rolled out: vital to the content of your communications, and vital for public trust and credibility.

Part four: 'Communicate the benefits' won't work. How communications influence trust. For those interested in more in-depth reasons, I outline why the communications approach is not working, why the focus on 'benefits' is wrong, and what would fix it.

Part five: Future solutions. Why a new approach may work better for future trust: not attempting to rebuild trust where there is now none, but strengthening what is already trusted and fixing today's flawed behaviours, the honesty and reliability that are vital to future-proofing trust.

####

Background References:

I’m passionate about people using technology to make their jobs and lives better, simpler, and about living well. So much so, that this became over 5000 words. To solve that, I’ve assumed a baseline knowledge and I will follow up with separate posts on why a new approach is needed to understanding “Public Trust”, to “Communicating the benefits” and “Being trustworthy and other future solutions”.

If this is all new, welcome, and I suggest you look over some of the past 18 months' posts, which include public voice captured from eight care.data events in 2014. care.data is about data sharing for secondary purposes, not direct care.

[1] NHS England October 2014 http://www.england.nhs.uk/2014/10/23/nhs-leaders-vision/

[2] BMA LMC Vote 2014 http://bma.org.uk/news-views-analysis/news/2014/june/patients-medical-data-sacrosanct-declares–bma

[3] Caldicott Review 2: https://www.gov.uk/government/uploads/system/uploads/attachment_data/file/192572/2900774_InfoGovernance_accv2.pdf

[4] Missing Programme Board documents: 2015 and June-October 2014

[5] HSCIC Data release register

[6] Telegraph article on Type 2 opt out http://www.telegraph.co.uk/news/health/news/11655777/Nearly-1million-patients-could-be-having-confidential-data-shared-against-their-wishes.html

[7] Why Wanting a Better Care.Data is not Luddite: http://davidg-flatout.blogspot.co.uk/2014/04/why-wanting-better-caredata-is-not.html

[8] Talking to the public about using their data is crucial- David Walker, StatsLife http://www.statslife.org.uk/opinion/1316-talking-to-the-public-about-using-their-data-is-crucial

[9] Dame Fiona Caldicott appointed in new role as National Data Guardian

[10] Without care.data health service has no future says director http://www.computerweekly.com/news/2240216402/Without-Caredata-we-wont-have-a-health-service-for-much-longer-says-NHS

Polls of public feeling:

[A] Royal Statistical Society Data Trust Deficit http://www.statslife.org.uk/news/1672-new-rss-research-finds-data-trust-deficit-with-lessons-for-policymakers

[B] Dialogue on data – work carried out through the ADRN

Are care.data pilots heading for a breech delivery?

Call the midwife [if you can find one free, the underpaid overworked miracle workers that they are], the care.data ‘pathfinder’ pilots are on their way! [This is under a five minute read, so there should be time to get the hot water on – and make a cup of tea.]

I’d like to be able to say I’m looking forward to a happy new arrival, but I worry care.data is set for a breech birth. Is there still time to have it turned around? I’d like to say yes, but it might need help.

The pause appears to be over as the NHS England board delegated the final approval of directions to their Chair, Sir Malcolm Grant and Chief Executive, Simon Stevens, on Thursday May 28.

Directions from NHS England which will enable the HSCIC to proceed with their pathfinder pilots’ next stage of delivery.

“this is a programme in which we have invested a great deal, of time and thought in its development.” [Sir Malcolm Grant, May 28, 2015, NHS England Board meeting]

And yet. After years of work and planning, and a 16-month pause, as long as the gestation of a walrus, it appears the directions had flaws. Technical fixes are also needed before the plan can proceed to extract GP care.data and merge it with our hospital data at the HSCIC.

And there are lots of unknowns about what this will deliver.**

Groundhog Day?

The public and MPs were surprised in 2014. They may be even more surprised if 2015 sees a repeat of the same again.

We have yet to hear case studies of who received data in the past and would now no longer be eligible. Commercial data intermediaries? They can still get data, the NHS Open Day was told. And they do, as the HSCIC DARS meeting minutes in 2015 confirm.

By the time the pilots launch, the objection must actually work, communications must include an opt out form and must clearly give the programme a name.

I hope that those lessons have been learned, but I fear they have not been. There is still a lack of transparency. NHS England's communications materials, and the May-Oct 2014 and 2015 programme board minutes, have not been published.

We have been here before.

Back to September 2013: the GPES Advisory Committee, the ICO and Dame Fiona Caldicott, as well as campaigners and individuals, could see the issues in the patient leaflet and asked for fixes. The programme went ahead anyway in February 2014 and, although the failure was foreseen, failed to deliver. [For some, quite literally.]

These voices aren’t critical for fun, they call for fixes to get it right.

I would suggest that all of the issues raised since April 2014, were broadly known in February 2014 before the pause began. From the public listening exercise,  the high level summary captures some issues raised by patients, but doesn’t address their range or depth.

Some of the difficult and unwanted  issues, are still there, still the same and still being ignored, at least in the public domain. [4]

A Healthy New Arrival?

How is the approach better now and what happens next to proceed?

“It seems a shame,” the Walrus said, “To play them such a trick, After we’ve brought them out so far, And made them trot so quick!” [Lewis Carroll]

When a board member asked what it is we seek to learn from the pathfinder approach that will guide the later decision on whether this becomes a national approach, the answer wasn't very clear. [Full detail at the end of this post.]

First they must pass the tests asked of them by Dame Fiona [her criteria and 27 questions from before Christmas.] At least that was what the verbal background given at the board meeting explained.

If the pilots should be a dip in the water of how national rollouts will proceed, then they need to test not just for today, but at least for the known future of changing content scope and expanding users – who will pay for the communication materials’ costs each time?

If policy keeps pressing forward, will it not make these complications worse under pressure? There may be external pressure ahead too: potential changes to EU data protection law are expected this year, for which the pilot must be prepared, designing in advance for the expectations of best practice.

Pushing out the pathfinder directions, before knowing the answers to these practical things and patient questions open for over 16 months, is surely backwards. A breech birth, with predictable complications.

If in Sir Malcolm Grant’s words:

“we would only do this if we believed it was absolutely critical in the interests of patients.” [Malcolm Grant, May 28, 2015, NHS England Board meeting]

then I’d like to see the critical interest of patients put first. Address the full range of patient questions from the ‘listening pause’.

In the rush to just make the best of a bad job, we've not even asked: are we doing the right thing? Is the system designed to best support doctor and patient needs, especially with the integration "blurring the lines" that Simon Stevens seems set on?

If the focus is on the success of the programme and not the patient, consider this: there is a real risk that too many people opt out because of these unknowns, and because of the lack of real choice over how their data gets used. It could be done better, to reduce that risk.

What’s the percentage of opt out that the programme deems a success to make it worthwhile?

In March 2014, at a London event, a GP told me all her patients who were opting out were the newspaper reading informed, white, middle class. She was worried that the data that would be included, would be misleading and unrepresentative of her practice in CCG decision making.

medConfidential has written a current status for pathfinder areas; it makes great sense to focus first on fixing care.data's big post-election question, the opt out that has not yet been put into effect. Of course, in February 2014 we had to choose between two opt outs, so how will that work for pathfinders?

In the public interest we need collectively to see this done well. Another mis-delivery will be fatal. “No artificial timelines?”

Right now, my expectations are that the result won’t be as cute as a baby walrus.

******

Notes from the NHS England Board Meeting, May 28, 2015:

TK said:  “These directions [1] relate only to the pathfinder programme and specify for the HSCIC what data we want to be extracted in the event that Dame Fiona, this board and the Secretary of State have given their approval for the extraction to proceed.

“We will be testing in this process a public opt out, a citizen’s right to opt out, which means that, and to be absolutely clear if someone does exercise their right to opt out, no clinical data will be extracted from their general practice,  just to make that point absolutely clearly.

“We have limited access to the data, should it be extracted at the end of the pathfinder phase, in the pathfinder context to just four organisations: NHS England, Public Health England, the HSCIC and CQC.”

“Those four organisations will only be able to access it for analytic purposes in a safe, a secure environment developed by the Information Centre [HSCIC], so there will be no third party hosting of the data that flows from the extraction.

“In the event that Dame Fiona, this board, the Secretary of State, the board of the Information Centre, are persuaded that there is merit in the data analysis that proceeds from the extraction, and that we’ve achieved an appropriate standard of what’s called fair processing, essentially have explained to people their rights, it may well be that we proceed to a programme of national rollout, in that case this board will have to agree a separate set of directions.”

“This is not signing off anything other than a process to test communications, and for a conditional approval on extracting data subject to the conditions I’ve just described.”

CD said: “This is new territory, precedent, this is something we have to get right, not only for the pathfinders but generically as well.”

“One of the consequences of having a pathfinder approach, is as Tim was describing, is that directions will change in the future. So if we are going to have a truly fair process , one of the things we have to get right, is that for the pathfinders, people understand that the set of data that is extracted and who can use it in the pathfinders, will both be a subset of, the data that is extracted and who can use it in the future. If we are going to be true to this fair process, we have to make sure in the pathfinders that we do that.

“For example, at the advisory group last week, is that in the communication going forward we have to make sure that we flag the fact there will be further directions, and they will be changed, that we are overt in saying, subject to what Fiona Caldicott decides, that process itself will be transparent.”

Questions from Board members:
Q: What is it we seek to learn from the pathfinder approach that will guide us in the decision later if this will become a national approach?
What are the top three objectives we seek to achieve?

TK: So, Dame Fiona has set a series of standards she expects the pathfinders to demonstrate, in supporting GPs to be able to discharge this rather complex communication responsibility, that they have under the law  in any case.

“On another level how we can demonstrate that people have adequately understood their right to opt out [..]

“and how do we make sure that populations who are relatively hard to reach, although listed with GPs, are also made aware of their opportunity to opt out.

Perhaps it may help if I forward this to the board, It is in the public domain. But I will forward the letter to the board.”

“So that lays out quite a number of specific tangible objectives that we then have to evaluate in light of the pathfinder experience. “

Chair: “this is a programme in which we have invested a great deal, of time and thought in its development, we would only do this  if we believed it was absolutely critical in the interests of patients, it was something that would give us the information the intelligence that we need to more finely attune our commissioning practice, but also to get real time intelligence about how patients lives are lived, how treatments work and how we can better provide for their care.

“I don’t think this is any longer a matter of huge controversy, but how do we sensitively attune ourselves to patient confidentiality.”

“I propose that […] you will approve in principle the directions before you and also delegate to the Chief Executive and to myself to do final approval on behalf of the board, once we have taken into account the comments from medConfidential and any other issues, but the substance will remain unchanged.”

******

[4] request for the release of June 2014 Open House feedback still to be published in the hope that the range and depth of public questions can be addressed.

care.data comms letter

******
“The time has come,” the walrus said, “to talk of many things.”
[From ‘The Walrus* and the Carpenter’ in Through the Looking-Glass by Lewis Carroll]

*A walrus has a gestation period of about 16 months.
The same amount of time which the pause in the care.data programme has taken to give birth to the pathfinder sites.

references:
[1] NHS England Directions to HSCIC: May 28 2015 – http://www.england.nhs.uk/wp-content/uploads/2015/05/item6-board-280515.pdf
[2] Notes from care.data advisory group meeting on 27th February 2015
[3] Patient questions: https://jenpersson.com/pathfinder/
[4] Letter from NHS England in response to request from September, and November 2014 to request that public questions be released and addressed



care.data programme questions remain unanswered – what should patients do now?

care.data programme questions remain unanswered [1] and opportunities to demonstrate better transparency have to date, been turned down.

For anyone interested in the care.data rollout, professionals, patients and public alike, it is worrying to see the continued secrecy which shrouds the programme. We’ve been told online (but most in the public will still not know) an initial rollout in 4 CCG areas is now planned [2], but at which GP practices remains unclear.

On October 12th I asked that the care.data programme board minutes should be made public. The request is still open.[3]

“They seem hell bent on going ahead. I know they listened, but what did they hear?”

Questions asked by hundreds of people at multiple listening events remain unpublished and unanswered. Risks need resolved.

It is ironic that a programme whose stated aim is to gather patient information in order to answer open questions about care is so unwilling to give information back to answer the questions we, the 'data subjects', have about the programme.

I believe it is important to ensure that the questions are transparent, criticisms addressed and clarified, open issues solved and questions answered ahead of the pathfinder rollout to ensure the greatest success of the programme.

If the programme proceeds on an opt out basis, the risk is increased that it will not meet Data Protection regulation[4], which requires informed use of personal data. This puts GPs at risk. [5]

All the people who made the effort to attend these events for the benefit of the programme and the public good deserve answers. This would minimise the risks the public raised, which remain unresolved.

It is also important for maintaining trust in the integrity and value of user participation and engagement at other NHS events, and in this programme in particular.

Public and Transparent Feedback was Promised

I wrote to Mr. Tim Kelsey, Director, at the Patients and Information Directorate, NHS England today to ask, once again, for the release of public feedback.

It is now two months since I spoke with him about care.data after the NHS AGM in London on September 18th, and the public questions have still not been put into the public domain.

He agreed that the raw feedback from all the care.data listening events, which included all the open questions asked by participants, would be published, “Shortly.”

This feedback includes questions from the NHS Open Days on June 17th  (4 locations), the stand-alone care.data events since, and those from the care.data advisory sessions hosted in Peterborough and Coin Street, London [6].

NHS England claims there have been hundreds of events. The website says some took place in my county, though I haven't heard of any and neither has my CCG. The events I am aware of, six of which I attended, all generated a huge number of participant questions, on paper, post-its and electronically, which participants were told would be published and answered, including on the Open Day website 'later in the summer' [7]:

“Feedback from this session is being incorporated into the overall report from the care.data listening phase which will be published later in the summer and linked to from this site.”

This is still to happen, and now nearing the end of November, is somewhat overdue.

My own questions at four events were on process and I believe it is important to get these clarified BEFORE the pathfinder:

  • How will you communicate with Gillick competent children [8], whose records may contain information about which their parents are not aware? [note also RCGP online roadmap p.15][9]
  • How will you manage this for elderly or vulnerable patients in care homes and with diminished awareness or responsibility?
  • When things change in scope or use, how will we be informed of changing plans for use or users, on an ongoing basis? [Data protection principle 2] [10]
  • For any future changes, how will we be given the choice to change our opt out or opt in? Consent is not a one-time agreement  but needs managed on a continual, rolling basis – how will this be achieved?

Campaigners have also raised remaining, unresolved issues.

Key legal questions remain, including on Opt Out

I am starting to become concerned that the opt out is STILL not on a statutory footing. Will the Secretary of State make good his verbal agreement in law?

What legal changes will be made that back up the verbal guarantees given since February? If none are forthcoming, then were the statements made to Parliament untrue? [11]

“people should be able to opt out from having their anonymised data used for the purposes of scientific research.”

I have yet to see this legal change and, to date, the only publicly stated choice covers identifiable data only [12], not all data, as stated by the Minister.

So too the promised extra governance on a legal basis has not yet happened.

It is worth noting that although the Health and Social Care Act 2012 may have steamrollered the legal position of the patient and GP, so that confidentiality no longer comes first, informed consent, even if assumed, must in other circumstances still be obtained fairly:

“Consent obtained under duress or on the basis of misleading information does not adequately satisfy the condition for processing.” [ICO]

Should this principle not also apply even if GPs are legally obliged to release data without patient consent? [I feel that needs more discussion, so will write about consent in my next post.]

Much is made of 'new legal protection' of our data, but in fact it is impossible to see that it provides any such thing, and yes, I have read it. The Care Act 2014 was not amended with any binding or truly clear provisions to make data more confidential or secure.

Concerns of many people centre on commercial use, and re-use, of data, and these are not addressed by the loose terms 'for the benefit of adult health and social care' or the 'promotion of health'. [part 4 p.120] Data sold all year may have met these criteria, but is this how we expect our health records to be used, without our express permission?

“We will use Mosiac, appended to the ICD10 code diagnoses, to create national Mosaic profiles. These estimates and propensities will be sold to public and commercial organisations to enable them to target resources more effectively and efficiently…Other data characteristics that are also linked to Mosiac can then be used to understand broader lifestyle characteristics of those most at risk to ensure that messages and communications are appropriate and well targeted.” [July register]

Question from Leicester: “Are we saying there will be only clinical use of the data – no marketing, no insurance, no profit making? This is our data.”[13]

So I hope it is clear, that these concerns are not only mine, but remain unanswered for the broader participants of listening events, gathered throughout the last year.

Questions from others

I'm publishing here the filtered, NHS England-written summary response to the 26th June event [14], which I received as an attendee. (40 people, of whom around 10 were NHS England and HSCIC staff.)

I disagreed with one of the statements made at our table at the meeting, and pointed out that it was not factual. History, as I understand it and as HSCIC has stated in FOI responses, will not be deleted. Yet this was allowed to be included in the notes sent to all:

“communicate that identifiable information can be deleted.”

The workshop was about how to access ‘hard-to-reach’ groups, so focused on communications methods. You will see that many statements are about how to market the programme, and do not clarify questions of substance, although many were asked on the day about scope definition, and future data changes.

Questions have not yet been addressed on matters such as Gillick competence, children in care, young offenders, the forces, and avoiding 'propaganda-ish'-sounding materials and bias, so as to ensure the 'adequate requirements' for data processing are met.

You can see from this, that although the listening events may be deemed to have been a success, the answering part is still missing.

How are NHS England measuring success? What does good look like? I guarantee from a public perspective, it’s not there yet.

Long term benefit must not be harmed in the rush for a pilot tick-box

Since the programme is heralded as so vital for the NHS, I believe we should not be making the best of a bad job, but shaping process, security and communications to be world class, worthy of our NHS.[15]

We also need to see a long-term cost benefit plan – if we don’t know how some of these future processes are to be managed, how will we know what they will cost, and are they worth it?

The project should not aim for a quick and dirty pilot rollout. Perhaps there is a need to tick the 'on time' box for an NHS England target, or to meet a job-description appraisal of the kind I would have had when I was responsible for project implementations in my past commercial industry role.

As it stands it is not NHS England/DoH who has the most to lose if this goes ahead as is. They must look at the big picture and accept their responsibility for this project, decide not to rush it and not expect the public and GPs to carry its risk.

At the weekend, in a speech about TTIP I heard the phrase, it’s “a classic case of socialising the risk and privatising the profit.”

So too it feels for me on care.data. NHS England wants all the benefit of our information, including from its sale, but it is we, individual patients and GPs who will be harmed if its security, commercial use [16], or everyday trust & confidentiality are compromised.

The Department of Health must look beyond party political aims pre-election. This is for the good of the NHS, which belongs to us all.

We must see open questions on process and content openly answered, for professionals and public alike.

Only then, can we trust that the infrastructure and promises made behind the scenes have set the foundation for this scheme to be worthy of our most intimate and confidential data.[17]

What can Patients do now?

“The policy and practical answers we need to ensure success, will not fit on a flyer or SMS.”

I have spoken with some of my fellow attendees since these events, including for example Stan Burridge, the Research Lead on Service User Involvement at Pathway London. (A charity providing healthcare to the homeless and which works with others on policy and best-practice approach sharing. Their recent work on dentistry outreach achieved a 0% no-show rate – getting the vital care needed for their clients and saving ££ for NHS dentist provision.)

His comments are a good summary of what has happened since:

“In the events, opinions could be expressed, questions asked, and I was made to feel they were valid questions, but they’re doing very little to answer them so that it makes a difference.

“I feel I was engaged with the process, but it’s doing nothing for the people on the margins.

“They should be given an informed choice to opt in; an uninformed choice not to opt out is not the same.”

It is unclear what patients can now do, to get the answers we have asked for. We want to make a positive difference to make the project better.

The listening events seem to have been a one way process, and participation for PR purposes, rather than real engagement. The policy and practical answers we need to ensure success, will not fit on a flyer or SMS. They can’t be communicated as part of the pilot rollout. We need them published, addressed and ironed out up front.

Stan summed up exactly what I feel and what I have heard from many others:

“They seem hell bent on going ahead. I know they listened, but what did they hear?”

 

****

[1] A patient’s open letter to NHS England

[2] CCG pathfinder announcement

[3] care.data programme board minutes and materials FOI

[4] ICO Guide to Data Protection

[5] Medical Protection and care.data concern

[6] Coin Street care.data advisory group public event, Sept 6th

[7] NHS England Open House event 17th June

[8] Gillick and data protection for children

[9] RCGP Online Roadmap, includes concern on accessing data by those at risk of domestic abuse and children

[10] ICO Data Protection guidelines

[11] Hansard, Parliament 25th February 2014

[12] Parliamentary briefing note on care.data

[13] Questions from the Open House, incl. Leicester

[14] NHS England summary of feedback and statements from public event at Mencap, June 26th 2014

[15] Post from July 21st HSCIC roadmap event, future data use

[16] Commercial use of data with brokers – call for consumer data transparency

[17] Code list prepared by medConfidential and open issues

care.data – “anticipating things to come” means confidence by design

“By creating these coloured paper cut-outs, it seems to me that I am happily anticipating things to come…I know that it will only be much later that people will realise to what extent the work I am doing today is in step with the future.” Henri Matisse (1869-1954) [1]
My thoughts on the care.data advisory event Saturday September 6th.  “Minority voices, the need for confidentiality and anticipating the future.”

Part one here>> Minority voices

This is Part two >> “the need for confidentiality and anticipating the future.”

[Video in full > here. Well worth a viewing.]

Matisse – The cut outs

Matisse, when he could no longer paint, took to cutting shapes from coloured paper and pinning them to the walls of his home. To start with, he found the process deeply unsatisfying. He felt it wasn't right. Initially, he was often unsure what he would make from a sheet. He pinned cutouts to his walls. But tacking things on as an afterthought, rearranging them superficially, was never as successful as getting it right from the start.

As he became more proficient, he would cut a form out in one piece, from start to finish. He could visualise the finished piece before he started. His later work is very impressive, much more so in real life than on screen or poster. His cut-outs took on life and movement, fronds would hang in the air, and multiple pieces which matched up were grouped into large-scale collections on his walls. They became no longer just 2D shapes but 3D, complete pictures. They would tell a joined-up story, just as our flat 2D pieces of individual data will tell others the story of our colourful 3D lives once they are matched and grouped together in longitudinal patient tracking from cradle to grave.

Data Confidentiality is not a luxury

From the care.data advisory meeting on September 6th, I picked out the minority voices I think we need to address better.

In addition to the minority groups, there are also cases in which privacy, for both children and adults, is more important to an individual than many of us consider in the usual discussion. For those at risk of domestic violence, the ability to keep private information confidential is vital. In cases where this fails, the consequences can be terrible. My local news told this week of just such a woman and child, whose privacy was compromised.

“It is understood that the girl’s mother had moved away to escape domestic violence and that her ex-partner had discovered her new address.” (Guardian, Sept 12th)

This story has saddened me greatly.  This could have been one of my children or their classmates.

These are known issues when considering data protection, and for example are addressed in the RCGP Online Roadmap (see Box 9, p20).

“Mitigation against coercion may not have a clear solution. Domestic violence and cyberstalking by the abuser are particularly prevalent issues.”

Systems and processes can design in good privacy, or poor privacy, but the human role is a key part of the process, as human error can be the weakest link in the security chain.

Yet as regards care.data, I've yet to hear much mention of preventative steps in place, except an opt out. We don't know how many people at local commissioning levels will access how much of our data, and how often. This may go to show why I still have so many questions about how the opt out will work in practice [5], and why it matters. It's not a luxury; it can be vital to an individual. How much of a difference in safety is achieved using identifiable versus pseudonymised data, compared with the real risk or fear an individual faces?


“The British Crime Survey (BCS) findings of stalking prevalence (highest estimate: 22% lifetime, 7% in the past year) give a 5.5% lifetime risk of interference with online medical records by a partner, and a 1.75% annual risk.”
This online access is for direct care use. There is a greater visible benefit for the individual in accessing their own data than in care.data, which is for secondary uses. But I'm starting to wonder whether care.data is in fact just one great big pot of data whose uses will be finalised later. Is this why scope is so hard to pin down?


The slides of who will use care.data, shown at this 6th September meeting, included 'the patient'. How, and why? I want to have the following explained to me, because I think it is fundamental to the opt out. This is detailed, I warn you now, but I think it really matters:

How does the system use the Opt out?

If you imagine different users looking at the same item of data in any one record, let’s say prescribing history, then it’s the security role and how the opt out codes work which will determine who gets to see what.



I assume here that there are not multiple copies of "my medications" in my record. The whole point of giant databases is real-time, synched data, so "my medications" will not be stored in one place in the Summary Care Record (SCR), copied again in care.data, and a third time in my Electronic Prescription Service (EPS) record. There will be one place in which "my medications" is recorded.


The label under which a user can see that data is their security role; to me it is largely irrelevant, except when it comes to opt out.


I have questions: If I opt out of the SCR programme at my GP, but opt in at my pharmacy to the EPS, what have I opted in to? Who has permission to view “my medications”  in my core record now? Have I created in effect an SCR, without realising it?


[I realise these are detailed questions, but ones we need to ask if we are to understand and inform our decision, especially if we have responsibility for the care of others.]


If I want to permit the use of my record for direct care (SCR) but not secondary uses (care.data) how do the two opt outs work together,  and what about my other hospital information?


Do we understand what we have and have not given permission for, and to whom?

If there is only one record, but multiple layers of users who get access to it, how will those layers be built, and where is the overlap?

We should ask these questions on behalf of others, because these under-represented groups and minorities cannot ask them if they are not in the room.
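To make that overlap concrete, here is a purely illustrative sketch in Python. The roles, flag names and rules are my own assumptions, not the real SCR, EPS or care.data design; it only shows how one stored record, a requester's security role and two separate opt-out flags could combine to give quite different answers to "who can see my medications?":

from dataclasses import dataclass, field

@dataclass
class OptOuts:
    direct_care_scr: bool = False   # e.g. opted out of Summary Care Record sharing
    secondary_uses: bool = False    # e.g. opted out of care.data-style secondary uses

@dataclass
class PatientRecord:
    medications: list = field(default_factory=list)
    opt_outs: OptOuts = field(default_factory=OptOuts)

def can_view_medications(record, role):
    """Decide visibility of the single 'my medications' entry for a given role."""
    if role == "gp":                       # the patient's own GP: direct care
        return True
    if role == "pharmacy_eps":             # direct care via a shared record
        return not record.opt_outs.direct_care_scr
    if role == "commissioner":             # secondary uses
        return not record.opt_outs.secondary_uses
    return False

# One record, three different answers depending on role and opt-out combination:
record = PatientRecord(medications=["metformin"],
                       opt_outs=OptOuts(direct_care_scr=True, secondary_uses=False))
print(can_view_medications(record, "gp"))            # True
print(can_view_medications(record, "pharmacy_eps"))  # False
print(can_view_medications(record, "commissioner"))  # True

Even in this toy version, opting in to one channel while opting out of another only makes sense if the rules are written down and published; in the real system we have not been shown those rules, which is exactly the problem.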

Sometimes we all need privacy. What is it worth?

Individuals and minorities in our community may feel strongly about maintaining privacy, for reasons of discrimination, or of being ‘found out’ through a system which can trace them. For reasons of fear. Others can’t always see the reasons for it, but that doesn’t take away from the value it has for the person who wants it or their need for that human right to be respected. How much is it worth?

It seems the more we value keeping our data private, the more cash value it has for others. In 2013, the FT created a nifty calculator and, in an interview with Dave Morgan, reckoned our individual data is worth less than $1. General details such as age, gender and location are worth fractions of a cent, to many decimal places. The more interesting your life events, the more you add to your data's total value. Take pregnancy as an example. Or add genomic data and it goes up in market value again.

Whilst this data may on a spreadsheet be no more than a dollar amount, in real life it may have immeasurably greater value to us on which you cannot put a price tag. It may be part of our life we do not wish others to see into. We may have personal or medical data, or recorded experiences we simply do not want to share with anyone but our GP. We might want a layered option like this suggestion by medConfidential to allow some uses but not others. [6]

In this debate it is rare that we mention the PDS (Personal Demographics Service), which holds the name and core contact details of every person with an NHS number, past and present: almost 80 million people. This is what can compromise privacy, when the patient can be looked up by any A&E, or by anyone with Summary Care Record access on N3 and the technical ability to do so. It is a weak link. The security system relies on human validation, effectively asking in audit, 'does this look-up seem OK?' These things happen and can go unchecked for a long period without being traced.

Systems and processes on this scale need security designed in that scales up to match.
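As one hedged illustration of what 'security that scales up' could mean, an automated first pass over the access log could flag unusual look-up patterns for human audit, rather than relying on a reviewer happening to notice them. This is a sketch under my own assumptions about the log format, not a description of how PDS or SCR auditing actually works:

from collections import defaultdict

def flag_unusual_lookups(access_log, threshold=50):
    """access_log: iterable of (staff_id, patient_id, lookup_date) tuples.
    Returns staff who looked up an unusually large number of distinct patients
    on a single day - candidates for human review, not automatic judgements."""
    per_staff_day = defaultdict(set)
    for staff_id, patient_id, lookup_date in access_log:
        per_staff_day[(staff_id, lookup_date)].add(patient_id)
    return sorted({staff for (staff, day), patients in per_staff_day.items()
                   if len(patients) >= threshold})

The threshold of 50 is arbitrary; the design choice that matters is that the audit question 'does this look-up seem OK?' is asked routinely and early, not only after harm has been done.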

Can data be included without cutting out privacy?

Will the richness of GP record / care.data datasharing afford these individuals the level of privacy they want? If properly anonymised, it would go some way to permitting groups to feel they could stay opted in, and the data quality and completeness would be better. But the way it is now, they may feel the risks created by removing their privacy are too great. The care.data breadth and data quality will suffer as a consequence.

The care.data requirement to share identifiable information we may not want to share, the assumption that others have a right to do so, and the assumed exploitation of it for the benefit of UK plc, especially if an opt-out system proceeds, feel to many like an invasion of the individual's privacy and right to confidentiality. It can have real personal consequences for the individual.

The right to be open, honest and trusting without fear of repercussion matters. It matters to a traveller, or to someone fleeing domestic violence with fears of being traced. It matters to someone who is transgender, and to others who want to live without stigma. It matters to our young people.

The BMA recognised this with their vote for an opt-in system earlier this year. 

Quality & Confidence by Design

My favourite exhibition piece at Tate Britain is still Barbara Hepworth’s [3] Pelagos from 1946. It is artistically well reviewed but even if you know little of art, it is simply a beautiful thing to see. (You’re not allowed to touch, even though it really should be, and it makes you want to.) Carved from a single piece of wood, designed with movement, shape, colour and shadow. It contains a section of strings, a symbol of interconnectivity. (Barbara Hepworth: Pelagos[4]). Seen as a precious and valuable collection, the Hepworth room has its own guard and solid walls. As much as I would have liked to take pictures, photography was not permitted and natural light was too low. Visitors must respect that.

So too, I see the system design needs of good tech. Set in, and produced in, a changing landscape. Designed with a view of how it will look when completed, and fully designed before the build begins, but with flexibility built in. Planned interconnectivity. Precise and professional. Accurate. And the ability to see the whole from the start. Once finished, it is kept securely, with physical as well as system-designed security features.

All these are attributes which care.data failed to present from its conception, but which appear to be in the process of development through the Health and Social Care Information Centre. Plans are in progress [6] following the Partridge Review, and were released on September 3rd, with forward-looking dates. For example, a first wave of audits is scheduled for completion by 1/09 for four organisations. HSCIC will 'pursue a technical solution to allow data access, w/out need to release data out to external orgs. Due 30/11.' These steps are playing catch-up with what should have been good governance practices and procedures in the past. It need not be this way for GP care.data, if we know that the design is right from the start.

As I raised on Saturday at the Sept 6th advisory workshop, and as others will no doubt have done before me, this designing from the start matters. Designing for change of scope, and incorporating that into the communications process for the future, is vital for the pathfinders. One thing is certain for pathfinder practices: there will be future changes.

This wave of care.data is only one step along a broad and long data sharing path

To be the best of its kind, care.data must create confidence by design, building in the solutions to all the questions which have been, and continue to be, asked. We should be able to see today the plans for what care.data is intended to be when finished, and design best practice into the structure from the start. Scope is still a large part of that open question: scope content, future plans, and how the future project will manage its change processes.

As with Matisse, we must ask the designers, planners, and comms, intelligence and PR teams: please think ahead, "anticipating things to come". Then we can be confident that we have something fit for the time we're in, and for all of our kids' futures. Whether they'll be travellers, trans, have disabilities, be in care or not. For our majority and all our minorities. We need to build a system that serves all of the society we want to see, not only the 'easy-to-reach' parts.

”Anticipating things to come” can mean anticipating problems early, so that costly mistakes can be avoided.

Anticipating the future

One must keep looking to design not for the ‘now’ but for tomorrow. Management of future change, scope and communication is vital to get right.

This is as much a change process as a technical implementation project. In fact, it is perhaps more about the transformation, as it is called at NHS England, than the technology. The NHS landscape is changing: who will deliver our healthcare. And the how is changing too, as telecare and ever more apps are rolled out. Nothing is constant but change. How do we ensure everyone involved in top-down IT projects understands that the system supports, but does not drive, change? Change is about process and people. The system is a tool to enable people. The system is not the goal.

We need to work today to be ahead of the next step for the future. We must ensure that processes and technology, the way we do things and the tools that enable what we do, design the very best practices into the whole from the very beginning. From the ground up. Taking into account fair processing under data protection law, EU law including the upcoming changes in EU data protection law, and best practice. Don't rush to bend a future law in the current design, or take a shortcut in security for the sake of speed. Those best practices need not cut out the good ethics of consent and confidentiality. They can co-exist with world-class research and data management. They just need included by design, not tacked on and superficially rearranged afterwards.

So here’s my set of challenge scenarios for NHS England to answer.

1. The integration of health and social care marches on apace, and the systems and their users are to follow suit. How is NHS England ensuring it builds a system and processes which 'anticipate by design' these new models of data management for this type of care delivery, rather than staying stuck on the model of a top-down mass surveillance database planned for the last decade?

2. How will NHS England audit that a system check does not replace qualified staff decisions, for example with algorithms and flags on a social care record? I fear that a risk-averse system will encourage staff to be less likely to make a decision that goes against the system recommendation, 'for child removal' for example, even though their judgement, based on human experience, may suggest a different outcome. What assumed outcomes are built into the system? If you view the new social care promotional videos, they are at least consistent: the most depressing stereotyped scenarios I think I have seen anywhere. How will this increase in data and sharing work?

“What makes more data by volume, equal more intelligence by default?”

Just as out-of-hours GP call centres today send too many callers to the 111 service on to A&E, I wonder whether a highly systemised social care system risks sending too many children from A&E into social care: children who should not be there but who meet the criteria set by insensitive algorithms, or, the converse risk, children who don't meet them and get missed through over-reliance on a system, missing what an experienced professional can spot.

3. How will the users of the system use their system data, and how has it been tested, with likely outcomes measured against current data? For example, will more or fewer children being taken into care be seen as a measure of success? How will any system sharing be audited, under what governance, and with what oversight in future?

Children's social care is not a system that is doing well as it is today, by many accounts; you only need glance at the news most days. But integration will change how it delivers services for the needs of our young people. It is an example we can apply in many other cases.

What plan is in place to manage these changes of process and system use? Where is public transparency?

care.data has to build in consent, security and transparency from the start, because it’s a long journey ahead, as data is to be added incrementally over time. As our NHS and social care organisational models are changing, how are we ensuring confidentiality and quality built-in-by-design to our new health and social care data sharing processes?

What is set up now, must be set up fit for the future.

Tacking things on afterwards, means lowering your chance of success.

Matisse knew that "anticipating things to come" can mean being positively in step with the future by the time it is needed. By anticipating problems early, costly mistakes can be avoided.

*****

Immediate information and support for women experiencing domestic violence: National Domestic Violence, Freephone Helpline 0808 2000 247

*****

[1] Interested in a glimpse into the Matisse exhibition which has now closed? Check out this film.

[2] Previous post: My six month pause round up [part one] https://jenpersson.com/care-data-pause-six-months-on/

[3] Privacy and Prejudice: http://www.raeng.org.uk/publications/reports/privacy-and-prejudice-views This study was conducted by The Royal Academy of Engineering (the Academy) and Laura Grant Associates and was made possible by a partnership with the YTouring Theatre Company, support from Central YMCA, and funding from the Wellcome Trust and three of the Research Councils (Engineering and Physical and Sciences Research Council; Economic and Social Research Council and Medical Research Council).

[4]  Barbara Hepworth – Pelagos – in Prospect Magazine

[5] Questions remain open on how opt out works with identifiable vs pseudonymous data sharing requirement and what the objection really offers. [ref: Article by Tim Kelsey in Prospect Magazine 2009 “Long Live the Database State.”]
[6] HSCIC current actions published with Board minutes
[8] NIB https://app.box.com/s/aq33ejw29tp34i99moam/1/2236557895/19347602687/1
*****

More information about the Advisory Group is here: http://www.england.nhs.uk/ourwork/tsd/ad-grp/

More about the care.data programme here at HSCIC – there is an NHS England site too, but I think the HSCIC is cleaner and more useful: http://www.hscic.gov.uk/article/3525/Caredata