Category Archives: scope creep

Destination smart-cities: design, desire and democracy (Part four)

Who is using all this Big Data? What decisions are being made on the back of it that we never see?

In everyday life and in the press, it often seems that the general public does not understand data, and can easily be told things which we then misinterpret.

There are tools in social media influencing public discussions and leading conversations in a different direction from the one they had taken, and they operate without regulation.

It is perhaps meaningful that pro-reform Wellington School last week opted out of one of the greatest uses of Big Data sharing in the UK: league tables. Citing their failures, the school decided they were, “in fact, a key driver for poor educational practice.”

Most often we cannot tell from the data provided whether they support what we are told Big Data should be telling us. And we can’t tell if the data are accurate, genuine and reliable.

Yet big companies are making big money selling the dream that Big Data is the key to decision making. Cumulatively through lack of skills to spot inaccuracy, and inability to do necessary interpretation, we’re being misled by what we find in Big Data.

Being misled is devastating for public trust, as the botched beginnings of care.data found in 2014. Trust has come to be understood as vital for any future based on data sharing. Public involvement in how we are used in Big Data in the future needs to include knowing how our data are used, in order to trust they are used well. And interpreting those data well is vital. Those lessons of the past and present must be learned, and not forgotten.

It’s time to invest some time in thinking about safeguarding trust in the future, in the unknown, and the unseen.

We need to be told which private companies like Cinven and FFT have copies of datasets like HES, the entire 62m national hospital records, or the NPD, our entire schools database population of 20 million, or even just its current cohort of 8+ million.

If the public is to trust the government and public bodies to use our data well, we need to know exactly how those data are used today, and what future plans others have for our personal data.

When we talk about public bodies sharing data they hold for administrative purposes, do we know which private companies this may mean in reality?

The UK government has big plans for big data sharing, sharing across all public bodies, some tailored for individual interventions.

While there are interesting opportunities for public benefit from at-scale systems, the public benefit is at risk not only from lack of trust in how systems gather and use data, but also from interoperability getting lost in market competition.

Openness and transparency can be absent in public-private partnerships until things go wrong. Given the scale of smart-cities, we must have more than hope that data management and security will not be one of those things.

But how will we know whether new plans are designed well, or not?

Who exactly holds and manages those data and where is the oversight of how they are being used?

Using Big Data to be predictive and personal

How do we define “best use of data” in “public services” right across the board, in a world in which the boundaries between private and public in the provision of services have become increasingly blurred?

UK researchers and police are already analysing big data for predictive factors at postcode level for those at risk of harm, for example by combining health and education data.

What has grown across the Atlantic is now spreading here. When I lived there I could already see some of what is deeply flawed.

When a system’s policing and equity of punishment have been as institutionally racist as they are in the US, years of cumulative data bias translate into ‘heat lists’ and mean that “communities of color will be systematically penalized by any risk assessment tool that uses criminal history as a legitimate criterion.”

How can we ensure British policing does not pursue flawed predictive policies and methodologies, without seeing them?

What transparency does our use of predictive prisons and justice data have?

What oversight will the planned increase in the use of satellite tags, and biometric access in prisons, have?

What policies can we have in place to hold data-driven decision-making processes accountable?

What tools do we need to seek redress for decisions made using flawed algorithms that are apparently indisputable?

Is government truly committed to being open and talking about how far the nudge unit work is incorporated into any government predictive data use? If not, why not?

There is a need for a broad debate on the direction of big data and predictive technology, and on whether the public understands and wants it. If we don’t understand, it’s time someone explained it.

If I can’t opt out of O2 picking up my travel data ad infinitum on the Tube, I will opt out of their business model and try to find a less invasive provider. If I can’t opt out of EE picking up my personal data as I move around Hyde Park, it won’t be them.

Most people just want to be left alone, and want their space kept personal.

A public consultation on smart-technology, and its growth into public space and effect on privacy could be insightful.

Feed me Seymour?

With the encroachment of integrated smart technology over our cities – our roads, our parking, our shopping, our parks, our classrooms, our TV and our entertainment, even our children’s toys – surveillance and sharing of information from systems we cannot see start defining what others may view, or decide about us, behind the scenes in everything we do.

As it expands city-wide, it will need to be watched closely if data are to be open for public benefit but not invade privacy, given that “The data stored in this infrastructure won’t be confidential.”

If the destination of digital in all parts of our lives is smart-cities then we have to collectively decide, what do we want, what do we design, and how do we keep it democratic?

What price is our freedom to decide how far its growth should reach into public space and private lives?

The cost of smart cities to individuals and the public is not what it costs in investment made by private conglomerates.

Already the cost of smart technology is privacy inside our homes, our finances, and autonomy of decision making.

Facebook and social media may run algorithms we never see that influence our mood or decision making. Influencing that decision making is significant enough when it’s done through advertising encouraging us to decide which sausages to buy for our kids’ tea.

It is even more significant when you’re talking about influencing voting.

Who influences most voters wins an election. If we can’t see the technology behind the influence, have we also lost sight of how democracy is decided? The power behind the mechanics of the cogs of Whitehall may weaken inexplicably as computer-driven decision-making from the tech companies’ hidden tools takes hold.

What opportunity and risk to “every part of government” does ever expanding digital bring?

The design and development of smart technology that makes decisions for us and about us lies in the hands of large private corporations, not government.

This means the public-interest values that could be built in by design, and their protection and oversight, are currently outside our control.

There is no disincentive for companies that have taken private information that is none of their business, and quite literally made it their business, to stop collecting ever more data about us. It is outside our control.

We must plan by-design for the values we hope for, for ethics, to be embedded in systems, in policies, embedded in public planning and oversight of service provision by all providers. And we must ensure that a fair framework of values is used when giving permission to private providers who operate in public spaces.

We must plan for transparency and interoperability.

We must plan by-design for the safe use of data that does not choke creativity and innovation but both protects and champions privacy as a fundamental building block of trust for these new relationships between providers of private and public services, private and public things, in private and public space.

If “digital is changing how we deliver every part of government,” and we want to “harness the best of digital and technology, and the best use of data to improve public services right across the board” then we must see integration in the planning of policy and its application.

Across the board “the best use of data” must truly value privacy, and enable us to keep our autonomy as individuals.

Without this, the cost of smart cities growing unchecked will be an ever-growing transfer of power to the funders behind corporations and campaign politics.

The ultimate price of this loss of privacy will be democracy itself.

****

This is the conclusion to a four-part set of thoughts: on smart technology and data from the Sprint16 session (part one). I thought about this more in depth in “Smart systems and Public Services” (part two), then looked at the design and development of smart technology making “The Best Use of Data” in a UK company case study (part three), and in this part four, at “The Best Use of Data” used in predictions and the future.

Destination smart-cities: design, desire and democracy (Part two)

Smart cities: private reach in public space and personal lives

Smart-cities are growing in the UK through private investment and encroachment on public space. They are being built by design at home, and supported by UK money abroad, with enormous expansion plans in India for example, in almost 100 cities.

With this rapid expansion of “smart” technology not only within our living rooms but across our living space, and indeed across all areas of life, how do we ensure equitable service delivery (what citizens generally want, as demonstrated by strength of feeling on the NHS) continues in public ownership, when the boundary in current policy is ever more blurred between public and private corporate ownership?

How can we know and plan by-design that the values we hope for are good values, and that they will be embedded in systems, in policies and planning? Values that most people really care about. How do we ensure “smart” does not ultimately mean less good? That “smart” does not, in the end, mean less human.

Economic benefits seem to be the key driver in current government thinking around technology – more efficient = costs less.

While using technology to progress towards replacing repetitive work may be positive, how will we accommodate those whose skills will no longer be needed? In particular its gendered aspect, and the more vulnerable in the workforce, since it is women and other minorities who work disproportionately in our part-time, low-skill jobs. Even jobs we think of as intrinsically human, such as carers – jobs mainly held by women – are being trialled for outsourcing or assistance by technology. These robots monitor people in their own homes, and reduce staffing levels and care home occupancy. We’ll no doubt hear how good it is that we need fewer carers because, after all, we have a shortage of care staff. We’ll find out whether it is positive for the cared-for, or whether they find it less ‘human’[e]. How will we measure those costs?

The ideal future of us all therefore having more leisure time sounds fab, but if we can’t afford it, we won’t be spending more of our time employed in leisure. Some think we’ll simply be unemployed. And more people live in the slums of Calcutta than in Soho.

One of the greatest benefits of technology is how more connected the world can be, but will it also be more equitable?

There are benefits in remote sensors monitoring changes in the atmosphere that dictate when cars should be taken off the roads on smog-days, or indicators when asthma risk-factors are high.

Crowd-sourcing information about things which are broken, like fix-my-street, or lifts out-of-order, is invaluable in cities for wheelchair users.

Innovative thinking and building things through technology can create things which solve simple problems and add value to the person using the tool.

But what of the people that cannot afford data, cannot be included in the skilled workforce, or will not navigate apps on a phone?

How this dis-incentivises the person using the technology has an effect not only on their disappointment with the tool, but on the service delivery, and potentially wider still, even on societal exclusion or stigma. These were the findings of the e-red book in Glasgow, explained at the Digital event in health held at the King’s Fund in summer 2015.

Further along the scale of systems and potential for negative user experience, how do we expect citizens to react to finding punishments handed out by unseen monitoring systems, finding out our behaviour was ‘nudged’, or finding decisions taken about us, without us?

And what is the oversight and system of redress for people using systems, or whose data are used but inaccurate in a system, and cause injustice?

And wider still, while we encourage big money spent on big data in our part of the world, how is it contributing to solving problems for the millions for whom it will never matter? Digital and social media make our one connected world increasingly transparent, with even less excuse for closing our eyes.

Approximately 15 million girls worldwide are married each year – that’s one girl, aged under 18, married off against her will every two seconds. [Huff Post, 2015]

Tinder-type apps are luxury optional extras for many in the world.

Without embedding values and oversight into some of what we do through digital tools implemented by private corporations for profit, ‘smart’ could mean less fair, less inclusive, less kind. Less global.

If digital becomes a destination, and how much it is implemented is seen as a measure of success, then measuring how “smart” we become risks losing sight of technology as solutions and steps towards solving real problems for real people.

We need to be both clever and sensible, in our ‘smart’.

Are public oversight and regulation built in to make ‘smart’ also safe?

If there were public consultation on how “smart” society will look, would we all agree on whether and how we want it?

Thinking globally, we need to ask if we are prioritising the wrong problems. Are we creating more tech for problems we have already invented solutions for, in places where governments are willing to spend on them? And will it in those places make society more connected across class and improve it for all, or enhance the lives of the ‘haves’ by having more, while the ‘have-nots’ are excluded?

Does it matter how smart your TV gets, or carer, or car, if you cannot afford any of these convenient add-ons to Life v1.1?

As we are ever more connected, we are a global society, and being ‘smart’ in one area may be reckless if at the expense or ignorance of another.

People need to understand what “smart” means

“Consistent with the wider global discourse on ‘smart’ cities, in India urban problems are constructed in specific ways to facilitate the adoption of “smart hi-tech solutions”. ‘Smart’ is thus likely to mean technocratic and centralized, undergirded by alliances between the Indian government and hi-technology corporations.”  [Saurabh Arora, Senior Lecturer in Technology and Innovation for Development at SPRU]

Those investing in both countries are often the same large corporations. Very often, venture capitalists.

Systems designed and owned by private companies provide the information technology infrastructure that is:

‘the basis for providing essential services to residents. There are many technological platforms involved, including but not limited to automated sensor networks and data centres.’

What happens when the commercial and public interest conflict and who decides that they do?

Decision making, Mining and Value

Massive amounts of data generated are being mined for making predictions, decisions and influencing public policy: in effect using Big Data for research purposes.

Using population-wide datasets for social and economic research today is done in safe settings, using de-identified data, in the public interest, and has independent analysis of the risks and benefits of projects as part of the data access process.

Each project goes before an ethics committee review to assess its privacy considerations, and not only whether the project can be done, but whether it should be done, before it comes for central review.

Similarly our smart-cities need ethics committee review assessing the privacy impact and potential of projects before commissioning or approving smart-technology. Not only assessing if they are feasible, and that we ‘can’ do it, but whether we ‘should’. Not only assessing the use of the data generated from the projects, but assessing the ethical and privacy implications of the technology implementation itself.

The Committee recommendations on Big Data recently proposed that a ‘Council of Data Ethics’ should be created to explicitly address these consent and trust issues head on. But how?

Unseen smart-technology continues to grow unchecked often taking root in the cracks between public-private partnerships.

We keep hearing about Big Data improving public services but that “public” data is often held by private companies. In fact our personal data for public administration has been widely outsourced to private companies of which we have little oversight.

We’re told we paid the price in terms of skills and are catching up.

But if we simply roll forward in first gear into the connected city that sees all, we may find we arrive at a destination that was neither designed nor desired by the majority.

We may find that the “revolution, not evolution” hoped for in digital services will be of the unwanted kind, if companies keep pushing for more and more data without the individual’s consent and our collective public buy-in to decisions made about data use.

Having written all this, I’ve now read the Royal Statistical Society’s publication which eloquently summarises their recent work and thinking. But I wonder how we tie all this into practical application?

How we do governance and regulation is tied tightly into the practicality of public-private relationships, but also into deciding what society should look like. That is what our collective and policy decisions about what smart-cities should be and may do are ultimately defining.

I don’t think we are addressing in depth yet the complexity of regulation and governance that will be sufficient to make Big Data and Public Spaces safe because companies say too much regulation risks choking off innovation and creativity.

But that risk must not be realised if it is managed well.

Rather we must see action to manage the application of smart-technology in a thoughtful way quickly, because if we do not, very soon, we’ll have lost any say in how our service providers deliver.

*******

I began my thoughts about this in Part one, on smart technology and data from the Sprint16 session and after this (Part two), continue to look at the design and development of smart technology making “The Best Use of Data” with a UK company case study (Part three) and “The Best Use of Data” used in predictions and the Future (Part four).

The National Pupil Database end of year report: an F in Fair Processing

National Pupil Database? What National Pupil Database? Why am I on it?

At the start of the school year last September 2014, I got the usual A4 pieces of paper: each of my children’s personal details, our home address and contact details, tick boxes for the method of transport each used to get to school, and the types of school meal eaten, all listed, with a privacy statement at the bottom:

“Data Protection Act 1988: The school is registered under the Data Protection Act for holding personal data. The school has a duty to protect this information and to keep it up to date. The school is required to share some of the data with the Local Authority and with the DfE.”

There was no mention of the DfE sharing it onwards with anyone else. But they do, through the National Pupil Database [NPD], and it is enormous [1]. It’s a database which holds personal information of every child who has ever been in state education since 2002, and some data since 1996. [That includes me as both a student AND a parent.]

“Never heard of it?”

Well neither have I from my school, which is what I pointed out to the DfE in September 2014.

School heads, governors, and every parent I have spoken with in my area and beyond are totally unaware of the National Pupil Database. All are surprised. Some are horrified at the extent of data sharing at such an identifiable and sensitive level, without school and parental knowledge. [2]

Here’s a list of what it holds: fully identifiable data at unique, individual level, tiered from 1-4, where 1 is the most sensitive. A full list of what data is available in each of the tiers and standard extracts can be found in the ‘NPD data tables’.


I’d like to think it has not been deliberately hidden from schools and parents. I hope the Department has simply been careless about its communications.

Imagine that the data, once gathered only for administration since 1996, were then decided about at central level, and they forgot to tell the people whom they should have been asking. The data controllers and the subjects the data were from – the schools, parents/guardians and pupils – were forgotten. That could happen when you see data as a commodity and not as people’s personal histories.

The UK appears to have gathered admin data for years until the coalition decided it was an asset it could further exploit. The DfE may have told others in 2002 and in 2012 when it shaped policy on how the NPD would be used, but it forgot to tell the children whose information it is and used them without asking. In my book, that’s an abuse of power and misuse of data.

It seems to me that current data policies in practice across all areas of government have simply drifted at national level towards ever greater access by commercial users.

And although that stinks, it has perhaps arisen from lack of public transparency and appropriate oversight, rather than some nefarious intent.

Knowingly failing to inform schools, pupils and guardians how the most basic of our personal data are used is outdated and out of touch with public feeling. Not to mention, that it fails fair processing under Data Protection law.

Subject Access Request – User experience gets an ‘F’ for failing

The submission of the school census, including a set of named pupil records, is a statutory requirement on schools.

This means that children’s and parents’ data, regardless of how well or poorly informed they may be, are extracted for administrative purposes, and are used in addition for various secondary reasons beyond those we would expect.

Unless the Department for Education makes schools aware of the National Pupil Database use and users, the Department fails to provide an adequate process to enable schools to meet their local data protection requirements. If schools don’t know, they can’t process data properly.

So I wrote to the Department for Education (DfE) in September 2014, including the privacy notice used in schools like ours, showing it fails to inform parents how our children’s personal data and data about us (as related parent/guardians) are stored and onwardly used by the National Pupil Database (NPD). And I asked three questions:

1. I would like to know what information is the minimum you require for an individual child from primary schools in England?

2. Is there an opt out to prevent this sharing and if so, under what process can parents register this?

3. Is there a mechanism for parents to restrict the uses of the data (i.e. opt out our family data) with third parties who get data from the National Pupil Database?

I got back some general information, but no answer to my three questions.

What data do you hold and share with third parties about my children?

In April 2015 I decided to find out exactly what data they held, so I made a subject access request [SAR], expecting to see the data they held about my children. They directed me to ask my children’s school instead and to ask for their educational record. The difficulty with that is, it’s a different dataset.

My school is not the data controller of the National Pupil Database. I am not asking for a copy of my children’s educational records held by the school, but for the information that the NPD holds about me and my children. One set of data may feed the other, but they are separately managed. The DfE, which runs the NPD, is the data controller for the data it holds, and as such I believe has data controller responsibility for it, not the school they attend.

Why do I care? Well for starters, I want to know if the data are accurate. And I want to know who else has access to them and for what purposes – school can’t tell me that. They certainly couldn’t two months ago, as they had no idea the NPD existed.

I went on to ask the DfE for a copy of the publicly accessible subject access request (SAR) policy and procedures, aware that I was asking on behalf of my children. I couldn’t find any guidance, so asked for the SAR policy. They helpfully provided some advice, but I was then told:

“The department does not have a publicly accessible standard SAR policy and procedures document.”  and “there is not an expectation that NPD data be made available for release in response to a SAR.”

It seems policies are inconsistent. For this other DfE project, there is information about the database, how participants can opt out, and how their choice is respected. On the DfE website a Personal Information Charter sets out “what you can expect when we ask for and hold your personal information.”

It says: “Under the terms of the Data Protection Act 1998, you’re entitled to ask us:

  • if we’re processing your personal data
  • to give you a description of the data we hold about you, the reasons why we’re holding it and any recipient we may disclose it to (eg Ofsted)
  • for a copy of your personal data and any details of its source

You’re also entitled to ask us to change the information we hold about you, if it is wrong.

To ask to see your personal data (make a ‘subject access request’), or to ask for clarification about our processing of your personal data, contact us via the question option on our contact form and select ‘other’.”

So I did. But it seems that while it applies to that project, subject access is not to apply to the data they hold in the NPD. And they finally rejected my request last week, stating it is exempt:

[image: SAR rejection letter]

I appealed the decision on the basis that the section 33 Data Protection Act criteria given are not met:

“the data subject was made fully aware of the use(s) of their personal data (in the form of a privacy notice)”

But it remains rejected.

It seems incomprehensible that third parties can access my children’s data and I can’t even check to see if it is correct.

While acknowledging that under section 7 of the Data Protection Act 1998 (DPA) “an individual has the right to ask an organisation to provide them with information they hold which identifies them and, in certain circumstances, a parent can make such a request on behalf of a child”, they refused, citing the Research, History and Statistics exemption (i.e. section 33(4) of the DPA).

Fair processing, another F for failure and F for attitude

The Department for Education response to me said that it “makes it clear what information is held, why it is held, the uses made of it by DfE and its partners and publishes a statement on its website setting this out. Schools also inform parents and pupils of how the data is used through privacy notices.”

I have told the DfE the process does not work. The DfE / NPD web instructions do not reach parents. Even if they did, the information is thoroughly inadequate, and hides, whether deliberately or by omission, the commercial third-party use of data.

The Department for Education made a web update on 03/07/2015 with privacy information to be made available to parents by schools: http://t.co/PwjN1cwe6r

Despite this update this year, it is inadequate on two counts: content and communication.

To claim as they did in response to me that: “The Department makes it clear to children and their parents what information is held about pupils and how it is processed, through a statement on its website,” lacks any logic.

Updating their national web page doesn’t create a thorough communications process or engage anyone who does not know about it to start with.

Secondly, the new privacy policy is inadequate in its content and utterly confusing. What does this statement mean? Is there now some sort of opt out on offer? I doubt it, but it is unclear:

“A parent/guardian can ask that no information apart from their child’s name, address and date of birth be passed to [insert name of local authority or the provider of Youth Support Services in your area] by informing [insert name of school administrator]. This right is transferred to the child once he/she reaches the age 16. For more information about services for young people, please go to our local authority website [insert link].” [updated privacy statement, July 3, 2015]

Information that I don’t know exists, about a database I don’t know exists, that my school does not know exists – and they believe this meets fair processing through a statement on its own website?

Appropriate at this time of year, I have to ask, “you cannot be serious?”

Fair processing means transparently sharing the purpose or purposes for which you intend to process the information, not hiding some of the users through careful wording.

It thereby fails to legally meet the first data protection principle, as parents are not informed at all, never mind fully, of further secondary uses.

As a parent, when I register my child for school, I of course expect that some personal details must be captured to administer their education.

There must be data shared to adequately administer, best serve, understand, and sometimes protect our children.  And bona fide research is in the public interest.

However I have been surprised in the last year to find that, firstly, I can’t ask what is stored about my own children and that, secondly, a wide range of sensitive data are shared through the Department for Education with third parties.

Some of these potential third parties don’t meet research criteria in my understanding of what a ‘researcher’ should be. Journalists? The MoD?

To improve, there would be little additional time or work burden required to provide proper fair processing as a starting point, but to do so, the department can’t only update a policy on its website and think it’s adequate. And the newly updated suggested text for pupils is only going to add confusion.

The privacy policy text needs to be carefully reworded in human, not civil service, speak.

It must not omit [as it does now] the full range of potential users.

After all the Data Protection principles state that: “If you wish to use or disclose personal data for a purpose that was not contemplated at the time of collection (and therefore not specified in a privacy notice), you have to consider whether this will be fair.”

Now that it must be obvious to DfE that it is not the best way to carry on, why would they choose NOT to do better? Our children deserve better.

What would better look like? See part 3. The National Pupil Database end of year report: a D in transparency, C minus in security.

*****

[PS: I believe the Freedom of Information Officer tried their best and was professional and polite in our email exchanges, B+. Can’t award an A as I didn’t get any information from my requests. Thank you to them for their effort.]

*****

Updated on Sunday 19th July to include the criteria of my SAR rejection.

1. Our children’s school data: an end of year report card
2. The National Pupil Database end of year report: an F in fair processing
3. The National Pupil Database end of year report: a D in transparency, C minus in security

References:

[1] The National Pupil Database user guide: https://www.gov.uk/government/uploads/system/uploads/attachment_data/file/261189/NPD_User_Guide.pdf

[2] The Department for Education has specific legal powers to collect pupil, child and workforce data held by schools, local authorities and awarding bodies under section 114 of the Education Act 2005, section 537A of the Education Act 1996, and section 83 of the Children Act 1989. The submission of the school census returns, including a set of named pupil records, is a statutory requirement on schools under section 537A of the Education Act 1996.

[3] Data tables to see the individual level data items stored and shared (by tabs on the bottom of the file) https://www.gov.uk/government/publications/national-pupil-database-user-guide-andsupporting-information

[4] The table to show who has bought or received data and for what purpose https://www.gov.uk/government/publications/national-pupil-database-requests-received

[5] Data Trust Deficit – from the RSS: http://www.statslife.org.uk/news/1672-new-rss-research-finds-data-trust-deficit-with-lessons-for-policymakers

[6] Talk by Phil Booth and Terri Dowty: http://www.infiniteideasmachine.com/2013/04/terris-and-my-talk-on-the-national-pupil-database-at-the-open-data-institute/

[7] Presentation given by Paul Sinclair of the Department for Education at the Workshop on Evaluating the Impact of Youth Programmes, 3rd June 2013

The National Pupil Database end of year report: D for transparency, C minus in security.

Transparency and oversight of how things are administered are simple ways that the public can both understand and trust that things run as we expect.

For the National Pupil Database, parents might be surprised, as I was, about some of the current practices.

The scope of use of, and access to, the National Pupil Database was changed in 2012. Although I had three children at school at that time, I heard nothing about it, nor did I read about it in the papers. (Hah – time to read the papers?) So I absolutely agree with Owen Boswara’s post when he wrote:

“There appears to have been no concerted effort to bring the consultation or the NPD initiative to the attention of parents or pupils (i.e. the data subjects themselves). This is a quote from one of the parents who did respond:

“I am shocked and appalled that I wasn’t notified about this consultation through my child’s school – I read about it on Twitter of all things. A letter should have gone to every single parent explaining the proposals and how to respond to this consultation.”

(Now imagine that sentiment amplified via Mumsnet …)”
[July 2013, blog by O. Boswara]

As Owen wrote,  imagine that sentiment amplified via Mumsnet indeed.

Here’s where third parties can apply, and here’s a list of who has been given data from the National Pupil Database. (It’s only been updated twice in 18 months, most recently only after I asked about it.) The tier groups 1-4 are explained here on p.18, where 1 is the most sensitive, identifiable classification.

The consultation suggested in 2012 that the changes could be an “effective engine of economic growth, social wellbeing, political accountability and public service improvement.”

Has anyone measured whether that justification has begun to be achieved? Research can often take a long time, and implementing any changes as a result, more time still. But perhaps some measure of public benefit has already begun to accrue?

The release panel would, one hopes, have begun to track this. [Update: the DfE confirmed on August 20th that they do not track benefits, nor have they ever audited recipients.]

And in parallel what oversight governs checks and balances to make sure that the drive for the ‘engine of economic growth’ remembers to treat these data as knowledge about our children?

Is there that level of oversight from application to benefits measurement?

Is there adequate assessment of privacy impact and ethics in applications?

What troubles me about the National Pupil Database is not the data it contains per se, but the lack of child/guardian involvement, the lack of accountable oversight of how it is managed, and the lack of full transparency around who uses it and by what processes.

Some practical steps forward

Taken now, steps could resolve some of these issues and avoid the risk of them becoming future issues of concern.

The first being thorough fair processing, as I covered in my previous post.

The submission of the school census returns, including a set of named pupil records, has been a statutory requirement on schools since the Education Act 1996. That’s almost twenty years ago, in the pre-mainstream-internet age.

The Department must now shape up its current governance practices in its capacity as the data processor and controller of the National Pupil Database, to be fit for the 21st century.

Ignoring current weaknesses actively accepts an ever-increasing reputational risk for the Department, for schools, for other data-sharing bodies and those who link to the data, and for its bona fide research users. If people lose trust in how data are used, they won’t share at all and the quality of data will suffer: bad for the functional administration of the state and the individual, but also for the public good.

That concerns me also wearing my hat as a lay member on the ADRN panel, because it’s important that the public trusts that our data are looked after wisely, so that research can continue to use them for advances in health, social science and all sorts of areas of knowledge, to improve our understanding of society and make it better.

Who decides who gets my kids’ data, even if I can’t?

A Data Management Advisory Panel (DMAP) considers only some of the applications: tier 1 data requests. Those are the most sensitive, but not the only, applications for access to sensitive data.

“When you make a request for NPD data it will be considered for approval by the Education Data Division (EDD) with the exception of tier 1 data requests, which will be assessed by the department’s Data Management Advisory Panel. The EDD will inform you of the outcome of the decision.”

Where is governance transparency?

What is the make-up of both the Data Management Advisory Panel and the Education Data Division (EDD)? Who sits on them, and how are they selected? Do they document their conflicts of interest for each application? For how long are they appointed, and under what selection criteria?

Where is decision outcome transparency?

The outcome of the decision should be documented and published. However, the list has been updated only twice since its inception in 2012: once in December 2013, and most recently, ahem, on May 18 2015, after considerable prodding. There should be a regular timetable, with a responsible owner and a depth of insight into its decision making.

Where is transparency over decision making to approve or reject requests?

Do privacy impact assessments and ethics reviews play any role in their application and if so, how are they assessed and by whom?

How are those sensitive and confidential data stored and governed?

The weakest link in any system is often said to be human error. Users of the NPD data vary from other government departments to “Mom and Pop” small home businesses, selling schools’ business intelligence and benchmarking.

So how secure are our children’s data really, and once the data have left the Department’s database, how are they treated? Does lots of form filling, and emailing data protected by a personal password, ensure good practice, or simply provide barriers that slow down the legitimate applications process?

What happens to data that are no longer required for the given project? Are they properly deleted and what audits have ever been carried out to ensure that?

The National Pupil Database end of year report: a C- in security

The volume of data that can be processed now at speed is incomparable with 1996, and even 2012 when the current processes were set up. The opportunities and risks in cyber security have also moved on.

Surely the Department for Education should take its responsibility seriously, and treat our children’s personal data and sensitive records at least as well as the HSCIC now intends to manage health data?

Processing administrative or linked data in an environment with layered physical security (e.g. a secure perimeter, CCTV, security guarding or a locked room without remote connection such as internet access) is good practice, and reduces the risk of silly human error, or simple theft.

Is giving out chunks of raw data by email, with reams of paperwork as its approval ‘safeguards’ really fit for the 21st century and beyond?


Twenty years on from the conception of the National Pupil Database, it is time to treat the personal data of our future adult citizens with the respect it deserves and we expect of best-in-class data management.

It should be as safe and secure as we treat other sensitive government data, and lessons could be learned from the FARR, ADRN and HSCIC safe settings.

Back to school – more securely, with public understanding and transparency

Understanding how all that works, how technology and people, data sharing and privacy, data security and trust all tie together, is fundamental to understanding the internet. When administrations take our data, they take on responsibilities for part of our participation in the dot.everyone the state is so keen for us all to join. Many of our kids will live in the world which is the internet of things. Not getting that is to not understand the internet.

And to reiterate some of why that matters, I go back to my previous post in which I quoted Martha Lane Fox recently, and the late Aaron Swartz when he said: “It’s not OK to not understand the internet, anymore.”

While the Department for Education has turned down my subject access request to find out what the National Pupil Database stores on my own children, this matters too much to brush the issues aside as important only for me. About 700,000 children are born each year and will be added to this database every academic year. None ever get deleted.

Parents can, and must ask that it is delivered to the highest standards of fair processing, transparency, oversight and security. I’m certainly going to.

It’s going to be Back to School in September, and those annual privacy notices, all too soon.

*****

1. The National Pupil Database end of year report card

2. The National Pupil Database end of year report: an F in fair processing

3. The National Pupil Database end of year report: a D in transparency

References:

[1] The National Pupil Database user guide: https://www.gov.uk/government/uploads/system/uploads/attachment_data/file/261189/NPD_User_Guide.pdf

[2] Data tables to see the individual level data items stored and shared (by tabs on the bottom of the file) https://www.gov.uk/government/publications/national-pupil-database-user-guide-andsupporting-information

[3] The table to show who has bought or received data and for what purpose https://www.gov.uk/government/publications/national-pupil-database-requests-received

[4] Data Trust Deficit – from the RSS: http://www.statslife.org.uk/news/1672-new-rss-research-finds-data-trust-deficit-with-lessons-for-policymakers

[5] Talk by Phil Booth and Terri Dowty: http://www.infiniteideasmachine.com/2013/04/terris-and-my-talk-on-the-national-pupil-database-at-the-open-data-institute/

[6] Presentation given by Paul Sinclair of the Department for Education at the Workshop on Evaluating the Impact of Youth Programmes, 3rd June 2013

What is in the database?

The Schools Census dataset has added approximately eight million records incrementally every year since 1996, and includes variables on the pupil’s home postcode, gender, age, ethnicity, special educational needs (SEN), free school meals eligibility, and schooling history. It covers pupils in state-funded primary, secondary, nursery and special schools and pupil referral units. Schools that are entirely privately funded are not included.

Pupils can be tracked across schools and followed throughout their school careers, and the database provides a very rich set of data on school characteristics. Further use comes from linking to other related datasets, such as those on higher education, neighbourhoods and teachers in schools.

Data stored include the full range of personal and sensitive data, from name, date of birth and address through SEN and disability needs. (Detail of content is here.) To see what is in it, download the Excel sheet: NPD Requests.


The Department for Education has specific legal powers to collect pupil, child and workforce data held by schools, local authorities and awarding bodies under section 114 of the Education Act 2005, section 537A of the Education Act 1996, and section 83 of the Children Act 1989. The submission of the school census returns, including a set of named pupil records, is a statutory requirement on schools under section 537A of the Education Act 1996.

Our children’s school data: an end-of-year report card

To quote the late Aaron Swartz: “It’s not OK to not understand the internet, anymore.”

Parents and guardians are trying their best. We leave work early and hurry to attend meetings on internet safety. We get told how vital it is that children not give away their name, age or address to strangers on social media. We read the magazines that come home in book bags about sharing their identity with players in interactive games. We may sign school policies to opt out of permission for sharing photos from school performances on the school website.

And yet most guardians appear unaware that our children’s confidential, sensitive and basic personal data are being handed out to third parties by the Department for Education, without our knowledge or any clear and accessible public accountability.

Data are extracted by the Department for Education [DfE] from schools, stored in a National Pupil Database [NPD], and onwardly shared.

Fine you may say. That makes sense, it’s the Department for Education.

But did you expect that the Ministry of Defence or schools comparison websites may request, or be given, access to our children’s individual records, the data [detailed in the ‘NPD data tables’] that we provide to schools for routine administration?

School heads, governors, and every parent I have spoken with in my area are totally unaware that data extracted by the Department for Education are used in this way.

All are surprised.

Some are shocked at the extent of data sharing at such an identifiable and sensitive level, without school and parental knowledge.

The DfE manages the NPD and holds responsibility for ensuring we know all about it. But it does not ensure that pupils and parents are told, before the data extraction, who else gets access to the data and for what purposes. That fails to process data fairly, which is a requirement for making their use lawful.

There’s no way to opt out, no way to check the data’s accuracy, and no recourse for anything incorrect.

As our lives involve the ever more advanced connectivity of devices, systems, and services, that’s simply not good enough. It’s not a system fit for the 21st century or our children’s digital future.

While the majority of requestors seem to access data for bona fide research in the public interest, some use it for benchmarking, and others are commercial users.

Is that what pupils and parents expect their data are used for?

And what happens in future when, not if, the Department chooses to change who uses it and why?

How will we know about that? Because it has done so already.

When the school census first began, it extracted no names. That changed: every pupil’s name is now recorded, along with a growing range of information.

Where it began with schools, it now extends to nursery schools, childminders, private nurseries and playgroups.

Where it was once used only for state administrative purposes, since 2012 it has been given to third parties.

What’s next?

Data should be used in the public interest and must be shared to adequately administer, best serve, understand, and sometimes protect our children.

I want to see our children’s use of technology, and their data created in schools used well in research that will enable inclusive, measurable benefits in education and well being.

However this can only be done with proper application of law, future-proofed security, and respectful recognition of public opinion.

The next academic year must bring these systems into the 21st century to safeguard both our children and the benefits that using data wisely can bring.

Out of sight, out of date, out of touch?

The data sharing is made possible through a so-called ‘legal gateway’, law that gives permission to the Secretary of State for Education to require data from schools.

In this case, it is founded on legislation almost twenty years old.

Law founded on the 1996 Education Act, and later regulations changed in 2009, gives information-sharing powers to the Secretary of State and to public bodies; it pre-dates wide use of the internet, social media, and the machine learning and computer processing power of today.

Current law and policies have not kept pace with modern technology. 2015 is a world away even from 2009, never mind 1996, when Pluto was still a planet.

Our children’s data are valuable, and give insights into society that researchers should of course use to learn from and to make policy recommendations. That use, in the public interest, has widespread public support. But it has to be done in an appropriate and secure way, and as soon as it’s for commercial use, there are more concerns and questions to ask.

As an example of why the NPD doesn’t do this as I feel it should: the data are still given away to users in their own offices, rather than properly and securely accessed in a safe setting, as bona fide accredited researchers do at the Office for National Statistics.

In addition to leaving our children’s personal data vulnerable to cybersecurity threats, it actively invites greater exposure to human error.

Remember those HMRC child benefit discs lost in the post with personal and bank data of 25 million individuals?

Harder to do if you only access sensitive data in a safe setting where you can walk out with your research but not raw files.

When biometrics data are already widely used in schools and are quite literally, our children’s passport to the world, poor data management approaches from government in health and education are simply not good enough anymore.

It’s not OK anymore.

Our children’s personal data is too valuable to lose control of as their digital footprint will become not an add-on, but integral to everything they do in future.

Guardians do their best to bring up children as digitally responsible citizens and that must be supported, not undermined by state practices.

Children will see the divide between online and ‘real’-life activities blend ever more seamlessly.

We cannot predict how their digital identity will become used in their adult lives.

If people don’t know who has data about them, how can we be sure it is used properly and only for the right reasons, or seek to repair damage when it has not been?

People decide to withhold identities or data online if they don’t trust how they will be used, and by whom.

Research, reports and decision making are flawed if data quality is poor. That is not in the public interest.

The government must at least take responsibility for current policies to ensure our children’s rights are met in practice.

People who say data privacy does not matter, seem to lack any vision of its value.

Did you think that a social media site would ever try to control its users’ emotions and influence their decision-making based on the data they entered or read? It just did.

Did you foresee five years ago that a fingerprint could unlock your phone? It just did.

Did you believe five months ago that the same fingerprint-accessible phone would become an accepted payment card in England? It just did.

There is often a correlation between verification of identity and payment.

Fingerprinting for payment and library management has become common in UK schools and many parents do not know that parental consent is a legal requirement.

In reality, it’s not always enacted by schools.

Guardians can find non-participation is discouraged and worry their child will be stigmatised as the exception.

Yet no one would seriously consider asking guardians to give canteens their bank card PIN.

Data about our children are created and shared at so many points that parents often cannot know who knows what about them.

What will that mean for them as adults much of whose lives will be digital?

What free choice remains for people who want to be cautious with their digital identities? 

Many systems increasingly require registration, some including biometric data, sometimes from vulnerable people, and the service on offer is otherwise denied.

Is that OK anymore? Or is denial-of-service a form of coercion?

The current model of state data sharing often totally ignores that the children and young people whose personal data are held in these systems are not asked, informed or consulted about changes.

While Ministers talk about wanting our children to become digital leaders of tomorrow, policies of today promote future adults ill-educated in their own internet safety and personal data sharing practices.

But it’s not OK not to understand the internet anymore.

Where is the voice of our young people talking about who shares their information, how it is used online, and why?

When shall we stop to ask collectively, how personal is too personal?

Is analysing the exact onscreen eye movement of a child appropriate or invasive?

These deeply personal uses of our young people’s information raise ethical questions about others’ influence over their decision making.

Where do we draw the line?

Where will we say, it’s not OK anymore?

Do we trust that all uses are for bona fide reasons and not ask to find out why?

The use of our children’s data across a range of practices in education seems a commercial free-for-all, with too little oversight and no visibility of decision-making processes for the public, whose personal data others profit from.

Who has oversight for the ethical use of listening software tools in classrooms, especially if used to support government initiatives like Channel in ‘Prevent’?

What corrective action is taken if our children’s data are exposed through software brought into school over which parents have no control?

The policies and tools used to manage our children’s data in and outside schools seem often out of step with current best-in-class data protection and security practices.

Pupils and parents find it hard to track who has their personal data and why.

While the Department for Education says what it expects of others, it appears less committed to meeting its own responsibilities: “We have been clear that schools are expected to ensure that sensitive pupil information is held securely. The Data Protection Act of 1998 is clear what standards schools are expected to adhere to and we provide guidance on this.” 

A post on a webpage is hardly guidance fit to future proof the data and digital identities of a whole generation.

I believe we should encourage greater use of this administrative data for bona fide research. Promoting broader use of aggregated and open data could also be beneficial. In order to do both, key things should happen that will make researchers less risk averse in its use, and put data at reduced risk of accidental or deliberate misuse by other third parties. Parents and pupils could become more confident that their data is used for all the right reasons.

The frameworks of fair processing, physical data security, transparent governance and publicly accountable oversight need to be redesigned and strengthened.

Not only for data collection, but its central management, especially on a scale as large as the National Pupil Database.

“It’s not OK not to understand the internet anymore.”

In fact, it never was.

The next academic year must bring these systems into the 21st century to safeguard both our children and the benefits that using data wisely can bring.

The Department for Education “must try harder” and must start now.

********

If you have questions or concerns about the National Pupil Database or your own experience, or your child’s data used in schools, please feel free to get in touch, and let’s see if we can make this better. [Email me as listed above right.]

1. An overview: an end of year report on our Children’s School Records
2. The National Pupil Database end of year report: an F in fair processing
3. The National Pupil Database end of year report: a D in transparency, C- in security

********

References:

[1] The National Pupil Database user guide: https://www.gov.uk/government/uploads/system/uploads/attachment_data/file/261189/NPD_User_Guide.pdf

[2] Data tables to see the individual level data items stored and shared (by tabs on the bottom of the file) https://www.gov.uk/government/publications/national-pupil-database-user-guide-andsupporting-information

[3] The table to show who has applied for and received data and for what purpose https://www.gov.uk/government/publications/national-pupil-database-requests-received

[4] Data Trust Deficit – from the RSS: http://www.statslife.org.uk/news/1672-new-rss-research-finds-data-trust-deficit-with-lessons-for-policymakers

[5] Talk by Phil Booth and Terri Dowty: http://www.infiniteideasmachine.com/2013/04/terris-and-my-talk-on-the-national-pupil-database-at-the-open-data-institute/

[6] On 1 September 2013 sections 26 and 27 of the Protection of Freedoms Act 2012 came into force, requiring schools to seek parental consent before collecting biometric data, such as fingerprints.

care.data – “anticipating things to come” means confidence by design

“By creating these coloured paper cut-outs, it seems to me that I am happily anticipating things to come…I know that it will only be much later that people will realise to what extent the work I am doing today is in step with the future.” Henri Matisse (1869-1954) [1]

My thoughts on the care.data advisory event, Saturday September 6th: “Minority voices, the need for confidentiality and anticipating the future.”

Part one here>> Minority voices

This is Part two >> the need for confidentiality and anticipating the future.

[Video in full > here. Well worth a viewing.]

Matisse – The cut outs

Matisse, when he could no longer paint, took to cutting shapes from coloured paper and pinning them to the walls of his home. To start with, he found the process deeply unsatisfying. He felt it wasn’t right. Initially he was often unsure what he would make from a sheet. He pinned cutouts to his walls, but tacking things on as an afterthought and rearranging them superficially was never as successful as getting it right from the start.

As he became more proficient, he would cut a form out in one piece, from start to finish. He could visualise the finished piece before he started. His later work is very impressive, much more so in real life than on screen or poster. His cut outs took on life and movement: fronds would hang in the air, and multiple pieces which matched up were grouped into large-scale collections on his walls. They became no longer just 2D shapes but 3D, complete pictures.

They would tell a joined-up story, just as our flat 2D pieces of individual data will tell others the story of our colourful 3D lives once they are matched and grouped together in longitudinal patient tracking from cradle to grave.

Data Confidentiality is not a luxury

From the care.data advisory meeting on September 6th, I picked out the minority voices I think we need to address better.

In addition to the minority groups, there are also cases in which privacy, for both children and adults, is more important to an individual than much of the usual discussion allows. For those at risk of domestic violence, the ability to keep private information confidential is vital, and when this fails the consequences can be terrible. My local news told this week of just such a woman and child whose privacy were compromised.

“It is understood that the girl’s mother had moved away to escape domestic violence and that her ex-partner had discovered her new address.” (Guardian, Sept 12th)

This story has saddened me greatly.  This could have been one of my children or their classmates.

These are known issues when considering data protection, and for example are addressed in the RCGP Online Roadmap (see Box 9, p20).

“Mitigation against coercion may not have a clear solution. Domestic violence and cyberstalking by the abuser are particularly prevalent issues.”

Systems and processes can design in good privacy, or poor privacy, but the human role is a key part of the process, as human error can be the weakest link in the security chain.

Yet as regards care.data, I’ve yet to hear much mention of preventative steps in place, except an opt out. We don’t know how many people at local commissioning levels will access how much of our data, and how often. This may go to show why I still have so many questions about how the opt out will work in practice, [5] and why it matters. It’s not a luxury; it can be vital to an individual. How much of a difference in safety is achieved using identifiable versus pseudonymised data, compared with real individual risk or fear?


“The British Crime Survey (BCS) findings of stalking prevalence (highest estimate: 22% lifetime, 7% in the past year) give a 5.5% lifetime risk of interference with online medical records by a partner, and a 1.75% annual risk.”
This Online Access is for direct care use. There is a greater visible benefit for the individual in accessing their own data than in care.data’s secondary uses. But I’m starting to wonder if, in fact, care.data is just one great big pot of data whose uses will be finalised later. Is this why scope is so hard to pin down?
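To make the identifiable-versus-pseudonymised distinction concrete, here is a minimal sketch of keyed-hash pseudonymisation, a standard technique. This is my own illustration, not any actual NHS or care.data process, and the key and NHS numbers are invented: the same person always maps to the same pseudonym, so records can still be linked longitudinally, but without the secret key the pseudonym cannot be traced back to the person.

```python
import hmac
import hashlib

# Hypothetical secret, held only by the data controller.
SECRET_KEY = b"held-only-by-the-data-controller"

def pseudonymise(nhs_number: str) -> str:
    """Replace a direct identifier with a stable, non-reversible pseudonym.

    Identical inputs give identical pseudonyms (so linkage across
    datasets still works), but the original identifier cannot be
    recovered from the pseudonym without the key.
    """
    digest = hmac.new(SECRET_KEY, nhs_number.encode(), hashlib.sha256)
    return digest.hexdigest()[:16]

# Deterministic: the same (invented) NHS number always maps the same way.
assert pseudonymise("9434765919") == pseudonymise("9434765919")
# Different people get different pseudonyms.
assert pseudonymise("9434765919") != pseudonymise("9434765870")
```

The safety question in the text then becomes: how much protection does this layer really add if the surrounding system can still re-identify someone by other routes?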


The slides of who will use care.data, shown at this 6th September meeting, included ‘the patient’. How, and why? I want to have the following explained to me, because I think it’s fundamental to the opt out. This is detailed, I warn you now, but I think really important:

How does the system use the Opt out?

If you imagine different users looking at the same item of data in any one record, let’s say prescribing history, then it’s the security role and how the opt out codes work that will determine who gets to see what.



I assume here, there are not multiple copies of “my medications” in my record.  The whole point of giant databases is real-time, synched data, so “my medications” will not be stored in one place in the Summary Care Record (SCR) and copied again in ‘care.data’ and a third time in my ‘Electronic Prescription Service (EPS). There will be one place in which “my medications” is recorded.


The label under which a user can see that data for me, is their security role, but to me largely irrelevant. Except for opt out.


I have questions: if I opt out of the SCR programme at my GP, but opt in to the EPS at my pharmacy, what have I opted in to? Who now has permission to view “my medications” in my core record? Have I created, in effect, an SCR without realising it?


[I realise these are detailed questions, but ones we need to ask if we are to understand and inform our decision, especially if we have responsibility for the care of others.]


If I want to permit the use of my record for direct care (SCR) but not secondary uses (care.data), how do the two opt outs work together, and what about my other hospital information?


Do we understand what we have and have not given permission for, and to whom?
If there's only one record but multiple layers of user access, how will those layers be built, and where do they overlap?
We should ask these questions on behalf of others, because under-represented groups and minorities cannot ask them if they are not in the room.

Sometimes we all need privacy. What is it worth?

Individuals and minorities in our community may feel strongly about maintaining privacy: for reasons of discrimination, of being 'found out' through a system which can trace them, or of fear. Others can't always see the reasons for it, but that doesn't take away the value it has for the person who wants it, or their need for that human right to be respected. How much is it worth?

It seems the more we value keeping data private, the more cash value it has for others. In 2013 the FT created a nifty calculator and, in an interview with Dave Morgan, reckoned our individual data is worth less than $1. General details such as age, gender and location are worth mere fractions of a cent. The more interesting your life events, the more you add to your data's total value. Take pregnancy as an example. Add genomic data, and it goes up in market value again.
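A calculator of that kind is, at heart, a sum of per-attribute market prices. As a rough illustration only, with figures I have invented, not the FT's actual weights, it might look like this:

```python
# Toy version of an ad-market data-value calculator. Every dollar value
# below is made up for illustration; the FT's real weights differ.
ATTRIBUTE_VALUE_USD = {
    "age_gender_location": 0.0005,  # basic demographics: a fraction of a cent
    "pregnancy": 0.10,              # major life events are worth far more
    "genomic_data": 0.25,           # rarer, richer data raises the total
}

def profile_value(attributes):
    """Sum the market value of the attributes known about one person."""
    return sum(ATTRIBUTE_VALUE_USD.get(a, 0.0) for a in attributes)

print(profile_value(["age_gender_location"]))  # basic profile: ~$0.0005
print(profile_value(["age_gender_location", "pregnancy", "genomic_data"]))
```

The shape of the arithmetic is the point: the ordinary facts are nearly worthless individually, and it is the sensitive, life-event data that carries almost all the commercial value.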

Whilst on a spreadsheet this data may be no more than a dollar amount, in real life it may have immeasurably greater value to us, on which no price tag can be put. It may be a part of our life we do not wish others to see into. We may have personal or medical data, or recorded experiences, we simply do not want to share with anyone but our GP. We might want a layered option like this suggestion by medConfidential, to allow some uses but not others. [6]

In this debate it is rare that we mention the PDS (Personal Demographic Service), which holds the name and core contact details of every person with an NHS number, past and present: almost 80 million people. This is what can compromise privacy: a patient can be looked up by any A&E, and by anyone with Summary Care Record access on N3 and the technical ability to do so. It is a weak link. The security system relies on human validation, effectively asking in audit, 'does this look-up seem OK?' These things happen, and can go unchecked for a long period without being traced.

Systems and processes on this scale need security designed in that scales up to match.

Can data be included without cutting out privacy?

Will the richness of GP record / care.data datasharing afford these individuals the level of privacy they want? If properly anonymised, it would go some way to permitting such groups to feel they could stay opted in, and the data quality and completeness would be better. But as things stand, they may feel the risks created by removing their privacy are too great. The breadth and data quality of care.data will suffer as a consequence.

The requirement of care.data to share identifiable information we may not want to share, the assumed right of others to do so, and the assumed exploitation of it for the benefit of UK plc, especially if an opt-out system proceeds, feel to many like an invasion of the individual's privacy and right to confidentiality. It can have real personal consequences for the individual.

The right to be open, honest and trusting without fear of repercussion matters. It matters to a traveller, or to someone fleeing domestic violence with fears of being traced. It matters to someone who is transgender, and to others who want to live without stigma. It matters to our young people.

The BMA recognised this with their vote for an opt-in system earlier this year. 

Quality & Confidence by Design

My favourite exhibition piece at Tate Britain is still Barbara Hepworth's [3] Pelagos from 1946. It is well reviewed artistically, but even if you know little of art it is simply a beautiful thing to see. (You're not allowed to touch, even though it makes you want to.) Carved from a single piece of wood, designed with movement, shape, colour and shadow, it contains a section of strings, a symbol of interconnectivity. (Barbara Hepworth: Pelagos [4]) Seen as a precious and valuable collection, the Hepworth room has its own guard and solid walls. Much as I would have liked to take pictures, photography was not permitted and natural light was too low. Visitors must respect that.

So too, I see the system design needs of good tech: set in, and produced in, a changing landscape; designed with a view of how it will look when completed, fully designed before the build begins, but with flexibility built in. Planned interconnectivity. Precise and professional. Accurate. And the ability to see the whole from the start. Once finished, it is kept securely, with physical as well as system-designed security features.

All these are attributes which care.data failed to present from its conception, but which appear to be in development at the Health and Social Care Information Centre. Plans are in progress [6] following the Partridge Review, and were released on September 3rd with forward-looking dates. For example, a first wave of audits of four organisations is scheduled for completion on 1/09, and HSCIC will 'pursue a technical solution to allow data access, without need to release data out to external orgs', due 30/11. These steps are playing catch-up with what should have been good governance practices and procedures in the past. It need not be this way for GP care.data, if we get the design right from the start.

As I raised on Saturday at the September 6th advisory group workshop, and as others will no doubt have done before me, this designing from the start matters. Designing for change of scope, and incorporating that into the communications process for the future, is vital for the pathfinders. One thing is certain for pathfinder practices: there will be future changes.

This wave of care.data is only one step along a broad and long data sharing path

To be the best of its kind, care.data must create confidence by design, build-in the solutions to all these questions which have been and continue to be asked. We should be able to see today the plans for what care.data is intended to be when finished, and design the best practices into the structure from the start. Scope is still a large part of that open question. Scope content, future plans, and how the future project will manage its change processes.

As with Matisse, we must ask the designers, planners, comms, intelligence and PR teams: please think ahead, "anticipating things to come". Then we can be confident we've built something fit for the time we're in, and for all of our kids' futures. Whether they'll be travellers, trans, have disabilities, be in care or not. For our majority and all our minorities. We need to build a system that serves all of the society we want to see, not only the 'easy-to-reach' parts.

"Anticipating things to come" can mean anticipating problems early, so that costly mistakes can be avoided.

Anticipating the future

One must keep looking to design not for the ‘now’ but for tomorrow. Management of future change, scope and communication is vital to get right.

This is as much a change process as a technical implementation project. In fact, it is perhaps more about the transformation, as it is called at NHS England, than the technology. The NHS landscape is changing: who will deliver our healthcare, and how, as telecare and ever more apps are rolled out. Nothing is constant but change. How do we ensure everyone involved in top-down IT projects understands that the system supports, but does not drive, change? Change is about process and people. The system is a tool to enable people. The system is not the goal.

We need to work today to be ahead of the next step for the future. We must ensure that processes and technology, the way we do things and the tools that enable what we do, have the very best practices designed into the whole from the very beginning. From the ground up. Taking into account fair processing under Data Protection law, the upcoming changes in EU data protection law, and best practice. Don't rush to bend a future law in current design, or take a short cut in security for the sake of speed. Those best practices need not cut out the good ethics of consent and confidentiality. They can co-exist with world-class research and data management. They just need to be included by design, not tacked on and superficially rearranged afterwards.

So here’s my set of challenge scenarios for NHS England to answer.

1. The integration of health and social care marches on at a pace, and the systems and their users are to follow suit. How is NHS England ensuring it builds a system and processes which 'anticipate by design' these new models of data management for this type of care delivery, rather than staying stuck on the model of a top-down mass surveillance database planned for the last decade?

2. How will NHS England audit that a system check does not replace qualified staff decisions, with algorithms and flags on a social care record, for example? Risk-averse, I fear the system will make staff less likely to take a decision that goes against the system recommendation, 'for child removal' for instance, even though their judgement, based on human experience, may suggest a different outcome. What are the system's built-in assumed outcomes? If you view the new social care promotional videos, at least it's pretty consistent: the most depressingly stereotyped scenarios I've seen anywhere, I think. How will this increase in data and sharing work?

"What makes more data by volume equal more intelligence by default?"

Just as out-of-hours GP cover today sends too many people calling the 111 service on to A&E, I wonder if a highly systemised social care system risks sending too many children from A&E into social care: children who should not be there but who meet the criteria set by insensitive algorithms. Or the converse risk: children who don't meet the criteria, and are missed through over-reliance on a system that cannot spot what an experienced professional can.

3. How will the users of the system use their system data, and how has it been tested and its likely outcomes measured against current data? i.e. will more or fewer children taken into care be seen as a measure of success? How will any system sharing be audited, under what governance, and with what oversight in future?

Children's social care is not a system that is doing well as it is today, by many accounts; you need only glance at the news most days. But integration will change how it delivers services for the needs of our young people. It is an example we can apply in many other cases.

What plan is in place to manage these changes of process and system use? Where is public transparency?

care.data has to build in consent, security and transparency from the start, because it’s a long journey ahead, as data is to be added incrementally over time. As our NHS and social care organisational models are changing, how are we ensuring confidentiality and quality built-in-by-design to our new health and social care data sharing processes?

What is set up now, must be set up fit for the future.

Tacking things on afterwards means lowering your chance of success.

Matisse knew: "'Anticipating things to come' can mean being positively in step with the future by the time it is needed. By anticipating problems early, costly mistakes can be avoided."

*****

Immediate information and support for women experiencing domestic violence: National Domestic Violence, Freephone Helpline 0808 2000 247

*****

[1] Interested in a glimpse into the Matisse exhibition which has now closed? Check out this film.

[2] Previous post: My six month pause round up [part one] https://jenpersson.com/care-data-pause-six-months-on/

[3] Privacy and Prejudice: http://www.raeng.org.uk/publications/reports/privacy-and-prejudice-views This study was conducted by The Royal Academy of Engineering (the Academy) and Laura Grant Associates, and was made possible by a partnership with the YTouring Theatre Company, support from Central YMCA, and funding from the Wellcome Trust and three of the Research Councils (Engineering and Physical Sciences Research Council; Economic and Social Research Council; and Medical Research Council).

[4]  Barbara Hepworth – Pelagos – in Prospect Magazine

[5] Questions remain open on how opt out works with identifiable vs pseudonymous data sharing requirement and what the objection really offers. [ref: Article by Tim Kelsey in Prospect Magazine 2009 “Long Live the Database State.”]
[6] HSCIC current actions published with Board minutes
[8] NIB https://app.box.com/s/aq33ejw29tp34i99moam/1/2236557895/19347602687/1
*****

More information about the Advisory Group is here: http://www.england.nhs.uk/ourwork/tsd/ad-grp/

More about the care.data programme here at HSCIC – there is an NHS England site too, but I think the HSCIC is cleaner and more useful: http://www.hscic.gov.uk/article/3525/Caredata

 

Care.data – my six month pause, anniversary round up [Part 1]

On the 18th February 2014, a six month pause in the rollout of care.data was announced. [1] It’s now September. Six months is up.

When will we find out what concrete improvements have been made? There are open questions on plans for the WHAT of care.data Scope and its future change management, the WHO of Data Access and Sharing and its Opt out management, the HOW of Governance & Oversight, Legislation, and the WHY – Communication of the care.data programme as a whole. And WHEN will any of this happen?

What can happen in six months?

Based on Mo Farah's average running speed of 21.8 km/hour in his Olympic Games 10,000m gold medal winning performance, and running 12 hours a day, he could have covered about 47,000 km in that time: once around the world in those 180 days, with some kilometres to spare into the bargain.

That is perhaps unrealistic in 180 days. But last February, promises about data sharing made to the public, the Health Select Committee and Parliament were presented as both realistic and achievable.

So what about the publicly communicated changes to the care.data rollout in the six month time frame?

The letter from Mr.Kelsey on April 14th, said they would use the six months to listen and act on the views of patients, public, GPs and stakeholders.

I’d like to address some of those views and see how they have been acted on. Here’s the best I have been able to put together of promises made, and the questions I still have, six months on.

Scope. What part of our records is included in care.data?

The truth is this should be the simplest question, but seems the hardest to answer. Scope is elusive, and shifting.

A simple description would help us understand what data will be extracted, shared, and for what purpose. The public needs an at-a-glance chart to be properly informed: to distinguish between care.data, the Summary Care Record, and HES/SUS, and how patient data is used, by whom, for what purposes. This will help patients distinguish between direct and indirect care uses: what doctors would use in the GP practice, versus researchers in a lab. It will help set expectations for Patient Online. It could help explain data use in Risk Stratification. [see care.data-info by Dr. Neil Bhatia for high-level items in scope, or field-name detail here, p22 onwards] [11]. This lack of clarity was already identified in April 2013, point 3.3, but nothing was done.

In mid-August, to further complicate matters, it became apparent from published care.data advisory group minutes that the content scope is under review and may now include sensitive data. This broadening of the scope of care.data extraction and access was met with serious concern in many quarters, not least HIV support groups. I realised I wasn't in the least surprised, but I continue to be shocked by the disconnect between project leadership and the public.

Are the listening exercises a complete waste of time?

If people aren’t comfortable sharing basic health records, how will suggesting they share anything more sensitive be likely to encourage participation?

[The scope of how our GP part of care.data will be used is also under consideration for expansion to research – more in part two, on that.]

Scope is undefined, and will continue to expand as the replacement for SUS. In April I wrote down my concerns at that time; most remain unchanged.

Stephen Dorrell MP, on the 11th March in Parliament, summed up nicely why this move to shift scope now is ludicrous. If we do not have stability of scope, we cannot know to what we are consenting. This is the foundation of patient trust.

Mr Dorrell: I am not going to comment on whether the free text data should or should not be part of the system, or on whether the safeguards are adequate. However, I agree with the hon. Lady absolutely that the one sure way of undermining public confidence in safeguards is to change those safeguards every five minutes according to whichever witness we are listening to.

If the Patients & Information Directorate at NHS England is serious about transparency, then we should be clear about all our patient data, where it comes from, where it goes to, who accesses it and why.

Data protection principle 3 requires that the data extracted be adequate, relevant and not excessive: the minimum required. Is this simply being ignored as inconvenient, in a project which intends scope to accumulate forever as the SUS replacement?

“Will NHS England prepare an at-a-glance of differences between SCR and care.data, and HES/SUS extractions and users?”

[Image: at-a-glance overview comparing SCR and care.data extractions]

 

Conclusion on Scope & its Communications:

This scope clarification alone would, I believe, if well done, be one of the most effective communications tools for patients to make an informed choice.

1. We need to know what parts of our personal, confidential records, sensitive or otherwise are to be extracted now. 

2. How will we be informed if that scope changes in future?

3. What do we do, if we object to any of those items being included?

Before any launch of pilot or otherwise, a proper plan to ensure informed communication and choice, today and looking to future scope changes, must be clear for everyone.

What’s happened since February to the verbal agreements and promises that were made back then?

Whether in Parliament by Dan Poulter and the Secretary of State Mr. Hunt, in Select Committee hearings, by the Patients & Information Directorate at NHS England, or in the patient-facing hour at the mixed-subject Open Day, promises have been made. But what evidence does the public have that they are real? There has been little public communication since then.

I have read, watched or attended NHS England Board meetings and Health Select Committee meetings, and read the press, media releases and social media. I've been to a general NHS Open Day, listened in to NHS England online events and the first HSCIC Partridge Review follow-up event, and spoken to patients, public and charity groups. Had I not, I would know nothing more than I did in February: that something had been put on hold, about which I should have received, but hadn't, a doordrop leaflet.

Pilot 'pathfinder' practices, we were told, will trial the extraction: in six months, then in autumn, then on October 1st according to Mr. Kelsey at the Health Select Committee (extract below).

[Image: Mr. Kelsey's reply at the Health Select Committee]

I've not yet seen anywhere where these practices will be, nor that patients have been informed. The latest status I read was on EHI. In response to this lack of information, medConfidential wrote to Healthwatches and CCGs with important questions and ideas. [Well worth a read.]

Scope of Access – Who will get our records and for what?

Where and to whom may our data be transferred?

As part of the what of scope, we also need clarification on who will be in scope, in which countries, to access data.

“Can I confirm now, that the data connected to care.data will not be allowed outside the United Kingdom? Let me confirm that before we have further hares running.” Tim Kelsey, said at the Health Select Committee.

Since GP care.data is to be connected with HES data, and data may be linked on demand via the Data Access Request Service (the recently renamed HSCIC Data Linkage Service, DLES):

Q.  How will I know in future that there are no plans to release my data outside the UK and EU, as HES has been in the past?

As far as I have read, geographical scope is not legislated for. I would like to be pointed to this if it is.

From the Health Select Committee: Committee Room 15 : Meeting started on Tuesday 25 February at 2.29pm – Ended at 5.20pm

Mr. Tim Kelsey, National Director for Patients and Information, stated that the pause was announced precisely to address these issues:

“People are concerned about the purpose to what their data is being put.”

It’s not yet been addressed. Neither for the now, nor the future.

We need to have a robust mechanism in place for all future changes in scope of use. If today I agree to have some of my data extracted and used for public health research for the public good, I don't want to find five years down the line that I've had all my personal details, including my genomic records [which personally are somewhere in my record already], spliced with Dolly the sheep research in the hunt for a cure for arthritis, and that there's another me living at the Roslin Institute. [I jest to exaggerate the point; not all research definitions are equal.] A yes today cannot mean a yes for anything and everything.

The opt out on offer at present only means that data is made less identifying, 'pseudonymous', from the date of the request onwards; nothing is deleted. 'Opt out' is not 'get out'.

The records from before that request date will remain clear and fully identifying for all time. So if a company requests an historical report, will our identifiable data still be included in it?
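To make concrete what 'opt out is not get out' could mean in practice, purely my illustrative reading and not the published mechanism, imagine pseudonymisation applied only from the opt-out date forward. All dates, field names and the hashing scheme here are invented:

```python
# Illustrative sketch of the author's reading of the opt-out: records dated
# before the opt-out request stay identifiable; only later ones are
# pseudonymised. Dates, fields and the pseudonym scheme are invented.
import hashlib
from datetime import date

def pseudonymise(nhs_number):
    # One-way hash standing in for whatever real pseudonym scheme is used.
    return hashlib.sha256(nhs_number.encode()).hexdigest()[:12]

def export_record(record, opt_out_date):
    """Return the record as it might appear in a released extract."""
    out = dict(record)
    if opt_out_date and record["event_date"] >= opt_out_date:
        out["nhs_number"] = pseudonymise(record["nhs_number"])
    return out

history = [
    {"nhs_number": "9990001111", "event_date": date(2012, 5, 1), "event": "Rx"},
    {"nhs_number": "9990001111", "event_date": date(2014, 9, 1), "event": "Rx"},
]
opt_out = date(2014, 3, 1)  # the patient opts out in March 2014

for r in history:
    print(export_record(r, opt_out))
# The 2012 record still carries the real NHS number; only the 2014 record
# is pseudonymised. An historical report would still re-identify the patient.
```

Under this reading, any extract covering dates before the opt-out carries the patient's identity regardless of their objection, which is exactly the question above.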

Opt out is not as simple as it sounds.

OPT OUT

The whole issue of opt out was at best an inaccurately communicated process. I believe it was misleading.

What is still wrong to my mind with this mechanism is the assumption that all data may be matched and de-identified before release. That corresponds to the September 2013 NHS England Directions to HSCIC, led by Mr. Kelsey, saying there is "no need" to take into account individual objection to pseudonymous data sharing. [2] And the patient leaflet, produced before any opt out changes, stated we could object to 'identifiable' data sharing; that 'identifiable' doesn't include all our data.

I'd like to see that clarified, because Mr. Hunt has promised an opt out in entirety:

25th February in Parliament:

Mr.Hunt: …”we said that if we are going to use anonymised data for the benefit of scientific discovery in the NHS, people should have the right to opt out. We introduced that right and sent a leaflet to every house in the country, and it is important that we have the debate..”

“the reason why we are having the debate is that this Government decided that people should be able to opt out from having their anonymised data used for the purposes of scientific research

Dr Julian Huppert (Cambridge) (LD): There are of course huge benefits from using properly anonymised data for research, but it is difficult to anonymise the data properly and, given how the scheme has progressed so far, there is a huge risk to public confidence. Will the Secretary of State use the current pause to work with the Information Commissioner to ensure that the data are properly anonymised and that people can have confidence in how their data will be used and how they can opt out?

Hunt: “I will do that, and NHS England was absolutely right to have a pause so that we ensure that we give people such reassurance…”

Status: the public still has no communication about any opt outs on offer, nor a consistent, effectively communicated method by which to request them.

Our data continues to be released regardless.

What I want to understand on opt out:

1. Can I choose to have my data used for only care, or for bona fide public health research, but not, for example, other types, such as commercial pharma marketing or data intermediaries?

2. Can I restrict the use of all my children's data, including fully 'anonymous' data as the Secretary of State stated: not only restricting red and amber, but all data sharing?

3. How will patients know that all of their medical data is covered by these options, not only our GP records? (For other data held see > http://www.hscic.gov.uk/datasets)

4. Will NHS staff be given the right to opt out to prevent their personal confidential data or employment data being shared as part of the workforce data set?

5. Does opt out really mean opt out – when will we see the revised definition?

6. How will objection management (storing our opt out decision) be implemented with other data sharing? (SCR, Electronic Prescription Service, OOH access, Proactive care at local level.)

7. How will objection be effectively communicated and measured?

8. Will the BMA vote [3] be ignored by the Patients & Information Directorate at NHS England? The BMA called for an opt-in system, and for data to be used only for improving care, not commercial exploitation. They appreciate the risks of losing patient confidentiality and trust.

9. Will the views of Dr. Mike Bewick, deputy medical director at NHS England, also be ignored, who said parts (referring to commercial use) should be ‘opt-in’ only? [Pulse, June 2014]

10. What will ensure opt out remains more than just Mr.Hunt’s word, if it has no legislative backing?

The opt out on offer at Christmas was to restrict identifiable data sharing; there was "no need" to take into account individual objection to pseudonymous data sharing, said the September 13th NHS England directions. Those NHS England Board directions from September and December 2013 are now possibly out of date, but I'd like to see the new ones which replaced them, to reassure me that the opt out we are offered works the way I would expect.

Most importantly for me: will the opt out be given more legislative weight (Q.10)? Today I have only the Secretary of State's word that any "objection will be respected." And as we all know, post holders come and go; a spoken agreement made by one person may not be respected by another.

**********

ACCESS

Many of the concerns about which organisations will have access to our medical records, somewhat dismissed on Newsnight then, have since been shown to be legitimate:

“Access by police, sold to insurance companies, sold for commercial purposes” Newsnight, February 19th 2014
… all shown to be users of existing medical records held by the HSCIC through the Partridge Review.

Which other concerns over access were raised and have they been addressed?

Dr. Sarah Wollaston MP, then a member, now Chair, of the Health Select Committee, raised the concerns of many when she asked whether other government departments may share care.data. Specifically, she asked Mr. Kelsey:

“are you going to have a clear concrete offer to the public at the end of the six-month delay as to how these requests will be handled […] see if their data is going to be accessed by DWP […]?”

[Image: Dr. Wollaston questioning Mr. Kelsey on DWP access]

I believe this is still a very valid and open question, particularly with reference to the December 2013 Admin Data Taskforce work exploring a 'proof of concept' to link DWP [6] and Department of Health data:

“Primary and Secondary Care interventions with DWP over a six year period.”

[Images: DWP data strategy extract; Health and Social Care Act extract]

At the Health Select Committee evidence session, Mr. Kelsey and Mr. Jones did not give a straight yes/no answer to the question.

Personally, I believe it is clearly possible that a DWP administering social care or welfare payments will make a case under 'health and social care'. Unless I see it in legislation that the DWP will not have access to care.data or other HSCIC-held data, I will personally assume that it will, and may already have, especially given the 'primary and secondary linking' pilot listed above.

What about other government departments' access to health data?

A group met on 14 July 2014 at the Wellcome Trust, London, for the strategic meeting 'Sharing Government Administrative Data: new research opportunities' [4], at which both care.data and DWP data had their own agenda slots.

The DWP holds other departments' data and is "open to acting as a hub." July 2014 [7]

The Cabinet Office presenter suggested UK legislation [9] may change to enable all departments (excluding the NHS) to share data, and the ADT recommended that new 'data sharing' legislation be put forward in the next [Parliamentary] term.

1. Since HSCIC is an ALB and not NHS, are they included in this plan to broaden sharing across government departments?

2. Will the care.data addendum of September 2013 be amended to show the public that those listed then, are no longer considered appropriate users?

3. Will Mr. Kelsey now be able to answer Dr. Wollaston MP's question regarding the DWP with a yes or a no?

Think tanks, intermediaries, and use 'for the purposes of actuarial refinement' were included in documents at the time, which suggested that in future the DAAG alone would review applications.

The DAAG is still called the DAAG, and appears to have grown from 4 to 6 members. The Data Access Advisory Group, hosted by the Health and Social Care Information Centre (HSCIC), considers applications for sensitive data made to the HSCIC's Data Access Request Service.

Three key issues remain unclear to me on recent Data Release governance at DAAG:

1. Free text access and 2. Commercial use 3. Third Party use

The July 2014 DAAG approved free text release of data for CSUs on a conditional, cleansed basis, and for Civil Eyes with a caveat letter saying it shouldn't be used for any 'additional commercial use.' It either is or isn't commercial use. I think this fudges the edges of purpose and commercial use, and it is precisely why the lack of defined scope of use undermines trust that data will be used only for proper purposes, within the definition of the Care Act.

Free text is a concern raised on a number of occasions in Parliament and the Health Select Committee. The HSCIC website says none will be collected in future for care.data. But how can it now be approved for release, unless it has already been collected in the past, in HES? It would appear free text has already been extracted and is being released. How are we to trust the same will not happen with care.data?

****

In summary: after the six month pause, it remains unclear what exactly is in scope, and to whom it will be released. We are still not entirely clear who will have access to what data, and why.

In part two I’ll look in brief at what legislative changes, both in the UK and wider EU may influence care.data and wider health data sharing.  Plus some status updates on Research seeking approval, Changes to Oversight & Governance and Communications.

That commercial use, the exploitation of knowledge of our vulnerability or illness in commercial data mining, is still the largest open question, and the largest barrier to public support I foresee. 'Will the Care Act really help us with that?' I ask in my next post.

MedConfidential have released their technical recommendations on safe settings access to data. Their analogy struck me again, as to how important it is that the use of data is seen by the users, as a collective.

Any pollution in the collective pool, will contaminate the data flow for all.

I believe the HSCIC, the NHS England Patients & Information Directorate and the Department of Health need to accept that continued access to patient data by commercial data intermediaries is going to do exactly that. Either those users, some of whom are young and inexperienced commercial companies, need to be excluded, or they should be permitted only very stringent uses of data, without commercial re-use licenses.

The commercial intermediaries still need to be told, don’t pee in the pool. It spoils it, for everyone else.

I’ll leave you with a thought on that, from Martin Collignon, Industry Analyst at Google.

**********

For part two, follow link >>here>>  I share my thoughts on current status of the HOW of Governance & Oversight, Legislation, and the WHY – addressing Communication of the care.data programme as a whole.  And WHEN will any of this happen?

Key refs:

[1] Second delay to care.data rollout announced, The Guardian, February 18th 2014: http://www.theguardian.com/society/2014/feb/18/nhs-delays-sharing-medical-records-care-data

[2] NHS England directions to HSCIC September 13th 2013: http://www.england.nhs.uk/wp-content/uploads/2013/09/item_5.pdf

[3] BMA vote for opt In system: http://www.bmj.com/content/348/bmj.g4284

[4] July 14th at Wellcome Trust event ‘Sharing Government Administrative Data: new research opportunities’

[5] EU Data Legislation http://www.esrc.ac.uk/_images/presentation%208_Beth%20Thompson%20Wellcome%20Trust_tcm8-31281.pdf

[6] DWP data linkage proof of concept trial 6 year period of primary and secondary data, December 2013

[7] Developments in Access to DWP data 2014

[8] NHS data sharing – Dr.Lewis care.data July 2014 presentation

[9] Possible UK Legislation http://www.esrc.ac.uk/_images/Presentation_7_Rufus_Rottenberg_tcm8-31280.pdf

[10] Progress of the changes to be made at HSCIC recommendations of the Partridge Review https://medconfidential.org/wp-content/uploads/hscic/20140903-board/HSCIC140604di_Progress_on_Partridge_review.pdf

[11] Scope list p22 onwards: http://www.england.nhs.uk/wp-content/uploads/2013/08/cd-ces-tech-spec.pdf

[12] Health and Social Care Transparency Panel April 2013 minutes https://www.gov.uk/government/uploads/system/uploads/attachment_data/file/259828/HSCTP_13-1-mins_23_Apr_13__NewTemp_.pdf

Care.data – Getting the ducks in a row

Good Friday has different meanings and traditions across cultures. For some, the most sombre day of the church calendar. For others, another Bank Holiday and the start of a long weekend in spring. For Mr.Cameron this year, getting stung by a jellyfish abroad.

For me, visiting family in a small Nordic village, it’s the day of the annual duck race fundraiser.

2,000 numbered plastic ducks are thrown into fast moving water high upstream, and the public waits and watches anxiously as the toys approach the central village bridge and race beyond. The first to hit the finish-line net at the weir, after an arduous course, is the winner.

There are lots of obstacles along the route and some ducks get stuck. Children are allowed to pick up those off-track in side eddies and hurl them back into the main channel. As a parent, you inevitably lose your child at some point in the crowd, fret they may have joined the ducks for a swim, and the whole race always takes longer than you expect.

So it feels to me, as a citizen and patient, with the current progress of care.data.

There was a misjudged start. There are lots of obstacles still to overcome. It looks like the finish line is getting clearer. And some believe it might take longer than first thought.

Whilst on holiday I’ve taken time to read over the recent letter to colleagues from Tim Kelsey & NHS England. It’s addressed to colleagues, which I’m not, so perhaps it feels a little like looking over someone’s shoulder on the train, but hey, it’s the only update we’ve got.

Looks like some positive acknowledgements and steps are in progress:

  • We will work with stakeholders to produce support materials, such as an optional template letter for patients and ways of making opting-out more straightforward
  • We need to do more to ensure that patients and the public have a clear understanding of the care.data programme
  • This work is continuing and we will update you on these changes separately 
  • We want to hear your views and suggestions so we can take action to improve and build confidence in the care.data programme. We will also be engaging with patient groups, GPs and other stakeholders through local and regional engagement events

Notably, it’s the first time NHS England has said opt out. In the past it has only ever been an objection. As a linguist, language is important to me. And the two are not synonymous no matter how often I may be told by NHS England that they are to be used interchangeably.

It’s the first time there really feels like more give, and less we’ll take without asking you first.

And it’s the first mention towards offering local and regional engagement.

There are some new hints which need explanation, such as a change in who may use the data – it has always been described as for secondary uses, so clinicians and patients using it is new:

“Care.data is an initiative to ensure more joined-up data is made available to clinicians, commissioners, researchers, charities and patients.”
And there are some ideas which are making progress, but seem a little stuck.
“In addition, steps have already been taken in making changes to the law”…

Whilst changes have been put into the Care Bill, other rather sensible ones, such as legal penalties for data misuse, were rejected. And the purposes are still so loose as to make it possible to give data to a wide range of ‘health purposed’ clients. That was the day on which it appeared fewer than 50 MPs were in the chamber to hear the Care Bill debate, yet nearly 500 came in to vote. (How they can reasonably and effectively vote on something when they did not hear the debate, I don’t understand.) These are legal changes which I believe need to be hurled back to Parliament to get them on track again.

Experts much wiser than me have proposed comprehensive amendments, which seem, from my lay understanding, both really positive and practical.

The “optional template letter for patients” may be something GP practices could consider using to contact individuals where they know that leaflets were not delivered. Even Dame Fiona Caldicott did not receive hers. (BBC PM listen from 33:30)

If it is known centrally where leaflets did not reach patients, it would be helpful for GP practices to be able to evaluate whether there is an additional need to contact their patients. For example, in my area, no one I have spoken to received a leaflet.

Perhaps that might seem trivial now, and in the past, but for trust in the scheme I believe it is really important to know why that was. Since no opt out was originally planned, I want to know that the intention was truly to tell us all. Did they print enough? Distribute enough? Follow up at all? I’ve asked to find out. After all, it was our state money that paid for it. A previous Freedom of Information request from Phil Booth of MedConfidential, on the status of its distribution with Royal Mail, appears to contradict ministerial mutterings that an exception was invoked. I know that I had not opted out of junk mail, yet I still didn’t get one. I knew to look out for it and inspected my pizza flyers and dog-walking leaflets in every post in January. No leaflet, and all of my friends were the same.

If the experts such as Dame Fiona, the GPES advisory group which in September had:

“major concerns about the process for making most patients aware of the contents of the leaflets before data extraction for care.data commenced”

and the ICO felt the leaflet went out with the wrong content and was rushed, then I want to know why, so that the same people are not making the same decisions and costing us time and trust again. Why it went ahead against every expert’s better advice is important to understand. “Regrettable that you are not now able to take any of our comments into account” was the ICO’s comment, and the sentiment seems echoed by Dame Fiona on today’s radio broadcast.

Even a lay person like me, could see it was a disaster about to happen.

My suggestion was that role-based patient communication would be much more understandable. Take some stereotypical sample citizens, map their ‘day-in-the-life’ using HSCIC data systems, show how these interactions send data to HSCIC, and map what data is extracted and where it goes, is stored, and may be viewed and distributed, by whom. There are an awful lot of individual scenarios, so no model may match any real patient experience, but looking at it backwards, one could take all the HSCIC systems and extract a situation which would send the data up: A&E, school nurse, Electronic Prescription Service, Choose & Book, GP screening, mental health call centre. It would be possible.
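The mapping I have in mind could be sketched very simply. The touchpoints and flow descriptions below are my own illustrative assumptions, not HSCIC’s actual schema – the point is only that each NHS interaction in a sample citizen’s day could be mapped to the extraction it triggers:

```python
# A sketch of the 'day-in-the-life' mapping suggested above.
# The touchpoint names and flow descriptions are illustrative
# assumptions, not a real HSCIC data dictionary.
DATA_FLOWS = {
    "A&E visit":                "HES A&E record -> HSCIC",
    "GP prescription":          "Electronic Prescription Service data -> HSCIC",
    "Hospital referral":        "Choose & Book booking data -> HSCIC",
    "School nurse appointment": "Community services data (CIDS) -> HSCIC",
}

def day_in_the_life(touchpoints):
    """Show a sample citizen which extractions their day generates."""
    return [f"{t}: {DATA_FLOWS[t]}" for t in touchpoints if t in DATA_FLOWS]

for line in day_in_the_life(["GP prescription", "A&E visit"]):
    print(line)
```

A visual built on such a mapping would let each person trace their own likely interactions, rather than wade through generic leaflet text.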

People should know what data is extracted, when, why and who will use it. Visuals are better than words. The leaflet failed in the case of care.data, but would an individual letter have achieved more, in just a few sentences?

More has been achieved to raise our awareness of the Health and Social Care Information Centre and Government uses of our health data, through all the hoo-ha in the press, and the re-tweet by David Nicholson of the care.data downfall parody, than by the original leaflet. Perhaps the leaflet’s measure of success was not intended to be a 100% reach at all. I hope we’ll understand more soon.

(** For an updated thought, 19th April, see note below.) Should we presume an ‘optional template’ means that no paid letter will be provided by NHS England to all? GP practices may decide to use the ‘optional’ template to send out letters now. Professor Mathers had called for one. But I wonder if GPs themselves will be expected to bear the cost of an imposed central initiative in which they have no choice whether to participate, and yet for which, as the legally liable Data Controllers, they face the complaints? If no funding is offered and GP practices decide not to send letters out, it would seem a risk trade-off: the risk of a patient complaining, or indeed taking legal action, if they did not know their data was going to be extracted and potential harm ensued. Fair processing should be a Data Protection Act requirement. But is it, for care.data?

This week also saw the number of patients published by GP practice, helpfully with postcode. So if my practice were to post a letter to every patient in my area, at 53p second class, it would cost around four thousand pounds. I don’t know if they get any bulk discounts, and one letter per household might reduce numbers. But that’s a lot of money – though perhaps (**) it may be covered centrally after all, even if the letter does not indicate that? (I now also know how few over-90-year-old men are registered, if interested.)
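The back-of-envelope sum is simple enough to sketch. The practice list size of 7,500 here is my own illustrative assumption for a list that lands near the four thousand pound figure; only the 53p second-class rate comes from the text:

```python
# Back-of-envelope estimate of a per-patient mailing for one GP practice.
# The list size is an illustrative assumption; 53p is the second-class
# rate mentioned in the post.
SECOND_CLASS_STAMP = 0.53  # GBP

def mailing_cost(patients, stamp=SECOND_CLASS_STAMP, per_household=1.0):
    """Estimated postage bill; per_household < 1 models one letter
    covering several registered patients at the same address."""
    return patients * per_household * stamp

# A list of ~7,500 patients gives roughly the four thousand pounds above:
print(round(mailing_cost(7500)))  # 3975
# One letter per household (say 0.6 letters per patient) cuts it sharply:
print(round(mailing_cost(7500, per_household=0.6)))  # 2385
```

Multiplied across roughly 8,000 practices in England, the question of who funds the mailing is clearly not trivial.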

It seems like there is much positive going on in the undercurrents of the care.data developments, which the general public cannot see, such as the care.data advisory group work-in-progress.

There would seem much which needs work in a very short space of time for relaunch in autumn. But if Dame Fiona Caldicott, Chair of the panel set up to advise NHS and Ministers on the use and governance of patient information, said she thinks we need longer, then I am sure she is right. To take as long as is needed to get it right would seem sensible. To rush and fail a second time, would be irretrievable. Surely, her advice would not be ignored again?

The HSCIC this week also released the Framework Agreement between the Department of Health and HSCIC. 

It will be interesting to see if this affects and changes the HSCIC roadmap. In my opinion, it should. The care.data addendum to widen commercial uses was pushed back but is still to resurface. There is still no clarity around commercial re-use licenses. These commercial drivers should come out if Mr.Hunt’s rock-solid assurance is to be believed, which “puts beyond any doubt that the HSCIC cannot release identifiable, or potentially identifiable, patient data for commercial insurance or other purely commercial purposes.”

At the moment I would hope the HSCIC roadmap would change in its commercial focus:

“especially in relation to the potential sale of data”. 

“Help stimulate the market through dynamic relationships with commercial organisations, especially those who expect to use its data and outputs to design new information-based services.”

It remains to be seen if it does.

That framework is a good read with a hot coffee (and a short snaps, if you are where I am). What’s missing for me is any reassurance at all that the HSCIC will remain public. There is a large chapter on what process would need to be followed if it were to change structure or be merged. It therefore does not rule out a private owner, in future, of the single central repository for our health, social care and research data, and recipient of integrated ONS data.

“Any change to its core functions or duties, including mergers, significant restructuring or abolition would therefore require further primary legislation. If this were to happen, the Department would then be responsible for putting in place arrangements to ensure a smooth and orderly transition, with the protection of patients being paramount.”

It would appear to me, that a future intent to privatise the ownership of care.data and more could remain open. Certain aspects of the day-to-day functions were potentially to be outsourced in a past ISCG roadmap. I would hope the core will remain firmly State owned.

Bizarrely, duck races are not treated equally across the globe. Wisconsin recently repealed their ban. It seems almost as bizarre as the idea of selling our taxpayer financial and VAT data. Or our school pupils’ personal details. I wish I could say one of these stories was not true.

What the duck is going on with the Government’s attitude to our personal data? The Cabinet Office seems to be failing to give out legally required Freedom of Information responses, yet happily selling the knowledge of our health, our wealth and our children?

“These regulations also allow the department to disclose individual pupil information, subject to the Data Protection Act 1998, to named bodies and persons who, for the purpose of promoting the education or well-being of children in England are conducting research or analysis; producing statistics; or providing information, advice or guidance. The department may decide to share pupil and children’s information with third parties on a case by case basis where it is satisfied that to do so would be in accordance with the law and the Data Protection Act, and where it considers that such disclosure would promote the education or well-being of children.”

So if McDonalds wants to run a healthy eating campaign, would they qualify?

Open Data does not equate (must read) with being open with all of our data. Tables and summaries of statistics at an aggregated level are nothing to do with individual-level data. Before any Government body considers whether it should enable private and other organisations to use data more freely and effectively, and its stance on charging and profit from the use of data, it should think twice.

Remember the daft Deregulation Bill 162? It revokes the need to sell pre-packed knitting yarn by net weight, and other nonsense. Perhaps it is the ‘Exercise of regulatory functions’ which is the root cause of many of these issues on the monetisation of our data:

Clause 63 provides a power for a Minister of the Crown to issue guidance on: how regulatory functions can be exercised so as to promote economic growth;

Sections 60-67 of the Deregulation Act currently passing through Parliament allow the removal of any regulation that conflicts with the interests of a profit-maker. If your body manages data, there’s really only going to be one way to meet the obligations of Bill 162. Sell it.

Someone needs to tell all the departments, if you have any chance at all of getting care.data through to the finish line, stop giving away or selling any of our personal data which we trusted you with for an entirely different original purpose.

Whilst there are many people working on many manoeuvres to get all the ducks ready to relaunch for care.data, the Government has to pay attention to the whole race. If we lose faith in the Government to make wise decisions on what will be done with all data we share for a given purpose and find later it is given to others without our knowledge, we won’t trust it with our health data. If the data warehouse may one day be sold off, then all the gameplanning and rules in between will appear to have been pointless.

This is not a race to the finish with the least bad option. Care.data needs to be exemplary if it is to have any chance of reaching the podium as the world leader in patient data-sharing management. It’s got one second chance to get a relaunch.

Without public trust it will flounder. Without GP-to-patient communications thoroughly thought out and funded, it is destined for a rough ride. Without further legislative changes, it does not go far enough to be convincing of real commitment to change. Without these three, it will not reach the finish line.

The best summary of why we still need much work, and how to respect so many of these concerns under good governance, came out this week from the Chair of CAG: “However, we cannot expect to have all of the answers in six months’ time. The commitment must be an ongoing one to continue to consult with people, to continue to work to optimally protect both privacy and the public interest in the uses of health data.”

So between Dr. Taylor and Dame Caldicott, the wise seem to indicate more than six months is needed.

There are encouraging signs, but many issues don’t yet seem to be addressed at all in the recent NHS England letter or the Framework Agreement. Above all, in common with the tax data sharing, pseudonymous is not equal to anonymous. It is not only what HSCIC currently determines as identifiable which needs vitally improved governance to protect.

In any upcoming public communications, I pray, don’t patronise the public by saying that ‘name and address will not be extracted’, as the last FAQs and poster did. Explain instead what the Personal Demographics Service stores already; educate us how the PDS and linkage work, and why. Details like this must not get lost in any rushed relaunch.

And other departments’ decisions must not put it in jeopardy.

Whilst care.data is getting its ducks in a row, the wider Government approach to data management seems to have gone, I can’t help but say, absolutely quackers.

——-

** 19th April Update: A comment via Twitter says that if GPs are provided with patient letters, they only have to address them to their patient list. Will this happen in this case? Good news for informed communications? Let’s hope so.

What is Care.data? Defined scope is vital for trust.

It seems impossible, to date, to get an official simple line drawn around ‘what is care.data’. Therefore scope creep is inevitable and fair processing almost impossible. There is much misunderstanding, seeing it as exclusively this one-time GP load to merge with HES. Or even confusion with the Summary Care Record and its overlap: whether it will be used in read-only environments such as Proactive Care and Out-of-Hours, or by 111 and A&E services. The best unofficial summary is here, from a Hampshire GP, Dr. Bhatia.

Care.data is an umbrella initiative, which is planned over many years.

Care.data seems to be a vision. An ethereal concept of how all Secondary Uses (ref.p28) health and social care data will be extracted and made available to share in the cloud for all manner of customers. A global standard allowing extract, query and reporting for top-down control by the men behind the curtains, with intangible benefits for England’s inhabitants whose data it is. Each data set puts another brick in the path towards a perfect, all-knowing care.data dream. And the data sets continue to be added to, with plans made for evermore future flows. (Community services make up 10 per cent of the NHS budget, and the standards that will mandate the national submission of the revised CIDS data are now not due until 2015.)

Whilst offering insight opportunities for top-down cost control, planning and ‘quality’ measures, right down to the low-level basics of invoice validation, it will not offer clinicians on the ground access to use data between hospitals for direct care. HES data is too clunky, or too detailed with the wrong kinds of data, or too incomplete and inaccurate, to benefit patients in the care of their individual consultants. Prof Jonathan Kay, at the Westminster Health Forum on 1st April, told hospitals to do their own thing and go away and make local hospital IT systems work – totally at odds with the mantra of ‘interoperability’ from Beverley Bryant of NHS England earlier the same day. An audience question asked: how can we ensure patients can transfer successfully between hospitals without a set of standards? It is impossible to see good value for patients here.

Without a controlled scope I do not wish to release my children’s personal data for research purposes. But at the moment we have no choice. Our data is used in pseudonymous format and we have no known, publicly communicated way to restrict that use. The patient leaflet, “better data means better care”, certainly gives no indication that pseudonymous data sharing is obligatory, nor states clearly that only the identifiable data would be restricted if one objected.

Data extracted now offers no possibility to time-limit its use. I hope my children will have a long and happy lifetime, and can choose for themselves if they are ‘a willing research patient’, as David Cameron stated in 2010 he would change the NHS Constitution for. We just don’t know to what purposes their data will be put in their lifetime.

Data taken under an assumed opt-in should surely be reasonably expected to be used only for our care and nothing else, unless there is a proven patient need and benefit otherwise? All other secondary uses cannot be assumed without any sort of fair processing, yet they already are.

The general public can now see for the first time, the scope of how the HSCIC quango and its predecessors have been giving away our hospital records at arms-length, with commercial re-use licenses.

The scope of sharing and its security clearly depend on whether data is fully identifiable (red), truly anonymous and aggregated (green, Open Data) or so-called amber. This pseudonymous data is re-identifiable if you know what you’re doing, according to anyone who knows about these things, and it is easy when paired with other data. It’s illegal? Well, so was phone hacking, and we know that didn’t happen either, of course. Knowledge once leaked is lost. The bigger the data, the bigger the possible loss, as Target will testify. So for those who fear it falling into the wrong hands, it’s a risk which we just have to trust is well secured. This scope of what can be legitimately shared, and for what purposes, must be reined in.
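The linkage risk is easy to demonstrate in miniature. The toy records and field names below are entirely my own invention, not any real dataset or HSCIC schema; they show only the mechanism: a pseudonym hides the name, but quasi-identifiers like postcode and year of birth can still be joined against another source that does carry names.

```python
# Toy illustration of re-identification by linkage. All records and field
# names here are invented for the example.
pseudonymous_records = [
    {"pseudo_id": "a91f", "postcode": "SW1A 1AA", "birth_year": 1972,
     "diagnosis": "asthma"},
    {"pseudo_id": "c03b", "postcode": "GU14 6XX", "birth_year": 1985,
     "diagnosis": "diabetes"},
]

# An auxiliary dataset carrying names, e.g. a marketing list or electoral roll.
auxiliary = [
    {"name": "J. Smith", "postcode": "GU14 6XX", "birth_year": 1985},
]

def reidentify(pseudo, aux):
    """Join on the quasi-identifiers; any unique match re-identifies."""
    matches = []
    for p in pseudo:
        hits = [a for a in aux
                if (a["postcode"], a["birth_year"])
                   == (p["postcode"], p["birth_year"])]
        if len(hits) == 1:  # a unique match links name to 'anonymous' record
            matches.append((hits[0]["name"], p["diagnosis"]))
    return matches

print(reidentify(pseudonymous_records, auxiliary))
# [('J. Smith', 'diabetes')]
```

A unique postcode and birth-year pair is enough to link the ‘anonymous’ diagnosis straight back to a name, which is exactly why amber data is not green.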

Otherwise, how can we possibly consent to something which may be entirely different purposes down the line?

If we need different data for real uses of commissioning, various aspects of research and the commercial ‘health purposes,’ why then are they conflated in the one cauldron? The Caldicott 2 review questioned many of these uses of identifiable data, notably for invoice validation and risk stratification.

Parents should be able to support research without that meaning our kids’ health data is given freely for every kind of research, for eternity, and to commercial intermediaries or other government departments. Whilst I have no qualms about Public Health research, I do about pushing today’s boundaries of predictive medicine. Our NHS belongs to us all, free-at-the-point-of-service for all, not as some sort of patient-care trade deal.

Where is the clear definition of scope and purposes for either the existing HES data or future care.data? Data extractions demand fair processing.

Data is not just a set of statistics. It is the knowledge of our bodies, minds and lifestyle choices. Sometimes it will provide knowledge to others, we don’t even yet have ourselves.

Who am I to assume today a choice which determines that my children have none forevermore? Why does the Government make that choice on our behalf, and why had it originally decided not to even tell us at all? It is very uncomfortable feeling like it is Mother vs Big Brother on this, but that is how it feels. You have taken my children’s hospital health records and are using them without my permission for purposes I cannot control. That is not fair processing. It was not in the past and it continues not to be now. You want to do the same with their GP records, and planned not to ask us. And you still have not explained why many had no communications leaflet. Where is my trust now?

We need to be very careful to ensure that all the right steps are put in place to safeguard patient data for the vital uses which need it: public health, ethical and approved research purposes, and the planning and delivery of care. NHS England must surely step up publicly soon and explain what is going on – and, ideally, commit to taking as long as necessary to get all the right steps in the right order. Autumn is awfully close, if nothing has yet changed.

The longer trust is eroded, the greater the chance of long-term damage to data quality and to its use by those who genuinely need it. But it would be fatal to rush and fail again.

If we set the right framework now, we should build a method ensuring that all future changes to scope come with communication and fair processing.

We need to be told transparently, to what purposes our data is being used today, so we can trust those who want to use it tomorrow. Each time purposes change, the right to revoke consent should change. And not just going forward, but from all records use. Historic and future.

How have we got here? Secondary Uses (SUS) is the big data cloud of which Hospital Episode Statistics (HES) is a subset. HES was originally extracted and managed as an admin tool. From the early days of the Open Exeter system, GP patient data was used for our clinical care and its management. When did that change? Scope seems not so much to have crept as skipped along a path to it being OK to share the data – linked on demand even with Personal Demographics or QOF data – with pharma, all manner of research institutions and third-party commercial intermediaries, but no one thought to tell the public. Oops, says the ICO.

Without scope definition, there can be no fair processing. We don’t know who will access which data for what purposes. Future trust can only be built if we know what we have been signed up to, stays what we were signed up to, across all purposes, across all classes of data. Scope creep must be addressed for all patient data handling and will be vital if we are to trust care.data extraction.

***