Tag Archives: data sharing

The power of imagination in public policy

“A new, a vast, and a powerful language is developed for the future use of analysis, in which to wield its truths so that these may become of more speedy and accurate practical application for the purposes of mankind than the means hitherto in our possession have rendered possible.” [on Ada Lovelace, The First Tech Visionary, New Yorker, 2013]

What would Ada Lovelace have argued for in today’s AI debates? I think she may have used her voice not only to call for the good use of data analysis, but also to draw on her second strength: the power of her imagination.

James Ball recently wrote in The European [1]:

“It is becoming increasingly clear that the modern political war isn’t one against poverty, or against crime, or drugs, or even the tech giants – our modern political era is dominated by a war against reality.”

My overriding takeaway from three days spent at the Conservative Party Conference this week was similar. It reaffirmed the title of a school debate I lost at age 15: ‘We only believe what we want to believe.’

James writes that it is, “easy to deny something that’s a few years in the future“, and that Conservatives, “especially pro-Brexit Conservatives – are sticking to that tried-and-tested formula: denying the facts, telling a story of the world as you’d like it to be, and waiting for the votes and applause to roll in.”

These positions are not confined to one party’s politics, or speeches of future hopes, but define perception of current reality.

I spent a lot of time listening to MPs. To Ministers, to Councillors, and to party members. At fringe events, in coffee queues, on the exhibition floor. I had conversations pressed against corridor walls as small press-illuminated swarms of people passed by with Queen Johnson or Rees-Mogg at their centre.

In one panel I heard a primary school teacher deny that child poverty really exists, or affects learning in the classroom.

In another, in passing, a digital Minister suggested that Pupil Referral Units (PRU) are where most of society’s ills start, but as a Birmingham head wrote this week, “They’ll blame the housing crisis on PRUs soon!” and “for the record, there aren’t gang recruiters outside our gates.”

This is no tirade on the failings of public policymakers, however. While it is easy to suspect malicious intent when you are at, or feel, the sharp end of policies which do harm, success is subjective.

It is clear that an overwhelming sense of self-belief exists in those responsible, in the intent of any given policy to do good.

Where policies include technology, this is underpinned by a self-reaffirming belief in its power. Power waiting to be harnessed by government and the public sector. Even more appealing where it is sold as a cost-saving tool in cash-strapped councils. Many that have cut away human staff are now trying to use machine power to make decisions. Some of the unintended consequences of taking humans out of the process are catastrophic for human rights.

Sweeping human assumptions behind such thinking on social issues and their causes are becoming hard-coded into algorithmic solutions that aim to identify young people in danger of becoming involved in crime using “risk factors” such as truancy, school exclusion, domestic violence and gang membership.
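To make concrete what “hard coded” means here, a minimal, hypothetical sketch in Python of how such a risk-scoring tool is typically built. The factor names, weights and threshold are invented for illustration and are not taken from any real system; every number is a human assumption about cause and effect written down as code.

```python
# Purely illustrative: factors, weights and threshold are hypothetical.
RISK_WEIGHTS = {
    "truancy": 2.0,
    "school_exclusion": 3.0,
    "domestic_violence_at_home": 2.5,
    "suspected_gang_association": 4.0,
}

def risk_score(child_record: dict) -> float:
    """Sum the weights of whichever flags are set in a child's record."""
    return sum(
        weight
        for factor, weight in RISK_WEIGHTS.items()
        if child_record.get(factor, False)
    )

def flag_for_intervention(child_record: dict, threshold: float = 5.0) -> bool:
    # The threshold is another hard-coded assumption; move it and a
    # different set of children is singled out.
    return risk_score(child_record) >= threshold

# One wrong or missing administrative flag changes the outcome entirely.
child = {"truancy": True, "school_exclusion": True}
print(risk_score(child), flag_for_intervention(child))  # 5.0 True
```

The point of the sketch is not the arithmetic but the inputs: the weights are judgements, and the flags come from exactly the kind of poor quality administrative data discussed below.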

The disconnect between perception of risk, the reality of risk, and real harm, whether perceived or felt from these applied policies in real life, is not so much ‘easy to deny something that’s a few years in the future’, as Ball writes, but a denial of the reality now.

Concerningly, there is a lack of imagination about what real harms look like. There is no discussion of the cases where these predictive policies have no positive effect, or even a negative one, and make things worse.

I’m deeply concerned that there is an unwillingness to recognise any failures in current data processing in the public sector, particularly at scale, and where it regards the well-known poor quality of administrative data, or to be accountable for those failures.

Harms, existing harms to individuals, are perceived as outliers. Any broad sweep of harms across a policy like Universal Credit seems to be perceived as political criticism, which makes the measurable failures less meaningful, less real, and less necessary to change.

There is instead a worrying, growing trend of finger-pointing exclusively at others’ tech failures. In particular, at social media companies.

Imagination and mistaken ideas are reinforced where the idea is plausible, and shared. One oft-heard, self-affirming belief was repeated in many fora between policymakers, media and NGOs regarding children’s online safety: “There is no regulation online”. In fact, much that applies offline applies online. The Crown Prosecution Service Social Media Guidelines are a good place to start. [2] But no one discusses where children’s lives may be put at risk, or made less safe, through the use of state information about them.

Policymakers want data to give us certainty. But many uses of big data and new tools appear to do little more than quantify moral fears, and yet still guide real-life interventions in real lives.

Child abuse prediction, and school exclusion interventions should not be test-beds for technology the public cannot scrutinise or understand.

In one trial attempting to predict exclusion, a UK research project running from 2013 to 2016 linked the school records of 800 children in 40 London schools with Metropolitan Police arrest records for all the participants. It found the interventions created no benefit, and may have caused harm. [3]

“Anecdotal evidence from the EiE-L core workers indicated that in some instances schools informed students that they were enrolled on the intervention because they were the “worst kids”.”

“Keeping students in education, by providing them with an inclusive school environment, which would facilitate school bonds in the context of supportive student–teacher relationships, should be seen as a key goal for educators and policy makers in this area,” researchers suggested.

But policy makers seem intent on using systems that tick boxes and create triggers to single people out, with quantifiable impact.

Some of these systems are known to be poor, or harmful.

When it comes to predicting and preventing child abuse, there is concern about the harms seen in US programmes that are ahead of us, such as those in Pittsburgh, and in Chicago, which has scrapped its programme.

The Illinois Department of Children and Family Services ended a high-profile program that used computer data mining to identify children at risk of serious injury or death, after the agency’s top official called the technology unreliable, and children still died.

“We are not doing the predictive analytics because it didn’t seem to be predicting much,” DCFS Director Beverly “B.J.” Walker told the Tribune.

Many professionals in the UK share these concerns. How long will they be ignored and children be guinea pigs without transparent error rates, or recognition of the potential harmful effects?

Helen Margetts, Director of the Oxford Internet Institute and Programme Director for Public Policy at the Alan Turing Institute, suggested at the IGF event this week that stopping the use of these AI tools in the public sector is impossible. We could not decide that “we’re not doing this until we’ve decided how it’s going to be. It can’t work like that.” [45:30]

Why on earth not? At least for these high risk projects.

How long should children be the test subjects of machine learning tools at scale, without transparent error rates, audit, or scrutiny of their systems and understanding of unintended consequences?

Is harm to any child a price you’re willing to pay to keep using these systems to perhaps identify others, while we don’t know whether they work?

Is there an acceptable positive versus negative outcome rate?

The evidence so far of AI in child abuse prediction is not clearly showing that more children are helped than harmed.

Surely it’s time to stop thinking, and demand action on this.

It doesn’t take much imagination to see the harms. Safe technology, and safe use of data, does not prevent imagination or innovation employed for good.

If we continue to ignore views from Patrick Brown, Ruth Gilbert, Rachel Pearson and Gene Feder, Charmaine Fletcher, Mike Stein, Tina Shaw and John Simmonds, I want to know why.

Where you are willing to sacrifice certainty of human safety for the machine decision, I want someone to be accountable for why.

 


References

[1] James Ball, ‘Those waging war against reality are doomed to failure’, The European, October 4, 2018.

[2] “Social Media – Guidelines on prosecuting cases involving communications sent via social media”, The Crown Prosecution Service (CPS), August 2018. Thanks to Graham Smith for the link.

[3] Obsuth, I., Sutherland, A., Cope, A., et al. (2017) ‘London Education and Inclusion Project (LEIP): Results from a Cluster-Randomized Controlled Trial of an Intervention to Reduce School Exclusion and Antisocial Behavior’ (online March 2016), Journal of Youth and Adolescence, 46: 538. https://doi.org/10.1007/s10964-016-0468-4

Ethically problematic

Five years ago, researchers at the University of Manchester’s School of Social Sciences wrote, “It will no longer be possible to assume that secondary data use is ethically unproblematic.”

Five years on, other people’s use of the language of data ethics puts social science at risk. Event after event, we are witnessing the gradual dissolution of the value and meaning of ‘ethics’, into little more than a buzzword.

Companies and organisations are using the language of ‘ethical’ behaviour blended with ‘corporate responsibility’ modelled after their own values, as a way to present competitive advantage.

Ethics is becoming shorthand for, ‘we’re the good guys’. It is being subverted by personal data users’ self-interest. Not to address concerns over the effects of data processing on individuals or communities, but to justify doing it anyway.

An ethics race

There’s certainly a race on for who gets to define what data ethics will mean. We have at least three new UK institutes competing for a voice in the space. Digital Catapult has formed an AI ethics committee. Data charities abound. Even Google has developed an ethical AI strategy of its own, in the wake of its Project Maven controversy.

Lessons learned in public data policy should be clear by now. There should be no surprises how administrative data about us are used by others. We should expect fairness. Yet these basics still seem hard for some to accept.

The NHS Royal Free Hospital in 2015 was rightly criticised – because they tried “to commercialise personal confidentiality without personal consent,” as reported in Wired recently.

“The shortcomings we found were avoidable,” wrote Elizabeth Denham in 2017 when the ICO found six ways the Google DeepMind — Royal Free deal did not comply with the Data Protection Act. The price of innovation, she said, didn’t need to be the erosion of fundamental privacy rights underpinned by the law.

If the Centre for Data Ethics and Innovation is put on a statutory footing where does that leave the ICO, when their views differ?

It’s why the idea of DeepMind funding work in Ethics and Society seems incongruous to me. I wait to be proven wrong. In their own words, “technologists must take responsibility for the ethical and social impact of their work”. Breaking the law, however, is conspicuous by its absence, and the Centre must not be used by companies to generate pseudo-lawful or ethical acceptability.

Do we need new digital ethics?

Admittedly, not all laws are good laws. But if recognising and acting under the authority of the rule-of-law is now an optional extra, it will undermine the ICO, sink public trust, and destroy any hope of achieving the research ambitions of UK social science.

I am not convinced there is any such thing as digital ethics. The claimed gap in an ability to get things right in this complex area too often appears only after people have simply been caught doing something wrong. Technologists abdicate accountability saying “we’re just developers,” and sociologists say, “we’re not tech people.”

These shrugs of the shoulders by third-parties, should not be rewarded with more data access, or new contracts. Get it wrong, get out of our data.

This lack of acceptance of responsibility creates a sense of helplessness. We can’t make it work, so let’s make the technology do more. But even the most transparent algorithms will never be accountable. People can be accountable, and it must be possible to hold leaders to account for the outcomes of their decisions.

But it shouldn’t be surprising no one wants to be held to account. The consequences of some of these data uses are catastrophic.

Accountability is the number one problem to be solved right now. It includes openness of data errors, uses, outcomes, and policy. Are commercial companies, with public sector contracts, checking data are accurate and corrected from people who the data are about, before applying in predictive tools?

Unethical practice

As Tim Harford in the FT once asked about Big Data uses in general: “Who cares about causation or sampling bias, though, when there is money to be made?”

Problem area number two, whether researchers are working towards a profit model or chasing grant funding, is this:

How can data users make unbiased decisions about whether they should use the data? We have the same bodies deciding on data access that oversee its governance. Conflict of self-interest is built in by default, and the allure of new data territory is tempting.

But perhaps the UK’s key public data ethics problem is that policy is currently too often about the system goal, not about improving the experience of the people using the systems. It is not using technology as a tool, as if people mattered. Harmful policy can generate harmful data.

Secondary uses of data are intrinsically dependent on the ethics of the data’s operational purpose at collection. Damage-by-design is evident right now across a range of UK commercial and administrative systems. Metrics of policy success and associated data may be just wrong.

Some of the damage is done by collecting data for one purpose and using it operationally for another, in secret. Until these modus operandi change, no one should think that “data ethics will save us”.

Some of the most ethical research aims try to reveal these problems. But we need to also recognise not all research would be welcomed by the people the research is about, and few researchers want to talk about it. Among hundreds of already-approved university research ethics board applications I’ve read, some were desperately lacking. An organisation is no more ethical than the people who make decisions in its name. People disagree on what is morally right. People can game data input and outcomes and fail reproducibility. Markets and monopolies of power bias aims. Trying to support the next cohort of PhDs and impact for the REF, shapes priorities and values.

“Individuals turn into data, and data become regnant.” Data are often lacking in quality and completeness and given authority they do not deserve.

It is still rare to find informed discussion among the brightest and best of our leading data institutions, about the extensive everyday real world secondary data use across public authorities, including where that use may be unlawful and unethical, like buying from data brokers. Research users are pushing those boundaries for more and more without public debate. Who says what’s too far?

The only way is ethics? Where next?

The latest academic-commercial mash-ups on why we need new data ethics, in a new regulatory landscape where the established is seen as past it, are a dangerous catch-all ‘get out of jail free’ card.

Ethical barriers are out of step with some of today’s data politics. The law is being sidestepped and regulation diminished by lack of enforcement of gratuitous data grabs from the Internet of Things, and social media data are seen as a free-for-all. Data access barriers are unwanted. What is left to prevent harm?

I’m certain that we first need to take a step back if we are to move forward. Ethical values are founded on human rights that existed before data protection law: fundamental human decency, rights to privacy and to freedom from interference, common law confidentiality, tort, and professional codes of conduct on conflict of interest and confidentiality.

Data protection law emphasises data use. But too often its first principles of necessity and proportionality are ignored. Ethical practice would ask more often, should we collect the data at all?

Although GDPR requires new safeguards to ensure that technical and organisational measures are in place to control and process data, and there is a clearly defined Right to Object, I have yet to see a single event give this any thought.

Let’s not pretend secondary use of data is unproblematic, while uses are decided in secret. Calls for a new infrastructure actually seek workarounds of regulation. And human rights are dismissed.

Building a social license between data subjects and data users is unavoidable if use of data about people hopes to be ethical.

The lasting solutions are underpinned by law, and ethics. Accountability for risk and harm. Put the person first in all things.

We need more than hopes and dreams and talk of ethics.

We need realism if we are to get a future UK data strategy that enables human flourishing, with public support.

Notes of desperation or exasperation are increasingly evident in discourse on data policy, and start to sound little better than ‘we want more data at all costs’. If so, the true costs would be lasting.

Perhaps then it is unsurprising that there are calls for a new infrastructure to make it happen, in the form of Data Trusts. Some thoughts on that follow too.


Part 1. Ethically problematic

Ethics is dissolving into little more than a buzzword. Can we find solutions underpinned by law, and ethics, and put the person first?

Part 2. Can Data Trusts be trustworthy?

As long as data users ignore data subjects’ rights, Data Trusts have no social license.


Data Horizons: New Forms of Data for Social Research,

Elliot, M., Purdam, K., Mackey, E., School of Social Sciences, The University of Manchester, CCSR Report 2013-312/6/2013

The Queen’s Speech, Information Society Services and GDPR

The Queen’s Speech promised new laws to ensure that the United Kingdom retains its world-class regime protecting personal data. And the government proposes a new digital charter to make the United Kingdom the safest place to be online for children.

Improving online safety for children should mean one thing: children should be able to use online services without being used by them and the people and organisations behind them. It should mean that their rights to be heard are prioritised in decisions about them.

As Sir Tim Berners-Lee is reported as saying, there is a need to work with companies to put “a fair level of data control back in the hands of people“. He rightly points out that today terms and conditions are “all or nothing”.

There is a gap in these discussions that we fail to address when we think of consent to terms and conditions, or “handing over data”. It is that this assumes these are always, and can always be, conscious acts.

For children, the question of whether accepting Ts&Cs gives them control, and whether that control is meaningful, becomes even more moot. What are they agreeing to? Younger children cannot give free and informed consent. After all, most privacy policies standardly include phrases such as, “If we sell all or a portion of our business, we may transfer all of your information, including personal information, to the successor organization,” which means that “accepting” a privacy policy today is, in effect, a blank cheque for anything tomorrow.

The GDPR requires terms and conditions to be laid out in policies that a child can understand.

The current approach to legislation around children and the Internet is heavily weighted towards protection from seen threats. The threats we need to give more attention to, are those unseen.

“By 2024 more than 50% of home Internet traffic will be used by appliances and devices, rather than just for communication and entertainment… The IoT raises huge questions on privacy and security, that have to be addressed by government, corporations and consumers.” (WEF, 2017)

Our lives as measured in our behaviours and opinions, purchases and likes, are connected by trillions of sensors. My parents may have described using the Internet as going online. Today’s online world no longer means our time is spent ‘on the computer’, but being online, all day every day. Instead of going to a desk and booting up through a long phone cable, we have wireless computers in our pockets and in our homes, with functionality built-in to enable us to do other things; make a phonecall, make toast, and play. In a smart city surrounded by sensors under pavements, in buildings, cameras and tracking everywhere we go, we are living ever more inside an overarching network of cloud computers that store our data. And from all that data decisions are made, which adverts to show us, on which network sites, what we get offered and do not, and our behaviours and our conscious decision-making may be nudged quite invisibly.

Data about us, whether uniquely identifiable or not, are all too often collected passively: an IP address, linked sign-ins that extract friends lists. And some providers decide that we either accept this or don’t use the thing at all. It’s part of the deal. We get the service; they get to trade our identity, like Top Trumps, behind the scenes. But we often don’t see it, and under GDPR consent should not be made a condition of the service. That is, ‘agree or don’t get the service’ is not an option.

From May 25, 2018 there will be special “conditions applicable to child’s consent in relation to information society services,” in Data Protection law which are applicable to the collection of data.

As yet, we have not had debate in the UK what that means in concrete terms, and if we do not soon, we risk it becoming an afterthought that harms more than helps protect children’s privacy, and therefore their digital identity.

I think of five things needed by policy shapers to tackle it:

  • An in-depth understanding of what ‘online’ and the Internet mean
  • A consistent understanding of what threat models and risks are connected to personal data, which today are underestimated
  • A grasp of why data privacy training is vital to safeguarding
  • A willingness to confront the idea that user regulation as a stand-alone step will create a better online experience for users, when we know that perceived problems are created by providers or other site users
  • An end to siloed thinking that fails to be forward thinking, or to join the dots of tactics across Departments into a cohesive, inclusive strategy

If the government’s new “major new drive on internet safety” involves the world’s largest technology companies in order to make the UK the “safest place in the world for young people to go online,” then we must also ensure that these strategies and papers join things up. Above all, a technical knowledge of how the Internet works needs to join the dots of risks and benefits, in order to form a strategy that will actually make children safe, skilled and able to see into their future.

When it comes to children, there is a further question over consent and parental spyware. Various walk-to-school apps, lauded by the former Secretary of State two years running, use spyware and can be used without a child’s consent. Guardian Gallery, which could be used to scan for nudity in photos on anyone’s phone that the ‘parent’ phone holder has access to install it on, can be made invisible on the ‘child’ phone. Imagine this in coercive relationships.

If these technologies and the online environment are not correctly assessed with regard to “online safety” threat models for all parts of our population, then they fail to address the risk for the most vulnerable who need it.

What will the GDPR really mean for online safety improvement? What will it define as online services for remuneration in the IoT? And who will be considered as children, “targeted at” or “offered to”?

An active decision is required in the UK. Will 16 remain the default age needed for consent to access Information Society Services, or will we adopt 13, which needs a legal change?
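For illustration of what hangs on that choice, here is a minimal sketch of the age gate that GDPR Article 8 implies for information society services. The country codes and ages in the table are hypothetical placeholders; the UK entry is precisely the open question.

```python
# Hypothetical ages of digital consent. GDPR sets a default of 16 and
# allows Member States to lower it, to no less than 13.
AGE_OF_DIGITAL_CONSENT = {
    "UK": 16,  # stays at the default unless legislation adopts 13
    "IE": 13,  # illustrative value only
}

def needs_parental_consent(country: str, age: int) -> bool:
    """True if a service must obtain verified parental consent for this user."""
    return age < AGE_OF_DIGITAL_CONSENT.get(country, 16)

print(needs_parental_consent("UK", 14))  # True while the default of 16 applies
print(needs_parental_consent("IE", 14))  # False where a state has adopted 13
```

The lookup itself is trivial; everything the code cannot show, such as how a controller verifies who actually holds parental responsibility, is where the hard questions below begin.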

As banal as these questions sound, they need close attention, and clarity, between now and May 25, 2018, if the UK is to be GDPR ready and providers of online services are to know who they should treat as children, and how, in terms of Internet access, participation and age [parental] verification.

How will the “controller” make “reasonable efforts to verify in such cases that consent is given or authorised by the holder of parental responsibility over the child”, “taking into consideration available technology”?

These are fundamental questions of what the Internet is and means to people today. And if the current government approach to security is anything to go by, safety will not mean what we think it will mean.

It will matter how these plans join up. Age verification was not being considered in UK law in relation to how we would derogate from the GDPR even as late as October 2016, despite age verification requirements already being in the Digital Economy Bill. It shows a lack of joined-up digital thinking across our government, and needs addressing with urgency to get into the next Parliamentary round.

In recent draft legislation I am yet to see the UK government address Internet rights and safety for young people as anything other than a protection issue, treating the online space in the same way as offline, irl, focused on stranger danger, and sexting.

The UK Digital Strategy commits to the implementation of the General Data Protection Regulation by May 2018, and frames it as a business issue, labelling data as “a global commodity”. As such, its handling is framed solely as a requirement needed to ensure “that our businesses can continue to compete and communicate effectively around the world” and that adoption “will ensure a shared and higher standard of protection for consumers and their data.”

The Digital Economy Bill, despite being a perfect vehicle for this, failed to take on children’s rights, and in particular the requirements of GDPR for consent, at all. It was clear that if we were to do any future digital transactions we needed to level up to GDPR, not drop to the lowest common denominator between it and existing laws.

It was utterly ignored. So were children’s rights to have their own views heard in the consultation to comment on the GDPR derogations for children, with little chance for involvement from young people’s organisations, and less than a month to respond.

We must now get this right in any new Digital Strategy and bill in the coming parliament.

Gotta know it all? Pokémon GO, privacy and behavioural research

I caught my first Pokémon and I liked it. Well, OK, someone else handed me a phone and insisted I have a go. Turns out my curve ball is pretty good. Pokémon GO is enabling all sorts of new discoveries.

Discoveries reportedly including a dead man, robbery, picking up new friends, and scrapes and bruises. While players are out hunting anime in augmented reality, enjoying the novelty, and discovering interesting fun facts about their vicinity, Pokémon GO is gathering a lot of data. It’s influencing human activity in ways that other games can only envy, taking in-game interaction to a whole new level.

And it’s popular.

But what is it learning about us as we do it?

This week questions have been asked about the depth of interaction that the app gets by accessing users’ log in credentials.

What I would like to know is what access goes in the other direction?

Google, heavily invested in AI and Machine intelligence research, has “learning systems placed at the core of interactive services in a fast changing and sometimes adversarial environment, combinations of techniques including deep learning and statistical models need to be combined with ideas from control and game theory.”

The app, which is free to download, has raised concerns over suggestions that it could access a user’s entire Google account, including email and passwords. Then it seemed it couldn’t. But Niantic is reported to have made changes to permissions to limit access to basic profile information anyway.

If Niantic gets access to data owned by Google through its use of Google log-in credentials, does Niantic’s investor, Google’s parent Alphabet, get the reverse: user data from the Google log-in interaction with the app? And if so, what does Google learn through the interaction?

Who gets access to what data and why?

Brian Crecente writes that Apple, Google and Niantic are likely making more on Pokémon GO than Nintendo, with 30 percent of revenue from in-app purchases on their online stores.

The next stop is to make money from marketing deals between Niantic and the offline stores used as in-game focal points, gyms and more, according to Bryan Menegus at Gizmodo, who reported that Redditors had discovered decompiled code in the Android and iOS versions of Pokémon GO earlier this week “that indicated a potential sponsorship deal with global burger chain McDonald’s.”

The logical progression of this is that the offline store partners, i.e. McDonald’s and friends, will be making money from players, the people who get led to their shops, restaurants and cafes, where players will hang out longer than at the Pokéstop, because the human interaction with other humans, the battles between your collected creatures, and the teamwork, are at the heart of the game. Since you can’t visit gyms until you are level 5 and have chosen a team, players are building up profiles over time and getting social in real life. Location data may build up patterns about the players.

This evening the two players that I spoke to were already real-life friends on their way home from work (that now takes at least an hour longer every evening) and they’re finding the real-life location facts quite fun, including that thing they pass on the bus every day, and umm, the Scientology centre. Well, more about that later**.

Every player I spotted looking at the phone with that finger-flick action gave themselves away with shared wry smiles. All thirty-something men. There is possibly something of a legacy in this, they said, since the initial Pokémon game, released 20 years ago, is drawing players who were tweens then.

Since the app is online and open to all, children can play too. What this might mean for them in the offline world is something the NSPCC picked up on here before the UK launch. Its focus of concern is the physical safety of young players, citing the risk of misuse of in-game lures. I am not sure how much of an increased risk this is compared with existing scenarios, or whether children will be increasingly unsupervised or not. It’s not a totally new concept. Players of all ages must be mindful of where they are playing**. People getting together in the small hours of the night has generated some stories which, for now, are mostly fun. (Go Red Team.) Others are worried about hacking. And it raises all sorts of questions if private and public space has become a Pokéstop.

While the NSPCC includes considerations on the approach to privacy in a recent, more general review of apps, it hasn’t yet mentioned the less obvious considerations of privacy and ethics in Pokémon GO: encouraging anyone, but particularly children, out of their homes or protected environments and into commercial settings with the explicit aim of targeting their spending. This is big business.

Privacy in Pokémon GO

I think we are yet to see a really transparent discussion of the broader privacy implications of the game because the combination of multiple privacy policies involved is less than transparent. They are long, they seem complete, but are they meaningful?

We can’t see how they interact.

Google has crowdsourced the collection of real-time traffic data via mobile phones. Geolocation data from Google Maps using GPS data, as well as network provider data, seem necessary to display the street data to players. Apparently you can download and use the maps offline, since Pokémon GO uses the Google Maps API. Google goes to “great lengths to make sure that imagery is useful, and reflects the world our users explore.” In building a Google virtual reality copy of the real world, how data are also collected, and will be used, about all of us who live in it is a little woolly to the public.

U.S. Senator Al Franken is apparently already asking Niantic these questions. He points out that Pokémon GO has indicated it shares de-identified and aggregate data with other third parties for a multitude of purposes but does not describe the purposes for which Pokémon GO would share or sell those data [c].

It’s widely recognised that anonymisation in many cases fails, so passing on only anonymised data may be reassuring in theory but fail in reality. Stripping out what are considered individual personal identifiers in data protection terms can leave individuals with unique characteristics, or leave people profiled as groups.
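A minimal sketch, using invented records, of why this happens: even with names and IDs stripped, a combination of a few apparently harmless attributes can be unique to one person, which is the intuition behind k-anonymity checks. The field names and values below are made up for illustration.

```python
from collections import Counter

# Invented records, with direct identifiers (name, account ID) already removed.
records = [
    {"age_band": "30-34", "postcode_prefix": "SW1", "occupation": "teacher"},
    {"age_band": "30-34", "postcode_prefix": "SW1", "occupation": "teacher"},
    {"age_band": "30-34", "postcode_prefix": "SW1", "occupation": "astronaut"},
]

# Count how many records share each combination of quasi-identifiers.
combos = Counter(tuple(sorted(r.items())) for r in records)

for combo, k in combos.items():
    if k == 1:
        # Shared by only one record: effectively identifying, despite the
        # dataset being "anonymised".
        print("re-identifiable combination:", dict(combo))
```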

Opt-out, he feels, is inadequate as a consent model for the personal and geolocational data that the app is collecting and passing to others in the U.S.

While the app provider would I’m sure argue that the UK privacy model respects the European opt in requirement, I would be surprised if many have read it. Privacy policies fail.

Poor practices must be challenged if we are to preserve the integrity of controlling the use of our data and knowledge about ourselves. Being aware of to whom we have ceded control of marketing to us, or of influencing how we might interact with our environment, is at least a step towards not blindly giving up control of free choice.

The Pokémon GO permissions “for the purpose of performing services on our behalf”, “third party service providers to work with us to administer and provide the Services” and “also use location information to improve and personalize our Services for you (or your authorized child)” are so broad that they could mean almost anything. They can also be changed without any notice period. They are therefore pretty meaningless. But it’s the third parties’ connection, the data collection in passing, that is completely hidden from players.

If we are ever to use privacy policies as meaningful tools to enable consent, then they must be transparent to show how a chain of permissions between companies connect their services.

Otherwise they are no more than ‘get out of jail free’ cards for the companies that trade our data behind the scenes, if we were ever to claim for its misuse. Data collectors must improve transparency.

Behavioural tracking and trust

Covert data collection and interaction is not conducive to user trust, whether through a failure to communicate by design or not.

By combining location data and behavioural data, measuring footfall is described as “the holy grail for retailers and landlords alike”, and it is valuable. “Pavement Opportunity” data may be sent anonymously, but if its analysis and storage provide ways to pitch to people, even without knowing who they are individually, or to groups of people, it is discriminatory and potentially invisibly predatory. The pedestrian, or the player, Jo Public, is a commercial opportunity.

Pokémon GO has potential to connect the opportunity for profit makers with our pockets like never before. But they’re not alone.

Who else is getting our location data that we didn’t sign up to share, “in 81 towns and cities across Great Britain”?

Whether footfall outside the shops or packaged as a game that gets us inside them, public interest researchers and commercial companies alike both risk losing our trust if we feel used as pieces in a game that we didn’t knowingly sign up to. It’s creepy.

For children the ethical implications are even greater.

There are obligations to meet higher legal and ethical standards when processing children’s data and presenting marketing to them. Parental consent requirements fail children for a range of reasons.

So far, the UK has said it will implement the EU GDPR. Clear and affirmative consent is needed. Parental consent will be required for the processing of personal data of children under age 16. EU Member States may lower the age requiring parental consent to 13, so what that will mean for children here in the UK is unknown.

The ethics of product placement and marketing rules to children of all ages go out the window however, when the whole game or programme is one long animated advert. On children’s television and YouTube, content producers have turned brand product placement into programmes: My Little Pony, Barbie, Playmobil and many more.

Alice Webb, Director of BBC Children’s and BBC North, looked at some of these challenges, as the BBC considers how to deliver content for children whilst adapting to technological advances, in this LSE blog, alongside the publication of a new policy brief about families and ‘screen time’ by Alicia Blum-Ross and Sonia Livingstone.

So is this augmented reality any different from other platforms?

Yes because you can’t play the game without accepting the use of the maps and by default some sacrifice of your privacy settings.

Yes because the ethics and implications of putting kids not simply in front of a screen that pitches products to them, but physically into the place where they can consume those products – if the McDonald’s story is correct and a taster of what will follow – are huge.

Boundaries between platforms and people

Blum-Ross says, “To young people, the boundaries and distinctions that have traditionally been established between genres, platforms and devices mean nothing; ditto the reasoning behind the watershed system with its roots in decisions about suitability of content. “

She’s right. And if those boundaries and distinctions mean nothing to providers, then we must have that honest conversation with urgency. With our contrived consent, walking and running and driving without coercion, we are being packaged up and delivered right to the door of for-profit firms, paying for the game with our privacy. Smart cities are exploiting street sensors to do the same.

Free will is at the very heart of who we are: “The ability to choose between different possible courses of action. It is closely linked to the concepts of responsibility, praise, guilt, sin, and other judgments which apply only to actions that are freely chosen.” Free choice of where we shop, what we buy and who we interact with is open to influence. Influence that is not entirely transparent presents an opportunity for hidden manipulation. While the NSPCC might be worried about the risk of rare physical threats, the potential for influencing all children’s behaviour, both positive and negative, reaches everyone.

Some stories of how behaviour is affected are heartbreakingly positive. And I met and chatted with complete strangers who shared the joy of something new and a mutual curiosity about the game. Pokémon GO is clearly a lot of fun. It’s also unclear on much more.

I would like to explicitly understand if Pokémon GO is gift packaging behavioural research by piggybacking on the Google platforms that underpin it, and providing linked data to Google or third parties.

Fishing for frequent Pokémon encourages players to ‘check in’ and keep that behaviour tracking live. 4pm caught a Krabby in the closet at work. 6pm another Krabby. Yup, still at work. 6.32pm Pidgey on the street outside ThatGreenCoffeeShop. Monday to Friday.

The Google privacy policies, changed in the last year, require ten clicks to opt out and, in part, the download of an add-on. Google has our contacts, calendar events, web searches and health data, has invested in our genetics, and has all the ‘things that make you “you”’. They have our history, and are collecting our present. Machine intelligence work on prediction is the future. For now, perhaps that will be pinging you with a ‘buy one get one free’ voucher at 6.20, or LCD adverts shifting as you drive back home.

Pokémon GO doesn’t have to include what data Google collects in its privacy policy. It’s in Google’s privacy policy. And who really read that when it came out months ago, or knows what it means in combination with the new apps and games we connect it with today? Tracking and linking data on geolocation, behavioural patterns, footfall, whose other phones are close by, who we contact, and potentially even our spend from Google Wallet.

Have Google and friends of Niantic gotta know it all?

The illusion that might cheat us: ethical data science vision and practice

This blog post is also available as an audio file on soundcloud.


Anaïs Nin wrote in her 1946 diary of the dangers she saw in the growth of technology: expanding our potential for connectivity through machines, but diminishing our genuine connectedness as people. She could hardly have been more contemporary for today:

“This is the illusion that might cheat us of being in touch deeply with the one breathing next to us. The dangerous time when mechanical voices, radios, telephone, take the place of human intimacies, and the concept of being in touch with millions brings a greater and greater poverty in intimacy and human vision.”
[Extract from volume IV 1944-1947]

Echoes from over 70 years ago, can be heard in the more recent comments of entrepreneur Elon Musk. Both are concerned with simulation, a lack of connection between the perceived, and reality, and the jeopardy this presents for humanity. But both also have a dream. A dream based on the positive potential society has.

How will we use our potential?

Data is the connection we all have between us as humans and what machines and their masters know about us. The values with which those masters underpin their machine design will determine the effect that the machines, and the knowledge they deliver, have on society.

In seeking ever greater personalisation, a wider dragnet of data is putting together ever more detailed pieces of information about an individual person. At the same time data science is becoming ever more impersonal in how we treat people as individuals. We risk losing sight of how we respect and treat the very people whom the work should benefit.

Nin grasped the risk that a wider reach can mean more superficial depth. Facebook might be a model today for the large circle of friends you might gather, but consider how few you trust with confidences, with personal knowledge about your own personal life, and the privilege it is when someone chooses to entrust that knowledge to you. Machine data mining increasingly tries to get an understanding of depth, and may also add new layers of meaning through profiling, comparing our characteristics with others in risk stratification.
Data science, research using data, is often talked about as if it is something separate from using information from individual people. Yet it is all about exploiting those confidences.

Today, as the reach of what a few people in institutions can gather about most of the public has grown, whether in scientific research or in surveillance of different kinds, we hear experts repeatedly talk of the risk of losing the valuable part: the knowledge, the insights that benefit us as a society if we can act upon them.

We might know more, but do we know any better? To use a well known quote from her contemporary, T S Eliot, ‘Where is the wisdom we have lost in knowledge? Where is the knowledge we have lost in information?’

What can humans achieve? We don’t yet know our own limits. What don’t we yet know?  We have future priorities we aren’t yet aware of.

To be able to explore the best of what Nin saw as ‘human vision’ and Musk sees in technology, the benefits we have from our connectivity, our collaboration and shared learning, need to be driven with an element of humility, accepting values that shape the boundaries of what we should do, while constantly evolving with what we could do.

The essence of this applied risk is that technology could harm you, more than it helps you. How do we avoid this and develop instead the best of what human vision makes possible? Can we also exceed our own expectations of today, to advance in moral progress?

Continue reading The illusion that might cheat us: ethical data science vision and practice

OkCupid and Google DeepMind: Happily ever after? Purposes and ethics in datasharing

This blog post is also available as an audio file on soundcloud.


What constitutes the public interest must be set in a universally fair and transparent ethics framework if the benefits of research are to be realised – whether in social science, health, education and more. That framework will provide a strategy for getting the prerequisite success factors right, ensuring research in the public interest is not only fit for the future, but thrives. There has been a climate change in consent. We need to stop talking about barriers that prevent datasharing and start talking about the boundaries within which we can share.

What is the purpose for which I provide my personal data?

‘We use math to get you dates’, says OkCupid’s tagline.

That’s the purpose of the site. It’s the reason people log in and create a profile, enter their personal data and post it online for others who are looking for dates to see. The purpose, is to get a date.

When over 68K OkCupid users registered for the site to find dates, they didn’t sign up to have their identifiable data used and published in ‘a very large dataset’ and onwardly re-used by anyone with unregistered access. The users’ data were extracted “without the express prior consent of the user […].”

Whether the registration consent purposes are compatible with the purposes to which the researcher put the data should be a simple enough question. Are the research purposes what the person signed up to, or would they be surprised to find out their data were used like this?

Questions the “OkCupid data snatcher”, now self-confessed ‘non-academic’ researcher, thought unimportant to consider.

But it appears in the last month, he has been in good company.

Google DeepMind, and the Royal Free, big players who do know how to handle data and consent well, paid too little attention to the very same question of purposes.

The boundaries of how the users of OkCupid had chosen to reveal information and to whom, have not been respected in this project.

Nor were these boundaries respected by the Royal Free London trust that gave out patient data for use by Google DeepMind with changing explanations, without clear purposes or permission.

The legal boundaries in these recent stories appear unclear or to have been ignored. The privacy boundaries deemed irrelevant. Regulatory oversight lacking.

The respectful ethical boundaries of consent to purposes have indisputably broken down, disregarding autonomy, whether by commercial organisation, public body, or lone ‘researcher’.

Research purposes

The crux of data access decisions is purposes. What question is the research to address – what is the purpose for which the data will be used? The intent by Kirkegaard was to test:

“the relationship of cognitive ability to religious beliefs and political interest/participation…”

In this case the question appears intended rather as a test of the data, not the data opened up to answer the question. While methodological studies matter, given the care and attention [or self-stated lack thereof] given to its extraction, and any attempt to be representative and fair, it would appear this is not the point of this study either.

The data doesn’t include profiles identified as heterosexual male, because ‘the scraper was’. It is also unknown how many users hide their profiles, “so the 99.7% figure [identifying as binary male or female] should be cautiously interpreted.”

“Furthermore, due to the way we sampled the data from the site, it is not even representative of the users on the site, because users who answered more questions are overrepresented.” [sic]

The paper goes on to say photos were not gathered because they would have taken up a lot of storage space and could be done in a future scraping, and

“other data were not collected because we forgot to include them in the scraper.”

The data are knowingly of poor quality, inaccurate and incomplete. The project cannot be repeated, as ‘the scraping tool no longer works’. There is an unclear ethical or peer review process, and the research purpose is at best unclear. We can certainly give someone the benefit of the doubt and say the intent appears to have been entirely benevolent, even if it’s not clear what the intent was. I think it is clearly misplaced and foolish, but not malevolent.

The trouble is, it’s not enough to say, “don’t be evil.” These actions have consequences.

When the researcher asserts in his paper that “the lack of data sharing probably slows down the progress of science immensely because other researchers would use the data if they could,” in part he is right.

Google and the Royal Free have tried more eloquently to say the same thing. It’s not research, it’s direct care; in effect, ignore that people are no longer our patients and that we’re using historical data without re-consent. We know what we’re doing; we’re the good guys.

However the principles are the same, whether it’s a lone project or global giant. And they’re both wildly wrong as well. More people must take this on board. It’s the reason the public interest needs the Dame Fiona Caldicott review published sooner rather than later.

Just because there is a boundary to data sharing in place, does not mean it is a barrier to be ignored or overcome. Like the registration step to the OkCupid site, consent and the right to opt out of medical research in England and Wales is there for a reason.

We’re desperate to build public trust in UK research right now. So to assert that the lack of data sharing probably slows down the progress of science is misplaced, when it is getting ‘sharing’ wrong that caused the lack of trust in the first place, and that harms research.

A climate change in consent

There has been a climate change in public attitude to consent since care.data, clouded by the smoke and mirrors of state surveillance. It cannot be ignored. The EU GDPR supports it. Researchers may not like change, but there needs to be an according adjustment in expectations and practice.

Without change, there will be no change. Public trust is low. As technology advances and if we continue to see commercial companies get this wrong, we will continue to see public trust falter unless broken things get fixed. Change is possible for the better. But it has to come from companies, institutions, and people within them.

Like climate change, you may deny it if you choose to. But some things are inevitable and unavoidably true.

There is strong support for public interest research but that is not to be taken for granted. Public bodies should defend research from being sunk by commercial misappropriation if they want to future-proof public interest research.

The purposes for which people gave consent are the boundaries within which you have permission to use their data; they give you freedom, within those limits, to use the data. Purposes and consent are not barriers to be overcome.

If research is to win back public trust, developing a future-proofed, robust ethical framework for data science must be a priority today.

Commercial companies must overcome the low levels of public trust they have generated to date if they ask us to ‘trust us because we’re not evil’. If you can’t rule out the use of data for other purposes, it’s not helping. If you delay independent oversight, it’s not helping.

This case study, and indeed the recent Google DeepMind episode by contrast, demonstrate the urgency with which common expectations and oversight of applied ethics in research, the question of who gets to decide what is ‘in the public interest’, and public engagement with data science must be worked out and made a priority, in the UK and beyond.

Boundaries in the best interest of the subject and the user

Society needs research in the public interest. We need good decisions made on what will be funded and what will not be. What will influence public policy and where needs attention for change.

To do this ethically, we all need to agree what is fair use of personal data, when it is closed and when it is open, what are direct and what are secondary uses, and how advances in technology are used when they present both opportunities for benefit and risks of harm to individuals, to society and to research as a whole.

The benefits of research are potentially being compromised for the sake of arrogance, greed, or misjudgement, no matter the intent. Those benefits cannot come at any cost, or disregard public concern, or the price will be trust in all research itself.

In discussing this with social science and medical researchers, I realise not everyone agrees. For some, using de-identified data in trusted third party settings poses such a low privacy risk that they feel the public should have no say in whether their data are used in research, as long as it’s ‘in the public interest’.

For the DeepMind researchers and the Royal Free, they were confident that, even using identifiable data, this was the “right” thing to do, without consent.

For the Cabinet Office datasharing consultation, and the parts that will open up national registries and share identifiable data more widely, including with commercial companies, they are convinced it is all the “right” thing to do, without consent.

How can researchers, society and government understand what is good ethics of data science, as technology permits ever more invasive or covert data mining and the current approach is desperately outdated?

Who decides where those boundaries lie?

“It’s research Jim, but not as we know it.” This is one aspect of data use that ethical reviewers will need to deal with, as we advance the debate on data science in the UK, whether for independents or commercial organisations. Google said their work was not research. Is ‘OkCupid’ research?

If this research and data publication proves anything at all, and can offer lessons to learn from, it is perhaps these three things:

Who is accredited as a researcher or ‘prescribed person’ matters, if we are considering new datasharing legislation and, for example, who the UK government is granting access to millions of children’s personal data today. Your idea of a ‘prescribed person’ may not be the same as the rest of the public’s.

Researchers and ethics committees need to adjust to the climate change of public consent. Purposes must be respected in research particularly when sharing sensitive, identifiable data, and there should be no assumptions made that differ from the original purposes when users give consent.

Data ethics and laws are desperately behind data science technology. Governments, institutions, civil society and all of society need to reach a common vision, and leadership on how to manage these challenges. Who defines these boundaries that matter?

How do we move forward towards better use of data?

Our data and technology are taking on a life of their own, in space which is another frontier, and in time, as data gathered in the past might be used for quite different purposes today.

The public are being left behind in the game-changing decisions made by those who deem they know best about the world we want to live in. We need a say in what shape society wants that to take, particularly for our children as it is their future we are deciding now.

How about an ethical framework for datasharing that supports a transparent public interest, which tries to build a little kinder, less discriminating, more just world, where hope is stronger than fear?

Working with people, with consent, with public support and transparent oversight shouldn’t be too much to ask. Perhaps it is naive, but I believe that with an independent ethical driver behind good decision-making, we could get closer to datasharing like that.

That would bring Better use of data in government.

Purposes and consent are not barriers to be overcome. Within these, shaped by a strong ethical framework, good data sharing practices can tackle some of the real challenges that hinder ‘good use of data’: training, understanding data protection law, communications, accountability and intra-organisational trust. More data sharing alone won’t fix these structural weaknesses in current UK datasharing which are our really tough barriers to good practice.

How our public data will be used in the public interest will not be a destination or have a well defined happy ending. It is a long term process which needs to be consensual, and there needs to be a clear path to setting out together and achieving collaborative solutions.

While we are all different, I believe that society shares for the most part, commonalities in what we accept as good, and fair, and what we believe is important. The family sitting next to me have just counted out their money and bought an ice cream to share, and the staff gave them two. The little girl is beaming. It seems that even when things are difficult, there is always hope things can be better. And there is always love.

Even if some might give it a bad name.

********

img credit: flickr/sofi01/ Beauty and The Beast  under creative commons

Thoughts on Digital Participation and Health Literacy: Opportunities for engaging citizens in the NHS [#NHSWDP 1]

“..smartphones […] the single most important health treatment and diagnostic tool at our disposal over the coming decade and beyond “

That’s what Simon Stevens said at a meeting on “digital participation and health literacy: opportunities for engaging citizens” in the National Health Service this week, at the King’s Fund in London.

It seemed a passing comment, but its enormity, coming from the Chief Executive of the commissioning body for the NHS, made me catch my breath.

Other than inspiration from the brilliance of Helen Milner, Chief Executive of the Tinder Foundation – the only speaker who touched on the importance of language around digital participation – what did I take away from the meeting?

The full text of Simon Stevens’ speech is at the end of this post, but he didn’t elaborate further on this comment.

Where to start?

The first thing I took away to think about, was the impact of the statement. 

“the single most important health treatment and diagnostic tool at our disposal over the coming decade and beyond “

So I thought about that more in a separate post, part two.

The second, was on consent.

This tied into the statement by Tim Kelsey, Director of Patients and Information at NHS England. It seems that the era when consent will be king is fast approaching, and I thought about this more in part three.

The third key learning of the day, which almost everyone I met voiced to me, was that the “best bit of these events is the learnings outside the sessions, from each other. From other people you meet.”

That included Roger who we met via video. And GP Dr Ollie Hart. All the tweeps I’ve now met in real life, and as Roz said, didn’t disappoint. People with experience and expertise in their fields. All motivated to make things better and make things work, around digital, for people.

It is really important, when thinking about ‘digital’, to remember it doesn’t necessarily mean remote contact or reducing the people-time involved.

Change happens through people. Not necessarily seen as ‘clients’ or ‘consumers’ or even ‘customers’. How human interaction is supported by or may be replaced by digital contact fascinates me.

My fourth learning? It was about how to think about data collection and use in a personalised digital world.

Something which will be useful in my new lay role on the ADRN approvals panel (which I’m delighted to take on and pretty excited about).

Data collection is undergoing a slow but long term sea change, in content, access, expectations, security & use.

Where, for whom, and from whom data is collected varies enormously. It is going to vary even more in future if some have free access to apps and to wifi, while others are digitally excluded.

For now, the overall effect is perhaps only ripples on the surface (like interruptions to long-term research projects due to HSCIC data stops after the care.data outcry), but research direction and currents of thought may shift fundamentally if how we collect data changes radically for even small pockets of society, or the ‘worried well’.

My fifth learning, was less a learning and more the triggering of lots of questions on wearables about which I want to learn more.

#digitalinclusion is clearly less about a narrow focus on apps than applied skills and online access.

But I came away wondering how apps will affect research and the NHS in the UK, and much more.

[Next: part two #NHSWDP 2: Smartphones: the single most important health treatment & diagnostic tool at our disposal – on wearables]

[And: part three #NHSWDP 3: Wearables & Consent: patients will ‘essentially manage their data as they wish’. What will this mean for diagnostics, treatment and research and why should we care?]

*****

Full text of the speech given by Simon Stevens, Keynote speaker:

“The reality is we all can see that we’ve got to change […] as part of that we have got to have more integrated services, between primary and specialist services, between physical and mental health services, and between health and social care services.

“And the guiding principle of that integration has got to be care that is personal, and coordinated around individuals, with leadership of communities and patient groups.

“There is no way that can happen without a strong, technological underpinning using the information revolution which is sweeping just about every other part of the economy.

“We are not unusual in this country in having a health sector which has been a little slower, in some respects, than many other parts of national life to take full advantage of that.

“We are not unusual, because that is the experience of health services in every industrialised country.

“We obviously have a huge opportunity, and have a comparative advantage in the way that the NHS is organised, to put that right.

“We know that 8 out of 10 adults are now online, we know that two thirds of people in this country have got smartphones which is going to be the single most important health treatment and diagnostic tool at our disposal over the coming decade and beyond.

“But we know we have got 6.4m people who are not.

“And so when you of course then get serious about who are those six and a half million people, many of them are our highest users of services with the greatest needs.

“So this is not an optional extra. This has got to be central about what the redesign of care looks like, with a fundamental power shift actually, in the way in which services are produced and co-produced.

“This agenda goes to the heart of what we’ve got to get right, not just on inequalities but around co-production of services and the welcome steps that have been taken by the organisations involved, I think that the point is obviously we have now got to scale this in a much more fundamental fashion, but when you look at the impact of what has already been achieved, and some of the work that has already been done by the Tinder Foundation, you take some of the examples here, with the Sikh community in  Leicester around diabetes, and parenting in other parts of the country, you can see that this is an agenda which can potentially get real quite quickly and can have quite a big impact.

“The early evaluation anyway indicates that about half of people involved say they are leading healthier lives on the back of it, 48% in healthy eating, a third do more physical activity, 72% say they have saved money or time.

“Given that we are often talking about resource poor, time poor communities, that is hugely impactful as well.

“So my role here today, I think is simply to underline the weight that we place on this, as NHS England nationally, to thank all of you for the engagement that you have been having with us, and to learn from the discussion we are about to have as what you see where you see key priorities and what you need from us.”

[March 18, 2015 at the event “Digital Participation and Health Literacy: Opportunities for engaging citizens” held at the King’s Fund, London]

 

care.data – one of our business cases is missing

“The government takes the view that transparency is vital to healthy public services. It has created a new Statistics Commission to improve the quality of information collected (and to end arguments about “fiddling” figures).” [Tim Kelsey, New Statesman, 2001] [1]

In a time of continuing cuts to budgets across the public sector, the public has every right, and good sense, to question how public money is spent and how that spending is justified. [#NHS2billion]

For the flagship care.data data extraction programme, it is therefore all the more surprising that, for both the short and long term, there is [2]:

a) no public proof of how much the programme is costing,
b) little on measurable tangible and intangible benefits,
c) and no account of how the risks have been evaluated.

The Woolly Mammoth in the Room

The care.data programme has been running under its ‘toxic’ [3] brand in a similar form now, for two years.

When asked directly on costs at the Health Select Committee last month, the answer was, at best, woolly.

“Q655   Rosie Cooper: While I appreciate that, can you give us any rough figures? What would a CCG be contributing to this?

Tim Kelsey: I cannot answer that question, but we will very rapidly come back to you with the CCGs’ own estimates of the costs of the programme and how much of that cost is being met by the programme.” [Hansard January 2015][4]

The department appears very unwilling to make public and transparent its plans, risks and costs. I’ve been asking for them since October 2014, in a freedom of information request. [5]

They are still not open. Hold out very much longer and it will look decidedly shady.

A few limited and heavily redacted parts were released [2] in poor quality .pdf files in January 2015. They do not meet my request: there is nothing from April to October 2014, and many files are missing.

Transparent?

Following the minutes and materials released over the last 18 months, this was a monstrous gap [7], so I had asked for it before. [8]

I had imagined there was reticence in making it public.
I had imagined, the numbers may be vague.
I hadn’t imagined it just didn’t exist at all.

For the programme whose watchword is transparency, this is more than a little surprising. A plan had to be drafted to drive transparency after the FOI was received [which I believe fails the section 22 refusal criteria, as the decision to publish was made after the FOI was submitted].

– here’s the plan [9] – where are the outcomes?

Is the claim that without care.data the NHS will fail, [10] no more than a myth?

 

Why do the business case and cost/risk analysis matter? What is the future of our data ownership?

 

Because history has a habit of repeating itself and there is a terrible track record in NHS IT which the public cannot afford [22] to allow to repeat, ever again.

The mentality that allows these unaccountable monster programmes to grow unchecked must die out.

Of the NPfIT, Mr Bacon MP said: “This saga is one of the worst and most expensive contracting fiascos in the history of the public sector.”

Last autumn, a new case history [23] examined its rollout, including why local IT systems fail to deliver patient joined up digital records.

Yet, even today, as we hear that IT is critical to the digital delivery of NHS care and we must all be able to access our own health records, we read that tech funds are being cut.

Where are the common sense and cohesion in their business planning?

These Big Data programmes do not stand alone, but interact with all sorts of other programmes, policies, and ideas on what will be done and what is possible in future for long term data purposes.

The public is not privy to that, and so cannot scrutinise, criticise and positively contribute to plans. That seems short-sighted.

And what of previous data-based ventures? Take as a case study the Dr. Foster IC Joint Venture [NAO, February 2007] [24]

“The Information Centre spent £2.5 million on legal and consultancy advice in developing the joint venture, and setting up the Information Centre. The Information Centre contends that £855,000 of the money paid to KPMG was associated with costs for setting up the Information Centre which included business planning.

However, they could not provide an explicit breakdown of these costs […] We therefore calculate that the total cost to the taxpayer of a 50 per cent share is between £15.4 million and £16.3 million.”

“The Information Centre paid £12 million in cash for a 50 per cent share of the joint venture (see Figure 2 overleaf).”

UK plc made a sizeable investment here. The UK state invested UK taxes in this firm – so what is the current business case for using data? How transparent are our current state assets and risks?

As a shareholder in one half, it is fair to ask: who are we now sharing the investment risk with, or was this part sold soon after? [25] Was that investment a long-term one, or was it always meant to be short term, and are there any implications for the future of HSCIC?

In this 2011 report [26], another investment group, Bamboo Holdings [related to other investor companies], wanted but did not succeed in selling its Dr. Foster stock at an acceptable price, due, in the words of the portfolio introduction, to ‘poor performance’. [Annual investor review from 2013, p.5]

So what risks does the market as a whole see, which are not made available to the public, but which affect how data is used and shared?

What of the other parts of Dr. Foster Research, and so on, that we, the state, went on to buy or sell later? It appears complex.

Is the commercial benefit to be made by private companies seen as part of the big picture benefit to UK plc, or where do state investment and the expectation of economic growth fit in?

What assessment has been made of the app market in the NHS, and of how patient data is expected in future to be held by the individual and released by personal choice to providers through phones?

Is a state infrastructure being built which, in the surprisingly short term, may see few healthy people store their data in it? Or will we see a bias that excludes those with the money and technology to opt out, who prefer to keep their health data on a handheld device?

What is the government plan for the future of HSCIC and the data of ours that it manages? The provider Northgate was just bought by European private equity firm Cinven, which now manages a huge swathe of the UK’s data [32], and HSCIC brought other services in-house. [33]

“Its software and services are used by over 400 UK local authorities, all UK police forces, social housing providers in the UK and internationally, and NHS hospitals. Its IT projects support the sharing of information for criminal intelligence and investigations across UK police forces and the management of health screening records in the UK and in Ireland.”

All the easier to manage – or to manage to sell off?

Is the business plan future-proofed to survive the new age of health data management?

One of the problems with business cases for programmes which drag on and get swamped in delays is that they become obsolete.

The one-year mark has now passed since the care.data pause was announced on February 18th 2014.

The letter from Mr. Kelsey on April 14th 2014 said they would use the six months to listen and act on the views of patients, public, GPs and stakeholders.

Many of the open questions remain without any reply at all, never mind public answers or solutions to the open issues.

The spine proposal by medConfidential [30] is one of the best and clearest proposals I have found, with practical solutions to, for example, the failed 9Nu4 opt out.

Will these be addressed, or will NHS England answer the Data Guardian report and 27 questions [31] from December?

Is care.data arthritic or going quietly extinct? The last public information made available, is that it is rolling on in the background towards the pathfinders.

“By when will NHS England commit to respect the 700,000 objections to secondary data sharing already logged but not enacted?” [updated ref June 6th 2015]

How is the business plan kept up to date as the market moves on?

Is Big Data in the NHS too big to survive, or has the programme learned to adapt and change?

As Peter Mills asked a year ago, “Is the Government going to take this, as a live issue, into the next general election? Or will it (like the National Programme for IT) continue piecemeal, albeit without the toxic ‘care.data’ banner? “

The care.data programme board transparency agenda in Nov 2014 : “The care.data programme has yet to routinely publish agendas, minutes, highlight reports and finalised papers which arise from the care.data Programme Board.

“This may lead to external stakeholders and members of the public having a lack of confidence in the transparency of the programme.”

We all recognise the problem, but where’s the solution?

Where’s the cost, benefit and risk analysis?

Dear NHS England. One of your business cases is missing.
Why has the public not seen it?
Why are you making it hard to hunt down?
Why has transparency been gagged?

Like Dippy, the care.data business case belongs in the public domain, not hidden in a back room.

Like the NHS, the care.data full risk & planning files belong to us all.

Or is the truth that, like Nessie, despite wild claims, they may not actually exist?

***

more detail:

[1] New Statesman article, Tim Kelsey, 2001

[2] http://www.england.nhs.uk/ourwork/tsd/care-data/prog-board/ care.data programme board webpage

[3] http://www.infosecurity-magazine.com/news/nhs-caredata-pr-fiasco-continues/

[4] http://data.parliament.uk/writtenevidence/committeeevidence.svc/evidencedocument/health-committee/handling-of-nhs-patient-data/oral/17740.html

[5] https://www.whatdotheyknow.com/request/caredata_programme_board_minutes?nocache=incoming-621173#incoming-621173

[6] http://www.england.nhs.uk/wp-content/uploads/2015/02/cd-prog-brd-highlt-rep-15-12-14.pdf

[7] http://www.telegraph.co.uk/news/science/science-news/11377168/Natural-History-Museums-star-Dippy-the-dinosaur-to-retire.html

[8] https://jenpersson.com/care-data-postings-summary/

[9] http://www.england.nhs.uk/wp-content/uploads/2015/02/propsl-transpncy-pub-cd-papers.pdf

[10] http://www.computerweekly.com/news/2240215074/NHS-England-admits-failure-to-explain-benefits-of-caredata

[11] http://nuffieldbioethics.org/blog/2014/care-data-whats-in-a-dot-and-whats/

[12] http://www.theinformationdaily.com/2014/03/26/business-scents-boom-in-personal-information-economy

[13] http://www.hscic.gov.uk/article/3887/HSCIC-publishes-strategy-for-2013-2015

[14] https://jenpersson.com/flagship-care-data-2-commercial-practice/

[15] http://www.publications.parliament.uk/pa/ld201415/ldhansrd/text/141015-0001.htm

[16] http://www.publications.parliament.uk/pa/ld201415/ldhansrd/text/141015-0001.htm

[17] http://www.legislation.gov.uk/ukpga/2014/23/pdfs/ukpga_20140023_en.pdf

[18] https://jenpersson.com/hear-evil-evil-speak-evil/

[19] https://www.whatdotheyknow.com/request/nhs_patient_data_sharing_with_us

[20] http://www.hscic.gov.uk/hesdatadictionary

[21] http://www.bbc.co.uk/news/uk-politics-24130684

[22]  http://www.nao.org.uk/wp-content/uploads/2007/02/0607151.pdf

[23] http://www.cl.cam.ac.uk/~rja14/Papers/npfit-mpp-2014-case-history.pdf

[24] http://www.nao.org.uk/wp-content/uploads/2007/02/0607151.pdf

[25] http://www.healthpolicyinsight.com/?q=node/688

[26] http://www.albion-ventures.co.uk/ourfunds/pdf%20bamboo/Bamboo%20IOM%20signed%20interims%2030611.pdf

[27] http://www.v3.co.uk/v3-uk/news/2370877/nhs-needs-patients-digital-data-to-survive-warns-health-chief

[28] http://uk.emc.com/campaign/global/NHS-Healthcare-Report-2014/index.htm

[29] http://uk.emc.com/campaign/global/NHS-Healthcare-Report-2014/index.htm

[30] https://medconfidential.org/wp-content/uploads/2015/01/2015-01-29-A-short-proposal.pdf

[31] https://www.gov.uk/government/uploads/system/uploads/attachment_data/file/389219/IIGOP_care.data.pdf

[32] http://www.privateequitywire.co.uk/2014/12/23/215235/cinven-acquire-northgate-public-services

[33] http://www.ehi.co.uk/news/EHI/9886/hscic-starts-sus-and-care-id-transfer

 

Patient questions on care.data – an open letter

Dear NHS England Patients & Information Directorate,

We’ve been very patient patients in the care.data pause. Please can we have some answers now?

I would like to call for greater transparency and openness about the promises made to the public, project processes & policies and your care.data communication plans.

In 2013, in the Health Service Journal, Mr. Kelsey wrote:

“When patients are ignored, they are most at risk; that was the central conclusion of the report by Robert Francis into Stafford hospital.

Don Berwick, in his safety review, said the NHS should be “engaging, empowering and hearing patients and their carers all the time.

“That has been my mission since I started as National Director for Patients and Information: to support health and care services transform transparency and participation.

HSJ, 10th December 2013

It is time to walk the talk for care.data, under this banner of transparency, participation and open government.

Response to the Listening exercises

The care.data listening phase, introduced by the pause announced on February 18th, has captured a mass of questions, the majority of which still remain unaddressed.

At one of these sessions, [the 1-hr session on June 17th Open House, linking ca. 100 people at each of the locations in Basingstoke, Leicester, London, and York] participants were promised that our feedback would be shared with us later in the summer, and posted online. After the NHS AGM on Sept 18th I was told it would happen ‘soon’. It is still not in the public domain.

At every meeting all the unanswered questions, on post-it notes, in table-group minutes or scribbled flipcharts, were gathered ‘to be answered at a later date’. When will that be?

To date, there has been no published information which addresses the unanswered event questions.

Transparency of Process, Policies and Approach

The care.data Programme Board has held meetings to plan the rollout process, policies and approach. The minutes and materials from those meetings have not been published. I find this astonishing when one considers that the minutes of the care.data advisory group, NIB (new), CAG, GPES advisory group, and even the NHS England Board itself are in the public domain. I believe the care.data Programme Board meeting materials should be too.

It was acknowledged through the Partridge Review of past use of our hospital records that this HES data is not anonymous. The extent of its sale to commercial third parties and its use by police and the Home Office was revealed. This is our medical data, which we gave to hospitals and other care settings for use in our care. Why are we the last to hear it is being accessed by all sorts of people who are not at all involved in our clinical care?

Even for commissioning purposes, it is unclear how these datasharing reasons are justified, when the Caldicott Review said extracting identifiable data for risk stratification or commissioning could not be assumed under some sort of ‘consent deal’:

“The Review Panel found that commissioners do not need dispensation from confidentiality, human rights and data protection law…” [The Information Governance review, ch7]

The 251 approval just got extended *again* – until 30th April 2015. If you can’t legally extract data without repeat approvals from on high, then maybe it’s time to question why?

The DoH, NHS England Patients and Information Directorate, HSCIC, and indeed many data recipients, all appear to have normalised an approach that for many is still a shock. The state centralised and passed on our medical records to others without our knowledge or permission. For years. With financial exchange. 

Amazingly, it continues to be released in this way today, still without our consent, fair processing, or a publicised way to opt out.

“To earn the public’s trust in future we must be able to show that our controls are meticulous, fool-proof and solid as a rock.”  said Sir Nick Partridge in his summary review.

Now you ask us to trust that, in care.data, the GP data, a degree more personal, will be used properly.

Yet you ask us to do this without significant changes in legislation to safeguard tightly defined purposes, who can access the data and why, or how we control what future changes may be made without our knowledge, and without a legally guaranteed opt out.

There is no information about which social care dataset is to be included in future, so how can we know what the care.data scope even is yet?

Transparency cannot be a convenient watchword which applies only with caveats. Quid pro quo: if you want our data under an assumed consent process, then guarantee a genuinely informed public.

You can’t tell patients one approach now, then plan to change what will be said after the pilot is complete, knowingly planning a wider scope to include musculoskeletal or social care data and more. Nor can you knowingly plan to broaden the users of data [like research and health intelligence, currently under discussion at IAG] but only communicate a smaller version in the pilot. That is like cheating on a diet. You can’t say and do one thing in public, then have your cake and eat it later when no one is looking. It still counts.

In these processes, policies and approach, I don’t feel my trust can be won back with lack of openness and transparency. I don’t yet see a system which is, ‘meticulous, fool-proof or solid as a rock’.

‘Pathfinder’ pilots

Most recently you have announced that four CCG areas will pilot the ‘pathfinder’ stage in the rollout of phase one. But where and when remains a mystery. Pathfinder communications methods may vary from place to place, to trial what works and what fails. One commendable method will be a written letter.

However, even given that intent of individual notice, we cannot ignore that many remaining questions will be hard to address in a leaflet or letter. They certainly won’t fit into an SMS text.

Why pilot communications at all which will leave unanswered the same open questions you already know about, but have not answered?

For example, let’s get a few of the missing processes clarified up front:

  • How will you communicate with Gillick competent children, whose records may contain information about which their parents are not aware?
  • How will you manage this for elderly or vulnerable patients in care homes and with diminished awareness or responsibility?
  • What of  the vulnerable at risk of domestic abuse and coercion?
  • When things change in scope or use, how will we be given the choice to change our opt out decision?

I ask you not to ignore the processes which remain open. They need to be addressed BEFORE the pilot, unless you want people to opt out on the basis of their uncertainty and confusion.

What you do now will set the model expectations for future communications: Patient Online, personalised medicine. If NHS health and social care is to become all about the individual, will you address all individuals equally, or is reaching some less important than others?

It seems there is time and effort for talking to other professionals about big data, but none for us, whose data it is. Dear Patients & Information Directorate, you need to be talking to us before talking to others about how to use us.

In March, this twelve point plan made some sensible suggestions.

Many of them remain unaddressed. You could start there. But in addition, it must be clear, before getting into communications tools, what it is that the pathfinders are actually piloting.

You can’t pilot communications without clearly defined contents to talk about.

Questions of substance need answers, the ten below to start with.

What determines that patients understand the programme and are genuinely informed, and how will it be measured?

Is it assumed that pilots will proceed to extraction? Or will the fair processing efforts be evaluated first and the effort vs cost be taken into account whether it is worth proceeding at all?

Given the cost involved, and legal data protection requirements, surely the latter? But the pathfinder action plan conflates the two.

Citizen engagement

Let’s see this as an opportunity to get care.data right, for us, the patients. After all, you and the rest of the NHS England Board were keen to tell us at the NHS AGM on September 18th, how valuable citizen engagement is, and to affirm that the NHS belongs to us all.

How valued is our engagement in reality, if it is ignored? How will involvement continue to be promoted in NHS Citizen and other platforms, if it is seen to be ineffective? How might this negatively affect future programmes and our willingness to get involved in clinical research if we don’t trust this basic programme today?

This is too important to get wrong. It confuses people and causes concern. It puts trust and confidence in jeopardy, not just for now, but for other future projects. care.data risks polluting across data borders, even beyond health:

“The care.data story is a warning for us all. It is far better if the industry can be early on writing standards and protocols to protect privacy now rather than later on down the track,” he said. [David Willets, on 5G]

So please, don’t keep the feedback and this information to internal departments.

We are told it is vital to the future of our NHS. It’s our personal information.  And both belong to us.

During one Health Select Committee hearing, Mr. Kelsey claimed: “If 90 per cent opt out [of care.data], we won’t have an NHS.”

The BMA ARM voted in June for an opt in model.

The ICO has ruled that an opt in model by default at practice level, with due procedures for patient notification, will satisfy legal requirements and protect GPs in their role as custodians of confidentiality and data controllers. Patient Concern has called for GPs to follow that local choice opt in model.

I want to understand what he feels the risk to the NHS is, and to examine its evidence base. It’s our NHS, and if it is going to fail without care.data and the Board let it come to this, then we must ask why. And together we can do something to fix it. There was a list of pre-conditions which he stated at those meetings would be needed before any launch, and which the public is yet to see met. Answering this question should be part of that.

It can’t afford to fail, but how do we measure at what cost?

I was one of many – including, much more importantly, the GPES Advisory Group – who flagged the shortcomings of the patient leaflet in October 2013, before it failed to be a worthwhile communications process in January. I flagged it with comms teams, my MP, and the DoH.

[Sept 2013 GPES Advisory] “The Group also had major concerns about the process for making most patients aware of the contents of the leaflets before data extraction for care.data commenced”.

No one listened. No action was taken. It went ahead as planned. It cost public money, and more importantly, public trust.

In the words of Lord Darzi,

“With more adroit handling, this is a row that might have been avoided.”

Now there is still a chance to listen and to act. This programme can’t afford to pilot another mistake. I’m sure you know this, but it would appear that with the CCG announcement, the intent is to proceed to pilot soon.  Ready or not.

If the programme is so vital to the NHS future, then let’s stop and get it right. If it’s not going to get the participation levels needed, then is it worth the cost? What are the risks and benefits of pressing ahead or at what point do we call a halt? Would it be wise to focus first on improving the quality and correct procedures around the data you already have – before increasing the volume of data you think you need? Where is the added intelligence, in adding just more information?

Is there any due diligence, a cost benefit analysis for care.data?

Suggestions

Scrap the ‘soon’ timetable. But tell us how long you need.

The complete raw feedback from all these care.data events should be made public, to ensure all the questions and concerns are debated and answers found BEFORE any pilot.

The care.data programme board minutes, papers, and all the planning and due diligence should be published and open to scrutiny, as for any other project spending public funds.

A public plan of how the pathfinders fit into the big picture and timeline of future changes and content would remove the lingering uncertainty of the public and GPs: what is going on and when will I be affected?

The NHS Five Year Forward View was quite clear; our purse strings have been pulled tight. The NHS belongs to all of us. And so we should say that care.data can’t proceed at any and all costs. It needs to be ‘meticulous, fool-proof and solid as a rock’.

We’ve been patient patients. We should now expect the respect and response, that deserves.

Thank you for your consideration.

Yours sincerely.

 

Addendum: Sample of ten significant questions still outstanding

1. Scope: What is care.data? Scope content is shifting, and requests for scope purposes are changing already, from commissioning only to now also include research and health intelligence. How will we patients know that the purposes we sign up to today remain the purposes to which data may be put tomorrow?

2. Scope changes fair processing: We cannot sign up to one thing today, and find it has become something else entirely tomorrow without our knowledge. How will we be notified of any change in what is to be extracted, or in how what has been extracted is to be used in future – is there a change notification plan?

3. Purposes clarity: Who will use which parts of our medical data for what? a: Clinical care vs secondary uses:

Given the widespread confusion – demonstrated on radio and in the press after the pathfinders’ announcement – between care.data, which is for ‘secondary use’ only, i.e. purposes other than the direct care of the patient, and the Summary Care Record (SCR), which is for direct care in medical settings, how will uses be made very clear to patients, and how will they affect our existing consent settings?

3. Purposes definition: Who will use which parts of our medical data for what? b) Commercial use: It is claimed the Care Act will rule out “solely commercial” purposes, but how, when what remains is a broad definition open to interpretation? Will “the promotion of health” still permit uses such as marketing? Will HSCIC give its own interpretation? It is, after all, the law within which it operates that prescribes what it should promote and permit.

3. Purposes exclusion: Who will use which parts of our medical data for what? c) Commercial re-use by third parties: When will the new contracts and agreements be in place? Drafts on the HSCIC website still appear to permit commercial re-use and make no mention of changes or of revoking licences for intermediaries.

4a. Opt out: It is said that patients who opt out will have this choice respected by the Health and Social Care Information Centre (i.e. no data will be extracted from their GP record), according to the Secretary of State for Health [col 147] – but when will the opt out – currently no more than a spoken promise – be put on a statutory basis? There seem to be no plans whatsoever for this.

Further wider consents: knowing what they have opted into or out of is currently almost impossible for patients. We have the Summary Care Record, proactive care in some local areas, different clinical GP systems, the Electronic Prescription Service and, soon, Patient Online, all using different opt in methods of asking for and maintaining data and consent, which means patients are unsurprisingly confused.

4b. Opt out: At what point do you determine that levels of participation are worth the investment and of value? If parts of the population are not represented, how will it be taken into account and remain valuable to have some data? What will be statistically significant?

5. Legislation around security: The Care Act 2014 is supposed to bring in new legislation for our data protection. But there are no changes to date as far as I can see – what happened to the ‘one strike and out’ sanction much discussed in Parliament? Is any change still planned? If so, how has it been finalised, with what wording, and will it be open to Parliamentary scrutiny? The Government’s claim to have added legal protection is meaningless until the new Care Act Regulations are put in front of Parliament and agreed.

6. What of the Governance changes discussed?

There was some additional governance and oversight promised, but to date no public communication of changes to the data management groups through the HRA CAG or DAAG and no sight of the patient involvement promised.

The Data Guardian role remains without the legal weight that the importance of its position should command. It has been said this will be granted ‘at the earliest opportunity.’ Many seem to have come and gone.

7. Data security: The planned secure data facility (‘safe setting’) at HSCIC to hold linked GP and hospital data is not yet built for the expanded volume of data and users expected, according to Ciaran Devane at the 6th September event. When will it be ready for the scale of care.data?

Systems and processes on this scale need security designed in, security that scales up to match the data and its use.

Will you proceed with a pilot which uses a different facility and procedures from the future plan? Or worse still, with extracting data into a setting you know is less secure than it should be?

8. Future content sharing: Where will NHS patients’ individual-level data go in the longer term? The current documentation says ‘in wave 1’ or phase one, which would indicate that future change is left open, and that identifiable ‘red’ data is to be shared in future: “care.data will provide the longer term visions as well as […] the replacement for SUS.”

9.  Current communications:

    • How will GPs and patients in ‘pathfinder’ practices be contacted?
    • Will every patient be written to directly with a consent form?
    • What will patients who opted out earlier this year be told if things have changed since then?
    • How will NHS England contact those who have retired or moved abroad recently or temporarily, still with active GP records?
    • How will foreign pupils’ parents be informed abroad and rights respected?
    • How does opt out work for sealed envelopes?
    • All the minorities with language needs or accessibility needs – how will you cater for foreign language, dialect or disability?
    • The homeless, the nomadic,  children-in-care
    • How can we separate these uses clearly from clinical care in the public’s mind to achieve a genuinely informed opinion?
    • How will genuine mistakes in records be deleted – wrong data on wrong record, especially if we only get Patient Online access second and then spot mistakes?
    • How long will data be retained for so that it is relevant and not excessive – Data Protection principle 3?
    • How will the communications cater for both GP records and HES plus other data collection and sharing?
    • If the plan is to have opt out effective for all secondary uses, communications must cater for new babies to give parents an informed choice from Day One. How and when will this begin?

No wonder you wanted first no opt out, and then an assumed consent via an opt out junk mail leaflet. This is hard stuff to do well. Harder still, how will you measure the effectiveness of what you may have missed?

10. Pathfinder fixes: Since NHS England doesn’t know what will be effective communications tools, what principles will be followed to correct any failures in communications for any particular trial run and how will that be measured?

How will patients be asked if they heard about it, and how will any survey or follow up ensure the segmentation does not miss measuring the hard to reach groups – precisely those who may have been missed? For example, if you only inform 10% of the population, then ask that same 10% whether they have heard of care.data, you would expect close to 100% to say yes. That does not show that the whole population was well informed about the programme.
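To make the scale of that survey bias concrete, here is a minimal sketch in Python, using purely hypothetical numbers (they are not drawn from any NHS survey or care.data document), of how sampling only the people who received the communication inflates the measured awareness rate compared with sampling the whole population:

# Hypothetical illustration of survey sampling bias; all figures are invented.
population = 1_000_000
informed_share = 0.10             # assume only 10% received the communication
awareness_if_informed = 0.95      # assume most of those who got it remember it
awareness_if_not_informed = 0.02  # assume a few others heard about it elsewhere

informed = int(population * informed_share)
not_informed = population - informed

aware_informed = informed * awareness_if_informed
aware_not_informed = not_informed * awareness_if_not_informed

# A survey that only samples the informed group:
biased_rate = aware_informed / informed

# A survey that samples the whole population:
true_rate = (aware_informed + aware_not_informed) / population

print(f"Awareness measured among the informed 10%: {biased_rate:.0%}")  # ~95%
print(f"Awareness across the whole population:     {true_rate:.0%}")    # ~11%

On these assumed numbers, a survey restricted to the informed group would report roughly 95% awareness, while the population-wide figure would be nearer 11% – which is exactly why the segmentation of any follow-up survey matters.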

If it is shown to have been ineffective, at what point do you say Fair Processing failed and you cannot legally proceed to extraction?

> This list doesn’t yet touch on the hundreds of questions generated from public events, on post-its and minutes. But it would be a start.

*******

References for remaining questions:

17th June Open House: Q&A

17th June Open House: Unanswered public Questions

Twelve point plan [March 2014] positive suggestions by Jeremy Taylor, National Voices

6th September care.data meeting in London

image quote: Winnie The Pooh, A.A. Milne

care.data should be like playing Chopin – or will it be all the right notes, but in the wrong order? [Part two]

How our data sharing performance will be judged matters not just today, or in this electoral term, but for posterity. The current work-in-progress is not a dress rehearsal for a quick care.data talent show, but the preparation for a lifetime performance at world standard.

How have we arrived where we are now, at a Grand Pause in the care.data performance? I looked at the past, as reviewed through the Partridge Review, in [part one here], the first half of this post, written after attending the HSCIC ‘Driving Positive Change’ meeting on July 21st. (Official minutes are online via HSCIC here.)

Looking forward, how do we want our data sharing to be? I believe we must not lose sight of classical values in the rush to be centre stage in the Brave New World of medical technology. [updated link August 3rd]* Our medical datasharing must be above and beyond the best model standards to be acceptable technically, legally and ethically, worldwide. Exercised with discipline, training and precision, care.data should be the musical equivalent of Chopin.

Not only does HSCIC have a pivotal role to play in the symphony that the Government wishes research to play in the ‘health & wealth’ future of our economy, but it is currently alone on the world stage. Nowhere else in the world has a comparable health data set over such a length of time as we do, and no one has ever brought all their primary care records into a central repository to merge and link, as is planned with care.data. Sir Kingsley Manning said in the current July/August Pharma Times article that data sharing now has to manage its reputation, just like Big Pharma.

Pharma Times – July/Aug 2014 http://www.pharmatimes.com/DigitalOnlineArea/digitaleditionlogin.aspx

Countries around the world will be watching HSCIC and the companies and organisations involved in the management and use of our data. They will be assessing the involvement and reaction of England’s population to HSCIC’s performance. This performance will help shape what is acceptable and what works well, and its failings will be learned from by other countries who will want to do the same in future.

Can we rise to the Challenge to be a world leader in Data Sharing?

If the UK Government wants England to be the world leader in research, we need to be exemplary not only in how we govern the holding, management and release of data, but also in our ethics model and in our expectations of each other in the data sharing process.

How can we expect China, [1] with whom the British Government recently agreed £14 billion in trade deals, [2] India, the country to which our GP support services are potentially poised to be outsourced through Steria, [3] or any other organi…

Continue reading: care.data should be like playing Chopin – or will it be all the right notes, but in the wrong order? [Part two]