Category Archives: genomics

Waste products: bodily data and the datafied child

Recent conversations, and the passage of the Data Protection and Digital Information Bill through parliament, have made me think once again about what the future vision for UK children’s data could be.

Some argue that processing and governance should be akin to a health model: first do no harm, professional standards, training, ISO lifecycle oversight, audits, and governance bodies to approve exceptional releases and re-use.

Education data is health and body data

Children’s personal data in the educational context is remarkably often health data, directly (social care, injury, accident, self harm, mental health) or indirectly (mood and emotion, or eating patterns).

Children’s data in education is increasingly bodily data. An AI education company CEO was even reported to have considered “bone-mapping software to track pupils’ emotions”, linking a child’s bodily data and data of the mind. For a report written by Pippa King and myself in 2021, The State of Biometrics 2022: A Review of Policy and Practice in UK Education, we mapped the emerging prevalence of biometrics in educational settings. Published on the ten-year anniversary of the Protection of Freedoms Act 2012, we challenged the presumption that data protection law is well complied with, or is effective enough alone to protect children’s data or digital rights.

We mustn’t forget, when talking about data in education, that children do not go to school in order to produce data or to have their lives recorded, monitored or profiled through analytics. It’s not the purpose of their activity. They go to school to exercise their right in law to receive education; data production is a by-product of the activity they are doing.

Education data as a by-product of the process

Thinking of these together, as children’s lives in by-products used by others, reminded me of the Alder Hey scandal, reported over twenty years ago but going back decades. In particular, the inquiry considered the huge store of body parts and residual human tissue of dead children accumulated between 1988 and 1995.

“It studied the obligation to establish ‘lack of objection’ in the event of a request to retain organs and tissue taken at a Coroner’s post-mortem for medical education and research.” (2001)

Thinking about the parallels between children’s personal data produced and extracted in education as a by-product, and organ and tissue waste as a by-product of routine medical procedures in the living, highlights several lessons that we could be drawing today about digital processing of children’s lives in data and child/parental rights.

Digital bodies of the dead less protected than their physical parts

It also exposes a gap in the actual scenario today: the bodily tissue and the bodily data of deceased children could be being treated differently, since the data protection regime only applies to the living. We should be forward looking and include rights here that go beyond the living “natural persons”, because our data does, and that may affect those we leave behind. It is insufficient for researchers and others who wish to use data without restriction to object, because this merely pushes off the problem, increasing the risk of public rejection of ‘hidden’ plans later (see DDM second reading briefing on recital 27, p 30/32).

What could we learn from handling body parts for the digital body?

In the children’s organ and tissue scandal, management failed to inform or provide suitable advice and support necessary to families.

Recommendations were made for change on consent to post-mortem examinations of children; a new approach to consent and an NHS hospital post-mortem consent form for children and all residual tissue were adopted sector-wide.

The retention and the destruction of genetic material is considered in the parental consent process required for any testing that continues to use the bodily material from the child. In the Alder Hey debate this was about deceased children, but similar processes are in place now for obtaining parental consent to research re-use and retention for waste or ‘surplus’ tissue that comes from everyday operations on the living.

But new law in the Data Protection and Digital Information Bill is going to undermine current protections for genetic material in the future, and has experts in that field extremely worried.

The DPDI Bill will consider the data of the dead for the first time

To date, data protection law covers only the data of, or related to, the living or “natural persons”. It is ironic that the rest of the Bill does the polar opposite: not about living and dead, but by redefining both personal data and research purposes it takes what is today personal data ‘in scope’ of data protection law and places it out of scope and beyond its governance, due to exemptions or changes in controller responsibility over time. Meaning a whole lot of data (about children and the rest of us) will not be covered by DP law at all. (Yes, those are bad things in the Bill.)

Separately, the new law as drafted will also create a divergence from this generally accepted scope, and will start to bring into scope the ‘personal data’ of the dead.

Perhaps as a result of limited parliamentary time, the DPDI Bill (see col. 939) is being used to include amendments on “Retention of information by providers of internet services in connection with death of child”, to amend the Online Safety Act 2023 “to enable OFCOM to give internet service providers a notice requiring them to retain information in connection with an investigation by a coroner (or, in Scotland, procurator fiscal) into the death of a child suspected to have taken their own life. The new clause also creates related offences.”

While primarily for the purposes of formal investigation into the role of social media in children’s suicide, and directions from Ofcom to social media companies to retain information for the period of one year beginning with the date of the notice, it highlights the difficulty of dealing with data after the death of a loved one.

This problem is perhaps no less acute where a child or adult has left no ‘digital handover’ via a legacy contact (at Apple, for example, you can assign someone to be this person in the event of your own death from any cause). But what happens if your relation has not set this up, and has been the holder of the digital key to your entire family photo history stored on a company’s cloud? Is this a question of data protection, or digital identity management, or of physical product ownership?

Harvesting children’s digital bodies is not what people want

In our DDM research and report, “the words we use in data policy: putting people back in the picture”, we explored how the language used to talk about personal data has a profound effect on how people think about it.

In the current digital landscape personal data can often be seen as a commodity: a product to mine, extract, exploit and pass around to others. This is more of an ownership and IP question, and broadly the U.S. approach. Data collection is excessive, in “Big Data” mountains and “data lakes” described just like the EU food surpluses of the 1970s. Extraction and use without effective controls creates toxic waste, is polluting, and is met with resistance. This environment is not sustainable and not what young people want. Enforcement of the data protection principles of purpose limitation and data minimisation should be helping here, but young people don’t see it.

When personal data is considered as ‘of the body’ or bodily residue, data as part of our life, the resulting view was that data is something that needs protecting. That need is generally held to be true, and represented in European human rights-based data laws and regulation. A key aim of protecting data is to protect the person.

In a workshop for that report’s preparation, teenagers expressed unease that data about them is being ‘harvested’ and exploited as human capital, and find their rights are not adequately enabled or respected. They find data can be used to replace conversation with them, and mean they are misrepresented by it; at the same time there is a paradox that a piece of data can be your ‘life story’ and a single source of truth advocating on your behalf.

Parental and children’s rights are grafted together and need recognised processes that respect this, as managed in health

Children’s competency and parental rights are grafted together in many areas of a child’s life and death, so why not by default in the digital environment? What additional mechanisms are needed in a process where both views carry legal weight? What specific challenges need extra attention in data protection law, given data that can be about more than one person, be controlled by others, and engage not only the child’s rights but parental rights too?

What might we learn for the regulation of practice around a child’s digital footprint from how health manages residual tissue processing? Who is involved, what are the steps of the process, and how is it communicated onwards, accompanying data flows around a system?

Where data protection rules do not apply, certain activities may still constitute an interference with Article 8 of the European Convention on Human Rights, which protects the right to private and family life. (WP 29 Opinion 4/2007 on the concept of personal data p24).

Undoubtedly the datafied child is an inseparable ‘data double’ of the child. Those who use data about children without their permission, without informing them or their families, and without giving children and parents the tools to exercise their rights to have a say and control their digital footprint in life and in death, might soon find themselves treated in the same way as the accountable individuals in the Alder Hey scandal were, many years after the events took place.



Minor edits and section sub-headings added on 18/12 for clarity plus a reference to the WP29 opinion 04/2007 on personal data.

OkCupid and Google DeepMind: Happily ever after? Purposes and ethics in datasharing

This blog post is also available as an audio file on soundcloud.


What constitutes the public interest must be set in a universally fair and transparent ethics framework if the benefits of research are to be realised, whether in social science, health, education and more. That framework will provide a strategy for getting the pre-requisite success factors right, ensuring research in the public interest is not only fit for the future, but thrives. There has been a climate change in consent. We need to stop talking about barriers that prevent datasharing and start talking about the boundaries within which we can share.

What is the purpose for which I provide my personal data?

‘We use math to get you dates’, says OkCupid’s tagline.

That’s the purpose of the site. It’s the reason people log in and create a profile, enter their personal data and post it online for others who are looking for dates to see. The purpose is to get a date.

When over 68K OkCupid users registered for the site to find dates, they didn’t sign up to have their identifiable data used and published in ‘a very large dataset’ and onwardly re-used by anyone with unregistered access. The users’ data were extracted “without the express prior consent of the user […].”

Whether the registration consent purposes are compatible with the purposes to which the researcher put the data should be a simple enough question. Are the research purposes what the person signed up to, or would they be surprised to find out their data were used like this?

Questions the “OkCupid data snatcher”, now self-confessed ‘non-academic’ researcher, thought unimportant to consider.

But it appears in the last month, he has been in good company.

Google DeepMind and the Royal Free, big players who should know how to handle data and consent well, paid too little attention to the very same question of purposes.

The boundaries of how the users of OkCupid had chosen to reveal information and to whom, have not been respected in this project.

Nor were these boundaries respected by the Royal Free London trust that gave out patient data for use by Google DeepMind with changing explanations, without clear purposes or permission.

The legal boundaries in these recent stories appear unclear or to have been ignored. The privacy boundaries deemed irrelevant. Regulatory oversight lacking.

The respectful ethical boundaries of consent to purposes have indisputably broken down, disregarding autonomy, whether by commercial org, public body, or lone ‘researcher’.

Research purposes

The crux of data access decisions is purposes. What question is the research to address – what is the purpose for which the data will be used? The intent by Kirkegaard was to test:

“the relationship of cognitive ability to religious beliefs and political interest/participation…”

In this case the question appears intended rather as a test of the data, not the data opened up to answer the question. While methodological studies matter, given the care and attention [or self-stated lack thereof] given to its extraction and any attempt to be representative and fair, it would appear this is not the point of this study either.

The data doesn’t include profiles identified as heterosexual male, because ‘the scraper was’. It is also unknown how many users hide their profiles, “so the 99.7% figure [identifying as binary male or female] should be cautiously interpreted.”

“Furthermore, due to the way we sampled the data from the site, it is not even representative of the users on the site, because users who answered more questions are overrepresented.” [sic]

The paper goes on to say photos were not gathered because they would have taken up a lot of storage space and could be done in a future scraping, and

“other data were not collected because we forgot to include them in the scraper.”

The data are knowingly of poor quality, inaccurate and incomplete. The project cannot be repeated, as ‘the scraping tool no longer works’. The ethical or peer review process is unclear, and the research purpose is at best vague. We could give the researcher the benefit of the doubt and say the intent appears to have been entirely benevolent, but it’s not clear what the intent was. I think it is clearly misplaced and foolish, but not malevolent.

The trouble is, it’s not enough to say, “don’t be evil.” These actions have consequences.

When the researcher asserts in his paper that, “the lack of data sharing probably slows down the progress of science immensely because other researchers would use the data if they could,”  in part he is right.

Google and the Royal Free have tried more eloquently to say the same thing. It’s not research, it’s direct care; in effect, ignore that people are no longer our patients and we’re using historical data without re-consent. We know what we’re doing, we’re the good guys.

However the principles are the same, whether it’s a lone project or a global giant. And they’re both wildly wrong as well. More people must take this on board. It’s the reason the public interest needs the Dame Fiona Caldicott review published sooner rather than later.

Just because there is a boundary to data sharing in place, does not mean it is a barrier to be ignored or overcome. Like the registration step to the OkCupid site, consent and the right to opt out of medical research in England and Wales is there for a reason.

We’re desperate to build public trust in UK research right now. So to assert that the lack of data sharing probably slows down the progress of science is misplaced, when it is getting ‘sharing’ wrong that caused the lack of trust in the first place, and that harms research.

A climate change in consent

There has been a climate change in public attitude to consent since care.data, clouded by the smoke and mirrors of state surveillance. It cannot be ignored. The EU GDPR supports it. Researchers may not like change, but there needs to be a corresponding adjustment in expectations and practice.

Without change, there will be no change. Public trust is low. As technology advances and if we continue to see commercial companies get this wrong, we will continue to see public trust falter unless broken things get fixed. Change is possible for the better. But it has to come from companies, institutions, and people within them.

Like climate change, you may deny it if you choose to. But some things are inevitable and unavoidably true.

There is strong support for public interest research but that is not to be taken for granted. Public bodies should defend research from being sunk by commercial misappropriation if they want to future-proof public interest research.

The purposes for which people gave consent are the boundaries within which you have permission to use data; that gives you freedom, within those limits, to use the data. Purposes and consent are not barriers to be overcome.

If research is to win back public trust, developing a future-proofed, robust ethical framework for data science must be a priority today.

Commercial companies must overcome the low levels of public trust they have generated to date if they ask ‘trust us because we’re not evil‘. If you can’t rule out the use of data for other purposes, it’s not helping. If you delay independent oversight, it’s not helping.

This case study, and indeed the recent Google DeepMind episode by contrast, demonstrate the urgency with which common expectations, oversight of applied ethics in research, who gets to decide what is ‘in the public interest’, and public engagement in data science must be worked out and made a priority, in the UK and beyond.

Boundaries in the best interest of the subject and the user

Society needs research in the public interest. We need good decisions made on what will be funded and what will not be; on what will influence public policy, and where attention for change is needed.

To do this ethically, we all need to agree what is fair use of personal data, when is it closed and when is it open, what is direct and what are secondary uses, and how advances in technology are used when they present both opportunities for benefit or risks to harm to individuals, to society and to research as a whole.

The benefits of research are potentially being compromised for the sake of arrogance, greed, or misjudgement, no matter the intent. Those benefits cannot come at any cost, or disregard public concern, or the price will be trust in all research itself.

In discussing this with social science and medical researchers, I realise not everyone agrees. For some, using de-identified data in trusted third party settings poses such a low privacy risk that they feel the public should have no say in whether their data are used in research, as long as it’s ‘in the public interest’.

For the DeepMind researchers and the Royal Free, they were confident that even using identifiable data was the “right” thing to do, without consent.

For the Cabinet Office datasharing consultation, the parts that will open up national registries, share identifiable data more widely and with commercial companies, they are convinced it is all the “right” thing to do, without consent.

How can researchers, society and government understand what is good ethics of data science, as technology permits ever more invasive or covert data mining and the current approach is desperately outdated?

Who decides where those boundaries lie?

“It’s research Jim, but not as we know it.” This is one aspect of data use that ethical reviewers will need to deal with as we advance the debate on data science in the UK, whether for independents or commercial organisations. Google said their work was not research. Is ‘OkCupid’ research?

If this research and data publication proves anything at all, and can offer lessons to learn from, it is perhaps these three things:

  1. Who is accredited as a researcher or ‘prescribed person’ matters, if we are considering new datasharing legislation and, for example, who the UK government is granting access to millions of children’s personal data today. Your idea of a ‘prescribed person’ may not be the same as the rest of the public’s.

  2. Researchers and ethics committees need to adjust to the climate change in public consent. Purposes must be respected in research, particularly when sharing sensitive, identifiable data, and no assumptions should be made that differ from the original purposes to which users gave consent.

  3. Data ethics and laws are desperately behind data science technology. Governments, institutions, and civil society all need to reach a common vision, and leadership on how to manage these challenges. Who defines these boundaries that matter?

How do we move forward towards better use of data?

Our data and technology are taking on a life of their own, in space which is another frontier, and in time, as data gathered in the past might be used for quite different purposes today.

The public are being left behind in the game-changing decisions made by those who deem they know best about the world we want to live in. We need a say in what shape society wants that to take, particularly for our children as it is their future we are deciding now.

How about an ethical framework for datasharing that supports a transparent public interest, which tries to build a little kinder, less discriminating, more just world, where hope is stronger than fear?

Working with people, with consent, with public support and transparent oversight shouldn’t be too much to ask. Perhaps it is naive, but I believe that with an independent ethical driver behind good decision-making, we could get closer to datasharing like that.

That would bring Better use of data in government.

Purposes and consent are not barriers to be overcome. Within these, shaped by a strong ethical framework, good data sharing practices can tackle some of the real challenges that hinder ‘good use of data’: training, understanding data protection law, communications, accountability and intra-organisational trust. More data sharing alone won’t fix these structural weaknesses in current UK datasharing which are our really tough barriers to good practice.

How our public data will be used in the public interest will not be a destination or have a well-defined happy ending; it is a long-term process which needs to be consensual, and there needs to be a clear path to setting out together and achieving collaborative solutions.

While we are all different, I believe that society shares for the most part, commonalities in what we accept as good, and fair, and what we believe is important. The family sitting next to me have just counted out their money and bought an ice cream to share, and the staff gave them two. The little girl is beaming. It seems that even when things are difficult, there is always hope things can be better. And there is always love.

Even if some might give it a bad name.

********

img credit: flickr/sofi01/ Beauty and The Beast  under creative commons

Can new datasharing laws win social legitimacy, public trust and support without public engagement?

I’ve been struck by stories I’ve heard on the datasharing consultation, on data science, and on data infrastructures as part of ‘government as a platform’ (#GaaPFuture) in recent weeks. The audio recorded by the Royal Statistical Society on March 17th is excellent, and there were some good questions asked.

There were even questions from insurance-backed panels about opening up more data for commercial users, calls for journalists to be seen as accredited researchers, and calls to include health data sharing. Three things that some stakeholders, all users of data, feel are missing from the consultation, and possibly some of those with the most widespread public concern and lowest levels of public trust. [1]

What I feel is missing in consultation discussions are:

  1. a representative range of independent public voice
  2. a compelling story of needs – why tailored public services benefit citizens from whom data is taken, not only benefit data users
  3. the impacts we expect to see in local government
  4. any cost/risk/benefit assessment of those impacts, or for citizens
  5. how the changes will be independently evaluated – as some are to be reviewed

The Royal Statistical Society and ODI have good summaries here of their thoughts, more geared towards the statistical and research aspects of data, infrastructure and the consultation.

I focus on the other strands that use identifiable data for targeted interventions. Tailored public services, Debt, Fraud, Energy Companies’ use. I think we talk too little of people, and real needs.

Why the State wants more datasharing is not yet a compelling story and public need and benefit seem weak.

So far the creation of new data intermediaries, giving copies of our personal data to other public bodies – and let’s be clear that this often means through commercial representatives like G4S, Atos, management consultancies and more – is yet to convince me of true public needs for the people, versus wants from parts of the State.

What the consultation hopes to achieve is new powers of law, to give increased data sharing legal authority. However this alone will not bring about the social legitimacy of datasharing that the consultation appears to seek through ‘open policy making’.

Legitimacy is badly needed if there is to be public and professional support for change and increased use of our personal data as held by the State, support which is missing today, as care.data starkly exposed. [2]

The gap between Social Legitimacy and the Law

Almost 8 months ago now, before I knew about the datasharing consultation work-in-progress, I suggested to BIS that there was an opportunity for the UK to drive excellence in public involvement in the use of public data by getting real engagement, through pro-active consent.

The carrot for this is achieving the goal that government wants: greater legal clarity, and the use of a significant number of consented people’s personal data for a complex range of secondary uses, as a secondary benefit.

It was ignored.

If some feel entitled to the right to infringe on citizens’ privacy through a new legal gateway because they believe the public benefit outweighs private rights, then they must also take on the increased balance of risk of doing so, and a responsibility to do so safely. It is in principle a slippery slope. Any new safeguards and ethics for how this will be done are however unclear in those data strands which are for targeted individual interventions. Especially if predictive.

Upcoming discussions on codes of practice [which have still to be shared] should demonstrate how this is to happen in practice, but codes are not sufficient. Laws which enable will be pushed to their borderline of legal and beyond that of ethical.

In England, who would have thought that the 2013 changes that permitted individual children’s data to be given to third parties [3] for educational purposes would mean giving highly sensitive, identifiable data to journalists without pupils’ or parents’ consent? The wording allows it. It is legal. However it fails the Data Protection Act’s legal requirement of fair processing. Above all, it lacks social legitimacy and common sense.

In Scotland, there is current anger over the intrusive ‘named person’ laws which lack both professional and public support and intrude on privacy. Concerns raised should be lessons to learn from in England.

Common sense says laws must take into account social legitimacy.

We have been told at the open policy meetings that this change will not remove the need for informed consent. To be informed means creating the opportunity for proper communications, and also knowing how you can use the service without coercion, i.e. not having to consent to secondary data uses in order to get the service, and knowing you can withdraw consent at any later date. How will that be offered, with ways of achieving the removal of data after sharing?

The stick for change is the legal duty to fair processing that the recent 2015 CJEU ruling [4] reiterated. Not just a nice-to-have, but State bodies’ responsibility to inform citizens when their personal data are used for purposes other than those for which those data had initially been consented and given. New legislation will not remove this legal duty.

How will it be achieved without public engagement?

Engagement is not PR

Failure to act on what you hear from listening to the public is costly.

Engagement is not done *to* people. Don’t think ‘explain why we need the data and its public benefit’ will work. Policy makers must engage with fears and not seek to dismiss or diminish them, but acknowledge and mitigate them by designing technically acceptable solutions. Solutions that enable data sharing in a strong framework of privacy and ethics, not that see these concepts as barriers. Solutions that have social legitimacy because people support them.

Mr Hunt’s promised February 2014 opt-out of anonymised data being used in health research has yet to be put in place, and has had immeasurable costs for delayed public research, and for public trust.

How long before people consider suing the DH as data controller for misuse? From where does the arrogance stem that decides to ignore legal rights, moral rights and public opinion of more people than those who voted for the Minister responsible for its delay?


This attitude is what fails care.data and the harm is ongoing to public trust and to confidence for researchers’ continued access to data.

The same failure was pointed out by the public members of the tiny Genomics England public engagement meeting two years ago, in March 2014, called to respond to concerns over the lack of engagement and potential harm to existing research. The comms lead suggested that the new model of the commercialisation of the human genome in England, to be embedded in the NHS by 2017 as standard clinical practice, was like steam trains in Victorian England opening up the country to new commercial markets. The analogy was felt by the lay attendees to be, and I quote, ‘ridiculous.’

Exploiting confidential personal data for public good must have support and good two-way engagement if it is to get that support, and what is said and agreed must be acted on to be trustworthy.

Policy makers must take into account broad public opinion, and that is unlikely to be submitted to a Parliamentary consultation. (Personally, I first knew such processes existed only when care.data was brought before the Select Committee in 2014.) We already know what many in the public think about sharing their confidential data from the work on care.data: objections to third party access, and to lack of consent. Just because some policy makers don’t like what was said, doesn’t make that public opinion any less valid.

We must bring to the table the public voice from past but recent public engagement work on administrative datasharing [5], the voice of the non-research community, and the voice of those who are not the stakeholders who will use the data but the ‘data subjects’, the public whose data are to be used.

Policy Making must be built on Public Trust

Open policy making is not open just because it says it is. Who has been invited, participated, and how their views actually make a difference on content and implementation is what matters.

Adding controversial ideas at the last minute is terrible engagement; it makes the process less trustworthy and diminishes its legitimacy.

This last minute change suggests some datasharing will be dictated despite critical views in the policy making and without any public engagement. If so, we should ask policy makers on what mandate?

Democracy depends on social legitimacy. Once you lose public trust, it is not easy to restore.

Can new datasharing laws win social legitimacy, public trust and support without public engagement?

In my next post I’ll look at some of the public engagement work done on datasharing to date, and think about ethics in how data are applied.

*************

References:

[1] The Royal Statistical Society data trust deficit

[2] “The social licence for research: why care.data ran into trouble,” by Carter et al.

[3] FAQs: Campaign for safe and ethical National Pupil Data

[4] CJEU Bara 2015 Ruling – fair processing between public bodies

[5] Public Dialogues using Administrative data (ESRC / ADRN)

img credit: flickr.com/photos/internetarchivebookimages/

On the Boundaries of Being Human and Big Data

Atlas, the robot created by Boston Dynamics, won hearts and minds this week as it stoically survived man being mean. Our collective human response was an emotional defence of the machine, and criticism of its unfair treatment by its tester.

Some on Twitter recalled the incident of Lord of The Flies style bullying by children in Japan that led the programmers to create an algorithm for ‘abuse avoidance’.

The concepts of fairness and of decision making algorithms for ‘abuse avoidance’ are interesting from perspectives of data mining, AI and the wider access to and use of tech in general, and in health specifically.

If the decision to avoid abuse can be taken out of an individual human’s hands and based on unfathomable amounts of big data, where are its limits when applied to human behaviour and activity?

When it is decided that an individual’s decision-making capability is impaired or has been forfeited, their consent may be revoked in their best interest.

Who has oversight of the boundaries of what is acceptable for one person, or for an organisation, to decide what is in someone else’s best interest, or indeed, the public interest?

Where these boundaries overlap – personal abuse avoidance, individual best interest and the public interest – and how society manages them, with what oversight, is yet to be widely debated.

The public will shortly be given the opportunity to respond to plans for the expansion of administrative datasharing in England through consultation.

We must get involved, and it must be the start of a debate and dialogue, not simply a tick-box to a done deal, if data derived from us are to be used as a platform for the future to “achieve great results for the NHS and everyone who depends on it.”

Administering applied “abuse avoidance” and Restraining Abilities

Administrative uses and secondary research using the public’s personal data are applied not only in health, but across the board of public bodies, including big plans for tech in the justice system.

An example in the news this week of applied tech and its restraint on human behaviour was ankle monitors. One type was abandoned by the MOJ at a cost of £23m on the same day that more funding for transdermal tags was announced in London.

The use of this technology as a monitoring tool should not of itself be a punishment. It is said compliance is not intended to affect the dignity of individuals who are being monitored, but through the collection of personal and health data it will ensure the deprivation of alcohol – avoiding its abuse for a person’s own good and in the public interest. Is it fair?

Abstinence orders might be applied to those convicted of crimes such as assault, being drunk and disorderly and drunk driving.

We’re yet to see much discussion of how these varying degrees of integration of tech with the human body, and human enhancement, will happen through robot elements in our human lives.

How will the boundaries of what is possible and desirable be determined and by whom with what oversight?

What else might be considered as harmful as alcohol to individuals and to society? Drugs? Nicotine? Excess sugar?

As we wonder about the ethics of how humanoids will act and the aesthetics of how human they look, I wonder how humane are we being, in all our ‘public’ tech design and deployment?

Umberto Eco who died on Friday wrote in ‘The birth of ethics’ that there are universal ideas on constraints, effectively that people should not harm other people, through deprivation, restrictions or psychological torture. And that we should not impose anything on others that “diminishes or stifles our capacity to think.”

How will we as a society collectively agree what that should look like, how far some can impose on others, without consent?

Enhancing the Boundaries of Being Human

Technology might be used to impose bodily boundaries on some people, but tech can also be used for the enhancement of others – like the brilliant Angel Giuffria’s arm, retweeted this week.

While the technology in this case is literally hands-on in its application, increasingly it is not the technology itself but the data that it creates or captures which enables action through data-based decision making.

Robots that are tiny may be given big responsibilities to monitor and report massive amounts of data. What if we could swallow them?

Data if analysed and understood, become knowledge.

Knowledge can be used to inform decisions and take action.

So where are the boundaries of what data may be extracted, information collated, and applied as individual interventions?

Defining the Boundaries of “in the Public Interest”

Where are boundaries of what data may be created, stored, and linked to create a detailed picture about us as individuals, if the purpose is determined to be in the public interest?

Who decides which purposes are in the public interest? What qualifies as research purposes? Who qualifies as meeting the criteria of ‘researcher’?

How far can research and interventions go without consent?

Should security services and law enforcement agencies always be entitled to get access to individuals’ data ‘in the public interest’?

That’s something Apple is currently testing in the US.

Should research bodies always be entitled to get access to individuals’ data ‘in the public interest’?

That’s something care.data tried and failed to assume the public supported and has yet to re-test. Impossible before respecting the opt out that was promised over two years ago in March 2014.

The question how much data research bodies may be ‘entitled to’ will be tested again in the datasharing consultation in the UK.

Data already gathered may be used in research differently from how we consented to their use at collection. How this changes over time, and its potential for scope creep, is seen in Education. Pupil data has gone from passive collection of name, to giving it out to third parties, to use in national surveys, so far.

And what of the future?

Where is the boundary between access and use of data not in enforcement of acts already committed but in their prediction and prevention?

If you believe there should be an assumption of law enforcement access to data when data are used for prediction and prevention, what about health?

Should there be any difference between researchers’ access to data when data are used for past analysis and for use in prediction?

If ethics define the boundary between what is acceptable and where actions by one person may impose something on another that “diminishes or stifles our capacity to think” – that takes away our decision making capacity – that nudges behaviour, or acts on behaviour that has not yet happened, who decides what is ethical?

How does a public that is poorly informed about current data practices, become well enough informed to participate in the debate of how data management should be designed today for their future?

How Deeply Mined should our Personal Data be?

The application of technology, non-specific but not yet AI, was also announced this week in the Google DeepMind work in the NHS.

The co-founder of its first key launch app provided a report that established the operating framework for the Behavioural Insights Team, set up by Prime Minister David Cameron.

A number of highly respected public figures have been engaged to act in the public interest as unpaid Independent Reviewers of Google DeepMind Health. It will be interesting to see what their role is and how transparent its workings and public engagement will be.

The recent consultation on the NHS gave overwhelming feedback that the public does not support the direction of current NHS change. Even having removed all responses associated with ‘lefty’ campaigns, the concerns listed on page 11 are consistent, including a request that the Government “should end further involvement of the private sector in healthcare”. It appears from the response that this engagement exercise will feed little into practice.

The strength of feeling should however be a clear message to new projects that people are passionate that equal access to healthcare for all matters and that the public wants to be informed and have their voices heard.

How will public involvement be ensured as complexity increases in these healthcare add-ons and changing technology?

Will Google DeepMind pave the way to a new approach to health research? A combination of ‘nudge’ behavioural insights, advanced neural networks, Big Data and technology is powerful. How will that power be used?

I was recently told that if new research is not pushing the boundaries of what is possible and permissible then it may not be worth doing, as it’s probably been done before.

Should anything that is new that becomes possible be realised?

I wonder how the balance will be weighted in requests for patient data and their application, in such a high profile project.

Will NHS Research Ethics Committees turn down in-house research proposals in hospitals that benefit the institution or advance its reputation? Will the HSCIC ever feel able to say no to data use by Google DeepMind?

Ethics committees safeguard the rights, safety, dignity and well-being of research participants, independently of research sponsors, whereas these representatives are not all independent of commercial supporters. And it has not been claimed that this is trying to be an ethics panel. But oversight is certainly needed.

The boundaries of ownership between what is seen to benefit commercial and state interests in modern health investment are perhaps more than blurred to an untrained eye. Genomics England – the government’s flagship programme giving commercial access to the genomes of 100K people – stockholding companies, data analytics companies, genome analytics companies, genome collection, human tissue research, and commercial and academic research often share directors, working partnerships and funders. That’s perhaps unsurprising given such a specialist small world.

It’s exciting to think of the possibilities if, “through a focus on patient outcomes, effective oversight, and the highest ethical principles, we can achieve great results for the NHS and everyone who depends on it.”

Where will an ageing society go, if medics can successfully treat more cancer for example? What diseases will be prioritised and others left behind in what is economically most viable to prevent? How much investment will be made in diseases of the poor or in countries where governments cannot afford to fund programmes?

What will we die from instead? What happens when some causes of ‘preventative death’ are deemed more socially acceptable than others? Where might prevention become socially enforced through nudging behaviour into new socially acceptable or ethical norms?

Don’t be Evil

Given the leading edge of the company and its curiosity-by-design to see how far “can we” will reach, “don’t be evil” may be very important. But “be good” might be better. Where is that boundary?

The boundaries of what ‘being human’ means and how Big Data will decide and influence that, are unclear and changing. How will the law and regulation keep up and society be engaged in support?

Data principles such as fairness, keeping data accurate, complete and up-to-date, and ensuring data are not excessive and are retained for no longer than necessary for the purpose, are being widely ignored or exempted under the banner of ‘research’.

Can data use retain a principled approach despite this? And if we accept commercial users making profit based on public data, will those principles from academic research remain in practice?

Exempt from the obligation to give a copy of personal data to an individual on request if data are for ‘research’ purposes, data about us and our children are extracted and stored ‘without us’. Forever. That means in a future that we cannot see, but that Google DeepMind, among others, is designing.

Lay understanding, and that of many clinical professionals, is likely to be left far behind if advanced technologies and use of big data decision-making algorithms are hidden in black boxes.

Public transparency of the use of our data and future planned purposes are needed to create trust that these purposes are wise.

Data are increasingly linked and more valuable when identifiable.

Any organisation that wants to future-proof its reputational risk will make sure data collection and use today is with consent, since future outcomes derived are likely to be interventions for individuals or society. Catching up on consent will be hard unless designed in now.

A Dialogue on the Boundaries of Being Human and Big Data

Where the commercial, personal, and public interests are blurred, the highest ethical principles are going to be needed to ensure ‘abuse avoidance’ in the use of new technology, in increased data linkage and resultant data use in research of many different kinds.

How we as a society achieve the benefits of tech and datasharing and where its boundaries lie in “the public interest” needs public debate to co-design the direction we collectively want to partake in.

Once that is over, change needs to be supported by a method of oversight that is responsive to new technology, data use, and its challenges.

What a channel for ongoing public dialogue, challenge and potentially recourse might look like, should be part of that debate.

Thoughts since #UKHC15. UK health datasharing.

The world you will release your technology into, is the world you are familiar with, which is already of the past. Based on old data.

How can you design tools and systems fit for the future? And for all?

For my 100th post, and the first of 2016, here is a summary of some of my thoughts prompted by #UKHC15: several grains of thought related to UK health data that have been growing for some time.

1000 words on “Hard things: identity, data sharing and consent.” The fun run version.

Do we confuse hard with complex? Hard does not have to mean difficult. Some things seem to be harder than necessary, because of politics. I’ve found this hard to write. Where to start?

The search to capture solutions has been elusive.

The starting line: Identity

Then my first thoughts on identity got taken care of by Vinay Gupta in this post, better than I could. (If you want a long read about identity, you might want to get a hot drink like I did and read and re-read. It says it’ll take an hour. It took me several, in absorption and thinking time. And worth it.)

That leaves data sharing and consent. Both of which I have written many of my other 99 posts about in the last year. So what’s new?

Why are we doing this: why aren’t we there yet?

It still feels very much that in many parts of the health service, and in broader government thinking on ‘digital’, the attitude is ‘we need to do something’. The why is missing, and therefore achieving and measuring success is hard.

Often we start with a good idea and set about finding a solution for how to achieve it. But if the ‘why’ behind the idea is shaky to start with, the solution may falter as soon as it gets difficult. No one seems to know what #paperless actually means in practice.

So why try and change things? Fixing problems, rather than coming up with good ideas, is another way to think of it, as they suggested at #ukhc15. It was a meet-up for people who want to make things better, usually for others; sometimes that involves improving the systems they work with directly, or support others in.

I no longer work in systems introductions or enhancement processes, although I have a lay role in research and admin data. But as regular readers know, most of the last two years has been all about the data: care.data.

More often than not, in #ukhc2015 discussions that focused on “the data”, I would try and bring people back to thinking about what the change is trying to solve, what it wants to “make better”, and why.

There’s a broad tendency to simply think more data = better. Not true, and I’ll show a case study later as to why. We must question why.

Why doesn’t everyone volunteer or not want to join in?

Very many people who have spoken with me over the last two years have shared their concrete concerns over the plans to share GP data, and they do not get heard. They did not see a need to share their identifiable personal confidential data, or see why truly anonymous data would not be sufficient for health planning, for example.

Homeless men, women at risk, people from the travelling community, those with disabilities, patients with stigmatising conditions, minorities, children, questions of sexual orientation – not to mention lawyers or agencies representing them. Or the 11 million of our adult population not online. Few of whom we spoke about. Few of whom we heard from at #UKHC15. Yet put together, these individuals make up not only a significant number of people, but a disproportionately high proportion of the highest demands on our health and social care services.

The inverse care law appears magnified in its potential when applied to digital, and should magnify the importance of thinking about access. How will care.data make things better for them, and how will the risks be mitigated? And are those costs being properly assessed if there is no assessment of the current care.data business case and seemingly, since 2012 at least, no serious effort to look at alternatives?

The finish line? We can’t see what it looks like yet.

The #ukhc2015 event was well run, and I liked the spontaneity of people braver than me who were keen to lead sessions and did it well. As someone who is white, living in a ‘nice’ area, I am privileged. It was a privilege to spend a day at #UKHC15, packed with people who clearly think about hard things all the time. People who want to make things better. People who were welcoming to nervous first-timers at an ‘un’conference over a shared lunch.

I hope the voices of those who can’t attend these events, and outside London, are equally accounted for in all government 2016 datasharing plans.

This may be the last chance after years of similar consultations have failed to deliver workable, consensual public data sharing policies.

We have vast streams of population-wide data stored in the UK, about which, the population is largely ignorant. But while the data may be from 25 years ago, whatever is designed today is going to need to think long term, not how do we solve what we know, but how do we design solutions that will work for what we don’t.

Transparency here will be paramount to trust, if future decisions are made for us, or those we make for ourselves are ‘influenced’ by algorithms, machine learning and ‘mindspace’ work.

As Thurgood Marshall said,

“Our whole constitutional heritage rebels at the thought of giving government the power to control men’s minds.”

Control over who we are, and who the system thinks we are, becomes a whole new level of discussion if we are being told how to make a decision, especially where the decision is toward a direction of public policy based on political choice. If pensions are not being properly funded, not allocating taxes differently to fund them is a choice the current government has made, while the DWP seeks to influence our decisions, to make us save more in private pensions.

And how about, in data discussions, we make an effort to start talking a little more clearly in the same terms – and stop packaging ‘sharing’ as if it is something voluntary in population-wide compulsory policy.

It’s done to us, not with us, in far too many areas of government we do not see. Perhaps this consultation might change that, but it’s the nth such consultation and I want to be convinced this one is intent on real change. It’s only open for a few weeks, and this meet-up for discussion appeared to be something only organised in London.

I hope we’ll hear commitment to real change in support of people and the uses of our personal data by the state in the new #UkDigiStrategy, not simply more blue-sky thinking and drinking the ‘datasharing’ kool-aid. We’ve been talking in the UK for far too long about getting this right.

Let’s see the government serious about making it happen. Not for government, but in the public interest, in a respectful and ethical partnership with people, and not find changes forced upon us.

No other foundation will be fit for a future in which care.data, the phenotype data, is to be the basis for an NHS so totally personalised.

If you want a longer read, read on below for my ten things in detail.

Comment welcome.

########

Hard things: The marathon version, below.

The Economic Value of Data vs the Public Good? [2] Pay-for-privacy, defining purposes

Differentiation. Telling customers apart and grouping them by similarities is what commercial data managers want.

It enables them to target customers with advertising and sales promotion most effectively. They segment the market into chunks and treat one group differently from another.

They use market research data, and our loyalty card data, to get that detailed information about customers, and decide how to target each group for what purposes.
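As a concrete, purely illustrative sketch of what that differentiation looks like in practice, the snippet below groups invented loyalty-card records into segments with a standard clustering algorithm – the basic mechanic behind treating one group of customers differently from another. Every feature name and figure here is hypothetical, not drawn from any real scheme.

```python
# Toy illustration of loyalty-card segmentation: cluster customers by
# spending behaviour, then treat each segment differently.
# All figures are invented; real schemes use far richer features.
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(seed=42)

# Hypothetical features per customer: weekly spend (GBP), visits per week,
# and the share of purchases bought on promotion.
customers = np.vstack([
    rng.normal([120, 3.0, 0.10], [15, 0.5, 0.03], size=(50, 3)),  # big weekly shop
    rng.normal([25, 1.0, 0.40], [5, 0.3, 0.05], size=(50, 3)),    # bargain hunters
    rng.normal([60, 5.0, 0.15], [10, 1.0, 0.04], size=(50, 3)),   # frequent top-ups
])

# Group customers by similarity: the 'differentiation' step.
segments = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(customers)

# Each segment can then be targeted with different offers.
for s in range(3):
    members = customers[segments == s]
    print(f"segment {s}: {len(members)} customers, "
          f"mean spend £{members[:, 0].mean():.0f}, "
          f"{members[:, 1].mean():.1f} visits/week")
```

The point for this discussion is that nothing in that loop needs the customer’s awareness: the grouping and the targeting happen invisibly, as a by-product of the shopping data.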

As the EU states debate how research data should be used and how individuals should be both enabled and protected through it, they might consider separating research purposes by type.

While people are happy for the state to use their data without active consent for bona fide research, they are not happy for it to be used for commercial consumer research purposes. [ref part 1]

Separating consumer and commercial market research from the definition of research purposes for the public good by the state, could be key to rebuilding people’s trust in government data use.

Having separate purposes would permit separate consent and control procedures to govern them.

But what role will profit play in the state’s definition of ‘in the public interest’? Is it in the public interest if UK plc makes money from its citizens? And how far along any gauge of public feeling will a government be prepared to go, to push making money for UK plc at our own personal cost?

Pay-for-privacy?

In January this year, the Executive Vice President at Dunnhumby, Nishat Mehta, wrote in this article [7] about how he sees the future of data sharing between consumers and commercial traders:

“Imagine a world where data and services that are currently free had a price tag. You could choose to use Google or Facebook freely if you allowed them to monetize your expressed data through third-party advertisers […]. Alternatively, you could choose to pay a fair price for these services, but use of the data would be forbidden or limited to internal purposes.”

He too talked about health data. Specifically, about its value when accurate, expressed and consensual:

“As consumers create and own even more data from health and fitness wearables, connected devices and offline social interactions, market dynamics would set the fair price that would compel customers to share that data. The data is more accurate, and therefore valuable, because it is expressed, rather than inferred, unable to be collected any other way and comes with clear permission from the user for its use.”

What his pay-for-privacy model appears to have forgotten, is that this future consensual sharing is based on the understanding that privacy has a monetary value. And that depends on understanding the status quo.

It is based on the individual realising that there is money made from their personal data by third parties today, and that there is a choice.

The extent of this commercial sharing and re-selling will be a surprise to most loyalty card holders.

“For years, market research firms and retailers have used loyalty cards to offer money back schemes or discounts in return for customer data.”

However, despite being signed up for years, I believe most of the public are unaware of the implied deal. It may be in the small print. But everyone knows that few read it, in the rush to sign up to save money.

Most shoppers believe the supermarket is buying our loyalty. We return to spend more cash because of the points. Points mean prizes, petrol coupons, or pounds off.

We don’t realise our personal identity and habits are being invisibly analysed to the nth degree and sold by supermarkets as part of those sweet deals.

But is pay-for-privacy discriminatory? By creating the freedom to choose privacy as a pay-for option, it excludes those who cannot afford it.

Privacy should be seen as a human right, not as a pay-only privilege.

Today we use free services online but our data is used behind the scenes to target sales and ads often with no choice and without our awareness.

Today we can choose to opt in to loyalty schemes and trade our personal data for points and with it we accept marketing emails, and flyers through the door, and unwanted calls in our private time.

The free option is never to sign up at all, but by doing so customers pay a premium: they forgo the vouchers and discounts, or trade away the convenience of online shopping.

There is a personal cost in all three cases, albeit in a rather opaque trade off.

 

Does the consumer really benefit in any of these scenarios or does the commercial company get a better deal?

In the sustainable future, only a consensual system based on understanding and trust will work well. That’s assuming that by ‘well’, we mean organisations wish to prevent the kind of PR disaster and practical disruption that hit NHS data in the last year, through care.data.

For some people, the personal cost of the infringement of privacy by commercial firms is great; others care less. But once informed, there is a choice on offer even today to pay for privacy from commercial business: either pay the price of a premium on goods by staying out of loyalty schemes, or pay with our privacy.

In future we may see a more direct pay-for-privacy offering along the lines Nishat Mehta describes.

And if so, citizens will be asking ever more about how their data is used in all sorts of places beyond the supermarket.

So how can the state profit from the economic value of our data but not exploit citizens?

‘Every little bit of data’ may help consumer marketing companies. Gaining it or using it in unethical ways that knowingly continue bad practices won’t win back consumers’ and citizens’ trust.

And whether it is a commercial consumer company or the state, people feel exploited when their information is used to make money without their knowledge and for purposes with which they disagree.

Consumer commercial use and use in bona fide research are separate in the average citizen’s mind and understood in theory.

Achieving differentiation in practice in the definition of research purposes could be key to rebuilding consumers’ trust.

And that would be valid for all their data, not only what data protection labels as ‘personal’. For the average citizen, all data about them is personal.

Separating in practice how consumer businesses use customer data for company profit, how the benefits of that trade in our privacy are shared with individuals, and how bona fide public research benefits us all, would help win continued access to our data.

Citizens need and want to be offered paths to see how our data are used, in ways which are transparent and easy to access.

Cutting away purposes which appear exploitative from purposes in the public interest could benefit commerce, industry and science.

By reducing the private cost to individuals of the loss of control and privacy of our data, citizens will be more willing to share.

That will create more opportunity for data to be used in the public interest, which will increase the public good, both economic and social, which the government hopes to see expand.

And that could mean a happy ending for everyone.

The Economic Value of Data vs the Public Good? They need not be mutually exclusive. But if one exploits the other, it has the potential to continue to be corrosive. UK plc cannot continue to assume its subjects are willing creators and repositories of information to be used for making money. [ref 1] To do so has lost trust in all uses, not only those in which citizens felt exploited. [6]

The economic value of data used in science and health, whether to individual app creators, big business, or the commissioning state in planning and purchasing, is clear. It is perhaps not often quantified or discussed in the public domain, but it clearly exists.

Those uses can co-exist with good practices to help people understand what they are signed up to.

By defining ‘research purposes’, by making how data are used transparent, and by giving real choice in practice to consent to differentiated secondary uses of data, both commercial and state actors will secure their long-term access to data.

Privacy, consent and separation of purposes will be wise investments for that growth across commercial and state sectors.

Let’s hope they are part of the coming ‘long-term economic plan’.

****

Related to this:

Part one: The Economic Value of Data vs the Public Good? [1] Concerns and the cost of Consent

Part two: The Economic Value of Data vs the Public Good? [2] Pay-for-privacy and Defining Purposes.

Part three: The Economic Value of Data vs the Public Good? [3] The value of public voice.

****

image via Tesco media

[6] Ipsos MORI research with the Royal Statistical Society into the Trust deficit with lessons for policy makers https://www.ipsos-mori.com/researchpublications/researcharchive/3422/New-research-finds-data-trust-deficit-with-lessons-for-policymakers.aspx

[7] AdExchanger January 2015 http://adexchanger.com/data-driven-thinking/the-newest-asset-class-data/

[8] Tesco clubcard data sale https://jenpersson.com/public_data_in_private_hands/  / Computing 14.01.2015 – article by Sooraj Shah: http://www.computing.co.uk/ctg/feature/2390197/what-does-tescos-sale-of-dunnhumby-mean-for-its-data-strategy

[9] Direct Marketing 2013 http://www.dmnews.com/tesco-every-little-bit-of-customer-data-helps/article/317823/

 

Wearables: patients will ‘essentially manage their data as they wish’. What will this mean for diagnostics, treatment and research and why should we care? [#NHSWDP 3]

 

Consent to data sharing appears to be a new choice firmly available on the NHS England patient menu, if patient ownership of our own records is clearly acknowledged as ‘the operating principle legally’.

Simon Stevens had just said in his keynote speech:

“..smartphones; […] the single most important health treatment and diagnostic tool at our disposal over the coming decade and beyond ” Simon Stevens, March 18 2015.

Tim Kelsey, Director Patients and Information, NHS England, then talked about consent in the Q&A:

“We now acknowledge the patient’s ownership of the record […] essentially, it’s always been implied, it’s still not absolutely explicit but it is the operating principle now legally for the NHS.

“So, let’s get back to consent and what it means for clinical professionals, because we are going to move to a place where people will make those decisions as they currently do with wearable devices, and other kinds of mobile, and we need to get to a point where people can plug their wearable device into their medical record, and essentially manage their data as they wish.

“It is essentially, their data.”

How this principle has been applied in the past, how it is being applied now, and how it may change, matters: it will affect many other areas.

Our personal health data is the business intelligence of the health industry’s future.

Some parts of that industry will say we don’t share enough data. Or don’t use it in the right way.  For wearables designed as medical devices, it will be vital to do so.

But before some launch into polemics on the rights and wrongs of blanket ‘data sharing’, we should be careful what types of data we mean, and for what purposes they are extracted. It matters when discussing consent and sharing.

We should be clear to separate consent to data sharing for direct treatment from consent for secondary purposes other than care (although Mr Kelsey hinted at a conflation of the two in a later comment). The promised opt-out from sharing for secondary uses is pending legal change. At least that’s what we’ve been told.

Given that patient data from hospitals and a range of NHS health settings today are used for secondary purposes without consent – despite the political acknowledgement that patients have an opt-out – this sounded a bold new statement, and contrasted with his past stance.

Primary care data extraction for secondary uses, in the care.data programme, was not intended to be consensual. Will it become so?

Its plan so far assumes consent, on an opt-out model, despite professional calls from some, such as at the BMA ARM, to move to an opt-in model, and despite the acknowledged risk of harm that it will do to patient trust.

The NHS England Privacy Assessment said: ‘The extraction of personal confidential data from providers without consent carries the risk that patients may lose trust in the confidential nature of the health service.’

A year into the launch, in January 2014, a national communications plan should have solved the need for fair processing, but another year on, in March 2015, there is a postcode-lottery, pilot approach.

If, in principle, data sharing is to be decided by consensual active choice, as it “is the operating principle now legally for the NHS”, then why not now, for care.data, and for all?

When will the promised choice be enacted to withhold data from secondary uses and sharing with third parties beyond the HSCIC?

“we are going to move to a place where people will make those decisions as they currently do with wearable devices” [Widening digital participation, at the King’s Fund March 2015]

So when will we see this ‘move’ and what will it mean?

Why plan to continue to extract more data under the ‘old’ assumption principle, if ownership of data is now with the individual?

And who is to make the move first – NHS patients or NHS patriarchy – if patients use wearables before the NHS is geared up to them?

Looking back or forward thinking?

Last year’s programme has become outdated not only in principle but in digital best practice, if top-down dictatorship is out and the individual is now to “manage their data as they wish.”

What might happen in the next two years, in the scope of the Five Year Forward Plan or indeed by 2020?

This shift in data creation, sharing and acknowledged ownership may mean epic change for expectations and access.

It will mean that people’s choices around data sharing, from patients and healthy controls alike, need to be considered early on in research and projects. Engagement, communication and involvement will be all about trust.

For the ‘worried well’, wearables could ‘provide digital “nudges” that will empower us to live healthier and better lives‘ or perhaps not.

What understanding do we yet have of the big picture, of what this may mean, and of where apps fit into the wider digital NHS and beyond?

Patients’ right to choose

Rights to information and decision-making responsibility are shifting towards the patient in other applied areas of care.

But what data will patients truly choose to apply and what to share, manipulate or delete? Who will use wearables and who will not, and how will that affect the access to and delivery of care?

What data will citizens choose to share in future and how will it affect the decision making by their clinician, the NHS as an organisation, research, public health, the state, and the individual?

Selective deletion could change a clinical history and clinician’s view.

Selective accuracy in terms of false measurements [think diabetes], or in medication, could kill people quickly.

How are apps to be regulated? Will only NHS ‘approved’ apps be licensed for use in the NHS and made available to choose from, and what happens to the data of patients who use a non-approved app?
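
To make the question concrete, here is a minimal sketch of what an ‘approved apps only’ gate might mean in practice; the registry, app names and versions below are entirely invented, since nothing like this exists today:

```python
# Toy illustration only: the registry and app identifiers here are hypothetical.
APPROVED_APPS = {"nhs.mood-tracker": "1.2", "nhs.sleep-diary": "2.0"}

def may_write_to_record(app_id: str, version: str) -> bool:
    """Gatekeeping sketch: only an approved app, at its approved version,
    is allowed to write into a care record."""
    return APPROVED_APPS.get(app_id) == version

print(may_write_to_record("nhs.mood-tracker", "1.2"))  # True
print(may_write_to_record("fitness.fun-app", "9.9"))   # False: unapproved apps stay outside
```

Even this toy version begs the policy questions: who maintains the list, who approves versions, and what happens to the data a patient has already created in an app that later loses approval?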

How will any of their data be accessed and applied in primary care?

Knowledge is used to make choices and inform decisions. Individuals make choices about their own lives, clinicians make decisions for and with their patients in their service provision, organisations make choices about their business model which may include where to profit.

Our personal health data is the business intelligence of the health industry’s future.

Who holds the balance of power in that future delivery model for healthcare in England is going to be an ongoing debate of epic proportions, but it will likely change in drips rather than a flood.

It has already begun. Lobbyists and companies who want access to data are apparently asking for significant changes to be made in the access to micro data held at the ONS. EU laws are changing.

The players who hold data, will hold knowledge, will hold power.

If the NHS were a monopoly board game, data intermediaries would be some of the wealthiest sites, but the value they create from publicly funded NHS data, should belong in the community chest.

If consent is to be with the individual for all purposes other than direct care, then all data sharing bodies and users had best set their expectations accordingly. Patients will need to make wise decisions, for themselves and in the public interest.

Projects for research and sharing must design trust and security into plans from the start or risk failure through lack of participants.

It’s enormously exciting.  I suspect some apps will be rather well hyped and deflate quickly if not effective. Others might be truly useful. Others may kill us.

As twitter might say, what a time to be alive.

Digital opportunity for engaging citizens, as far as apps and data sharing go, is not only about how the NHS will engage citizens, but about how citizens will engage with what the NHS offers.

Consent, it seems, will one day be king.

Will there, or won’t there, be a wearables revolution?

Will we be offered or choose digital ‘wellness tools’ or medically approved apps? Will we trust them for diagnostics and treatment? Or will few become more than a fad for the worried well?

Control for the individual over their own data, and the choice to make their own decisions about what to store, share or deny, may come to rule in practice as well as in theory.

That practice will need to differentiate between purposes for direct clinical care and secondary uses, as it does today, and be supported and protected in legislation, protecting patient trust.

“We are going to move to a place where people will make those decisions as they currently do with wearable devices, and other kinds of mobile, and we need to get to a point where people can plug their wearable device into their medical record, and essentially manage their data as they wish.”

However, as ‘choice’ was the buzzword for NHS care in recent years – conflated with increasing the use of private providers – will consent be abused to mean a shift of responsibility from the state to the individual, with caveats for how it could affect care?

With that shift in responsibility for decision making, as with personalised budgets, will we also see a shift in responsibility for payment choices from state to citizen?

Will our lifestyle choices in one area exclude choice in another?

Could app data showing unhealthy purchases from the supermarket, or refusal to share our health data, one day be seen as a reason to decline care? Mr Kelsey hinted at this last question in the meeting.

Add a population stratified by risk groups into the mix, and we have lots of legitimate questions to ask about the future vision of the NHS.

He went on to say:

“we have got some very significant challenges to explore in our minds, and we need to do, quite urgently from a legal and ethical perspective, around the advent of machine learning, and …artificial intelligence capable of handling data at a scale which we don’t currently do […].

“I happen to be the person responsible in the NHS for the 100K genomes programme […]. We are on the edge of a new kind of medicine, where we can also look at the interaction of all your molecules, as they bounce around your DNA. […]

“The point is, the principle is, it’s the patient’s data and they must make decisions about who uses it and what they mash it up with.”

How well that is managed will determine who citizens will choose to engage and share data with, inside and outside our future NHS.

Earlier at the event, Simon Stevens had acknowledged a fundamental power shift he sees as necessary:

“This has got to be central about what the redesign of care looks like, with a fundamental power shift actually, in the way in which services are produced and co-produced.”

That could affect everyone in the NHS, with or without a wearables revolution.

These are challenges the public is not yet discussing and we’re already late to the party.

We’re all invited. What will you be wearing?

********
[Previous: part one here #NHSWDP 1  – From the event “Digital Participation and Health Literacy: Opportunities for engaging citizens” held at the King’s Fund, London, March 18, 2015]

[Previous: part two #NHSWDP 2: Smartphones: the single most important health treatment & diagnostic tool at our disposal]

********

Apple ResearchKit: http://techcrunch.com/2015/03/09/apple-introduces-researchkit-turning-iphones-into-medical-diagnostic-devices/#lZOCiR:UwOp
Digital nudges – the Tyranny of the Should by Maneesh Juneja http://maneeshjuneja.com/blog/2015/3/2/the-tyranny-of-the-should


smartphones: the single most important health treatment & diagnostic tool at our disposal [#NHSWDP 2]

After Simon Stevens’ big statement on smartphones at the #NHSWDP event, I’d asked what sort of assessment the NHS had done on how wearables’ data would affect research.

#digitalinclusion is clearly less about a narrow focus on apps than applied skills and online access.

But I came away wondering how apps will work in practice, affect research and our care in the NHS in the UK, and much more.

What about their practical applications and management?

NHS England announced a raft of regulated apps for mental health this week, though they are not the first approved.

This one doesn’t appear to have worked too well.

The question needs an answer before many more are launched: how will these be catalogued, indexed and stored? Will it be just a simple webpage? I’m sure we can do better to make this page user friendly and intuitive.

This British NHS military mental health app is on iTunes. Will iTunes carry a complete NHS approved library and if so, where are the others?

We don’t have a robust regulation model for digital technology, it was said at a recent WHF event, and while medical apps are sold as wellness or fitness or just for fun, patients could be at risk.

In fact, I’m convinced that while medical apps are being used by consumers as medical devices, for example as tests, or tools which make recommendations, and they are not thoroughly regulated, we *are* at risk.

If Simon Stevens sees smartphones as: “going to be the single most important health treatment and diagnostic tool at our disposal over the coming decade and beyond,” then we’d best demand the tools that work on them, work safely. [speech in full]

And if his statement on their importance is true, then when will our care providers be geared up to accepting extracts of data held on a personal device into the local health record – and how will interoperability, testing and security work?
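
No such plumbing appears to exist yet, but even a toy sketch shows the kind of checks ‘plugging a wearable device into a medical record’ implies before a clinician could trust the incoming data. Everything here is hypothetical: the field names, the device, and the plausibility range are mine, for illustration only:

```python
from datetime import datetime, timezone

# Hypothetical plausibility bounds for one measurement type; a real system
# would need clinically governed ranges per device and per test.
GLUCOSE_MMOL_RANGE = (1.0, 30.0)

def accept_wearable_reading(reading: dict) -> bool:
    """Toy gatekeeper: only provenance-stamped, timestamped, in-range readings
    would ever be merged into a care record."""
    if reading.get("device_id") is None:  # provenance: which device sent it?
        return False
    ts = reading.get("timestamp")
    if not isinstance(ts, datetime) or ts > datetime.now(timezone.utc):
        return False  # reject missing or future-dated timestamps
    value = reading.get("glucose_mmol_l")
    lo, hi = GLUCOSE_MMOL_RANGE
    if value is None or not (lo <= value <= hi):
        return False  # implausible values flagged for review, not silently stored
    return True

reading = {"device_id": "wearable-123",
           "timestamp": datetime.now(timezone.utc),
           "glucose_mmol_l": 5.6}
print(accept_wearable_reading(reading))  # True
```

Validation like this is a safety question as much as a technical one: a false reading accepted unchecked into a record could mislead diagnosis or treatment.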

And who’s paying for them? Those in the library right now have price tags. The public should be getting lots of answers to lots of questions.

“Over the coming decade”  has already started.

What about Research?: I know the Apple ResearchKit had a big reaction, and I’m sure there’s plenty of work already done on expectations of how data sharing in wearables affects research participation. (I just haven’t read it yet, but am interested to do so; feel free to point any my way.)

I was interested in the last line in this article: “ResearchKit is a valiant effort by Apple, and if its a hit with scientists, it could make mass medical research easier than ever.”

How do we define ‘easier’? Has Apple hit on a mainstream research app? How is ‘mass medical research’ in public health for example, done today and how may it change?

Will more people be able to participate in remote trials?

Will more people choose to share their well-being data and share ‘control’ phenotype data more in depth than in the past?

Are some groups under- or not-at-all represented?

How will we separate control of datasharing for direct care and for other secondary uses like research?

Quality: Will all data be good data or do we risk research projects drowning in a data tsunami of quantity not quality? Or will apps be able to target very specific trial data better than before?

How: One size will not fit all. How will data stored in wearables affect research in the UK? Will those effects differ between the UK and the US, and will app designs need different approaches given the NHS’s long history, the need to take single standards into account, and to be open? How will research take historical data into account if apps are all ‘now’? How will research based on that data be peer reviewed?

Where: And as we seek to close the digital divide here at home, what gulf may be opening up in the research done in public health, the hard to reach, and even between ‘the west’ and ‘developing’ countries?

In the UK will the digital postcode lottery affect care? Even with a wish for wifi in every part of the NHS estate, the digital differences are vast. Take a look at Salford – whose digital plans are worlds apart from my own Trust’s, which has barely got rid of Lloyd George folders on trolleys.

Who: Or will in fact the divide not be by geography, but by accessibility based on wealth?  While NHS England talks about digital exclusion, you would hope they would be doing all they can to reduce it. However, the mental health apps announced just this week each have a price tag if ‘not available’ to you on the NHS.

Why: on what basis will decisions be made on who gets them prescribed and who pays for them, and where apps are to be made available for which area of diagnosis or treatment, or at all, if the instructions are “to find out if it’s available in your area email xxx or call 020 xxx. Or you could ask your GP or healthcare professional.”

The highest intensity users of the NHS provision, are unlikely to be the greatest users of growing digital trends.

Rather, the “worried well” would seem the ideal group, who will be encouraged to stay away from professionals and to self-care, with self-paid support from high street pharmacies. How much could or will this measurably benefit the NHS and the individual, and make lives better? As the population is increasingly risk stratified and grouped into manageable portions, will some be denied care based on data?

Or will the app providers be encouraged to promote their own products, make profits, benefit the UK plc regardless of actual cost and measurable benefits to patients?

In 2013, IMS Health reported that more than 43,000 health-related apps were available for download from the Apple iTunes app store. Of those, the IMS Institute found that only 16,275 apps are directly related to patient health and treatment, and there was much to be done to move health apps from novelty to mainstream.

Reactionary or Realistic – and where’s the Risk Assessment before NHS England launches even more approved apps?

Exciting as it is, this tempting smörgåsbord of shiny new apps brings a set of new risks which cannot responsibly be ignored: in patient safety, in cyber security, and in what, and who, will be left out.

Given that basic data cannot in some places be shared between GP and hospital for direct care, due to local lack of tech, and that the goal is another five years away, how real is the hype of the enormous impact of wearables going to be for the majority, or at scale?

On digital participation projects: “Some of the work that has already been done by the Tinder Foundation, you take some of the examples here, with the Sikh community in  Leicester around diabetes, and parenting in other parts of the country, you can see that this is an agenda which can potentially get real quite quickly and can have quite a big impact.”
(Simon Stevens)

These statements, each on a different aspect of digital inclusion – Simon Stevens on smartphones and scale, Tim Kelsey on consent – are fundamentally bound together.

What will wearables mean for diagnostics, treatment and research in the NHS? For those who have and those who have not?

How will sharing data be managed for direct care and for other purposes?

What control will the patriarchy of the NHS reasonably expect to have over patients’ choice of app from any provider? Do most patients know at all what effect their choice may have for their NHS care?

How will funding be divided into digital and non-digital, and be fair?

How will we maintain the principles and practice of a ‘free at the point of access’ digital service available to all in the NHS?

Will there really be a wearables revolution? Or has the NHS leadership just jumped on a bandwagon as yet without any direction?

****

[Next: part three  – on consent – #NHSWDP 3: Wearables: patients will ‘essentially manage their data as they wish’. What will this mean for diagnostics, treatment and research and why should we care?] 

[Previous: part one – #NHSWDP 1: Thoughts on Digital Participation and Health Literacy: Opportunities for engaging citizens in the NHS – including Simon Stevens full keynote speech]

A review of NHS news in 2014, from ‘the Spirit of the NHS Future’.

Respectful of all the serious, current news and that of the past year, this is a lighthearted look back at some of the stories of 2014. ‘The Spirit of the NHS Future’ looks forwards into 2015 & at what may still be changed.

***

The Spirit of the NHS Future  visits the Powers-at-be
(To the tune of The 12 Days of Christmas)

[click to open music in another window]

On the first day of Christmas
the Spirit said to me:
I’m the ghost of the family GP.

On the second day of Christmas
the Spirit said to me: a
two-tiered system,
in the future I foresee.

On the third day of Christmas
the Spirit said to me:
You told GPs,
merge or hand in keys,
feder-ate or salaried please.

On the fourth day of Christmas
the Spirit said, I hear:
“Save our surgeries”,
MPIG freeze,
partners on their knees,
blame commissioning on local CCGs.

On the fifth day of Christmas
the Spirit said to me:
Five Ye-ar Plan!
Call it Forward View,
digital or screwed.
Let’s have a new review,
keep ‘em happy at PWC.

On the sixth day of Christmas
the Spirit said to me:
Ill patients making,
out-of-Ho-urs-rings!
Callbacks all delayed,
six hours wait,
one one one mistakes.
But must tell them not to visit A&E.

On the seventh day of Christmas
the Spirit said, GPs:
see your service contract,
with the QOF they’re trimming,
what-will-this-bring?
Open Christmas Eve,
New Year’s no reprieve,
please don’t cheat our Steve,
or a breach notice will you see.

On the eighth day of Christmas
the Spirit said to me:
Population’s ageing,
social care is straining,
want is pro-creating,
obe-si-ty’s the thing!
Cash to diagnose,
statins no one knows,
indicator woes,
and Doc Foster staff employed at CQC.

On the ninth day of Christmas
the Spirit said to me:
Cash for transforming,
seven days of working.
Think of emigrating,
ten grand re-registration.
Four-teen hour stints!
DES and LES are fixed.
Called to heal the sick,
still they love the gig,
being skilled, conscientious GPs.

On the tenth day of Christmas
the Spirit said to me:
Many Lords a-leaping,
Owen’s not been sleeping,
private contracts creeping,
Circle’s ever growing.
Care home sales not slowing.
Merge-eve-ry-thing!
New bidding wars,
tenders are on course
top nine billion, more,
still you claim to run it nation-al-ly.

On the eleventh day of Christmas
the Spirit said to me:
Patient groups are griping,
records you’ve been swiping,
listening while sharing,
data firms are buying,
selling it for mining,
opt-out needs defining,
block Gold-acre tweets!
The care dot data* board
minutes we shall hoard,
troubled pilots loom.
Hi-de Partridge’s report behind a tree?

On the twelfth day of Christmas
the Spirit said to me:
disabled are protesting
sanctions, need arresting,
mental health is failing,
genomes we are trading,**
staff all need more paying,
boundaries set for changing,
top-down re-arranging,
All-this-to-come!
New hires, no absurd,
targets rule the world,
regulation first.
What’s the plan to save our service, Jeremy?

– – – – – –

Thanks to the NHS staff, whose hard work, grit and humour, continues to offer the service we know. You keep us and our loved ones healthy and whole whenever possible, and deal with us & our human frailty, when it is not.

Dear GPs & other NHS staff who’ve had a Dickens of a year. Please, don’t let the system get you down.

You are appreciated, & not just at Xmas. Happy New Year everyone.

“It is a fair, even-handed, noble adjustment of things, that while there is infection in disease and sorrow, there is nothing in the world so irresistibly contagious as laughter and good humour.”
Charles Dickens,   A Christmas Carol, 1843

– – – – –

*New Statesman, Dr Phil Whitaker’s Health Matters column, 20th March 2014, ‘Hunt should be frank about the economic imperative behind the urgency to establish the [care.data] database and should engage in a sensible discussion about what might be compromised by undue haste.’

**Genomics England Kickstarting a Genomics Industry

Launching genomics, lifeboats, & care.data [part 2]

“On Friday 1st August the media reported the next giant leap in the genomics programme in England, suggesting the 100K Genomics Project news was akin to Kennedy launching the Space Race. [1] [from 2:46.30].”

[Part one of this post is in this link, and includes thinking about care.data & genomics interaction].

Part two:

What is the expectation beyond 2017?

The investment to date may seem vast if, like me, you are unfamiliar with the amounts of money spent in research [in 2011 an £800M announcement, last summer £90M in Oxford, as just two examples], and Friday revealed yet more money, a new £300M research package. How it all adds up is complex, and the sourcing mixed. But the stated aim of the investment is relatively simple: the whole genomes of 75,000 people [40K patients and 35K healthy relatives] are to be mapped by 2017.

Where the boundary lies between participation for clinical care and for research is less clear in the media presentation. If indeed participants’ results will be fed back into their NHS care pathway,  then both aims seem to be the intent of the current wave of participants.

It therefore remains unclear how this new offering interacts with the existing NHS genetic services for direct clinical care, or with other research projects such as UK Biobank, particularly when aims appear to overlap:

“The ultimate aim is to make genomic testing a routine part of clinical practice – but only if patients and clinicians want it.” [Genomics England, how we work]

The infrastructure of equipment is enormous, to have these sequencers running 24/7 as was indicated in media TV coverage. I’m no maths whizz, but it appears to me they’re building the Titanic at Genomics England, and the number of people actually planned to take part (75K) would fit on the lifeboats. So with what, and from whom, are they expecting to fill the sequencing labs after 2017? At Genomics England events it has been stated that the infrastructure will then be embedded in the NHS. How is unclear, if commercial funding has been used to establish it. But at its most basic, there will be no point building the infrastructure and finding no volunteers want to take part. You don’t build the ship and sail without passengers. What happens if the English don’t volunteer in the desired numbers?

What research has been done to demonstrate the need or want for this new WGS project going forwards at scale, compared with a) present direct care or b) existing research facilities?

I cannot help but think of the line in the film, Field of Dreams. If you build it they will come. So who will come to be tested? Who will come to exploit the research uses for public good? Who will come in vast numbers in our aging population to exploit the resulting knowledge for their personal benefit vs companies who seek commercial profit? How will the commercial and charity investors, make it worth their while? Is the cost/benefit to society worth it?

All the various investors in addition to the taxpayer; Wellcome Trust, the MRC, Illumina, and others, will want to guarantee they are not left with an empty shell. There is huge existing and promised investment. Wellcome for example, has already “invested more than £1 billion in genomic research and has agreed to spend £27 million on a world class sequencing hub at its Genome Campus near Cambridge. This will house Genomics England’s operations alongside those of the internationally respected Sanger Institute.”

Whilst the commercial exploitation by third parties is explicit, there may also be another possibility to consider: would the Government want:

a) some cost participation by the participants? and

b) to sell the incidental findings’ results to the participants?

[ref: http://www.phgfoundation.org/file/10363 ref. #13]

“Regier et al. 345 have estimated the willingness-to-pay (WTP) for a diagnostic test to find the genetic cause of idiopathic developmental disability from families with an affected child. They used a discrete choice experiment to obtain WTP values and found that these families were willing to pay CDN$1118 (95% CI CDN$498-1788) for the expected benefit of twice as many diagnoses using aCGH and a reduction in waiting time of 1 week when compared to conventional cytogenetic analysis.”

“Moreover, it is advisable to minimise incidental findings where possible; health care professionals should not have an obligation to feedback findings that do not relate to the clinical question, except in cases where they are unavoidably discovered and have high predictive value. It follows that the NHS does not have an obligation to provide patients with their raw genome sequence data for further analysis outside of the NHS. We make no judgement here about whether the individual should be able to purchase and analyse their genome sequence independently; however, if this course of action is pursued, the NHS should provide follow-up advice and care only when additional findings are considered to be of significant clinical relevance in that individual…” [13]

How much is that cost, per person to be mapped? What is the expected return on the investment?
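
As a back-of-envelope illustration only, using nothing but the figures already public (the new £300M package, 100,000 genomes, 75,000 people) and ignoring both the other funding tranches and all the infrastructure costs:

```python
package = 300_000_000  # the new research package announced on August 1st, in £
genomes = 100_000      # whole genomes to be sequenced
people = 75_000        # participants: some cancer patients give two genomes each

print(f"£{package / genomes:,.0f} per genome")      # £3,000 per genome
print(f"£{package / people:,.0f} per participant")  # £4,000 per participant
```

Crude as it is, that is the order of magnitude of public money per participant, and exactly the kind of figure a published cost-benefit case should pin down properly.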

What are the questions which are not being asked of this huge state investment, particularly at a time when we are told the NHS is in such financial dire straits?

Are we measuring the costs and benefits?

Patient and medical staff support is fundamental to the programme, not an optional extra. It should not be forgotten that the NHS is a National Service owned by all of us. We should know how it runs. We should know what it spends. Ultimately, it is we who pay for it.

So let’s see on paper: what are the actual costs vs benefits? Where is the overall, long-term cost-benefit business case covering the multi-year investment, of both tangible and intangible benefits? In my personal research, I’m yet to find one. There is, however, some discussion in this document:

“The problem for NGS is that very little ‘real’ information is available on the actual costs for NGS from the NHS perspective and the NHS Department of Health Reference Costs Database and PSSRU, where standard NHS costings are listed, are generally not helpful.” [13 – PHG, 2011]

Where are the questions being asked if this is really what we should be doing for the public good and for the future of the NHS?

Research under good ethics and bona fide transparent purposes is a public asset. This rollout, has potential to become a liability.

To me, yet again it seems, politics has the potential to wreck serious research aims and the public good.

Perhaps more importantly, the unrestrained media hype carries the very real risk of creating unfounded hope for an immediate diagnosis or treatment, for vulnerable individuals and families who in reality will see no personal benefit. This is not to undermine what may be possible in future. It is simply a plea to rein in hype to reality.

Politicians and civil servants in NHS England appear to use both research and the broad notion of the ‘public good’ in speeches, to appear to be doing ‘the right thing’, but without measurable substance. Without a clear cost-benefit analysis, I admit, I am skeptical. I would like to see more information in the public domain.

Has the documentation of the balance between the patient/public good and the expected “major contribution to make to wealth creation and economic growth in this country” been examined?

Is society prepared for this?

I question whether the propositions of the initiative have been grasped by Parliament and society as a whole, although I understand this is not a ‘new’ subject as such. This execution, however, does appear massive in its practical implications, not least for GPs, if it is to become mainstream as quickly as plans predict. It raises a huge number of ethical questions, not least of which will be around incidental findings, as the Radio 4 interview raised.

The first I have is consideration of pre-natal testing plans:

“Aside from WGS of individuals, other applications using NGS could potentially be more successful in the DTC market. For example, the use of NGS for non-invasive prenatal testing would doubtless be very popular if it became available DTC prior to being offered by the NHS, particularly for relatively common conditions such as Down syndrome…” [13]

and then the whole question of consent, particularly from children:

“…it may be almost impossible to mitigate the risk that individuals may have their genome sequenced without their consent. Some genome scan companies (e.g. 23andMe) have argued that the risks of covert testing are reduced by their sample collection method, which requires 2ml of saliva; in addition, individuals are asked to sign to confirm that the sample belongs to them (or that they have gained consent from the individual to whom it belongs). However, neither of these methods will have any effect on the possibility of sequencing DNA from children, which is a particularly contentious issue within DTC genomics.” [13]

“two issues have emerged as being particularly pressing: first is the paradox that individuals cannot be asked to consent to the discovery of risks the importance of which is impossible to assess. Thus from a legal perspective, there is no ‘meeting of minds’ and contractually the contract between researcher and participant might be void. It is also unclear whether informed consent is sufficient to deal with the feedback of incidental findings which are not pertinent to the initial research or clinical question but that may have either clinical or personal significance…” [PHG page 94]

And thirdly, we should not forget the elderly. In February 2014 the Department of Health proposed that a patient’s economic value should be taken into account when deciding on healthcare. Sir Andrew Dillon, head of the National Institute for Health and Care Excellence (NICE, which sets national guidance on healthcare priorities), disagreed, saying:
“What we don’t want to say is those 10 years you have between 70 and 80, although clearly you are not going to be working, are not going to be valuable to somebody.

Clearly they are. You might be doing all sorts of very useful things for your family or local society. That’s what we are worried about and that’s the problem with the Department of Health’s calculation.

There are lots of people who adopt the fair-innings approach; ‘you’ve had 70 years of life you’ve got to accept that society is going to bias its investments in younger people.”

[14 – see Channel 4] Yet our population is ageing and we need to find a balance of where roles, rules and expectations meet. And question: how do we measure human value, should we, and on what basis are we making cost-based care decisions?

If a patient’s economic value is to be taken into account when deciding on healthcare, what is the Department of Health’s thinking on genomics for the care of the elderly?

Clinical environment changes make engagement and understanding harder to achieve

All this, is sitting on shifting, fundamental questions on how decision making and accountability will be set, in a world of ever fragmenting NHS structure:

“More problematic will be the use of specific genomic technologies such as NGS in patient pathways for inherited disorders that are delivered outside the clinical genetics services (such as services for FH, haemophilia and sickle cell disease) and NGS that is used for non-inherited disease conditions. These will be commissioned by GP consortia within established care pathways. Such commissioning of companion diagnostics would, in theory be evaluated first by NICE. However, it is not clear what capacity NICE will have across a broad range of uses. In practice it seems likely that GP consortia may make a variety of different decisions influenced by local experts and pressure, funding and different priorities. Particular questions for NGS will include: How will commissioners be provided with the necessary evidence for decision-making and can this be developed and coordinated at a national level? How will commissioners prioritise particularly when it may be necessary to invest early in order to achieve savings later? What (if any) influence may commissioners be able to exert over the configuration of test providers (for example the rationalisation of laboratories or the use of private testing companies)? [13]

Today (August 8th) the public row between Roche and the Government, through NICE, became apparent on cancer treatment. And again I found myself asking: what are we not funding whilst we spend on genomics? If you did not hear Sir Andrew Dillon and the discussion, you can listen again on BBC Radio 2 iPlayer here. [It’s in the middle of the programme, and begins at 01:09.06.]

Questions, in search of an answer

Where has the population indicated that this is the direction of travel we wish our National Health Service to take?

What preparation has been made for the significant changes in society it will bring?

When was Parliament asked, before this next step in policy and huge public spend were signed off, and where is the periodic check against progress and public sign-off of the next step?

Who is preparing the people and processes for this explosive change, announced with sparklers, at arm’s length and a long taper?

Are the challenges, as discussed honestly between policy makers, politicians and scientists at the stakeholder meeting at St. Barts London on 3rd October 2013 (a key panel presentation: 45 minute video with slides), being shared with patients and the public? When will they be shared with the public and NHS staff in full?

Why does NHS England feel this is so fundamental to the future of the NHS? Must we abandon a scuppered and sinking NHS for personalised medicine on personal budgets, and expectations of increased use of private health insurance?

Is genomics really the lifeboat to which the NHS is inextricably bound?

Neither the Patients and Information Directorate nor the wider NHS England Board discusses these questions in public. At the July 3rd 2014 Board Meeting, in the discussion of the genomics programme, I understood the discussion as starting to address the inevitable future loss of equity of access because of genomic stratification, dividing the population into risk pool classifications [10.42]. To my mind, that is the end of the free-to-all NHS as we know it, and if so, through planned policy. More people paying for their own care under ‘personalisation’ is in line with ISCG expectations set out earlier in 2014: “there will be increasing numbers of people funding their own care and caring for others.”

Not everyone may have understood it that way, but if not, I’d like to know what was meant.

I would like to understand what is meant when Genomics England spokespeople  say the future holds:

“Increasingly to select most appropriate treatment strategy. In the longer term, potential shift to prevention based on risk-based information.”
or
“Review the role of sequencing in antenatal and adult screening.”

I would welcome the opportunity to fully understand what was suggested at that Board meeting as a result of our shared risk pool, and readers should view it and make up their own mind. Even better, a frank public and/or press board meeting with Q&A could be rewarding.

The ethical questions that are thrown up by this seem yet to have little public media attention.

Not least, incidental findings: if by sequencing someone’s DNA you establish there is something for their health that they ought to be doing soon, will you go to that patient and say, look, you should be doing this? These are incidental findings; they may be quite unexpected, separate from the original illness under investigation in, say, a family member, and may also only suggest risk indicators, not clear facts.

If this is expected to be mainstream by 2018, what training plans are in place to meet the indicated “requirement for professionals across the NHS to be trained in genetics and its implications”? [presentation by Mark Bale, DoH, July 2014]

When will we get answers to these questions, and more?

Because there is so much people like me don’t know, but should, if this is our future NHS under such fundamental change as is hyped.

Because even the most esteemed in our land can get things wrong. One of them at the St. Bart’s events quoted one of my favourite myths, wrongly attributed to Goethe: “Whatever you can do or dream you can, begin it. Boldness has genius, power and magic in it.” It cannot be attributed to him. You see, we just hear something which sounds plausible, from someone who seems to know what they are talking about. It isn’t always right.

Because patients with rare disease in search of clinical care answers should be entitled to have expectations set appropriately, and participants in research should know to what they, and possibly family members indirectly, are committed.

Because if the NHS belongs to all of us, we should be able to ask questions and expect answers about its planning,  how we choose to spend its budget and how it will look in future.

These are all questions we should be asking as a society.

Fundamentally, in what kind of society will my children grow up?

With the questions of pre-natal intervention, how will we shape our attitudes towards our disabled, and those who are sick, vulnerable or elderly? Are we moving towards the research vision Mr Hunt, Mr Cameron and Mr Freeman appear to share, only for good, or are we indeed to look further ahead, to a Gattacan vision of perfection?

As we become the first country in the world to permit so called ‘three parent children’ how far will we go down the path of ‘fixing’ pre-natal genetic changes, here or in PGD?

How may this look in a society where ‘some cornflakes get to the top‘ and genetic advantage is seen as a natural right over those without that ability? In a state where genetics could be considered as part of education planning? [16]

For those with lifelong conditions, how may genetic screening affect their life insurance when the Moratorium expires in 2017* (*any shift in date TBC pending discussion)? How will it affect their health care, if the NHS England Board sees a potential effect on equity of access? How will it affect those of us who choose not to have screening – will we be penalised for that?

And whilst risk factors may include genomic factors, some argue lifestyle factors are even more important, and these change over time. How would those who have had genetic screening in the past be affected by future requirements?

After the August 1st announcement, [11] The Wellcome Trust‘s reporting was much more balanced and sensible than the political championing had been. It grasps the challenges ahead:

“Genomics England has ambitious plans to sequence 100,000 genomes from 75,000 people, some of whom will also have cancer cells sequenced. The sheer scale of the plans is pretty daunting. The genetic information arising from this project will be immense and a huge challenge for computational analysis as well as clinical interpretation. It will also raise a number of issues regarding privacy of patient data. Ensuring that these genetic data can be used maximally for patient benefit whilst protecting the rights of the individual participant must be at the heart of this project.

At the beginning of the Human Genome Project, scientists and funders like the Wellcome Trust knew they were on a journey that would be fraught with difficulties and challenges, but the long-term vision was clear. And so it is with the plans for Genomics England, it will most certainly not be easy…”

Managing change

Reality is that yet again, Change Management and Communications have been relegated to the bottom of the boarding priorities list.

This is not only a research, technology, or health programme. Bigger than all of that is the change it may bring: not only in NHS practice, should the everyday vision of black boxes in GP surgeries become reality, but for the whole of society, in its age profile and diversity. Indeed, if we are to be world leaders, we have the potential to sling the world onto a dangerous orbit if the edges of scope are ill defined. Discussing these plans only with interested parties, those who have specific personal or business interests in genomic research and data sharing, whilst at Board meetings not clearly discussing the potential effects of risk stratification and personalisation on a free-at-the-point-of-delivery health service, is in my opinion not transparent, and requires more public discussion.

After all, there are patients who are desperate for answers, who are part of the NHS and need our fair treatment and equity of access for rare disease. There is the majority who may not have those needs, but know someone who does. And we all fund and support the structure and staff of the world-class service we know and love. We want this to work well.

Future research participation depends on current experience and expectations. It is the latter I fear are being currently mishandled in public and the media.

Less than a month ago, at the NHS England Board Meeting on July 3rd,  Lord Adebowale very sensibly asked, “how do we lead people from where we are, and how we take the public with us? We need to be a world leader in engaging all the public”

Engagement is not rocket science. But don’t forget the ethics.

If this project is meant to be, according to MP George Freeman [George 2], akin to Kennedy launching the Space Race, then, by Feynman [12], why can they not get their public involvement at big launches sorted out?

Is it because there are such large gaps and unknowns that questioning will not stand up to scrutiny? Is it because suggesting a programme will end the NHS as we know it would be fatal for any politician or party who supports that programme in the coming year? Or do the leading organisations perhaps paternalistically believe the public is too dim, too uninterested, or simply working too hard to make ends meet to care [perhaps part of the 42% of the population who expected to struggle as a result of universal welfare changes; one in three main claimants (34 per cent) said in 2012 they ‘run out of money before the end of the week/month always or most of the time’]? But why bother with the big press splash, if it should not make waves?

In the words of Richard Feynman after the Challenger launch disaster in 1986:

“Let us make recommendations to ensure that NASA officials deal in a world of reality in understanding technological weaknesses and imperfections well enough to be actively trying to eliminate them. They must live in reality in comparing the costs and utility of the Shuttle to other methods of entering space. And they must be realistic in making contracts, in estimating costs, and the difficulty of the projects.

Only realistic flight schedules should be proposed, schedules that have a reasonable chance of being met.

If in this way the government would not support them, then so be it. NASA owes it to the citizens from whom it asks support to be frank, honest, and informative, so that these citizens can make the wisest decisions for the use of their limited resources. For a successful technology, reality must take precedence over public relations… [June 6th 1986. Six months after the disaster, the Report to the Presidential Commission (Appendix F)]

Just like the Rosetta spacecraft is getting ever closer to actually landing on the comet, its goal after over ten years [15 – BBC Newsround has an excellent little summary], so too is genomics close to the goal of many. It is within grasp that the long-planned mainstreaming of genomic intervention will touch down in the NHS. My hope is that in its ever closer passes, we get hard factual evidence and understand exactly where we have come from, and where we intend going. What will who do with the information once collected?

The key is not the landing, it’s understanding why we launched in the first place.

Space may not be the most significant final frontier out there in the coming months that we should be looking at up close. Both in health and science.  Our focus in England must surely be to examine these plans with a microscope, and ask what frontiers have we reached in genomics, health data sharing and ethics in the NHS?

******  image source: ESA via Nature

[1] “It’s a hugely ambitious project, it’s on a par with the space race how Kennedy launched 40 years ago.” [from 2:46.30 BBC Radio 4 Int. Sarah Montague w/ George Freeman]

[2] Downing Street Press Release 1st August – genomics https://www.gov.uk/government/news/human-genome-uk-to-become-world-numb

[3] 6th December “Transcript of a speech given by Prime Minister at the FT Global Pharmaceutical and Biotechnology Conference” [https://www.gov.uk/government/speeches/pm-speech-on-life-sciences-and-opening-up-the-nhs]

[4] 10th December 2012 DNA Database concerns Channel 4 http://www.channel4.com/news/dna-cancer-database-plan-prompts-major-concerns

[5] Wellcome Trust- comment by Jeremy Farrar http://news.sky.com/story/1311189/pm-hails-300m-project-to-unlock-power-of-dna

[6] Strategic Priorities in Rare Diseases June 2013 http://www.genomicsengland.co.uk/wp-content/uploads/2013/06/GenomicsEngland_ScienceWorkingGroup_App2rarediseases.pdf

[7] NHS England Board paper presentation July 2013 http://www.england.nhs.uk/wp-content/uploads/2013/07/180713-item16.pdf

[8] ICO and HSCIC on anonymous and pseudonymous data in Computing Magazine http://www.computing.co.uk/ctg/news/2337679/ico-says-anonymous-data-not-covered-by-data-protection-act-until-its-de-anonymised

[9] HSCIC Pseudonymisation Review August 2014 http://www.hscic.gov.uk/article/4896/Data-pseudonymisation-review

[10] November 2013 ISCG – political pressure on genomics schedule http://www.england.nhs.uk/iscg/wp-content/uploads/sites/4/2014/01/ISCG-Paper-Ref-ISCG-009-001-ISCG-Meeting-Minutes-and-Actions-26-November-2013-v1.1.pdf

[11] Wellcome Trust August 1st 2014 The Genetic Building Blocks of Future Healthcare

[12] Feynman – For successful technology reality must take precedence over PR https://jenpersson.com/successful-technology-reality-precedence-public-relations/

[13] Next Steps in the Sequence – the implications for whole genome sequencing in the UK – PHG Foundation, funded by the PHG Foundation, with additional financial support from Illumina. The second expert workshop for the project was supported by the University of Cambridge Centre for Science and Policy (CSaP) and the Wellcome Trust http://www.phgfoundation.org/file/10363

[14] Anti-elderly drugs proposals rejected by NICE: Channel 4 http://www.channel4.com/news/nice-assessment-elderly-health-drugs-rejected-contribution

[15] BBC Newsround: Rosetta spacecraft and the comet chasing

[16] Education committee, December 4th 2013 including Prof. Plomin From 11.09:30 education and social planning  http://www.parliamentlive.tv/Main/Player.aspx?meetingId=14379

*****

For avoidance of confusion [especially for foreign readership and considering one position is so new], there are two different Ministers mentioned here, both called George:

One. George Osborne [George 1] MP for Tatton, Cheshire and the Chancellor

Two. George Freeman [George 2] MP – The UK’s first-ever Minister for Life Sciences, appointed to this role July 15th 2014 [https://www.gov.uk/government/ministers/parliamentary-under-secretary-of-state–42]

 

*****