
Datasharing, lawmaking and ethics: power, practice and public policy

“Lawmaking is the Wire, not Schoolhouse Rock. It’s about blood and war and power, not evidence and argument and policy.”

"We can't trust the regulators," they say. "We need to be able to investigate the data for ourselves." Technology seems to provide the perfect solution. Just put it all online - people can go through the data while trusting no one.  There's just one problem. If you can't trust the regulators, what makes you think you can trust the data?" 

Extracts from The Boy Who Could Change the World: The Writings of Aaron Swartz. Chapter: ‘When is Technology Useful?’, June 2009.

The question keeps being asked: is the concept of ethics obsolete in Big Data?

I’ve come to some conclusions about why ‘Big Data’ use keeps pushing the boundaries of what many people find acceptable, and why the people doing the research, the regulators and lawmakers often express surprise at negative reactions. Some even express disdain for public opinion, dismissing it as ignorant, not ‘understanding the benefits’, yet to be convinced. And I’ve decided why I think what is considered ‘ethical’ in data science does not meet public expectations.

It’s not about people.

Researchers using large datasets often have a background in data science, applied computing or maths, and don’t see data as people. It’s only data. Creating patterns, correlations and analyses of individual-level data is not seen as research involving human subjects.

This is embodied in the countless research ethics reviews I have read in the last year in which the question is asked: does the research involve people? The answer given is invariably ‘no’.

And these data analysts, using, let’s say, health data, are working in a discipline that, in contrast with the medical world the data come from, is not founded on any ethical principle.

The public feel differently about information that is about them, and may be known only to them or to select professionals. The values that we as the public attach to our data, and our expectations of its handling, may reflect the expectations we have of the handling of us, the people connected to it. We see our data as all about us.

The values that are therefore put on data, and on how it can and should be used, can be at odds with one another: the public’s perception is not reciprocated by the researchers. This may be especially true if researchers are using data which has been de-identified, although it may not be anonymous.

New legislation on the horizon, the Better Use of Data in Government, intends to fill the [loop]hole between what was legal to share in the past and what some want to exploit today, and it highlights a gap between uses of data by public-interest academic researchers and uses by government actors. The first, by and large, incorporate privacy and anonymisation techniques by design; the second is designed for the applied use of identifiable data.

Government departments and public bodies want to identify and track people who are somehow misaligned with the values of the system, whether through fraud, debt, Troubled Families, or owing Student Loans. All highly sensitive subjects. But their ethical data science framework will not treat them as individuals, only as data subjects, or as groups who share certain characteristics.

The system again intrinsically fails to see these uses of data as being about individuals, but sees them as categories of people: “fraud”, “debt”, “Troubled Families”. It is designed to profile people.

Services that were built not for people but for government processes result in datasets that aren’t well designed for research. So we now see shortsighted policy attempting to shoehorn historical practices into data use by modern data science practitioners.

We can’t afford for these things to be so off axis, if civil service thinking is exploring “potential game-changers such as virtual reality for citizens in the autism spectrum, biometrics to reduce fraud, and data science and machine-learning to automate decisions.”

In an organisation such as the DWP this must be really well designed, since “the scale at which we operate is unprecedented: with 800 locations and 85,000 colleagues, we’re larger than most retail operations.”

The power to affect individual lives through poor technology is vast, and some impacts seem to be badly ignored. The ‘real time earnings’ database, which improved the accuracy of benefit payments, was widely agreed to have been harmful to some individuals through the Universal Credit scheme, with delayed payments sending families to food banks, and contributing to worse.

“We believe execution is the major job of every business leader” is perhaps not the best wording in the context of DWP data uses.

What accountability will be built in, by design?

I’ve been thinking recently about drawing a social ecological model of personal data empowerment or control. Thinking about visualisation of wants, gaps and consent models, to show rather than tell policy makers where these gaps exist in public perception and expectations, policy and practice. If anyone knows of one on data, please shout. I think it might be helpful.
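To show rather than tell, here is a minimal sketch of what such a social ecological model of personal data control might look like as a structure: nested layers from the individual outwards, each recording what people want against the consent model actually in place, so the gaps sit side by side. The layer contents below are illustrative assumptions, not research findings.

```python
# A minimal sketch of a social ecological model applied to personal data.
# The layers follow the classic individual-to-policy nesting; the "wants"
# and "consent" entries are illustrative assumptions, not survey results.

data_ecology = {
    "individual":     {"wants": "control and transparency", "consent": "rarely asked"},
    "interpersonal":  {"wants": "confidentiality with GP or teacher", "consent": "implied"},
    "organisational": {"wants": "operational data flows", "consent": "bundled into T&Cs"},
    "policy":         {"wants": "population-wide datasets", "consent": "legal basis, no opt-in"},
}

# Print the layers so the gap between expectation and consent model is visible.
for layer, state in data_ecology.items():
    print(f"{layer:>14} | wants: {state['wants']:<35} | consent: {state['consent']}")
```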

But the data *is* all about people

Regardless of whether they are in front of you or numbers on a screen, big or small datasets using data about real lives are data about people. And that triggers a need to treat the data with the same ethical approach you would apply to people involved face-to-face.

Researchers need to stop treating data about people as meaningless data, because that’s not how people think about their own data being used. Not only that, but if the whole point of your big data research is to have impact, your data outcomes will change lives.

Tosh, I know some say. But, I have argued, the reason is that the applications of the data science / research / policy findings / impact of immigration in education review / [insert purposes of the data user’s choosing] are designed to have an impact on people. Often on the people about whom the research is done without their knowledge or consent. And while most people say that is OK where it’s public-interest research, the possibilities are outstripping what the public has expressed as acceptable, and few seem to care.

Evidence from public engagement and ethics work all says that hidden pigeonholing and profiling are unacceptable. Data protection law has special requirements for it, on automated decisions. ‘Profiling’ is now clearly defined under Article 4 of the GDPR as “any form of automated processing of personal data consisting of using those data to evaluate certain personal aspects relating to a natural person, in particular to analyse or predict aspects concerning that natural person’s performance at work, economic situation, health, personal preferences, interests, reliability, behaviour, location or movements.”

Using big datasets for research that ‘isn’t interested in individuals’ may still intend to create results profiling groups for applied policing, or discriminate, by making knowledge available by location. The data may have been de-identified, but in application it is no longer anonymous.
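To illustrate that last point, here is a minimal sketch (with entirely invented data) of how a ‘de-identified’ dataset stops being anonymous in application: a couple of quasi-identifiers, linked against any auxiliary register that carries names, can be enough for a unique, re-identifying match.

```python
# Invented example: a "de-identified" health extract still carries
# quasi-identifiers (postcode district, age band)...
deidentified_health = [
    {"postcode_district": "CB4", "age_band": "30-34", "diagnosis": "asthma"},
    {"postcode_district": "CB4", "age_band": "60-64", "diagnosis": "diabetes"},
]

# ...and any auxiliary dataset with the same fields plus names
# (an electoral roll extract, a marketing list) enables linkage.
public_register = [
    {"name": "A. Smith", "postcode_district": "CB4", "age_band": "60-64"},
    {"name": "B. Jones", "postcode_district": "CB2", "age_band": "30-34"},
]

for record in deidentified_health:
    matches = [p["name"] for p in public_register
               if p["postcode_district"] == record["postcode_district"]
               and p["age_band"] == record["age_band"]]
    if len(matches) == 1:  # a unique join re-identifies the "anonymous" record
        print(matches[0], "->", record["diagnosis"])
```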

Big Data research that results in profiling groups may intend its applied health policy impacts for good; the very point of the research may be, for example, to improve a particular ethnic minority’s access to services.

Then look at the voting process changes in North Carolina and see how that same data, the same research knowledge, might be applied to exclude, to restrict rights, and to disempower.

Is it possible to have ethical oversight that can protect good data use and protect people’s rights if they conflict with the policy purposes?

The “clear legal basis” is not enough for public trust

Data use can be legal and can still be unethical, harmful and shortsighted in many ways, both for the impacts on research – if people withhold data, falsify data, or avoid the system so as not to give data in – and for the lives it will touch.

What education has to learn from health is whether it will permit uses by ‘others’ outside education to jeopardise the collection of school data intended to serve the best interests of children, not the system. In England it must start to analyse what is needed versus what is wanted; what is necessary and proportionate, and what justifies maintaining named data indefinitely, exposed to changing scope.

In health, the most recent Caldicott review suggests scope change by design – that is a red line for many: “For that reason the Review recommends that, in due course, the opt-out should not apply to all flows of information into the HSCIC. This requires careful consideration with the primary care community.”

The community already spoke out, and strongly, in spring and summer 2014: there must be an absolute right to confidentiality to protect patients’ trust in the system. Scope that ‘sounds’ like it might sneakily change in future will be a death knell to public-interest research, because repeated erosion of trust will be fatal.

Laws change to allow scope change without informing people whose data are being used for different purposes

Regulators must be seen to be trustworthy if the data they regulate are to be trusted. Laws and regulators that plan scope for the future watering down of public protection water down public trust from today. Unethical policy and practice will not be saved by pseudo-data-science ethics.

Will those decisions in private political rooms be worth the public cost to research, to policy, and to the lives it will ultimately affect?

What happens when the ethical black holes in policy, lawmaking and practice collide?

At the last UK HealthCamp, towards the end of the day when we discussed the hard things, the topic inevitably moved swiftly to consent, to building big databases, to public perception, and to why anyone would think there is potential for abuse when clearly the intended use is good.

The answer came back from one of the participants, “OK now it’s the time to say. Because, Nazis.” Meaning, let’s learn from history.

Given the state of UK politics, Go Home van policies, restaurant raids, the possibility of Trump getting access to UK sensitive data of all sorts from across the Atlantic, given recent policy effects on the rights of the disabled and others, I wonder if we would hear the gentle laughter in the room in answer to the same question today.

With today’s reported sharp change in Whitehall’s digital leadership, the future of digital in government services, policy and lawmaking does indeed seem to be more “about blood and war and power” than “evidence and argument and policy”.

The concept of ethics in datasharing using public data in the UK is far from becoming obsolete. It has yet to begin.

We have ethical black holes in big data research, in big data policy, and in big data practices in England. The conflicts are there: between public-interest research and government uses of population-wide datasets; between how the public perceive our data should be used and how they are used; and in the gaps and tensions between policy and practice.

We are simply waiting for the Big Bang. Whether it will be creative, or destructive we are yet to feel.

*****

image credit: LIGO – graphical visualisation of black holes on the discovery of gravitational waves

References:

Report: Caldicott review – National Data Guardian for Health and Care Review of Data Security, Consent and Opt-Outs 2016

Report: The One-Way Mirror: public attitudes to commercial access to health data

Royal Statistical Society Survey carried out by Ipsos MORI: The Data Trust Deficit

The front door to our children’s personal data in schools

“EdTech UK will be a pro-active organisation building and accelerating a vibrant education and learning technology sector and leading new developments with our founding partners. It will also be a front door to government, educators, companies and investors from Britain and globally.”

Ian Fordham, CEO, EdTech UK

This front door is a gateway to access our children’s personal data, and through it some companies are coming into our schools and homes and taking our data without asking. And with that, our children lose control over their safeguarded digital identity. Forever.

Companies are all “committed to customer privacy” in those privacy policies which exist at all. However, typically this means they also share your information with ‘our affiliates, our licensors, our agents, our distributors and our suppliers’, and their circles are wide and often in perpetuity. Many simply don’t have a published policy.

Where do they store any data produced in the web session? Who may access it and use it for what purposes? Or how may they use the personal data associated with staff signing up with payment details?

According to research from London & Partners, championed by Boris Johnson, Martha Lane-Fox and others in EdTech, education is one of the fastest growing tech sectors in Britain and is worth £45bn globally; a number set to reach a staggering £129bn by 2020. And perhaps the EdTech diagrams in US dollars show where the UK plans to draw companies from. If you build it, they will come.

The enthusiasm of some US EdTech-style entrepreneurs I have met or listened to speak is akin to religious fervour. Such is their drive for tech, however, that they appear to forget that education is all about the child. Individual children. Not cohorts, or workforces. And even when they do remember, it can be sincerely said but lack substance when you examine policies in practice.

How is the DfE measuring the cost and benefit of tech and its applications in education?

Is anyone willing to say that not all tech is good tech, that not every application is a wise application? Because every child is unique, not every app is one-size-fits-all.

My 7-year-old got so caught up in the game and in the mastery of the app her class was prescribed for homework in the past, that she couldn’t master the maths, and it harmed her confidence. (Imagine something like this: clicking on the two correct sheep with numbers stamped on them, that together add up to 12, for example, before they fall off and die.)

She has no problem with maths. Nor doing sums under pressure. She told me happily today she’d come joint second in a speed tables test. That particular app style simply doesn’t suit her.

I wonder if other children and parents find the same and if so, how would we know if these apps do more harm than good?

Nearly 300,000 young people in Britain have an anxiety disorder, according to the Royal College of Psychiatrists. Feeling watched all the time, on- and offline, is unlikely to make anxiety any better.

How can the public and parents know that edTech which comes into the home with their children, is behaviourally sound?

How can the public and parents know that edTech which affects their children, is ethically sound in both security and application?

Where is the measured realism in the providers’ and policy makers’ fervour when both seek to marketise EdTech and our personal data for the good of the economy, and ‘in the public interest’?

Just because we can does not always mean we should. Simply because data linkage is feasible, even if it brings public benefit, does not point-blank mean it will always be in our best interests.

In whose best interest is it anyway?

Right now, I’m not convinced that the digital policies at the heart of the Department for Education, the EdTech drivers or many providers have our children’s best interests at heart at all. It’s all about the economy; when they talk at all about children using the technology, many speak only of ‘preparing the workforce’.

Are children and parents asked to consent at individual level to the terms and conditions of the company and told what data will be extracted from the school systems about their child? Or do schools simply sign up their children and parents en masse, seeing it as part of their homework management system?

How much ‘real’ personal data they use varies. Some use only pseudo-IDs assigned by the teacher. Others log, store and share everything the child does, assigned to their ID or real email address; they store performance over time and provide personalised reports of results.
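As a sketch of the data-minimising end of that spectrum (hypothetical, not any vendor’s actual design): the app only ever sees a random class-local pseudonym, and the mapping back to the real child stays with the teacher, never with the provider.

```python
import secrets

def assign_pseudonyms(pupils: list[str]) -> dict[str, str]:
    """Map real names to random IDs. Only this dict, held locally by
    the teacher, can link app activity back to a named child."""
    return {name: secrets.token_hex(4) for name in pupils}

class_map = assign_pseudonyms(["Ada L.", "Alan T."])
# The provider then receives only records like {"user": "9f2c1ab4", "score": 7}:
# enough to mark homework, useless for profiling a named child in perpetuity.
print(class_map)
```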

Teachers and schools have a vital role to play in understanding data ethics and privacy to get this right, and, speaking to many, it doesn’t seem to be something they feel well equipped to do. Parents aren’t always asked. But shouldn’t schools always have to ask before giving data to a commercial third party, other than in an ‘emergency’ situation?

I love tech. My children love making lego robots move with code. Or driving drones with bananas. Or animation. Technology offers opportunity for application in and outside schools for children that are fascinating, and worthy, and of benefit.

If, however, parents are to protect children’s digital identity for the future, and be able to hand control and integrity over their personal data to them as adults, we must better accommodate children’s data privacy in this 2016 gold rush for EdTech.

Pupils and parents need to be assured their software is both educationally and ethically sound. Who defines those standards?

Who is in charge of Driving, Miss Morgan?

Microsoft’s vice-president of worldwide education recently opened the BETT exhibition, praised teachers for using technology to achieve amazing things in the classroom, and urged innovators to “join hands as a global community in driving this change”.

While there is a case to say no exposure to technology in today’s teaching would be neglectful, there is a stronger duty to ensure exposure to technology is positive and inclusive, not harmful.

Who regulates that?

We are on the edge of an explosion of tech and children’s personal data ‘sharing’ with third parties in education.

Where is its oversight?

The community of parents and children is at real risk of being completely left out of these decisions, and exploited.

The upcoming “safeguarding” policies online are a joke if the DfE tells us loudly to safeguard children’s identity out front, and quietly gives their personal data away for cash round the back.

The front door to our children’s data “for government, educators, companies and investors from Britain and globally” is wide open.

Behind the scenes in pupil data privacy, it’s a bit of a mess. And these policy makers and providers forgot to ask first if they could come in.

If we build it, would you come?

My question now is, if we could build something better on pupil data privacy AND better data use, what would it look like?

Could we build an assessment model of the collection, use and release of data in schools that could benefit pupils and parents, AND educational establishments and providers?

This could be a step towards future-proofing public trust, which will be vital for companies who want a foot in the door of EdTech: design an ethical framework for digital decision making and a practical data model for use in education.

Educationally and ethically sound.

If providers, policy makers, and schools at group or Trust level could meet with data protection and privacy civil society experts to shape a toolkit for assessing privacy impact – to ensure safeguarding and freedoms, enable safe data flow, and help design the cybersecurity that works for them and protects children’s privacy, lacking today and designed for tomorrow – would you come?

Which door will we choose?

*******

image credit: Ben Buschfeld, Wikipedia

*added February 13th: Ofsted Chair sought from US

Monitoring software in schools: the Department for Education’s digital dream or nightmare? (1)

Nicky Morgan, the Education Secretary, gave a speech [1] this week and shared her dream of the benefits of technology for pupils.

It mentioned two initiatives to log children’s individual actions: one is included in a consultation on new statutory guidance, and the other, which she praised, is a GPS-based mobile monitoring app.

As with many new applications of technology, how the concept is to be implemented in practice is important to help understand how intrusive any new use of data is going to be.

Unfortunately for this consultation there is no supporting code of practice on what the change will mean, and questions need to be asked.

The most significant aspects in terms of changes to data collection through required monitoring are in the areas of statutory monitoring, systems, and mandatory teaching of ‘safeguarding’:

Consultation p11/14: “We believe including the requirement to ensure appropriate filtering and monitoring are in place, in statutory guidance, is proportional and reasonable in order to ensure all schools and colleges are meeting this requirement. We don’t think including this requirement will create addition burdens for the vast majority of schools, as they are already doing this, but we are keen to test this assumption.”

Consultation: paragraph 75 on page 22 introduces this guidance section and ends with a link to “Buying advice for schools”: “Governing bodies and proprietors should be confident that systems are in place that will identify children accessing or trying to access harmful and inappropriate content online. Guidance on e-security is available from the National Education Network.”

Guidance: para 78: “Whilst it is essential that governing bodies and proprietors ensure that appropriate filters and monitoring systems are in place they should be careful that ‘over blocking’ does not lead to unreasonable restrictions as to what children can be taught with regards to online teaching and safeguarding.”

Consultation: the “Opportunities to teach safeguarding” section (para 77-78) has been updated and now says governing bodies and proprietors “should ensure” rather than “should consider” that children are taught about safeguarding, including online, through teaching and learning opportunities. “This is an important topic and the assumption is the vast majority of governing bodies and proprietors will already be ensuring the children in their school are suitably equipped with regards to safeguarding. But we are keen to hear views as to the change in emphasis.”

Paragraph 88 on p24 is oddly phrased: “Governing bodies and proprietors should ensure that staff members do not agree confidentiality and always act in the best interests of the child.”

What if confidentiality may sometimes be in the best interests of the child? What would that mean in practice?


Keeping Children Safe in Education – Questions on the Consultation and Use in practice

The consultation [2] is open until February 16th, and includes a new requirement to have web filtering and monitoring systems.

Remembering that 85% of children’s waking hours are spent outside school, that our schools cover a wide range of children aged 2 to 19, and that not every moment is spent unsupervised and on-screen, is it appropriate that this 24/7 monitoring would be applied to all types of school?

One provider’s software is reportedly being used in nearly 1,400 secondary schools in the UK. We hear little about its applied use.

The cases of cyber bullying or sexting in schools that I hear of locally, or read of in the press, happen through smartphones. Unless the school snoops on individual devices, I wonder therefore whether the cost, implementation and impact are proportionate to the benefit.

  1. Necessary and proportionate? How does this type of monitoring compare with other alternatives?
  2. Privacy impact assessment? Has any been done – surely required as a minimum measure?
  3. Cost-benefit and risk assessment of the new guidance in practice?
  4. Problem vs solution: what problem is it trying to solve, how will they measure if it is successful, and will they stop its use if it is not? Are other methods on offer?
  5. Due diligence: how do parents know that the providers have undergone thorough vetting, and who they are? After all, these providers have access to millions of our children’s online interactions.
  6. Evidence: if it has been used for years in schools, how has it been assessed against other methods of supervision?
  7. The national cash cost: this must be enormous when added up for every school in the country; how is cost balanced against risk?
  8. Intangible costs: has anyone asked our children’s feelings on this? Where is the boundary between what is constructive and what is creepy? Is scope change documented if they decide to collect more data?

Are we creating a solution that solves, or creates, a problem?

The private providers would have no incentive to say their reports don’t work, and schools, legally required to be risk averse, would be unlikely to say stop if there is no outcome at all.

Some providers include “review of all incidents by child protection and forensic experts; freeing up time for teachers to focus on intervention” and note that “trends such as top users can be viewed.” How involved are the staff who know the child, as a first point of information sharing?

Most tools are multipurposed, and I understand the reasons given for them, but how they are implemented concerns me.

If the extent of these issues really justifies this mass monitoring in every school, what are we doing to fix the causes, rather than simply spying on every child’s every online action in school? (I look at how it extends outside school in part two.)

Questions on Public engagement: How are children and families involved in the implementation and with what oversight?

  1. Privacy and consent: Has anyone asked pupils and parents what they think and what rights they have to say no to sharing data?
  2. Involvement: Are parents to be involved and informed in software purchasing and in all data sharing decisions at local or regional level? Is there consistency of message if providers vary?
  3. Transparency: Where are the data created through the child’s actions stored, and for how long? Who has access to the data? What actions may result from it? And with what oversight?
  4. Understanding: How will children and parents be told what is “harmful and inappropriate content” as loosely defined by the consultation, and what they may or may not research? Children’s slang changes often, and “safeguarding” terms are subjective.
  5. Recourse: Will it include assessment of unintended consequences from misinterpretation of information gathered?
  6. Consent: And can I opt my child out from data collection by these unknown and ‘faceless’ third parties?

If children and parents are told their web use is monitored, what chilling effect may that have on their trust in the system and in teaching staff, and on their ability to look for content to support their own sensitive concerns or development that they may not feel safe to look for at home? What limitation will that put on their creativity?

These are all questions that should be asked to thoroughly understand the consultation, and they require wide public examination.

Since key logging is already common practice (according to provider websites) and will effectively in practice become statutory, where is the public discussion? If it’s not explicitly statutory, should pupils be subject to spying on their web searches in a postcode lottery?

What exactly might this part of the new guidance mean for pupils?

In part two, I cover the other part of her speech, the GPS app, and ask whether, if we track every child in and outside school, we are moving closer to the digital dream, or the nightmare, in the search to close the digital skills gap.

****

References:

[1] Nicky Morgan’s full speech at BETT

[2] Consultation: Keeping Children Safe in Education – closing Feb 16th

The Consultation Guidance: most relevant paragraphs 75 and 77 p 22

“Governing bodies and proprietors should be confident that systems are in place that will identify children accessing or trying to access harmful and inappropriate content online. [Proposed statutory guidance]

Since “guidance on procuring appropriate ICT” from the National Education Network (NEN*) is offered, it is clearly intended that this ‘system’ to be ‘in place’ should be computer-based. How will it be applied in practice? A number of the software providers for schools already provide services that include key logging, using “keyword detection libraries” that provide “a complete log of all online activity”.
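For a sense of how blunt such systems can be, here is a minimal sketch (hypothetical, not any vendor’s actual product) of keyword-library monitoring: everything typed is logged against the pupil, and flagging is only as good as a static word list, with no context.

```python
# Hypothetical keyword-library monitor: invented terms, illustrative only.
KEYWORD_LIBRARY = {"self-harm", "drugs", "bomb"}

activity_log = []  # the "complete log of all online activity"

def monitor(pupil_id: str, typed_text: str) -> None:
    """Log every entry and flag any keyword hits, context-free."""
    flagged = {w for w in KEYWORD_LIBRARY if w in typed_text.lower()}
    activity_log.append({"pupil": pupil_id, "text": typed_text, "flagged": flagged})

# A pupil researching a history essay is flagged just like a genuine risk:
monitor("pupil_042", "who invented the atomic bomb")
print(activity_log[-1]["flagged"])  # {'bomb'} -- misinterpretation is built in
```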

(*It’s hard to read more about this, as many of NEN’s links are dead.)

Digital revolution by design: infrastructures and the world we want

1. Digital revolution by design: building for change and people
2. Digital revolution by design: barriers by design
3. Digital revolution by design: infrastructures and the fruits of knowledge

This is Part 4.  Infrastructures and the world we want

At a high level, physical network infrastructures enable data transfer from one place to another, and average users perceive little of it.

In the wider world of Internet infrastructure today, this week might be looked back on as, to use a horrible cliché, a game changer. A two-tier Internet traffic system could be coming to Europe, which would destroy a founding principle of equality: all traffic is created equal.

In other news, Facebook announced it will open an office in the toe of Africa, a foothold on a potential market of a billion people.

Facebook’s Internet.org initiative sees a further ‘magnificent seven’ companies working together. Two of them, Ericsson and Nokia, will between them have “an effective lock down on the U.S. market,” unless another viable network competitor emerges. And massive reach worldwide.

In Africa today there is a hodgepodge of operators, and I’ll be interested to see how much effect the boys ganging up under the protection of everybody’s big brother, Facebook, will have on local markets.

And they’re not alone in wanting in on African action.

Whatever infrastructures China is building on and under the ground of the African continent, or whatever ludicrous showcase gifts it donates, how it is doing so has not gone unnoticed. Chinese working practices and environmental standards can provoke local disquiet.

Will Facebook’s decision makers shape up to offer Africa an ethical package that could include not only a social network, but a physical one, managing content delivery in the inner workings of tubes and pipes?

In Europe the data connections within and connecting the continent are shifting, as TTIP, CETA and TISA shape how our data and knowledge will be shared or reserved or copyrighted by multinational corporations.

I hope we will ensure transparency is designed into these supra-national agreements on private ownership of public firms.

We don’t want to find commercial companies withholding information, such as their cyber security planning and infrastructure investments, in the name of commercial protectionism but at a public cost.

The public has opportunities now, as these agreements are being drawn up, that we may not get again soon.

Not only for the physical constructions, the often networked infrastructures, but also for the intangible infrastructures of principles and power, the co-dependencies around a physical system; the legal and ethical infrastructures of ownership, governance and accountability.

The Open Data Institute has just launched a call for the promotion of understanding around our own data infrastructures:

“A strong data infrastructure will increase interoperability and collaboration, efficiency and productivity in public and private sectors, nationally and internationally.”

Sounds like something we want to get right in, and outside, the UK.

Governance of data is physically geographical, through country-unique legislation, as well as supra-national, such as Europe-wide data protection.

These are in some ways outdated legal concepts in a borderless digital age, but they are at least one avenue over which there is manageable oversight, and through which citizens should be able to call companies and the State to account.

Yet that accountability is questionable when laws seem to be bypassed under the banner of surveillance.

As a result people have somewhat lost trust in national bodies to do the right thing. We want to share data for public good but not for commercial exploitation. And we’re not sure who to trust with it.

Data governance of contractual terms is part of the infrastructure needed to prevent exploitation and enable not restrict sharing. And it needs to catch up with apps whose terms and conditions can change after a user has enrolled.

That comes back down to the individual; some more ideas on those personal infrastructures are in the previous post.

Can we build lasting foundations fit for a digital future?

Before launching into haphazard steps towards a digital future in 2020, the NIB/NHS decision makers need to consider the wider infrastructures in which it is set, and understand what ethical compass they are steering by.

Can there be oversight to make national and supra-national infrastructures legally regulated, bindingly interoperable and provider and developer Ts and Cs easily understood?

Is it possible to regulate only that which is offered or sold through UK based companies or web providers and what access should be enabled or barriers designed in?

Whose interests should data and knowledge created from data serve?

Any state paid initiative building a part of the digital future for our citizens must decide, is it to be for public good or for private profit?

NHS England’s digital health vision includes: “clinical decision support to be auto populated with existing healthcare information, to take real time feeds of biometric data, and to consider genomics data in the future.”  [NIB plans, Nov 2014]

In that 66-page document, while it talks of data and trust and cyber security, ethics is not mentioned once. The ambition is to create ‘health-as-a-platform’ and its focus is on tech, not on principles.

‘2020’ is the goal, and it’s not a far-away future at all if counted as 1175 working days from now.

By 2020 we may have moved on or away in a new digital direction entirely or to other new standards of network or technology. On what can we build?

Facebook’s founder sees a futuristic role for biometric data used in communication. Will he drive it? Should we want him to?

Detail will change, but ethical principles could better define the framework for development, promoting the best of innovation long term and protecting citizens from commercial exploitation. We need them now.

When Tim Berners-Lee called for a Magna Carta on the world wide web he asked for help to achieve the web he wants.

I think it’s about more than the web he wants. This fight is not only for net neutrality. It’s not only challenging the internet of things to have standards, ethics and quality that shape a fair future for all.

While we shape the web we want, we shape the world we want.

That’s pretty exciting, and we’d better try to get it right.

******

1. Digital revolution by design: building for change and people
2. Digital revolution by design: barriers by design
3. Digital revolution by design: infrastructures and the fruits of knowledge
4. Digital revolution by design: infrastructures and the world we want


Driving digital health, revolution by design

This follows on from: 1. Digital revolution by design: building for change and people.

***

Talking about the future of digital health in the NHS, Andy Williams went on to ask, what makes the Internet work?

In my head I answered him, freedom.

Freedom from geographical boundaries. Freedom of speech, to share ideas and knowledge in real time with people around the world. The freedom of fair and equal use. Cooperation, creativity, generosity…

Where these freedoms do not exist or are regulated the Internet may not work well for its citizens and its potential is restricted, as well as its risks.

But the answer he gave, was standards.

And of course he was right. Agreed standards are needed when sharing a global system, so that users, their content, and how it works behind the screen cooperate and function as intended.

I came away wondering what the digital future embodied in the NHS NIB plans will look like, who has their say in its content and design, and who will control it?

What freedoms and what standards will be agreed upon for the NHS ‘digital future’ to function and to what purpose?

Citizens help shape the digital future as we help define the framework of how our data are to be collected and used, through what public feeling suggests is acceptable and what people actually use.

What are some of the expectations the public have and what potential barriers exist to block achieving its benefits?

It’s all too easy when discussing the digital future of the NHS to see it as a destination. Perhaps we could shift the conversation focus to people, and consider what tools digital will offer the public on their life journey, and how those tools will be driven and guided.

Expectations

One key public expectation will be of trust: if something digital is offered under the NHS brand, it must be of the rigorous standard we expect.

Is every app a safe, useful tool or fun experiment and how will users [especially for mental health apps where the outcomes may be less tangibly measured than say, blood glucose] know the difference?

A second expectation must be around universal equality of access.

A third expectation must be that people know once the app is downloaded or enrolment done, what they have signed up to.

Will the NHS England / NIB digital plans underway enable these expectations, or create these barriers?

What barriers exist to the NHS digital vision and why?

Is safety regulation a barrier to innovation?

The ability to broadly share innovation at speed is one of the greatest strengths of digital development, but can also risk spreading harm quickly. Risk management needs to be upfront.

We assume that digital designs will put at their heart the core principles in the spirit of the NHS. But if apps are not available on prescription and are essentially a commercial product with no proven benefit, does that exploit the trust in the NHS brand?

Regulation of quality and safety must be paramount, or apps risk doing harm to the person, as any other treatment could; regulation must further consider reputational risk to the NHS and to the app providers.

Regulation shouldn’t be seen as a barrier, but as an enabler to protect and benefit both user and producer, and indirectly the NHS and state.

Once safety regulation is achieved, I hope that spreading benefits will not be undermined by creating artificial boundaries that restrict access to the tools by affordability, in a postcode lottery, or by language.

But are barriers being built by design in the NHS digital future?

Cost: commercial digital exploitation or digital exclusion?

There appear to be barriers being built by design into the current NHS apps digital framework. The first being cost.

Even for the poorest in the UK today, in maternity care, exclusion is already measurable in those who can and cannot afford the smartphone data allowance that e-Red Book access costs, attendees were told by its founder at #kfdigital15.

Is digital participation and its resultant knowledge or benefit to become a privilege reserved for those who can afford it? No longer free at the point of service?

I find it disappointing that, for all the talk of digital equality, apps are for sale on the NHS England website and many state they may not be available in your area – a two-tier NHS by design. If it’s an NHS app, surely it should be available on prescription and/or be free at the point of use for all, like any other treatment? Or is it yet another example of NHS postcode-lottery care?

There are tonnes of health apps on the market which may not have much proven health benefit, but they may sell well anyway.

I hope that the decision makers shaping these frameworks and social contracts in health today are also looking beyond the worried well, who may be the wealthiest and can afford apps, and not leaving behind the needs of those who can’t afford to pay for them.

At home, it is some of the least wealthy who need the most intervention, and from whom there may be little profit to be made. There is little in the 2020 plans that I can see focusing on the most vulnerable: those in prison and IRCs, and those with disabilities.

Regulation, in addition to striving for quality and safety by design, can ensure there is no commercial exploitation of purchasers. However, it is a question of principle that will decide for or against excluding users based on affordability.

Geography: crossing language, culture and country barriers

And what about our place in the wider community, the world wide web, as Andy Williams talked about: what makes the Internet work?

I’d like to think that governance and any “kite marking” of digital tools such as apps, will consider this and look beyond our bubble.

What we create and post online will be on the world wide web.  That has great potential benefits and has risks.

I feel that in the navel gazing focus on our Treasury deficit, the ‘European question’ and refusing refugees, the UK government’s own insularity is a barrier to our wider economic and social growth.

At the King’s Fund event and at the NIB meeting the UK NHS leadership did not discuss one of the greatest strengths of online.

Online can cross geographical boundaries.

How are NHS England approved apps going to account for geography and language and cross country regulation?

What geographical and cultural barriers to access are being built by design just through lack of thought into the new digital framework?

Barriers that will restrict access and benefits both in certain communities within the UK, and to the UK.

One of the three questions asked at the end of the NIB session, was how the UK Sikh community can be better digitally catered for.

In other parts of the world both traditional and digital access to knowledge are denied to those who cannot afford it.


This photo, reportedly from Indonesia, is great [via Banksy on Twitter; apologies that I cannot credit the photographer]: two boys on the way to school pass their peers on their way to work.

I wonder if one of these boys has the capability to find the cure for cancer?
What if he is one of the five, not one of the two?

Will we enable the digital infrastructure we build today to help global citizens access knowledge and benefits, or restrict access?

Will we enable broad digital inclusion by design?

And what of data sharing restrictions: barrier or enabler?

Organisations that talk only of legal, ethical or consent ‘barriers’ to datasharing don’t understand human behaviour well enough.

One of the greatest risks to achieving the potential benefits from data is the damage done to it by organisations that are paternalistic and controlling. They exploit a relationship rather than nurturing it.

The data trust deficit from the Royal Statistical Society has lessons for policymakers. Including finding that: “Health records being sold to private healthcare companies to make money for government prompted the greatest opposition (84%).”

Data are not an abstract to be exploited, but personal information. Unless otherwise informed, people expect that information offered for one purpose, will not be used for another. Commercial misuse is the greatest threat to public trust.

Organisations that believe behavioural barriers to data sharing are an obstacle have forgotten that trust is not something to be overcome, but something to be won, and continuously reviewed and protected.

The known barrier without a solution is the lack of engagement that is fostered where there is a lack of respect for the citizen behind the data. A consensual data charter could help to enable a way forward.

Where is the wisdom we have lost in knowledge?

Once an app is [prescribed], used, and data exchanged with the NHS health provider and/or app designer, how will users know that what they agreed to in an in-store app does not change over time?

How will ethical guidance be built into the purposes of any digital offerings we see approved and promoted in the NHS digital future?

When the recent social media experiment by Facebook only mentioned the use of data for research after the experiment, it caused an outcry.

It crossed the line between what people felt was acceptable and what was intrusive, analysing the change in behaviour that Facebook’s intervention caused.

That this manipulation is not only possible but could go unseen is both a risk and a cause for concern in a digital world.

Large digital platforms, even small apps have the power to drive not only consumer, but potentially social and political decision making.

“Where is the knowledge we have lost in information?” ask the words of T. S. Eliot in Choruses from The Rock. “However you disguise it, this thing does not change: The perpetual struggle of Good and Evil.”

Knowledge can be applied to make a change to current behaviour, and offer or restrict choices through algorithmic selection. It can be used for good or for evil.

‘Don’t be evil’, Google’s adopted mantra, is not just some silly slogan.

Knowledge is power. How that power is shared with or withheld from citizens matters not only for today’s projects, but for the whole future that digital is helping create. Online and offline. At home and abroad.

What freedoms and what standards will be agreed upon for it to function and to what purpose? What barriers can we avoid?

When designing for the future I’d like to see discussion consider not only the patient need, and potential benefits, but also the potential risk for exploitation and behavioural change the digital solution may offer. Plus, ethical solutions to be found for equality of access.

Regulation and principles can be designed to enable success and benefits, not viewed as barriers to be overcome

There must be an ethical compass built into the steering of the digital roadmap that the NHS is so set on, towards its digital future.

An ethical compass guiding app consumer regulation, to enable fairness of access, and to ensure that when apps are downloaded or digital programmes begun, users know what they have signed up to.

Fundamental to this, as the NIB speakers all recognised at #kfdigital15, is the ethical and trustworthy extraction, storage and use of data.

There is opportunity to consider when designing the NHS digital future [as the NIB develops its roadmaps for NHS England]:

i. making principled decisions on barriers,
ii. pro-actively designing ethics and change into ongoing projects, and
iii. ensuring engagement is genuine collaboration and co-production.

The barriers do not need to be got around; solutions need to be built in by design.

***

Part 1. Digital revolution by design: building for change and people
Part 3. Digital revolution by design: building infrastructures

NIB roadmaps: https://www.gov.uk/government/uploads/system/uploads/attachment_data/file/384650/NIB_Report.pdf

Digital revolution by design: building for change and people (1)

Andy Williams said* that he wants not evolution, but a revolution in digital health.

It strikes me that few revolutions have been led top down.

We expect revolution from grass roots dissent, after a growing consensus in the population that the status quo is no longer acceptable.

As the public discourse over the last 18 months about the NHS use of patient data has proven, we lack a consensual agreement between state, organisations and the public on how the data in our digital lives should be collected, used and shared.

The 1789 Declaration of the Rights of Man and Citizen as part of the French Revolution set out a charter for citizens, an ethical and fair framework of law in which they trusted their rights would be respected by fellow men.

That is something we need in this digital revolution.

We are told on the one hand by government that it is necessary to share all our individual-level health data, from all sorts of sources.

And that bulk data collection is vital in the public interest to find surveillance knowledge that government agencies want.

At the same time, other government departments plan to restrict citizens’ freedom of access to knowledge that could be used to hold the same government and civil servants to account.

On the consumer side, there is public concern about the way we are followed around on the web by companies including global platforms like Google and Facebook, that track our digital footprint to deliver advertising.

There is growing objection to the ways in which companies scoop up data to build profiles of individuals and groups and personalising how they get treated. Recent objection was to marketing misuse by charities.

There is little broad understanding yet of the insight that organisations can now gain to track and profile, due to the power of algorithms and processing capability.

Technology progress that has left legislation behind.

But whenever you talk to people about data there are two common threads.

The first, is that although the public is not happy with the status quo of how paternalistic organisations or consumer companies ‘we can’t live without’ manage our data, there is a feeling of powerlessness that it can’t change.

The second, is frustration with organisations that show little regard for public opinion.

What happens when these feelings both reach tipping point?

If Marie Antoinette were involved in today’s debate about the digital revolution I suspect she may be the one saying: “let them eat cookies.”

And we all know how that ended.

If there is to be a digital revolution in the NHS where will it start?

There were marvellous projects going on at grassroots level, discussed over the two days: bringing the elderly online and connected, and work in housing and deprivation. Young patients with rare diseases are designing apps and materials to help consultants improve communications with patients.

The NIB meeting didn’t have real public interaction, or any discussion of those projects ‘in the room’, in the 10 minutes offered. Considering the wealth of hands-on digital health and care experience in the audience, it was a missed opportunity for the NIB to hear common issues and listen to suggestions for co-designed solutions.

While white middle class men (for the most part) tell people of their grand plans from the top down, the revolutionaries of all kinds are quietly getting on with action on the ground.

If a digital revolution is core to the NHS future, then we need to ask for the intended change and outcome to be set out much more simply and precisely.

We should all understand why the NHS England leadership wants to drive change, and be given proper opportunity to question it, if we are to collaborate in its achievement.

It’s about the people, stoopid

Passive participation will not be enough from the general public if the revolution is to be as dramatic as it is painted.

Consensual co-design of plans and co-writing policy are proven ways to increase commitment to change.

Evidence suggests citizen involvement in planning is more likely to deliver success. Change done with, not to.

When constructive solutions have been offered, what impact has engagement had if no change is made to any plans?

If that’s engagement, you’re doing it wrong.

Struggling as they are to get the current design together for now, it may be hard to invite public feedback on the future.

But it’s only made hard if what the public wants is ignored. If those issues were resolved in the way the public asked for at listening events, it could be quite simple to solve.

The NIB leadership clearly felt nervous about having a debate, giving only 10 minutes of three hours for public involvement, yet that is what it needs. Questions and criticism are not something to be scared of, but opportunities to make things better.

The NHS top-down digital plans need to be publicly debated and dissected by the clinical professions, to see if they fit the current and future model of healthcare; if people are not involved in the change, the ride to get there will be awfully bumpy.

For data about us, to be used without us, is certainly an outdated model incompatible with a digital future.

The public needs to fight for citizen rights in a new social charter that demands change along lines we want, change that doesn’t just talk of co-design but that actually means it.

If unhappy about today’s data use, then the general public has to stop being content to be passive cash cows as we are data mined.

If we want data used only for public benefit research and not market segmentation, then we need to speak up. To the Information Commissioner’s Office if the organisation itself will not help.

“As Nicole Wong, who was one of President Obama’s top technology advisors, recently wrote, “[t]here is no future in which less data is collected and used.”

“The challenge lies in taking full advantage of the benefits that the Internet of Things promises while appropriately protecting consumers’ privacy, and ensuring that consumers are treated fairly.” Julie Brill, FTC, May 4 2015, Berlin

In the rush to embrace the ‘Internet of Things’ it can feel as though the reason for creating them has been forgotten. If the Internet serves things, it serves consumerism. AI must tread an enlightened path here. If the things are designed to serve people, then we would hope they offer methods of enhancing our life experience.

In the dream of turning a “tsunami of data” into a “tsunami of actionable business intelligence,” it seems all too often the person providing the data is forgotten.

While the Patient and Information Directorate, NHS England or NIB speakers may say these projects are complex and their benefits hard to communicate, I’d say that if you can’t communicate the benefits, it’s not the fault of the audience.

People shouldn’t have to either a) spend immeasurable hours of their personal time understanding how the projects that want their personal data work, or b) put up with being ignorant.

We should be able to fully question why it is needed and get a transparent and complete explanation. We should have fully accountable business plans and public scrutiny of tangible and intangible benefits before projects launch on the basis of public buy-in which may be misplaced. We should expect plans to be accessible to everyone, and documents made straightforward enough to be so.

Even after listening to a number of these meetings and board meetings, I am not sure many would be able to put it succinctly: what is the NHS digital forward view really? How is it to be funded?

On the one hand new plans are to bring salvation, while on the other, funding stops for what already works today.

Although the volume of activity planned is vast, what it boils down to is what is visionary and achievable, and not just a vision.

Digital revolution by design: building for change and people

We have opportunity to build well now, avoiding barriers-by-design, pro-actively designing ethics and change into projects, and to ensure it is collaborative.

Change projects must map out their planned effects on people before implementing technology. For the NHS that’s staff and public.

The digital revolution must ensure the fair and ethical use of the big data that will flow for direct care and secondary uses if it is to succeed.

It must also look beyond its own bubble of development, as plans take shape within ever-changing infrastructures in which data, digital, AI and ethics will need to be discussed together.

That includes in medicine.

Design for the ethics of the future, and enable change mechanisms in today’s projects that will cope with shifting public acceptance, because that shift has already begun.

Projects whose ethics and infrastructures of governance were designed years ago, have been overtaken in the digital revolution.

Projects with an old style understanding of engagement are not fit-for-the-future. As Simon Denegri wrote, we could have 5 years to get a new social charter and engagement revolutionised.

Tim Berners-Lee when he called for a Magna Carta on the Internet asked for help to achieve the web he wants:

“do me a favour. Fight for it for me.”

The charter as part of the French Revolution set out a clear, understandable, ethical and fair framework of law in which they trusted their rights would be respected by fellow citizens.

We need one for data in this digital age. The NHS could be a good place to start.

****

It’s exciting hearing about the great things happening at grassroots. And incredibly frustrating to then see barriers to them being built top down. More on that shortly, on the barriers of cost, culture and geography.

****

* at the NIB meeting held on the final afternoon of the Digital Conference on Health & Social Care at the King’s Fund, June 16-17.

NEXT>>
2. Driving Digital Health: revolution by design
3. Digital revolution by design: building infrastructure

Refs:
Apps for sale on the NHS website
Whose smart city? Resident involvement
Data Protection and the Internet of Things, Julie Brill FTC
A Magna Carta for the web

Off the record – a case study in NHS patient data access

Online access to patient medical records in England was promised by April 2015.

Just last month headlines abounded: “GPs ensure 97% of patients can access summary record online“. Speeches carried the same statistics. So what did that actually mean? The HSCIC figures released in May 2015 showed that while around 57 million patients could potentially access something of their care record, only 2.5 million, or 4.5%, had actively signed up for the service.

In that gap lies a gulf of a difference. You cannot access the patient record unless you have signed up for it, so to give the impression that 97% of patients can access a summary record online is untrue. Only 4.5% can, and have done so. Yes, patients must request access before they can use it, but the impression given is nonetheless misrepresentative.
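
As a back-of-envelope check, here is the arithmetic on the two rounded figures quoted above (a minimal sketch; because both inputs are rounded it lands a fraction under the published 4.5%):

import Foundation

// Rounded HSCIC figures (May 2015 release) quoted above.
let couldPotentiallyAccess = 57_000_000.0  // the basis of the "97%" headline
let activelySignedUp       =  2_500_000.0  // patients who had actually signed up

let share = activelySignedUp / couldPotentiallyAccess * 100
print(String(format: "%.1f%% of patients had usable access", share))
// prints "4.4% of patients had usable access" – a far cry from the 97% impression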

Here’s my look at what that involved and, once signed up, what ‘access your medical records’ may actually mean in practice.

The process of getting access

First I wrote a note to the practice manager about a month ago, and received a phone call a few days later to pop in any time. A week later, I called to check before I would ‘pop in’ and found that the practice manager was not in every day, and it would actually have to be when she was.

I offered to call back and arrange a suitable date and time. On the next call we usefully agreed a potential date I could go in, but I’d have to wait for confirmation that the paper records had first been retrieved from the external store (since another practice closed down, ours had become busier than ever and run out of space). I was asked whether I had already received permission from the practice manager, and to confirm that I knew there would be a £10 charge.

So, one letter, four phone calls and ten pounds in hard cash later, I signed a disclosure form this morning to say I was me and had asked to see my records, and sat in a corner of the lovely practice manager’s office with a small, thinly stuffed Lloyd George envelope, a few photocopied or printed-out A4 pages (so I didn’t get to look at the actual on-screen record the GP uses), and a receipt.

What did my paper records look like?

My oldest notes on paper went back as far as 1998 and were for the most part handwritten. Having lived abroad since, there was then a ten-year gap until my new registration, when the notes moved onto paper prints of electronic notes.

These included referrals for secondary care, and correspondence between consultants and my GP and/or to and from me.

The practice manager was very supportive and tolerant of me taking up a corner of her office for half an hour. Clutching a page with my new log-in for the EMIS web for patient records access, I put the papers back, said my thank yous and set off home.

Next step: online

I logged on at home to the patient access system. Having first had access in 2009 when I registered, I hadn’t used the system since, as it had very limited functionality and I had been in good health. Now I took the opportunity to try it again.

By asking at the GP practice reception, I had been assigned a PIN and given the Practice ID, an Access ID and confirmation of my NHS number; all needed entry in Step 1:

[screenshot: EMIS Patient Access sign-up, step 1]

Step 2: on the next screen, I was asked for my name, DOB, and to create a password.

[screenshot: EMIS Patient Access sign-up, step 2]

Step 3: the system generated a long numeric user ID, which I noted down.

Step 4: I looked for the data sharing and privacy policy. I didn’t spot with whom any data entered would be shared, for what purposes, or any retention periods or restrictions of purpose. I’d like to see that added.

[screenshot: EMIS Patient Access sign-up]
Success:

Logged on with my new long user ID and password, I could see an overview page with personal contact details, which were all accurate. There were sections for current meds, allergies, appointments, medical record, personal health record and repeat prescriptions. There was space for an overview of height, BMI and basic lifestyle (alcohol and smoking) there too.

[screenshot: Patient Access overview page]

A note from 2010 read: “refused consent to upload national. sharing. electronic record.” Appropriately, some may think, this was recorded in the “problems” section, which was otherwise empty.

Drilling down to view the medication record, the only data held was the single most recent top-line prescription, without any history.

[screenshot: Patient Access medication record]

And the only other section to view was allergies, similarly and correctly empty:

[screenshot: Patient Access allergies section]

The only error I noted was a line saying I was due an MMR immunisation in June 2015. [I will follow up to check whether one of my children is due for it, rather than me.]

What else was possible?

Order repeat prescription: if your practice offers this service, there is a link called Make a request in the Repeat Prescriptions section of the home page after you have signed in. This was possible, though our practice already does it directly with the pharmacy.

Book an appointment: with your own GP, from dates in a drop-down.

Apple Health app integration: the most interesting part of the online access was the suggestion that it could upload a patient’s Apple Health app data and, with active patient consent, share it with the GP.

[screenshot: Patient Access Apple Health integration]

It claims: “You can consent to the following health data types being shared to Patient Access and added to your Personal Health Record (PHR):”

  • Height
  • Weight
  • BMI
  • Blood Glucose
  • Blood Pressure (Diastolic & Systolic)
  • Distance (walked per day)
  • Forced expired volume
  • Forced Vital capacity
  • Heart Rate
  • Oxygen Saturation
  • Peak Expiratory Flow
  • Respiratory rate
  • Steps (taken per day)

“This new feature is only available to users of IOS8 who are using the Apple Health app and the Patient Access app.”

With the important caveat for some: iOS 8.1 has removed the ability to manually enter Blood Glucose data via the Health app. Health will continue to support Blood Glucose measurements added via 3rd party apps such as MySugr and iHealth.

Patient Access will still be able to collect any data entered and we recommend entering Blood Glucose data via one of those free apps until Apple reinstate the capability within Health.
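
For the technically curious, this is roughly what that consent step looks like under the hood. A minimal sketch, my own illustration rather than the actual Patient Access implementation, of an iOS app requesting read access to the listed data types via Apple’s HealthKit framework; the mapping of the list above onto HealthKit identifiers is my assumption:

import HealthKit

let store = HKHealthStore()

// The data types from the list above, mapped onto HealthKit identifiers.
let identifiers: [HKQuantityTypeIdentifier] = [
    .height, .bodyMass, .bodyMassIndex, .bloodGlucose,
    .bloodPressureDiastolic, .bloodPressureSystolic,
    .distanceWalkingRunning,      // distance walked per day
    .forcedExpiratoryVolume1,     // forced expired volume
    .forcedVitalCapacity, .heartRate, .oxygenSaturation,
    .peakExpiratoryFlowRate, .respiratoryRate,
    .stepCount                    // steps taken per day
]
let readTypes = Set(identifiers.compactMap { HKObjectType.quantityType(forIdentifier: $0) })

// iOS presents the patient with a consent sheet listing each type; the app
// is never told which read permissions were refused, it simply receives no
// data for them – consent stays with the patient, not the app.
store.requestAuthorization(toShare: nil, read: readTypes) { success, error in
    print("Consent sheet completed: \(success), error: \(String(describing: error))")
}

Worth noting as a design point: consent here is granular per data type and can be withdrawn later in the Health app’s settings, a level of control the web sign-up above did not appear to document.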

What was not possible:

To update contact details: The practice configures which details you are allowed to change. It may be their policy to restrict access to change some details only in person at the practice.

Viewing my primary care record: other than the current medication, there was nothing of my current records in the online record. Things like test results were not in my online record at all, only on paper. Pulse noted sensible concerns about this area in 2013.

Make a correction: clearly the MMR jab note is wrong, but I’ll need to ask for help to remove it.

“Currently the Patient Access app only supports the addition of new information; however, we envisage quickly extending this functionality to delete information via the Patient Access service.” How this will ensure accuracy and avoid self-editing, I am unsure.

Questions: Who can access this data?

While the system stated that “the information is stored securely in our accredited data centre that deals solely with clinical data”, there is no indication of where that is, who manages it, who may access it, or why.

In 2014 it was announced that pharmacies would begin to have access to the summary care record.

“A total of 100 pharmacies across Somerset, Northampton, North Derbyshire, Sheffield and West Yorkshire will be able to view a patient’s summary care record (SCR), which contains information such as a patient’s current medications and allergies.”

Yet clearly, in the 2010 Summary Care Record consent process noted in my record, pharmacists were not mentioned.

Does the patient online access also use the Summary Care Record or not? If so, did I by asking for online access, just create a SCR without asking for one? Or is it a different subset of data? If they are different, which is the definitive record?

Overall:

From stories we read, it could appear that there are broad discrepancies between what is possible in one area of the country and another, and between one practice and another.

Clearly, to give the impression that 97% of patients can access summary records online is untrue to date, if only 4.5% can actually get onto an electronic system and see any part of their records on demand today.

How much value is added for the patients and practitioners within that 4.5% may vary enormously, depending upon what functionality different locations have chosen to enable.

For me, as a rare user of the practice, there is no obvious benefit right now. I can book appointments during the day by telephone, and meds are ordered through the chemist. The record contained no other information.

I don’t know what evidence base came from patients to decide that Patient Online should be a priority.

How many really want and need real-time, online access to their records? Would patients not far rather that, in these times of austerity, the cash, time and IT expertise be focused on IT in direct care, visible to their medics? So that when they visit hospital, their records would be available to different departments within the hospital?

I know which I would rather have.

What would be good to see?

I’d like to see a much clearer distinction between what data we share for direct purposes and what for indirect purposes, and on what legal basis.

Not least because it needs to be understandable within the context of data protection legislation. There is often confusion in discussions about what consent can be implied for direct care, and where to draw its limit.

The response to the consultation launched in June 2014 is still to be published, although the consultation closed in August 2014; it too blurred the lines between direct care and secondary purposes. (https://www.gov.uk/government/consultations/protecting-personal-health-and-care-data)

Secondly, if patients start to generate potentially huge quantities of data in the Apple link and upload it to GP electronic records, we need to get this approach correct from the start. Will that data be onwardly shared by GPs through care.data for example?

But first, let’s start with tighter use of language in communications. Not only for the sake of increased accuracy, but so that expectations are properly set for policy makers, practitioners and patients making future decisions.

There are many impressive visions and great ideas of how data are to be used for the benefit of individuals and the public good.

We need an established, easy-to-understand, legal and ethical framework for our datasharing in the NHS, to build on in turning benefits into an achievable reality.

smartphones: the single most important health treatment & diagnostic tool at our disposal [#NHSWDP 2]

After Simon Stevens’ big statement on smartphones at the #NHSWDP event, I’d asked what sort of assessment the NHS had done on how wearables’ data would affect research.

#digitalinclusion is clearly less about a narrow focus on apps than about applied skills and online access.

But I came away wondering how apps will work in practice, affect research and our care in the NHS in the UK, and much more.

What about their practical applications and management?

NHS England announced a raft of regulated apps for mental health this week, though these are not the first to be approved.

This one doesn’t appear to have worked too well.

The question needs an answer before many more are launched: how will these be catalogued, indexed and stored? Will it be just a simple webpage? I’m sure we can do better to make this page user-friendly and intuitive.

This British NHS military mental health app is on iTunes. Will iTunes carry a complete NHS approved library and if so, where are the others?

We don’t have a robust regulation model for digital technology, it was said at a recent WHF event, and while medical apps are sold as wellness or fitness or just for fun, patients could be at risk.

In fact, I’m convinced that while medical apps are being used by consumers as medical devices, for example as tests or as tools which make recommendations, and are not thoroughly regulated, we *are* at risk.

If Simon Stevens sees smartphones as: “going to be the single most important health treatment and diagnostic tool at our disposal over the coming decade and beyond,” then we’d best demand the tools that work on them, work safely. [speech in full]

And if his statement on their importance is true, then when will our care providers be geared up to accept extracts of data held on a personal device into the local health record – and how will interoperability, testing and security work?

And who’s paying for them? Those in the library right now have price tags. The public should be getting lots of answers to lots of questions.

“Over the coming decade”  has already started.

What about research? I know the Apple ResearchKit had a big reaction, and I’m sure there’s plenty of work already done on expectations of how data sharing in wearables affects research participation. (I just haven’t read it yet, but am interested to do so; feel free to point any my way.)

I was interested in the last line in this article: “ResearchKit is a valiant effort by Apple, and if its a hit with scientists, it could make mass medical research easier than ever.”

How do we define ‘easier’? Has Apple hit on a mainstream research app? How is ‘mass medical research’ in public health for example, done today and how may it change?

Will more people be able to participate in remote trials?

Will more people choose to share their well-being data and share ‘control’ phenotype data more in depth than in the past?

Are some groups under- or not-at-all represented?

How will we separate control of datasharing for direct care and for other secondary uses like research?

Quality: Will all data be good data or do we risk research projects drowning in a data tsunami of quantity not quality? Or will apps be able to target very specific trial data better than before?

How: One size will not fit all. How will data stored in wearables affect research in the UK? Will those effects differ between the UK and the US? Will app designs need different approaches given the NHS’s long history, and need to take single standards into account and be open? How will research take historical data into account if apps are all ‘now’? How will research based on that data be peer reviewed?

Where: And as we seek to close the digital divide here at home, what gulf may be opening up in the research done in public health, the hard to reach, and even between ‘the west’ and ‘developing’ countries?

In the UK, will the digital postcode lottery affect care? Even with a wish for wifi in every part of the NHS estate, the digital differences are vast. Take a look at Salford, whose digital plans are worlds apart from my own Trust, which has barely got rid of Lloyd George folders on trolleys.

Who: Or will the divide in fact be not by geography, but by accessibility based on wealth? While NHS England talks about digital exclusion, you would hope they would be doing all they can to reduce it. However, the mental health apps announced just this week each carry a price tag if ‘not available’ to you on the NHS.

Why: on what basis will decisions be made on who gets apps prescribed and who pays for them, and where apps are to be made available for which area of diagnosis or treatment – or at all, if the instructions are “to find out if it’s available in your area email xxx or call 020 xxx. Or you could ask your GP or healthcare professional.”

The highest-intensity users of NHS provision are unlikely to be the greatest users of growing digital trends.

Rather, the “worried well” would seem the ideal group: encouraged to stay away from professionals and to self-care, with self-paid support from high street pharmacies. How much could or will this measurably benefit the NHS and the individual, and make lives better? As the population is increasingly risk-stratified and grouped into manageable portions, will some be denied care based on data?

Or will the app providers be encouraged to promote their own products, make profits, benefit the UK plc regardless of actual cost and measurable benefits to patients?

In 2013, IMS Health reported that more than 43,000 health-related apps were available for download from the Apple iTunes app store. Of those, the IMS Institute found that only 16,275 apps are directly related to patient health and treatment, and there was much to be done to move health apps from novelty to mainstream.

Reactionary or realistic – and where’s the risk assessment before NHS England launches even more approved apps?

Exciting as it is, with this tempting smörgåsbord of shiny new apps comes a set of new risks which cannot responsibly be ignored: in patient safety, in cyber security, and in what, and who, will be left out.

Given that in some places basic data cannot yet be shared between GP and hospital for direct care, due to local lack of tech, and that the goal is another five years away, how real is the hyped, enormous impact of wearables going to be for the majority, or at scale?

On digital participation projects: “Some of the work that has already been done by the Tinder Foundation, you take some of the examples here, with the Sikh community in  Leicester around diabetes, and parenting in other parts of the country, you can see that this is an agenda which can potentially get real quite quickly and can have quite a big impact.”
(Simon Stevens)

These statements, each on a different aspect of digital inclusion (Simon Stevens on smartphones and scale, Tim Kelsey on consent), are fundamentally bound together.

What will wearables mean for diagnostics, treatment and research in the NHS? For those who have and those who have not?

How will sharing data be managed for direct care and for other purposes?

What control will the patriarchy of the NHS reasonably expect to have over patients’ choice of app from any provider? Do most patients know at all what effect their choice may have on their NHS care?

How will funding be divided into digital and non-digital, and be fair?

How will we maintain the principles and practice of a ‘free at the point of access’ digital service available to all in the NHS?

Will there really be a wearables revolution? Or has the NHS leadership just jumped on a bandwagon as yet without any direction?

****

[Next: part three – on consent – #NHSWDP 3: Wearables: patients will ‘essentially manage their data as they wish’. What will this mean for diagnostics, treatment and research and why should we care?]

[Previous: part one – #NHSWDP 1: Thoughts on Digital Participation and Health Literacy: Opportunities for engaging citizens in the NHS – including Simon Stevens full keynote speech]

On Being Human – moral and material values

The long-running rumours of change afoot on human rights political policy were confirmed recently, and have been in the media and on my mind since.

Has human value become not just politically acceptable, but politically valuable?

Paul Bernal, in his blog post ‘Valuing the Human’, addressed the subject which has been on my mind and explored the idea that ‘Many people seem to think that there isn’t any value in the human, just in certain kinds of human.’

Indeed, in recent months there appears to have been the creation of a virtual commodity, making this concept of human value “not just politically acceptable, but politically valuable.” The concept of human value as a commodity was starkly highlighted by Lord Freud’s recent comments on human worth. How much a disabled person should earn was the focus of the remarks, but they conflated the price of labour with human value.

European Rights undermined

Given the party policy announcements, and the response by others in government or the lack of it, it is unsurprising that those familiar with human rights feel they will be undermined should the policy proposals ever take effect. As the nation gears up into full electioneering mode for May 2015, we have heard much after party speeches about rights and responsibilities in our dealings with European partners, and about what Europe contributes to, or takes away from, our sovereignty in terms of UK law. There has been some inevitable back-slapping, and generalisation in some quarters that everything ‘Europe’ is bad.

Whether or not our state remains politically within the EU may be up for debate, but our tectonic plates are not for turning. So I find it frustrating when politicians speak, or the media report, of ‘pulling out of Europe’ or similar.

This conflation of language is careless, but I fear it is also dangerous at a time when the right-wing fringe is taking mainstream votes, and politicians, in by-elections. Both here in the UK and in other European countries this year, far-right groups have taken significant votes.

Poor language on what is ‘Europe’ colours our common understanding of what ‘Europe’ means; the nuances of the roles of its organisational bodies, for example the differences between the European Court of Human Rights and the European Court of Justice, and their purposes, are lost entirely.

The values invoked in the debate are therefore misaligned with the organisations’ duties, and all things and organisations ‘European’ are tarred with the same ‘interfering’ brush and devalued.

Human rights were not, at their heart, created by ‘Europe’, nor are they only some sort of treaty to be opted out from [whilst many are enshrined in treaties and Acts which were, and are], but their values risk being conflated with the structures which support them.

“A withdrawal from the convention could jeopardise Britain’s membership of the EU, which is separate to the Council of Europe whose members are drawn from across the continent and include Russia and Ukraine. Membership of the Council of Europe is a requirement for EU member states.” [Guardian, October 3rd – in a clearly defined article]

Participation in the infrastructure of ‘Brussels’, however, is convenient to conflate with values: a loss of sovereignty, loss of autonomy, frivolous legislation. Opting out of a convention should not mean changing our values. However, it does seem the party attitude now on show is seeking to withdraw from the convention. This would mean withdrawing the protections the structure offers. Would it mean withdrawing rights offered to all citizens equally as well?

Ethical values undermined

Although it varies culturally and with few exceptions, I think we do have in England a collective sense of what is fair, and of how we wish to treat each other as human beings. Increasingly, however, it feels as though through loose use, or abuse, of language in political debate we may be giving ground on our ethics. We are being forced to bring the commodity of human value to the podium, and declare on which side we stand in party politics. In a time of austerity, there is a broad range of ideas as to how.

Welfare has become branded ‘benefits’. Migrant workers are ‘foreigners’ over here for ‘benefit tourism’. The disabled are labelled ‘fit for work’ regardless of medical fact. Increasingly in the UK, it appears, some citizens are being measured by the economic, material value they contribute to, or take away from, ‘the system’.

I’ve been struck by the contrast, coming back from 12 years abroad, to find England a place where the emphasis is on living to work, not working to live. If we’re not careful, we see our personal output in work as a measure of our value. Are humans to be measured only in terms of our output, by our productivity, by our ‘doing’, or by our intrinsic value as an individual life? Or simply by our ‘being’? If indeed we go along with the concept that we are here to serve some sort of productive goal in society on an economic basis, then our value is measured by our ‘doing’, on a material basis.

“We hear political speeches talking about ‘decent, hardworking people’ – which implies that there are some people who are not as valuable.”

I strongly agree with this point in Paul’s blog and, as he does, disagree with the value statement it describes.

Minority Rights undermined

There are minorities and segments of society whose voice is being either ignored or actively quietened. Those on the outer edge of the umbrella ‘society’ offers us, in our collective living, are perhaps least easily afforded its protections. Travellers, those deemed to lack capacity, whether ill, old or young, single parents, or ‘foreign’ workers, to take just some examples.

I was told this week that the UK has achieved a first. It was said we are the first ‘first-world’ country under review by the UN’s CRPD committee for human rights abuse of the disabled. This can be neither confirmed nor denied by the UN, but a recent video indicated it.

This is appalling in 21st century Britain.

Recently on Radio 4 news I heard of thousands of ESA claimants assigned to work, although their medical records clearly state they are long-term unfit.

The group at risk highlighted on October 15th in the Lords, in the debate on electoral records’ changes [col 206], is women in refuges, women who feel at risk. As yet, I see nothing to assure me that measures have been taken to look after this group, here or for care.data.{*}

These are just a few simplified sample groups that others have flagged as at risk. I feel these groups’ basic rights are being ignored because, for these minorities, they can be. Are they viewed as of less value than the majority of ‘decent, hardworking people’, perhaps as having less economic worth to the state?

Politicians may say that any change will continue to offer assurances:
“We promote the values of individual human dignity, equal treatment and fairness as the foundations of a democratic society.”

But I simply don’t see it done fairly for all.

I see society being quite deliberately segmented into different population groups, weak and strong. Some groups need more looking after than others, and I am attentive when I hear of groups portrayed as burdens to society, in contrast to the rest, who are economically ‘productive’.

Indeed, we seem to have reached a position in which the default undermines the rights of the vulnerable, far from placing additional responsibilities on those who should protect them.

This stance features often in media discussion and political debate on health and social care: DWP workfare, JSA, the ‘bedroom tax’, to name but a few.


How undermining Rights undermines access

So, as NHS England’s Five Year Forward View was announced recently, I wonder how the plan for the NHS and the visions for the coming five-year parliamentary term will align?

There is a lot of talking about plans, but more important is what happens as a result, not of what we say, but of what we do, or don’t do. Not only in future, but already, today.

Politically, socially and economically, we do not exist in silos. So too our human rights, which overlap these areas, should be considered together.

Recent years have seen a steady reduction in rights of access for the most vulnerable in society. Access to a lawyer or to judicial review has been made more difficult by charging for it. The Ministry of Justice is currently pushing for changes to judicial review law, but, it seems, losing its quest in the Lords.

If you are a working-age council or housing association tenant, the council limits your housing benefit claim if it decides you have ‘spare’ bedrooms. Changes have hit the disabled and their families hardest. These segments of the population are being denied or given reduced access to health, social and legal support.

Ethical values need championing

Whilst it appears the state increasingly measures everything in economic value, I believe the public must not lose sight of our ethical values, and continue to challenge and champion their importance.

How we manage our ethics today is shaping our children. What do we want their future to be like? It will also be our old age. Will we by then be measured by our success in achievement, by what we ‘do’, by what we financially achieved in life, by our health, or by who we each are? Or, more intrinsically, will our value be judged even on our DNA?

Will it ever be decided by dint of our genes, what level of education we can access?

Old age brings its own challenges of care and health, and we are an ageing population. Changes today are sometimes packaged as making our healthcare fit for the 21st century.

I’d suggest that current changes in medical research and the drivers behind parts of the NHS 5YP vision will shape society well beyond that.

What restrictions do we place on value and how are moral and material values to play out together? Are they compatible or in competition?

Because there is another human right we should remember in healthcare, that of striving to benefit from scientific improvement.

This is an area in which the rights of the vulnerable and the responsibilities to uphold them must be clearer than clear.

In research, if Rights are undermined, it may impact Responsibilities for research

I would like to understand where the boundaries of science and technology are set, who sets them, and on what value basis, in ethics committees and beyond. How does this control or support the decision-making processes which run in the background of NHS England, and which have shaped this coming five-year policy?

It appears there are many decision-making groups, on rare disease and on commissioning for example, which despite their terms of reference publish limited or no public minutes, and this hinders the transparency of their decision making.

The PSSAG publishes nothing at all. Yet it advises on strategy and on hugely significant parts of the NHS budget.

Already we see fundamental changes of approach which appear to have economic rather than ethical reasons behind them. In stem-cell banking, this is a significant shift for the state, away from an absolute belief in the non-commercialisation of human tissue, and yet little public debate has been encouraged.

There is a concerted effort from research bodies, and from those responsible for our phenotype data {*}, to undermine the stronger European data protection regulation coming in 2015, with attempts to amend the EU legislation in line with [less stringent] UK policy. That policy is questioned by data experts, on its use of pseudonymisation for example.
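
To illustrate why experts question reliance on pseudonymisation: it typically replaces an identifier with a keyed token, for example an HMAC. A minimal sketch (my own illustration; the key handling and the example-format NHS number are hypothetical) of the core property that makes pseudonymised data linkable, and so potentially re-identifiable, unlike truly anonymous data:

import CryptoKit
import Foundation

// The data controller holds this key; anyone holding it can regenerate the
// identifier-to-pseudonym mapping, so the data is not anonymous.
let pseudonymisationKey = SymmetricKey(size: .bits256)

func pseudonymise(_ nhsNumber: String) -> String {
    let mac = HMAC<SHA256>.authenticationCode(
        for: Data(nhsNumber.utf8),
        using: pseudonymisationKey
    )
    return mac.map { String(format: "%02x", $0) }.joined()
}

// The same person always gets the same pseudonym, so their records can be
// linked across datasets and joined with other data – exactly the property
// that opens the door to re-identification.
let visitA = pseudonymise("943 476 5919")  // example-format NHS number
let visitB = pseudonymise("943 476 5919")
print(visitA == visitB)  // true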

How striving to benefit from scientific improvement overlaps with the material values of ‘economic function’ is clear when we hear, often, that UK Life Sciences are the jewel in the crown of the UK economy. Less spoken of is how this function overlaps with our moral values.

“We’ve got to change the way we innovate, the way that we collaborate, and the way that we open up the NHS.” [David Cameron, 2011]

The care.data engagement – is it going to jilt citizens after all? A six month summary in twenty-five posts.

[Note update Sept 19th: after the NHS England AGM in the evening of Sept 18th – after this care.data engagement post was published 18hrs earlier – I managed to ask Mr Kelsey, National Director for Patients and Information, in person what was happening with all the engagement feedback, and asked why it had not been made publicly available.

He said that the events’ feedback will be published before the pathfinder rollout begins, so that all questions and concerns can be responded to and that they will be taken into account before the pathfinders launch.

When might that be, I asked? ‘Soon’.

Good news? I look forward to seeing that happen. My open questions on commercial uses and more, and those of many others I have heard, have been captured in previous posts, in particular the most recent at the end of this post. – end of update.]

Medical data has huge power to do good, but it presents risks too. When leaked, it cannot be unleaked. When lost, public trust cannot be easily regained. That’s what broken-hearted Ben Goldacre wrote about care.data on February 28th of this year, ten days after the pause was announced on February 18th [The Guardian].

Fears and opinions, facts and analysis, with lots and lots of open questions. That’s what I’ve written up in the following posts related to care.data since then, including my own point-of-view and feedback from other citizens, events and discussions. All my care.data posts are listed here below, in one post, to give an overview of the whole story, and any progress in the six months ‘listening’ and ‘engagement’.

So what of that engagement? If there really have been all these events and all this listening, why has not one jot of public feedback been published? This, from September 2014, I find terrifyingly empty of anything but discussion of changing the communications of a status quo programme.

I was at that workshop the article mentions, hosted by Mencap, on communicating with vulnerable and excluded groups. It was carefully managed, with little open room discussion to share opinions across groups (as the Senior Policy Adviser at Signature pointed out). Whilst we got the NHS England compilation of the group feedback afterwards, it was not published. Maybe I should publish it myself and ask how each concern will be addressed? I didn’t want to tread on the toes of NHS England’s national comms, assuming it would be published. But you know what? If the raw feedback from all these meetings says these are our concerns and we want these changes, and none are forthcoming, then the public should justifiably question the whole engagement process.

It’s public money, and the public’s data. How both are used and why, is not to be hidden away in some civil service spreadsheet. Publish the business case. Publish the concerns. Publish how they are to be addressed.

From that meeting and the others I have been to, many intelligent questions from the public remain unanswered. The most recent care.data advisory workshop summarised many from the last year, and brought out some minority voices as well.

On the day of NHS Citizen, the new flagship of public involvement, people like me who attended the NHS England Open Day on June 17th, or care.data listening events, may be understandably frustrated that there is no publicly available feedback or plan of any next steps.
care.data didn’t make it into the NHS Citizen agenda for discussion on the 18th. [Many other equally worthy subjects did; check them out here if not attending, or watch it online.] So from where will we get any answers? Almost all the comment, question and feedback I have heard at events has been constructively critical, and worthy of response. None is forthcoming.

Instead, the article above, this reported speech by Mr Kelsey, and its arguments, make me think engagement is going nowhere. No concerns are addressed. PR is repeated. More facts and figures, which conflate data use for clinical treatment with all sorts of other uses, are presented as an argument for gathering more data.

Citizens do not need to be told of the benefits. We need concrete steps taken in policy, process and practice, to demonstrate why we can now trust the new system.

Only then is it worthwhile to come back to communications.

How valued is patient engagement in reality, if it is ignored?

How will involvement continue to be promoted in NHS Citizen and other platforms, if it is seen to be ineffective?

How might this affect future programmes and our willingness to get involved in clinical research?

I sincerely hope to see the raw feedback which NHS England has gathered in its listening events published very soon. How it is incorporated into any programme changes, as well as into communications, will go a long way towards assuring the quantity and quality of cross-population participation.

The current care.data status is in limbo, as we wait to see if and when any ‘pathfinder’ CCGs will be announced to guinea-pig the patient records from GP practices in a trial rollout, in whatever form that may take. The latest official statements from Mr Kelsey have mentioned 100-500 practices, but without any indicator of where or when. He suggests ‘shortly’.

What next for care.data? I’ll keep asking the questions and hope we hear some answers from the NHS England Patients and Information Directorate. Otherwise, what was the [&88!@xY!] point of a six month pause and all these efforts and listening?

Publish the business case. Publish the concerns. Publish how they are to be addressed.

What is there to hide?

After this six-month engagement, will there be a happy ending? I feel patients are about to be left jilted at the eleventh hour.
******

You’ll find that my more recent posts [listed last] have more depth and linked documents, if you are looking for more detailed information.

******

March 31st: A mother’s journey – intro

March 31st: Transparency

April 3rd: Communication & Choice

April 4th: Fears & Facts

April 7th: What is care.data? Defined Scope is vital for Trust

April 10th: Raw Highlights from the Health Select Committee

April 12th: care.data Transparency & Truth, Remit & Responsibility

April 15th: No Security Blanket : why consent packages fail our kids

April 18th: care.data : Getting the Ducks in a Row

April 23rd: an Ode to care.data (on Shakespeare’s anniversary)

May 3rd: care.data, riding the curve: Change Management

May 15th: care.data the 4th circle: Empowerment

May 24th: Flagship care.data – commercial uses in theory [1]

June 6th: Reality must take Precedence over Public Relations

June 14th: Flagship care.data – commercial use with brokers [2]

June 20th: The Impact of the Partridge Review on care.data

June 24th: On Trying Again – Project Lessons Learned

July 1st: Communications & Core Concepts [1] Ten Things Learned at the Open House on care.data and part two: Communications and Core Concepts [2] – Open House 17th June Others’ Questions

July 12th: Flagship care.data – commercial use in Practice [3]

July 25th: care.data should be like playing Chopin – review after the HSCIC Data Sharing review ‘Driving Positive Change’ meeting

July 25th: Care.data should be like playing Chopin – but will it be all the right notes, in the wrong order? Looking forwards.

August 9th: care.data and genomics : launching lifeboats [Part One] the press, public reaction and genomics & care.data interaction

August 9th: care.data and genomics : launching lifeboats [Part Two] Where is the Engagement?

September 3rd: care.data – a Six Month Pause, Anniversary round up [Part one] Open questions: What and Who?

September 3rd: care.data – a Six Month Pause, Anniversary round up [Part two] Open questions: How, Why, When?

September 16th: care.data cutouts – Listening to Minority Voices Includes questions from those groups.

September 16th: care.data – “Anticipating Things to Come” means Confidence by Design

October 30th: patient questions on care.data – an open letter

November 19th: questions remain unanswered: what do patients do now?

December 9th: Rebuilding trust in care.data

December 24th: A care.data wish list for 2015

2015 (updated after this post was published, throughout the year)

January 5th 2015: care.data news you may have missed

January 21st 2015: care.data communications – all change or the end of the line?

February 25th 2015: care.data – one of our Business Cases is Missing.

March 14th 2015: The future of care.data in recent discussions

March 26th 2015: Wearables: patients will ‘essentially manage their data as they wish’. What will this mean for diagnostics, treatment and research and why should we care? [#NHSWDP 3]

May 10th 2015: The Economic Value of Data vs the Public Good? [1] care.data, Concerns and the cost of Consent

The Economic Value of Data vs the Public Good? [2] Pay-for-privacy, defining purposes

The Economic Value of Data vs the Public Good? [3] The value of public voice.

May 14th 2015: Public data in private hands – should we know who manages our data?

June 20th 2015: Reputational risk. Is NHS England playing a game of public confidence?

June 25th 2015: Digital revolution by design: building for change and people (1)

July 13th 2015: The nhs.uk digital platform: a personalised gateway to a new NHS?

July 27th 2015: care.data : the economic value of data versus the public interest? (First published in StatsLife)

August 4th 2015: Building Public Trust in care.data sharing [1]: Seven step summary to a new approach

August 5th, 2015: Building Public Trust [2]: a detailed approach to understanding Public Trust in data sharing

August 6th 2015: Building Public Trust in care.data datasharing [3]: three steps to begin to build trust

August 12th 2015: Building Public Trust [4]: “Communicate the Benefits” won’t work for care.data

August 17th 2015: Building Public Trust [5]: Future solutions for health data sharing in care.data

September 12th 2015: care.data: delayed or not delayed? The train wreck that is always on time

****

Questions, ideas, info and other opinions all continue to be welcome. I’ll do my best to provide answers, or point to source sites.

For your reference and to their credit, I’ve found the following three websites useful and kept up to date with news and information:

The care.data info site of Dr Bhatia, a GP in Hampshire

HSCIC’s care.data site

medConfidential – campaign for confidentiality and consent in health and social care – seeking to ensure that every flow of data into, across and out of the NHS and care system is consensual, safe and transparent