
A data sharing fairytale (3): transformation and impact

Part three: It is vital that the data sharing consultation is not seen in a silo, or even a set of silos each particular to its own stakeholder. To do it justice and ensure the questions that should be asked are answered, we must look instead at the whole story and the background setting. And we must ask each stakeholder, what does your happy ending look like?

Parts one and two address public engagement and ethics; this part focuses on current national data practice, tailored public services, and the local impact of the change and transformation that will result.

What is your happy ending?

This data sharing consultation is gradually revealing to me how disjointed government appears in practice and strategy. Our digital future, a society that is more inclusive and more just, supported by better uses of technology and data in ‘dot everyone’, will not happen if government cannot first join the dots across all of Cabinet thinking and good practice, and align policies that are out of step with each other.

Last Thursday night’s “Government as a Platform Future” panel discussion (#GaaPFuture) took me back to memories of my old job, working on business implementations of process and cutting-edge systems. Our finest hour was showing leadership why success would depend on neither. Success was down to local change management and communications, because change is about people, not the tech.

‘People’ in this data sharing consultation means the public and the staff of local government public bodies, as well as the people working at the national stakeholders of the UKSA (statistics strand), ADRN (de-identified research strand), Home Office (GRO strand), DWP (fraud and debt strands), and DECC (energy), and staff at the national driver, the Cabinet Office.

I’ve attended two of the 2016 datasharing meetings, and am interested from three points of view: I am directly involved in the de-identified data strand, I campaign for privacy, and I believe in public engagement.

As for engagement with civil society: after almost two years of involvement on three projects, and an almost ten-month pause in between, the projects suddenly became six in 2016, so the most sensitive strands of the datasharing legislation have been the least openly discussed.

At the end of the first 2016 meeting, I asked one question.

How will local change management be handled and the consultation tailored to local organisations’ understanding and expectations of its outcome?

Why? Because a top-down data extraction programme from all public services opens up the extraction of personal data as business intelligence at national level, from all local services’ interactions with citizens’ data. Or at least, those parts they have collected or may collect in future.

That means a change in how the process works today. Global business intelligence/data extractions are designed to make processes more efficient through reductions in current delivery, yet it is hard to see concrete public benefits for citizens that would differ from today. So why make this change in practice?

What it might mean, for example, would be to enable collection of all citizens’ debt information into one place, allowing the service to centralise the chasing of debt and enforce its collection, outsourced to a single national commercial provider.

So what does the future look like from the top? What is the happy ending for each strand, that will be achieved should this legislation be passed?  What will success for each set of plans look like?

What will we stop doing, what will we start doing differently and how will services concretely change from today, the current state, to the future?

Most importantly, to understand its implications for citizens and staff, we should ask how this transformation will be managed well enough to see the benefits we are told it will deliver.

Can we avoid being left holding a pumpkin, after the glitter of ‘use more shiny tech’ and the government’s love affair with the promises of Big Data wear off?

Look into the local future

Those on the panel at the GDS meeting this week with a vision of the future, the new local government model enabled by GaaP, also identified implications for potential loss of local jobs, noting that “turkeys won’t vote for Christmas”. So who is packaging this change to make it successfully deliverable?

If we can’t be told easily in consultation, then it is not a clear enough policy to deliver. If there is a clear end-state, then we should ask what its implications are going to be in applied practice.

It is vital that the data sharing consultation is not seen in a silo, or even a set of silos each particular to its own stakeholder, as simply about copying datasets to share them more widely; we must look instead at the whole story and the background setting.

The Tailored Reviews: public bodies guidance suggests massive reform of local government, looking for additional savings, looking to cut back office functions and commercial plans. It asks “What workforce reductions have already been agreed for the body? Is there potential to go further? Are these linked to digital savings referenced earlier?”

Options include ‘abolish, move out of central government, commercial model, bring in-house, merge with another body.’

So where is the local government public bodies engagement with change management plans in the datasharing consultation as a change process? Does it not exist?

I asked at the end of the first datasharing meeting in January and everyone looked a bit blank. A question ‘to take away’ turned into nothing.

Yet to make this work, the buy-in of local public bodies is vital. So why skirt round this issue in local government, if there are plans to address it properly?

If there are none, then with all the data in the world, public service delivery will not be improved, because the friction is caused not by the interference of consent, or by privacy issues, but by working practices.

If the idea is to avoid this ‘friction’ by removing it, then where is the change management plan for public services and our public staff?

Trust depends on transparency

John Pullinger, our National Statistician, also said this week that on datasharing we need a social charter on data to develop trust.

Trust can only be built between public and state if the organisations, and all the people in them, are trustworthy.

To implement process change successfully, the people involved in the affected organisations, the staff, must trust that change will mean positive improvement, with its risks explained.

For the public, the levels of data access, privacy protection, and scope limitation that this new legislation will permit in practice are clearly going to be vital to define, if the public is to trust its purposes.

The consultation does not do this; there is no draft code of conduct yet, and no one is willing to define ‘research’ or ‘public interest’.

Work on public interest models, or a ‘charter’, for the collection and use of research data in health concluded that for ethical purposes, time also mattered. Benefits must be specific, measurable, attainable, relevant and time-bound. So let’s talk about the intended end state that is to be achieved from these changes, and identify how its benefits are to meet those objectives. Change without an intended end state will almost never be successful; you have to start by knowing what it looks like.

For public trust, that means scope boundaries. Sharing now, under today’s laws and ethics, is only fully meaningful if we trust that any future change to today’s governance, ethics and safeguards will be to the benefit of the citizen, not grant ever greater powers to the state at the expense of the individual. Where is scope defined?

There is very little information about where the limits would be on what data could not be shared, or when it would not be possible to share without explicit consent. Permissive powers put the onus onto the data controller to share, and given that ‘a new law says you should share’ would become the mantra, this is likely to mean less individual accountability. Where are those lines to be drawn, to support the staff and public, the data user and the data subject?

So to summarise, so far I have six key questions:

  • What does your happy ending look like for each data strand?
  • How will bad practices which conflict with the current consultation proposals be stopped?
  • How will the ongoing balance of use of data for government purposes, privacy and information rights be decided and by whom?
  • In what context will the ethical principles be shaped today?
  • How will the transformation from the current to that future end state be supported, paid for and delivered?
  • Who will oversee new policies and ensure good data science practices, protection and ethics are applied in practice?

This datasharing consultation is not entirely about something new, but an expansion of what is done already. And in some places, what is done already is done very badly.

How will the old stories and new be reconciled?

Wearing my privacy and public engagement hats, here’s an idea.

Perhaps before the central State starts collecting more, sharing more, and using more of our personal data for ‘tailored public services’ and more, the government should ask for a data amnesty?

It’s time to draw a line under bad practice.  Clear out the ethics drawers of bad historical practice, and start again, with a fresh chapter. Because current practices are not future-proofed and covering them up in the language of ‘better data ethics’ will fail.

The consultation assures us that: “These proposals are not about selling public or personal data, collecting new data from citizens or weakening the Data Protection Act 1998.”

However, it does already sell personal data from at least BIS. How will these contradictory positions across all Departments be resolved?

The left hand gives out de-identified data in safe settings for public benefit research, while the right hand gives over 10 million records to the Telegraph and The Times without parental or schools’ consent. Only in la-la land are these both considered ethical.

Will somebody at the data sharing meeting please ask, “when will this stop?” It is wrong. These are our individual children’s identifiable personal data. Stop giving them away to press and charities and commercial users without informed consent. It’s ludicrous. Yet it is real.

Policy makers should provide an assurance there are plans for this to change as part of this consultation.

Without it, the consultation line about commercial use is at best disingenuous, at worst a bare-faced lie.

“These powers will also ensure we can improve the safe handling of citizen data by bringing consistency and improved safeguards to the way it is handled.”

Will it? Show me how and I might believe it.

Privacy, it was said at the RSS event, is the biggest concern in this consultation:

“includes proposals to expand the use of appropriate and ethical data science techniques to help tailor interventions to the public”

“also to start fixing government’s data infrastructure to better support public services.”

What these techniques mean needs to be outlined, and practices fixed now, because many stand on shaky legal ground. These privacy issues have come about under successive governments of different parties over the last ten years, so the problems are non-partisan, but they need practical fixes.

Today, less than transparent international agreements push ‘very far-reaching chapters on the liberalisation of data trading’ while according to the European Court of Justice these practices lack a solid legal basis.

Today our government already gives our children’s personal data to commercial third parties and sells our higher education data without informed consent, while the DfE and BIS both know these practices fail fair processing, and know the potential consequences: the European Court reaffirmed in 2015 that “persons whose personal data are subject to transfer and processing between two public administrative bodies must be informed in advance”, in its judgment in Case C-201/14.

In a time that actively cultivates universal public fear, it is time for individuals to be brave and ask the awkward questions, because you either solve them up front, or hit the problems later. The child who stood up and said the Emperor had no clothes was right.

What’s missing?

The consultation conversation will only be genuine once policy makers acknowledge and address solutions regarding:

  1. those data practices that are currently unethical and must change
  2. how the tailored public services datasharing legislation will shape the delivery of government services’ infrastructure and staff, as well as the service to the individual member of the public.

If we start by understanding what the happy ending looks like, we are much more likely to arrive there, and to know how to measure success.

The datasharing consultation engagement, the ethics of data science, and the impact on data infrastructures as part of ‘government as a platform’ need to be seen as one joined-up story if we are each to consider what success, for us as stakeholders, looks like.

We need to call out current data failings and things that are missing, to get them fixed.

Without a strong, consistent ethical framework you risk three things:

  1. data misuse, and loss of public trust
  2. data non-use, because your staff don’t trust they’re doing it right
  3. data becoming a toxic asset

The upcoming meetings should address this and ask practically:

  1. How the codes of conduct, and ethics, are to be shaped, and by whom, if outwith the consultation?
  2. What is planned to manage and pay for the future changes in our data infrastructures, i.e. the models of local government delivery?
  3. What is the happy ending that each data strand wants to achieve through this and how will the success criteria be measured?

Public benefit is supposed to be at the heart of this change. For UK statistics, and for academic public benefit research, the benefits are clear.

For some of the other strands, local public benefits that outweigh the privacy risks and do not jeopardise public trust seem like magical unicorns dancing in the land far, far away of centralised government; hard to imagine, and even harder to capture.

*****

Part one: A data sharing fairytale: Engagement
Part two: A data sharing fairytale: Ethics
Part three: A data sharing fairytale: Impact (this post)

Tailored public bodies review: Feb 2016

img credit: Hermann Vogel illustration ‘Cinderella’

EU do the Hokey Cokey. In out, In out, shake it all about.

The IN and the OUT circles have formed and Boris is standing in the middle.

Speculation as to whether he means to be on the No team agrees on one thing: he doesn’t really mean no.

His comments in the past have more consistently pointed to his head saying that staying in is better for business.

Some are suggesting that his stance is neither in nor out, but ‘an unconvincing third way’: no doesn’t mean no to staying in, but no to Cameron’s deal, and would in fact mean a second vote along the lines suggested by Dominic Cummings.

If so, is this breathtaking arrogance and a U-turn of unforeseeable magnitude? Had our PM not said before the last general election that he planned on stepping down before the end of the next Parliament, you could think so. But this way they cannot lose.

This is in fact bloody brilliant positioning by the whole party.

A yes vote underpins Cameron’s re-negotiation as ‘the right thing to do’, best for business and his own statesmanship while showing that we’re not losing sovereignty because staying in is on our terms.

Renegotiating our relationship with the EU was a key Conservative election promise.

This pacifies the majority of the part of the population that wants out of some of the EU ‘controlling our country’ and being beholden to EU law, but keeps us stable and financially secure.

The hardline Out campaigners are seen as a bag of all-sorts that few take seriously. But then comes Boris.

So now there is some weight in the Out circle and, if the country votes ‘No’, a way to manage the outcome with a ready-made leader-in-waiting. But significantly, it’s not a consistent call for Out across the group. Boris is not spinning in the same clear ‘out’ direction as the Galloway group.

Boris can keep a foot in the circle, saying his heart is pro-In and he really wants In, but on better terms. He can lead a future party for Outers and Inners and, whatever the outcome, be seen to welcome all. Quite a gentleman’s agreement, perhaps.

His Out just means out of some things, not others. Given all his past positioning and role as Mayor in the City of London, out wouldn’t mean wanting to risk any of the financial and business related bits.

So what does that leave?  Pay attention in his speech to the three long paragraphs on the Charter of Fundamental Human Rights.

His rambling explanation indirectly conveyed quite brilliantly the bits he wants ‘out’ to mean. Out means, in the Boris-controlled circle, only getting out from those parts of EU directives that the party players don’t like: the bits where they get told what to do, or told off for doing something wrong, or for not playing nicely with the group.

The human rights rulings and oversight of the CJEU, for example, or views which are not aligned with the ECHR.

As Joshua Rozenberg wrote on sovereignty, “Human rights reform has been inextricably intertwined with renegotiating the UK’s membership of the EU. And it is all the government’s fault.”

Rozenberg writes that Mr Gove told the Lords constitution committee in December that David Cameron asked him whether “we should use the British Bill of Rights in order to create a constitutional long stop […] and whether the UK Supreme Court should be that body.”

“Our top judges were relabelled a “Supreme Court” not long ago; they’ve been urged to assert themselves against the European Court of Human Rights, and are already doing so against EU law”, commented Carl Gardner elsewhere.

The Gang of Six cabinet ministers are known for their anti-EU disaffection, most often attached to human rights: Michael Gove, Iain Duncan Smith, Chris Grayling, Theresa Villiers, Priti Patel and John Whittingdale, plus a further 20 junior ministers and potentially dozens of backbenchers.

We can therefore expect the Out campaign to present situations in which British ‘sovereignty’ was undermined by court rulings that some perceive as silly or seriously flawed.

Every case in which a British court convicted someone and was overturned ‘by Europe’ in a way that appeared nonsensical will be wheeled out by the current Justice Secretary, Mr Gove.

Every tougher ‘terrorist’-type case, in which human rights denied by a UK ruling were later upheld, might be in the more ‘extreme’ remit of former Justice Secretary Grayling, mentioned whenever he makes his case for Out, especially where opinions may conflict with interpretations of the EU Charter.

Priti Patel has tough views on crime and punishment, reportedly being in favour of the death penalty.

IDS isn’t famous for a generous application of disability rights.

John Whittingdale gave his views on the present relationship with the EU and CJEU here in debate in 2015 and said (22:10) he was profoundly “concerned the CJEU is writing laws which we consider to be against our national interest.”

Data protection and privacy are about to get a new EU regulation that will strengthen some aspects of citizens’ data rights: things like the right to be informed what information is stored about us, or to have mistakes corrected.

Don’t forget after all that Mr Gove is the Education SoS who signed off giving away the confidential personal data of now 20 million children to commercial third parties from the National Pupil Database. Clearly not an Article 8 fan.

We are told that we are being over reactive to our loss of rights to privacy. Over generous in protections to people who don’t deserve it. Ignoring that rights are universal and indivisible, we are encouraged to see them as something that must be earned. As such, something which may or may not be respected. And therefore can be removed.

Convince the majority of that, and legislation underpinning our rights will be easier to take away without enough mass outcry that will make a difference.

To be clear, a no vote would make no actual legal difference, “Leaving the EU (if that’s what the people vote for) is NOT at all inconsistent with the United Kingdom’s position as a signatory to the European Convention on Human Rights (ECHR), a creature of the COUNCIL of EUROPE and NOT the European Union.” [ObiterJ]

But by conflating ‘the single market’, ‘the Lisbon Treaty’, and the ‘ability to vindicate people’s rights under the 55-clause “Charter of Fundamental Human Rights”’, Boris has made the no vote once again equate conflated things: European Union membership = loss of sovereignty = a need to reduce the control or influence of all organisations seen as ‘European’ (even if, like the ECHR, they belong to the Council of Europe convention signed post-WWII, long before EU membership), and all because we are a signatory to a package deal.

Boris has reportedly no love of ‘lefty academics’ standing up for international and human rights laws and their uppity lawyers in the habit of “vindicating the rights of their clients.”

Boris will bob in and out of both the IN group for business and the OUT group for sovereignty, trying not to fall out with anyone too much and giving serious Statesmanship to the injustices inflicted on the UK. There will be banter and back biting, but the party views will be put ahead of personalities.

And the public? What we vote won’t really matter. I think we’ll be persuaded to be IN, or to get a two-step Out-In.

Either way it will give the relevant party leader, present or future, the mandate to do what he wants. Our engagement is optional.

Like the General Election, the people’s majority viewed as a ‘mandate’ seems to have become a little confused with sign-off to dictate a singular directive, rather than represent a real majority. It cannot do anything but this, since the majority didn’t vote for the government that we have.

In this EU referendum, No won’t mean No. It’ll mean a second vote, to be able to split the package of no-to-all-things into a no-to-some-things wrapped up in layers of ‘sovereignty’ discussion. Unpack them, and those things will be, for the most part, human rights things. How they will then be handled at a later date is utterly unclear, but the mandate will have been received.

Imagine if Boris can persuade enough of the undecideds that he is less bonkers than some of the EU rulings on rights. He’ll perhaps get an Out mandate, possibly meaning a second vote just to be sure, splitting off the parts everyone obviously wants to protect, the UK business interests, and allowing the government to negotiate an opt-out from legislation protecting human rights: things that may appear to make more people dependent on the state, contrary to the ideology of shrinking state support.

A long-promised review of the British Human Rights Act 1998 will inevitably follow, and only makes sense if we are first made exempt from the European umbrella.

Perhaps we will hear over the next four months more about what that might mean.

Either way, the Out group will I’m sure take the opportunity to air their views and demand the shake up of where Human Rights laws are out of line for the shape of the UK future nation they wish to see us become.

Some suggest Boris has made a decision that will cost him his political career. I disagree. I think it’s incredibly clever. Not a conspiracy, simply clever party planning to make every outcome a win for the good of the party and the good of the nation,  and a nod to Boris as future leader in any case. After all, he didn’t actually say he wanted #Brexit, just reform.

It’s not all about Boris, but is staging party politics at its best, and simultaneously sets the scene for future change in the human rights debate.

Destination smart-cities: design, desire and democracy (Part two)

Smart cities: private reach in public space and personal lives

Smart-cities are growing in the UK through private investment and encroachment on public space. They are being built by design at home, and supported by UK money abroad, with enormous expansion plans in India for example, in almost 100 cities.

With this rapid expansion of “smart” technology not only within our living rooms but in our living space, and indeed across all areas of life, how do we ensure equitable service delivery (what citizens generally want, as demonstrated by the strength of feeling on the NHS) continues in public ownership, when the boundary in current policy is ever more blurred between public and private corporate ownership?

How can we know and plan by-design that the values we hope for, are good values, and that they will be embedded in systems, in policies and planning? Values that most people really care about. How do we ensure “smart” does not ultimately mean less good? That “smart” does not in the end mean, less human.

Economic benefits seem to be the key driver in current government thinking around technology – more efficient = costs less.

While using technology to progress towards replacing repetitive work may be positive, how will we accommodate those whose skills will no longer be needed? Consider in particular its gendered aspect, and the more vulnerable in the workforce, since it is women and other minorities who work disproportionately in our part-time, low-skill jobs. Even jobs mainly held by women that we think of as intrinsically human, such as carers, are being trialled for outsourcing to, or assistance by, technology. These robots monitor people in their own homes, and reduce staffing levels and care home occupancy. We’ll no doubt hear how good it is that we need fewer carers because, after all, we have a shortage of care staff. We’ll find out whether it is positive for the cared-for, or whether they find it less ‘human’[e]. How will we measure those costs?

The ideal future of us all therefore having more leisure time sounds fab, but if we can’t afford it, we won’t be spending more of our time employed in leisure. Some think we’ll simply be unemployed. And more people live in the slums of Calcutta than in Soho.

One of the greatest benefits of technology is how more connected the world can be, but will it also be more equitable?

There are benefits in remote sensors monitoring changes in the atmosphere that dictate when cars should be taken off the roads on smog-days, or indicators when asthma risk-factors are high.

Crowd sourcing information about things which are broken, like fix-my-street, or lifts out-of-order are invaluable in cities for wheelchair users.

Innovative thinking and building things through technology can create things which solve simple problems and add value to the person using the tool.

But what of the people that cannot afford data, cannot be included in the skilled workforce, or will not navigate apps on a phone?

How this dis-incentivises the person using the technology affects not only their disappointment with the tool, but also the service delivery, and potentially wider still, extends to societal exclusion or stigma. These were the findings of the e-red book in Glasgow, explained at the digital event in health held at the King’s Fund in summer 2015.

Further along the scale of systems and potential for negative user experience, how do we expect citizens to react to finding punishments handed out by unseen monitoring systems, finding out our behaviour was ‘nudged’, or finding decisions taken about us, without us?

And what is the oversight and system of redress for people using these systems, or for people whose data are used but inaccurate in a system, causing injustice?

And wider still: while we encourage big money spent on Big Data in our part of the world, how is it contributing to solving problems for the millions for whom it will never matter? Digital and social media make our one connected world increasingly transparent, with even less excuse for closing our eyes.

Approximately 15 million girls worldwide are married each year – that’s one girl, aged under 18, married off against her will every two seconds. [Huff Post, 2015]

Tinder-type apps are luxury optional extras for many in the world.

Without embedding values and oversight into some of what we do through digital tools implemented by private corporations for profit, ‘smart’ could mean less fair, less inclusive, less kind. Less global.

If digital becomes a destination, and how much it is implemented is seen as a measure of success, then measuring how “smart” we become risks losing sight of technology as solutions and steps towards solving real problems for real people.

We need to be both clever and sensible, in our ‘smart’.

Are public oversight and regulation built in, to make ‘smart’ also safe?

If there were public consultation on how a “smart” society would look, would we all agree on whether and how we want it?

Thinking globally, we need to ask if we are prioritising the wrong problems. Are we creating more tech for problems we have already invented solutions for, in places where governments are willing to spend on them? And will it, in those places, make society more connected across class and improve it for all, or enhance the lives of the ‘haves’ by having more, while the ‘have-nots’ are excluded?

Does it matter how smart your TV gets, or carer, or car, if you cannot afford any of these convenient add-ons to Life v1.1?

As we are ever more connected, we are a global society, and being ‘smart’ in one area may be reckless if at the expense or ignorance of another.

People need to understand what “smart” means

“Consistent with the wider global discourse on ‘smart’ cities, in India urban problems are constructed in specific ways to facilitate the adoption of “smart hi-tech solutions”. ‘Smart’ is thus likely to mean technocratic and centralized, undergirded by alliances between the Indian government and hi-technology corporations.”  [Saurabh Arora, Senior Lecturer in Technology and Innovation for Development at SPRU]

Those investing in both countries are often the same large corporations. Very often, venture capitalists.

Systems designed and owned by private companies provide the information technology infrastructure that is:

‘the basis for providing essential services to residents. There are many technological platforms involved, including but not limited to automated sensor networks and data centres.’

What happens when the commercial and public interest conflict and who decides that they do?

Decision making, Mining and Value

Massive amounts of data generated are being mined for making predictions, decisions and influencing public policy: in effect using Big Data for research purposes.

Using population-wide datasets for social and economic research today is done in safe settings, using de-identified data, in the public interest, and with independent analysis of the risks and benefits of projects as part of the data access process.

Each project goes before an ethics committee review to assess its considerations for privacy, and not only whether the project can be done, but whether it should be done, before it comes for central review.

Similarly, our smart-cities need ethics committee review, assessing the privacy impact and potential of projects before commissioning or approving smart-technology. Not only assessing whether they are feasible, and that we ‘can’ do it, but whether we ‘should’. Not only assessing the use of the data generated from the projects, but assessing the ethical and privacy implications of the technology implementation itself.

The Committee recommendations on Big Data recently proposed that a ‘Council of Data Ethics’ should be created to explicitly address these consent and trust issues head on. But how?

Unseen smart-technology continues to grow unchecked often taking root in the cracks between public-private partnerships.

We keep hearing about Big Data improving public services, but that “public” data is often held by private companies. In fact our personal data for public administration has been widely outsourced to private companies over which we have little oversight.

We’re told we paid the price in terms of skills and are catching up.

But if we simply roll forward in first gear into the connected city that sees all, we may find we arrive at a destination that was neither designed nor desired by the majority.

We may find that the “revolution, not evolution” hoped for in digital services will be of the unwanted kind, if companies keep pushing for more and more data without individuals’ consent and our collective public buy-in to decisions made about data use.

Having written all this, I’ve now read the Royal Statistical Society’s publication which eloquently summarises their recent work and thinking. But I wonder how we tie all this into practical application?

How we do governance and regulation is tied tightly to the practicality of public-private relationships, but also to deciding what society should look like. That is ultimately what our collective and policy decisions about what smart-cities should be, and may do, are defining.

I don’t think we are yet addressing in depth the complexity of regulation and governance that will be sufficient to make Big Data and Public Spaces safe, because companies say too much regulation risks choking off innovation and creativity.

But that risk need not be realised if it is managed well.

Rather we must see action to manage the application of smart-technology in a thoughtful way quickly, because if we do not, very soon, we’ll have lost any say in how our service providers deliver.

*******

I began my thoughts about this in Part one, on smart technology and data from the Sprint16 session and after this (Part two), continue to look at the design and development of smart technology making “The Best Use of Data” with a UK company case study (Part three) and “The Best Use of Data” used in predictions and the Future (Part four).

Monitoring software in schools: the Department for Education’s digital dream or nightmare? (1)

Nicky Morgan, the Education Secretary, gave a speech [1] this week and shared her dream of the benefits of technology for pupils.

It mentioned two initiatives to log children’s individual actions; one is included in a consultation on new statutory guidance, and the other, which she praised, is a GPS-based mobile monitoring app.

As with many new applications of technology, how the concept is to be implemented in practice is important to help understand how intrusive any new use of data is going to be.

Unfortunately for this consultation there is no supporting code of practice explaining what the change will mean, and questions need to be asked.

The most significant aspects in terms of changes to data collection through required monitoring are in the areas of statutory monitoring, systems, and mandatory teaching of ‘safeguarding’:

Consultation p11/14: “We believe including the requirement to ensure appropriate filtering and monitoring are in place, in statutory guidance, is proportional and reasonable in order to ensure all schools and colleges are meeting this requirement. We don’t think including this requirement will create addition burdens for the vast majority of schools, as they are already doing this, but we are keen to test this assumption.”

Consultation:  paragraph 75 on page 22 introduces this guidance section and ends with a link to “Buying advice for schools.” “Governing bodies and proprietors should be confident that systems are in place that will identify children accessing or trying to access harmful and inappropriate content online. Guidance on e-security is available from the National Education Network.

Guidance: para 78 “Whilst it is essential that governing bodies and proprietors ensure that appropriate filters and monitoring systems are in place they should be careful  that “over blocking” does not lead to unreasonable restrictions as to what children can be taught with regards to online teaching  and safeguarding.” —

Consultation: The “Opportunities to teach safeguarding” section (para 77-78) has been updated and now says governing bodies and proprietors “should ensure” rather than “should consider” that children are taught about safeguarding, including online, through teaching and learning opportunities. “This is an important topic and the assumption is the vast majority of governing bodies and proprietors will already be ensuring the children in their school are suitably equipped with regards to safeguarding. But we are keen to hear views as to the change in emphasis.”

Paragraph 88 on p24  is oddly phrased: “Governing bodies and proprietors should ensure that staff members do not agree confidentiality and always act in the best interests of the child.”

What if confidentiality may sometimes be in the best interests of the child? What would that mean in practice?

 

Keeping Children Safe in Education – Questions on the Consultation and Use in practice

The consultation [2] is open until February 16th, and includes a new requirement to have web filtering and monitoring systems.

Remembering that 85% of children’s waking hours are spent outside school, that our schools cover a wide range of children aged 2-19, and that not every moment is spent unsupervised and on-screen, is it appropriate that this 24/7 monitoring would be applied to all types of school?

This provider software is reportedly being used in nearly 1,400 secondary schools in the UK.  We hear little about its applied use.

The cases of cyber bullying or sexting in schools I hear of locally, or read of in the press, are through smartphones. Unless the school snoops on individual devices, I wonder therefore if the cost, implementation and impact are proportionate to the benefit?

  1. Necessary and proportionate? How does this type of monitoring compare with other alternatives?
  2. Privacy impact assessment? Has any been done – surely required as a minimum measure?
  3. Cost benefit risk assessment of the new guidance in practice?
  4. Problem vs Solution: What problem is it trying to solve and how will they measure if it is successful, or stop its use if it is not?  Are other methods on offer?
  5. Due diligence: how do parents know that the providers have undergone thorough vetting and understand who they are? After all, these providers have access to millions of our children’s  online interactions.
  6. Evidence: If it has been used for years in school, how has it been assessed against other methods of supervision?
  7. The national cash cost: this must be enormous when added up for every school in the country, how is cost balanced against risk?
  8. Intangible costs – has anyone asked our children’s feeling on this? Where is the boundary between what is constructive and creepy? Is scope change documented if they decide to collect more data?

Are we Creating a Solution that Solves or creates a Problem?

The private providers would have no incentive to say their reports don’t work and schools, legally required to be risk averse, would be unlikely to say stop if there is no outcome at all.

Some providers  include “review of all incidents by child protection and forensic experts; freeing up time for teachers to focus on intervention” and “trends such as top users can be viewed.” How involved are staff who know the child as a first point of information sharing?

Most tools are multipurpose, and I understand the reasons given for them, but how they are implemented concerns me.

If the extent of these issues really justifies this mass monitoring in every school – what are we doing to fix the causes, not simply spy on every child’s every online action in school? (I look at how it extends outside in part two.)

Questions on Public engagement: How are children and families involved in the implementation and with what oversight?

  1. Privacy and consent: Has anyone asked pupils and parents what they think and what rights they have to say no to sharing data?
  2. Involvement: Are parents to be involved and informed in software purchasing and in all data sharing decisions at local or regional level? Is there consistency of message if providers vary?
  3. Transparency: Where are the data created through the child’s actions stored, and for how long? Who has access to the data? What actions may result from it? And with what oversight?
  4. Understanding: How will children and parents be told what is “harmful and inappropriate content” as loosely defined by the consultation, and what they may or may not research? Children’s slang changes often, and “safeguarding” terms are subjective.
  5. Recourse: Will it include assessment of unintended consequences from misinterpretation of information gathered?
  6. Consent: And can I opt my child out from data collection by these unknown and ‘faceless’ third parties?

If children and parents are told their web use is monitored, what chilling effect may that have on their trust of the system, of teaching staff, and on their ability to look for content to support their own sensitive concerns or development that they may not feel safe to look for at home? What limitation will that put on their creativity?

These are all questions that should be asked to thoroughly understand the consultation, which requires wide public examination.

Since key logging is already common practice (according to provider websites) and will in effect become statutory, where is the public discussion? If it’s not explicitly statutory, should pupils be subject to spying on their web searches in a postcode lottery?

What exactly might this part of the new guidance mean for pupils?

In part two, I include the other part of her speech, the GPS app, and ask whether, if we track every child in and outside school, we are moving closer to the digital dream, or nightmare, in the search to close the digital skills gap.

****

References:

[1] Nicky Morgan’s full speech at BETT

[2] Consultation: Keeping Children Safe in Education – closing February 16th. The “opportunities to teach safeguarding” section (para 77-78) has been updated and now says governing bodies and proprietors “should ensure” rather than “should consider” that children are taught about safeguarding, including online, through teaching and learning opportunities.

The Consultation Guidance: most relevant paragraphs 75 and 77 p 22

“Governing bodies and proprietors should be confident that systems are in place that will identify children accessing or trying to access harmful and inappropriate content online. [Proposed statutory guidance]

Since “guidance on procuring appropriate ICT” from the National Education Network NEN* is offered, it is clearly intended that this ‘system’ to be ‘in place’ should be computer based. How will it be applied in practice? A number of the software providers for schools already provide services that include key logging, using “keyword detection libraries” that provide “a complete log of all online activity”.
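In principle, such “keyword detection” monitoring is simple: each piece of logged activity is matched against a watchlist library, and hits are stored as incidents. That simplicity is exactly why scope, oversight and misinterpretation matter. A hypothetical sketch follows (the real products’ term libraries, matching rules and retention policies are not public, so everything here is assumed for illustration):

```python
from datetime import datetime, timezone
from typing import Optional

# Hypothetical keyword watchlist; real vendors ship libraries of
# thousands of terms, which is why scope creep is a real risk.
WATCHLIST = {"self-harm", "proxy bypass", "bullying"}

def flag_activity(user: str, text: str) -> Optional[dict]:
    """Return a stored 'incident' record if any watchlist term appears."""
    lowered = text.lower()
    hits = [term for term in WATCHLIST if term in lowered]
    if not hits:
        return None  # some products log all activity regardless
    return {
        "user": user,
        "terms": sorted(hits),
        "text": text,
        "timestamp": datetime.now(timezone.utc).isoformat(),
    }

incident = flag_activity("pupil42", "how to get past a proxy bypass filter")
# A human reviewer who knows the child should see this before any action.
```

Note that the sketch has no context: a pupil researching e-safety for homework produces the same incident record as one trying to evade the filter, which is the misinterpretation risk raised in the questions above.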

(*It’s hard to read more about this, as many of NEN’s links are dead.)

Building Public Trust [4]: “Communicate the Benefits” won’t work for care.data

care.data’s ‘communicating the benefits’ response to the failed communications of spring 2014 has failed to deliver public trust. Here’s why:

To focus on the benefits is a shortcut for avoiding the real issues

Talking about benefits is about telling people what the organisation wants to tell them. This fails to address what the public and professionals want to know. The result is not communication, but a PR exercise.

Talking about benefits in response to the failed communications in spring 2014, while failing to address criticism since, ignores concerns that the public and professionals raised at macro and micro level. It appears disingenuous about real engagement, despite saying ‘we’re listening’, and seems uncaring.

Talking about only the benefits does not provide any solution to demonstrably outweigh the potential risk of individual and public health harm through loss of trust in the confidential GP relationship, or data inaccuracy, or loss, and by ignoring these, seems unrealistic.

Talking about short term benefits and not long term solutions [to the broken opt out, long term security, long term scope change of uses and users and how those will be communicated] does not demonstrate competency or reliability.

Talking about only the benefits of commissioning, and research for the merged dataset CES, doesn’t mention all the secondary uses to which all HSCIC patient level health data are put, [those reflected in Type 2 opt out] including commercial re-use and National Back Office: “2073 releases made from the National Back Office between April 2013 and December 2013. This includes 313 releases to police forces, 1531 to the Home Office and 229 to the National Crime Agency.” [HSCIC, July 2, 2014].

This use of hospital records and other secondary data by the back office, without openly telling the public, does not feel ethical or transparent.

Another example is the past patient communications that expressly said, ‘we do not collect name’, the intent of which would appear to be to assure patients of anonymity, without saying that name is already stored at HSCIC on the Personal Demographics Service, or that name is not needed to be identifiable.

We hear a lot about transparency. But is transparent the same as fully accurate, complete and honest? Honest about the intended outcomes of the programme; honest about all the uses to which health data are put; honest about potential future scope changes and those already planned.

Being completely truthful in communications is fundamental to future-proofing trust in the programme.

NHS England’s care.data programme, through its focus on ‘the benefits’, lacks balance and appears disingenuous, uninterested, unrealistic and lacking in reliability, competency and honesty. Through these actions it does not demonstrate that the organisation is trustworthy. This could be changed.

care.data fundamentally got it wrong with the intention not to communicate the programme at all. It got it wrong in the tool and tone of communications in the patient leaflet. There is a chance to get it right now, if the organisation would only stop focusing on communicating the benefits.

I’m going to step through with a couple of examples why to-date, some communications on care.data and use of NHS data are not conducive to trust.

Communication designed to ‘future-proof’ an ongoing relationship and trust must be by design, not afterthought.

Communications need to start addressing the changes that are happening, how they make people feel, and the changes that create concern – in the public and professionals – not the goals that the organisation has.

Sound familiar? Communications to date have been flawed in the same way that the concept of ‘building trust’ has been flawed. It has aimed to achieve the wrong thing and with the wrong audience.

Communications in care.data need to stop focussing on what the organisation wants from the public and professionals – the benefits it sees of getting data – and address instead, firstly at a macro level, why the change is necessary and why the organisation should be trusted to bring it about.

When explaining benefits there are clearly positives to be had from using primary and secondary data in the public interest. But what benefits will be delivered in care.data that are not already on offer today?

Why, if commissioning is done today with less identifiable data, can there be no alternative to the care.data level of identifiable data extraction? Why, if the CPRD offers research in both primary and secondary care today, will care.data offer better research possibilities? And secondly, at a micro level, communications must address the questions individuals asked up and down the country in 2014.

What’s missing and possible to be done?

  1. aim to meet genuine ongoing communication needs not just legal data protection fair processing tick-boxes
  2. change organisational attitude to one that encourages people to ask what they each want to know at macro and micro level – why the programme at all, and what’s in it for me? What’s new and a benefit that differs from the status quo? This is only possible if you will answer what is asked.
  3. deliver robust explanations of the reason why the macro and micro benefits demonstrably outweigh the risk of individual potential harms
  4. demonstrate reliability, honesty, competency and you are trustworthy
  5. agree how scope changes will trigger communication to ‘future-proof’ an ongoing relationship and trust by design.

As the NIB work stream on Public Trust says, “This is not merely a technical exercise to counter negative media attention; substantial change and long-term work is needed to deliver the benefits of data use.”

If they’re serious about that long term work, then why continue to roll out pathfinder communications based on a model that doesn’t work, with an opt out that doesn’t work? Communications isn’t an afterthought to public trust. It’s key.

If you’re interested in details and my proposals for success in communications I’ve outlined in depth below:

  • Why Communicate Changes at all?
  • What is change in care.data about?
  • Is NHS England being honest about why this is hard?
  • Communicate the Benefits is not working
  • A mock case study in why ‘communicate the benefits’ will fail
  • Long term trust needs a long term communications solution
  • How a new model for NHS care.data Communication could deliver

Continue reading Building Public Trust [4]: “Communicate the Benefits” won’t work for care.data

Digital revolution by design: infrastructures and the fruits of knowledge

Since the beginning of time and the story of the Garden of Eden, man has found a way to share knowledge and its power.

Modern digital tools have become the everyday way to access knowledge for many across the world, giving quick access to information and sharing power more fairly.

In this third part of my thoughts on digital revolution by design, triggered by the #kfdigi15 event on June 16-17, I’ve been considering some of the constructs we have built; those we accept and those that could be changed, given the chance, to build a better digital future.

Not only the physical constructions, the often networked infrastructures, but intangible infrastructures of principles and power, co-dependencies around a physical system; the legal and ethical infrastructures of ownership, governance and accountability.

Our personal data flow in systems behind the screens, at the end of our fingertips. Controlled in frameworks designed by providers and manufacturers, government and commercial agencies.

Increasingly in digital discussions we hear that the data subject, the citizen, will control their own data.

But if it is on the terms and conditions set by others, how much control is real and how much is the talk of a consenting citizen only a fig leaf behind which any real control is still held by the developer or organisation providing the service?

Data are used, and turned into knowledge, as business intelligence that adds value to aid informed decision making. By human or machine.

How much knowledge is too much knowledge for the Internet of Things to build about its users? As Chris Matyszczyk wrote:

“We have all agreed to this. We click on ‘I agree’ with no thought of consequences, only of our convenience.”

Is not knowing what we have agreed to our fault, or the responsibility of the provider who’d rather we didn’t know?

Citizens’ rights are undermined in unethical interactions if we are exploited by easy one-click access and exchange our wealth of data at unseen cost. Can it be regulated to promote, not stifle innovation?

How can we get those rights back and how will ‘doing the right thing’ help shape and control the digital future we all want?

The infrastructures we live inside

As Andrew Chitty says in this HSJ article: “People live more mobile and transient lives and, as a result, expect more flexible, integrated, informed health services.”

To manage that, do we need to know how systems work, how sharing works, and trust the functionality of what we are not being told and don’t see behind the screens?

At the personal level, whether we sign up for the social network, use a platform for free email, or connect our home and ourselves in the Internet of Things, we each exchange our personal data with varying degrees of willingness. There is often no alternative if we want to use the tool.

As more social and consensual ‘let the user decide’ models are being introduced, we hear it’s all about the user in control, but reality is that users still have to know what they sign up for.

In new models of platform identity sign on, and tools that track and mine our personal data to the nth degree that we share with the system, both the paternalistic models of the past and the new models of personal control and social sharing are merging.

Take a Fitbit as an example. It requires a named account and data sharing with the central app provider. You can choose whether or not to enable ‘social sharing’ with nominated friends whom you want to share your boasts or failures with. You can opt out of only that part.

I fear we are seeing the creation of a Leviathan sized monster that will be impossible to control and just as scary as today’s paternalistic data mis-management. Some data held by the provider and invisibly shared with third parties beyond our control, some we share with friends, and some stored only on our device.
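That mixed model can be pictured as a set of sharing scopes, only some of which the user may actually switch off. A hypothetical sketch, not based on any real vendor’s API, of what such a settings model amounts to:

```python
from dataclasses import dataclass, field

# Hypothetical model of a wearable's sharing tiers. Note which flags the
# user controls and which are fixed by the provider's terms of service.
@dataclass
class SharingSettings:
    social_sharing: bool = False        # user-controllable opt-in
    provider_upload: bool = True        # required to use the service at all
    third_party_sharing: bool = True    # set in the small print
    user_controllable: set = field(default_factory=lambda: {"social_sharing"})

    def opt_out(self, scope: str) -> bool:
        """Attempt to disable a sharing scope; succeeds only if permitted."""
        if scope in self.user_controllable:
            setattr(self, scope, False)
            return True
        return False  # 'control' stops at the provider's terms

settings = SharingSettings()
settings.opt_out("social_sharing")       # succeeds: the visible choice
settings.opt_out("third_party_sharing")  # refused: not user-controllable
```

The design choice the sketch exposes is the one argued above: the opt-out the user sees covers only the social tier, while the provider and third-party tiers remain outside their reach.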

While data are shared with third parties without our active knowledge, the same issue threatens to derail consumer products, as well as commercial ventures at national scale, and with them the public interest: loss of trust in what is done behind the settings.

Society has somehow seen privacy lost as the default setting. It has become something to have to demand and defend.

“If there is one persistent concern about personal technology that nearly everybody expresses, it is privacy. In eleven of the twelve countries surveyed, with India the only exception, respondents say that technology’s effect on privacy was mostly negative.” [Microsoft survey 2015, of  12,002 internet users]

There’s one part of that I disagree with. It’s not the effect of technology itself, but the designers’ and developers’ decision making that affects privacy. People, not technology, choose how privacy is affected, through design and regulation.

Citizens have vastly differing knowledge bases of how data are used and how to best interact with technology. But if they are told they own it, then all the decision making framework should be theirs too.

By giving consumers the impression of control, the shock is going to be all the greater if a breach should ever reveal where fitness wearable users slept and with whom, at what address, and were active for how long. Could a divorce case demand it?

Fitbit users have already found their data used by police and in the courtroom – probably not what they expected when they signed up to a better health tool. And tools that benefit some may, by default, harm others who are excluded from accessing them.

Some at org level still seem to find this hard to understand but it is simple:
No trust = no data = no knowledge for commercial, public or personal use and it will restrict the very innovation you want to drive.

Google Gmail users have to make 10+ clicks to restrict all ads and information sharing based on their privacy and ad account settings. The default is ad tailoring and data mining. Many don’t even know it is possible to change the settings, and it’s not intuitive how to.

Firms need to consider their own reputational risk if users feel these policies are not explicit and are exploitation. Those caught ‘cheating’ users can get a very public slap on the wrist.

Let the data subjects rule, but on whose terms and conditions?

The question every citizen signing up to digital agreements should ask is: what is in the small print, and how will I know if it changes? Fair processing should offer data protection, but isn’t effective.

If you don’t have access to information, you make decisions based on a lack of information or misinformation. Decisions which may not be in your own best interest or that of others. Others can exploit that.

And realistically and fairly, organisations can’t expect citizens to read pages and pages of Ts&Cs. In addition, we don’t know what we don’t know. Information that is missing can be as vital to understand as that provided. ‘Third parties’ sharing – who exactly does that mean?

The concept of an informed citizenry is crucial to informed decision making but it must be within a framework of reasonable expectation.

How do we grow the fruits of knowledge in a digital future?

Real cash investment is needed now for a well-designed digital future, robust for cybersecurity, supporting enforceable governance and oversight. Collaboration on standards and thorough change plans. I’m sure there is much more, but this is a start.

Figurative investment is needed in educating citizens about the technology that should serve us, not imprison us in constructs we do not understand but cannot live without.

We must avoid the chaos and harm and wasted opportunity of designing massive state-run programmes in which people do not want to participate or cannot participate due to barriers of access to tools. Avoid a Babel of digital blasphemy in which the only wise solution might be to knock it down and start again.

Our legislators and regulators must take up their roles to get data use, and digital contract terms and conditions right for citizens, with simplicity and oversight. In doing so they will enable better protection against risks for commercial and non-profit orgs, while putting data subjects first.

To achieve greatness in a digital future we need: ‘people speaking the same language, then nothing they plan to do will be impossible for them’.

Ethics. It’s more than just a county east of London.

Let’s challenge decision makers to plant the best of what is human at the heart of the technology revolution: doing the right thing.

And from data, we will see the fruits of knowledge flourish.

******

1. Digital revolution by design: building for change and people
2. Digital revolution by design: barriers by design
3. Digital revolution by design: infrastructures and the fruits of knowledge
4. Digital revolution by design: infrastructures and the world we want

smartphones: the single most important health treatment & diagnostic tool at our disposal [#NHSWDP 2]

After Simon Stevens’ big statement on smartphones at the #NHSWDP event, I asked what sort of assessment the NHS had done on how wearables’ data would affect research.

#digitalinclusion is clearly less about a narrow focus on apps than applied skills and online access.

But I came away wondering how apps will work in practice, affect research and our care in the NHS in the UK, and much more.

What about their practical applications and management?

NHS England announced a raft of regulated apps for mental health this week, though it’s not the first approved.  

This one doesn’t appear to have worked too well.

The question needs an answer before many more are launched: how will these be catalogued, indexed and stored? Will it be just a simple webpage? I’m sure we can do better to make this page user friendly and intuitive.

This British NHS military mental health app is on iTunes. Will iTunes carry a complete NHS approved library and if so, where are the others?

We don’t have a robust regulation model for digital technology, it was said at a recent WHF event, and while medical apps are sold as wellness or fitness or just for fun, patients could be at risk.

In fact, I’m convinced that while medical apps are being used by consumers as medical devices, for example as tests, or tools which make recommendations, and they are not thoroughly regulated, we *are* at risk.

If Simon Stevens sees smartphones as: “going to be the single most important health treatment and diagnostic tool at our disposal over the coming decade and beyond,” then we’d best demand the tools that work on them, work safely. [speech in full]

And if his statement on their importance is true, then when will our care providers be geared up to accepting extracts of data held on a personal device into the local health record at a provider – how will interoperability, testing and security work?

And who’s paying for them? Those in the library right now have price tags. The public should be getting lots of answers to lots of questions.

“Over the coming decade”  has already started.

What about Research?: I know the Apple ResearchKit had a big reaction, and I’m sure there’s plenty of work already done on expectations of how data sharing in wearables affects research participation. (I just haven’t read it yet, but am interested to do so, so feel free to point any my way.)

I was interested in the last line in this article: “ResearchKit is a valiant effort by Apple, and if its a hit with scientists, it could make mass medical research easier than ever.”

How do we define ‘easier’? Has Apple hit on a mainstream research app? How is ‘mass medical research’ in public health for example, done today and how may it change?

Will more people be able to participate in remote trials?

Will more people choose to share their well-being data and share ‘control’ phenotype data more in depth than in the past?

Are some groups under- or not-at-all represented?

How will we separate control of datasharing for direct care and for other secondary uses like research?

Quality: Will all data be good data or do we risk research projects drowning in a data tsunami of quantity not quality? Or will apps be able to target very specific trial data better than before?

How: One size will not fit all. How will data stored in wearables affect research in the UK? Will those effects differ between the UK and the US? Will app designs need different approaches due to the NHS’s long history, and need to take into account single standards and be open? How will research take historical data into account if apps are all ‘now’? How will research based on that data be peer reviewed?

Where: And as we seek to close the digital divide here at home, what gulf may be opening up in the research done in public health, the hard to reach, and even between ‘the west’ and ‘developing’ countries?

In the UK will the digital postcode lottery affect care? Even with a wish for wifi in every part of the NHS estate, the digital differences are vast. Take a look at Salford – whose digital plans are worlds apart from my own Trust which has barely got rid of Lloyd George folders on trolleys.

Who: Or will in fact the divide not be by geography, but by accessibility based on wealth?  While NHS England talks about digital exclusion, you would hope they would be doing all they can to reduce it. However, the mental health apps announced just this week each have a price tag if ‘not available’ to you on the NHS.

Why: on what basis will decisions be made about who gets them prescribed and who pays for them, where apps are to be made available for which area of diagnosis or treatment, or at all, if the instructions are “to find out if it’s available in your area email xxx or call 020 xxx. Or you could ask your GP or healthcare professional.”

The highest intensity users of the NHS provision, are unlikely to be the greatest users of growing digital trends.

Rather the “worried well” would seem the ideal group who will be encouraged to stay away from professionals, and to self-care with self-paid support from high street pharmacies. How much could or will this measurably benefit the NHS and the individual, and make lives better? As increasingly the population is risk stratified and grouped into manageable portions, will some be denied care based on data?

Or will the app providers be encouraged to promote their own products and make profits, to benefit UK plc, regardless of actual cost and measurable benefit to patients?

In 2013, IMS Health reported that more than 43,000 health-related apps were available for download from the Apple iTunes app store. Of those, the IMS Institute found that only 16,275 were directly related to patient health and treatment, and that there was much to be done to move health apps from novelty to mainstream.

Reactionary or realistic – and where's the risk assessment before NHS England launches even more approved apps?

Exciting as it is, this tempting smörgåsbord of shiny new apps brings with it a set of new risks which cannot responsibly be ignored: in patient safety, in cyber security, and in what, and who, will be left out.

Given that in some places basic data cannot yet be shared between GP and hospital for direct care, due to local lack of tech, and that the goal is another five years away, how real is the hyped, enormous impact of wearables going to be for the majority, or at scale?

On digital participation projects: “Some of the work that has already been done by the Tinder Foundation, you take some of the examples here, with the Sikh community in  Leicester around diabetes, and parenting in other parts of the country, you can see that this is an agenda which can potentially get real quite quickly and can have quite a big impact.”
(Simon Stevens)

These statements, each on a different aspect of digital inclusion (Simon Stevens on smartphones and scale, Tim Kelsey on consent), are fundamentally bound together.

What will wearables mean for diagnostics, treatment and research in the NHS? For those who have and those who have not?

How will sharing data be managed for direct care and for other purposes?

What control will the patriarchy of the NHS reasonably expect to have over patients' choice of app from any provider? Do most patients know at all what effect their choice may have on their NHS care?

How will funding be divided into digital and non-digital, and be fair?

How will we maintain the principles and practice of a ‘free at the point of access’ digital service available to all in the NHS?

Will there really be a wearables revolution? Or has the NHS leadership just jumped on a bandwagon as yet without any direction?

****

[Next: part three  – on consent – #NHSWDP 3: Wearables: patients will ‘essentially manage their data as they wish’. What will this mean for diagnostics, treatment and research and why should we care?] 

[Previous: part one – #NHSWDP 1: Thoughts on Digital Participation and Health Literacy: Opportunities for engaging citizens in the NHS – including Simon Stevens full keynote speech]

The future of care.data in recent discussions

Questions were raised at two health events this week, on the status of the care.data programme.

The most recent NHS England announcement about care.data rollout progress was made in October 2014.

What’s the current status of Public Information?

The IIGOP review in December 2014 [1] set 27 criteria for the programme to address.

The public has not yet seen a response, but according to the GPES minutes one was made at the end of January.

Will it be released in the public domain?

An updated privacy impact assessment “was approved by the care.data programme board and will be published in February 2015.” It has not yet been made public.

Limited and redacted programme board materials were released, and the public waits to see if a business case or more will be released in the public interest.

Risks and issues have been redacted or not released at all, such as the risk register.

There is no business case in place, as page 6 of the October 2014 board minutes confirmed. I find that astonishing.

It is hard to know if more material will be made public as recommended in their own transparency agenda.

What is the current state of open questions?

Professionals and the public are still interested in the current plan, and discussions this week at the Roy Lilley chat with Dr. Sarah Wollaston MP again raised some open questions.

1. What happened to penalties for misuse and ‘one strike and out’ ?

Promised in Parliament by Dr. Dan Poulter, Parliamentary Under Secretary of State at the Department of Health, a year ago, questions on penalties are still being asked, without a clear public answer on what has changed since then and what remains to be done:

care.data penalties are unclear

Poulter on care.data penalties

[Hansard, March 25 2014 ] [2]

Some changes are being worked on [written evidence to HSC]*[7], planned for autumn 2015 – but does this clarify what has happened concretely to date, and how patients in the pathfinder will be protected?

“The department is working to table these regulations in Parliament in 2015, to come into force in the autumn.”

Did this happen? Are the penalties proportionate for big multi-nationals, or will other safeguards be introduced, such as making misuse a criminal offence, as suggested?

2. What about promises made on opt out?

One year on, the public still has no fair processing of the personal data released by existing health providers: data extracted over the past twenty-five years, whose use by third parties was not public knowledge. (Data from hospital visits (HES), mental health, maternity data, etc.)

The opt out of all data sharing from secondary care such as A&E, stored at the HSCIC, was promised by Jeremy Hunt, Secretary of State for Health, a year ago, on February 25th 2014.

It has still not come into effect, nor been communicated:

Jeremy Hunt on care.data opt out

[Hansard February 25 2014, col 148] [3]

Jeremy Hunt MP

 

In fact the latest news reported in the media was that the 'type 2' opt out was not working as expected. [4]

Many in the public have not been informed at all that they can request an opt out, as the last public communication attempt failed to reach all households; yet their data continues to be released.

3. What about clarifying the purposes of the Programme?

The public remains unclear about the purpose of the whole programme and its data sharing, as noted at the Roy Lilley event:

A business case and a risk-benefit analysis would improve this.

Flimsy assurances based on how data may be used in the initial extraction will not be enough to assure the public of how their data will be used, and by whom, in the future, not just over the next six months or so.

Once released, data is not deleted, so a digital health footprint is not just released for care.data: it is given up for life. How much patients trust anonymous, pseudonymous, and 'de-identified' data depends on the individual, but in a world where state-held data matching from multiple sources is becoming the norm, many in the public are sceptical.[5]
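That scepticism has a concrete basis: records stripped of names can often be re-linked through shared quasi-identifiers. A minimal illustrative sketch, with entirely invented data and field names, of how trivially two 'de-identified' datasets can be matched:

```python
# Illustrative only: two 'de-identified' datasets, each seemingly harmless
# alone, matched on shared quasi-identifiers (postcode district, year of
# birth, sex). All records here are invented.
hospital_episodes = [
    {"postcode": "M1", "yob": 1972, "sex": "F", "diagnosis": "diabetes"},
    {"postcode": "LS2", "yob": 1985, "sex": "M", "diagnosis": "asthma"},
]
marketing_records = [
    {"postcode": "M1", "yob": 1972, "sex": "F", "name": "(known to broker)"},
]

def link(a, b, keys=("postcode", "yob", "sex")):
    """Join two datasets on quasi-identifiers: a trivial record linkage."""
    return [
        {**ra, **rb}
        for ra in a
        for rb in b
        if all(ra[k] == rb[k] for k in keys)
    ]

matches = link(hospital_episodes, marketing_records)
# A single match re-attaches an identity to a 'de-identified' health record.
print(matches)
```

The point is not that any one release enables this, but that each additional matched source makes 'de-identified' a weaker promise.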

The controls over future use, and the assurances that they are 'rock solid', will only be trustworthy if what was promised happens.

To date, that is not the case or has not been communicated.

What actions have been taken recently?

Instead of protecting the body which, in my opinion, has over the last two years achieved external scrutiny of care.data and ensured that promises made were kept, the independent assurance committee, the IAG, is to be scrapped.

The data extraction and data release functions are to be separated.

This could give the impression that data is no longer to be extracted only when needed for a specific purpose, and lends weight to the impression that all data is to be "sucked up" and the purposes defined later. If care.data is intended to replace SUS, that would be no surprise.

It would however contravene fair processing under data protection, which requires the purposes of use to be generally clear before extraction. Should use change, it must remain fair. [For example, to have had consent for data sharing for direct care, but then to use the data for secondary uses by third parties, is such a significant change that one can question whether it falls under 'fair', looking at the ICO's examples.]

So, what now? I asked Dr. Poulter, after the Guardian healthcare debate on Tuesday evening this week, about giving the opt out legal weight.
(I would have asked during the main session, but there was not enough time for all questions.)

care.data opt out open question

 

He was not able to give any concrete commitment on the opt out for HES data or for care.data; indeed, he gave no answer at all.

What will happen next? Will the pathfinders be going live before the election in May? I asked.

Without any precise commitment, he said that everything was now dependent on Dame Fiona’s IIGOP response to the proposals [made by NHS England].

Dan Poulter MP

 

What has happened to Transparency?

The public has not been given access to NHS England's response to the IIGOP/Caldicott December review.

The public has no visibility of what the risks are, as seen by the programme board.

The public is still unclear on what the expected benefits are, to measure those risks against.

And without a business case, the public does not know how much it is costing.

Without these, the public cannot see how the care.data board and the DH are effectively planning, measuring progress, and spending public money, or how they will be held accountable for the outcomes.

The sad thing is that transparency, and the "intelligent grown up debate" Sir Manning called for last year, would move this programme positively ahead.

Instead it seems secretive, which is not building trust.  The deficit of that trust is widely recognised and still needs solidly rebuilt.

Little seems to have been done since last year to make it so.

Hetan Shah, executive director of the Royal Statistical Society, said: “Our research shows a ‘data trust deficit’. In this data-rich world, companies and government have to earn citizens’ trust in how they manage and use data – and those that get it wrong will pay the price.” [Royal Statistical Society, 22 July 2014][6]

Shame.

Care.data is, after all, meant to be for the public good.

care.data purposes are unclear
It would be in the public interest to get answers to these questions from recent events.

 

refs:

1. IIGOP care.data report December 2014 https://www.gov.uk/government/publications/iigop-report-on-caredata

2. Hansard March 25th 2014: http://www.publications.parliament.uk/pa/cm201314/cmhansrd/cm140325/halltext/140325h0002.htm

3. Hansard February 25th 2014: http://www.publications.parliament.uk/pa/cm201314/cmhansrd/cm140225/debtext/140225-0001.htm

4. NHS England statement on Type 2 opt out http://www.england.nhs.uk/2015/01/23/data-opt-out/

5. Ipsos MORI June 2014 survey: https://www.ipsos-mori.com/researchpublications/researcharchive/3407/Privacy-and-personal-data.aspx

6. Royal Statistical Society on the ‘trust deficit’ http://www.statslife.org.uk/news/1672-new-rss-research-finds-data-trust-deficit-with-lessons-for-policymakers

7. *Additional note made Sun 15th, including reference to the HSC letter from HSCIC

The care.data coach ride: communications – all change or the end of the line?

Eleven months ago, care.data was put on hold and promises made to listen to professional and public opinion, which would shape programme improvement.

Today, Sir Bruce Keogh of NHS England said: “an unprecedented shift of resources and care into GP surgeries was necessary to help the NHS withstand the twin pressures of rising demand and tight budgets.”
[The Guardian, 19 Jan 2015]

care.data right now seems like the straw that could break the camel's back: one GPs do not need, and one that in its current format many patients do not want.

Why the rush to get it implemented, and will the costs of doing so (to patients, to professionals and to the programme) be worth it?

What has NHS England heard from these listening events?

The high level ‘you said, we did’ document, sharing some of the public concerns raised with care.data, has been published by NHS England.

It is an aggregated, high level presentation, but I wonder if it really offers much more insight than everyone had a year ago. It's a good start, but does it suggest any real changes have taken place as a result of listening and public feedback?

Where are we now, what does it tell us, and how will it help?

Some in the media argue, like this article, that a:

“massive privacy campaign effectively put a halt to it last year.”

In reality it was the combination of the flaws in the care.data plans for the GP data extraction and sharing programme, and past NHS data sharing practices, which was its own downfall.

Campaigners merely pointed these flaws out.

Once these were more apparent, many bodies involved in good data sharing, and those with concerns for confidentiality, came together with suggestions for improvements.

But to date, a year after patients first became aware of the issues, even this collaboration has not solved patients' greatest concern: that data is being given, without the individual's knowledge or consent, to third parties for purposes beyond clinical care, with no oversight once they receive it.

The HSCIC 2013-15 Roadmap outlined that HSCIC would 'agree a plan for addressing the barriers to entry into the market for new commercial ventures' using our data, provided by the HSCIC, and:

“Help stimulate the market through dynamic relationships with commercial organisations, especially those who expect to use its data and outputs to design new information-based services.”

 

Working with care.data was first promised to 'innovators of all kinds', just as HES was delivered to commercial businesses [reportedly including Google, and PA Consulting, which got 15 years of NHS data], all with unclear and unproven patient benefit or gain for UK plc's economic development.

 

Patients are concerned about this.

 

They have asked about the assurance given: the purposes are more defined, but still do not rule out commercial users; re-use licences have not been categorically ruled out; and patients have asked further detailed questions, which remain open.

View some of them for yourself here: questions on coercion, on disability inclusion, and, time and time again, concerns over the accuracy and quality of the records which may be uploaded, with mistakes never deleted, upon which judgments will be made, from records the patient may never have seen.

care.data events have been hosted by and held for a group of charities; other listening events have been held by the care.data advisory group [including Peterborough and Coin Street, London; you can view the 26th November Manchester event, with questions from 33 minutes in] and as part of the NHS Open House event in June [from 01:13:06 in the NHS Open House video]. All raised sensible, detailed questions on process and practice which are still to be addressed, and which are not in the high level 'you said, we did'.

Technical and practical processes of oversight have been changed to improve the way in which data is shared, but what about data use, which has been the crux of patient concern?

How will the questions that remain unanswered be addressed? Because it seems the patient letter, posters and flyers won't do it.

What now?

Communications are rolling out in pathfinders

All year the message has been the same: communication was poor.

“We have heard, loud and clear, that we need to be clearer about the care.data programme and that we need to provide more support to GPs to communicate the benefits and the risks of data sharing with their patients, including their right to opt out.” [October 2014, Mr. Kelsey, NHS England]

The IIGOP report on care.data outlined in December 2014 what still remains to be done and the measures required for a success.

These go far beyond communications issues.

But if pathfinders are being asked to spend time and money now, it must be analysed now how new communications materials will compare with those from a year ago.

Whilst I would agree that communications were poor, the question that remains is: why? Why was communication poor? Why was a leaflet that was criticised by the ICO, criticised by the GPES advisory group, criticised by many more, and glaringly a failed piece of communication to outsiders, sent [or not sent] to patients across England, with all that advice and criticism ignored?

[Sept 2013 GPES Advisory] “The Group also had major concerns about the process for making most patients aware of the contents of the leaflets before data extraction for care.data commenced”.

We could say it doesn't matter. However, it is indicative of the same issues now as then, and throughout the year. There has been lots of positive advice given, shared, and asked for at patient listening events. If this is the extent of "you said, we did", feedback is still being ignored. That matters.

Because if it continues to be ignored, any new communications will have the same failure to launch that they did a year ago.

In the last year we have heard repeatedly, that the pause will enable the reshaping of communications materials.

Sadly, the bell hasn't rung yet on what really needs done. It looks to me as though the communications people have done their best, dealing with glaring gaps in content.

Communications materials are not ready, because it’s not clear where care.data is going, or what’s the point of the trip.


 

All change?

NHS England has failed to address the programme as a change issue.

That is what it is at its core, and it is this failure which explains why it has met so much resistance.

If the 26th November Manchester questions are anything to go by, the reason for the change, why our data is needed at all, remains very unclear to professionals and patients alike.

How patients will be empowered to manage its ongoing changes into the future is also undefined.

In addition, there has been little obvious, measurable change in the substance of the programme communication in the last 12 months.

New materials suggest no real changes have taken place as a direct result of listening to public feedback at all. Changes may have come from feedback given before the pause, but what impact has the pause itself had?

If you disagree, look over the GP care.data leaflet from 2013 and see what changes you would make now. Look at the 2013 patient leaflet and see what substantial improvement there is. Look at the basic principles of data protection and see if the care.data programme communications clearly and simply address them any better now.

What are the new plans for new communications, and how do they pick up on the feedback given at ‘hundreds’ of listening events?

The communications documents are a good start at addressing a complex set of questions.

However, whilst they probably meet their spec, they do not meet their stated objective: to show a clear 'we did', or a clear future action plan.

The listening feedback may have been absorbed, but hasn’t generated any meaningful new communications output.

It shows that, as far as listening goes, real communication in this one-way format may have reached the end of the line.

How can patients make a decision on an unknown?

The new communications, in the posters and the 'you said, we did', state that access to the information collected will be limited in the pathfinder, but they do not address the question in the longer term.

This is a key question for patients.

It should be simple. Who will have access to my data and why?

No caveats, no doubts, no lack of clarity.

Patients should be properly informed how ALL their data held by the HSCIC is being used. The opt out announced in February 2014 covered two options: the data to be extracted under care.data at GP practices, and all the other data already stored at the HSCIC from hospitals and elsewhere. To explain those two different options, patients first need told about all the data which is stored, and how it is used.

Talk about the linkage with other datasets, the future extraction and use of social care data, and the access given via the back office to police and other non-health government departments. Stop using 'your name will not be used' in materials like the original patient leaflet: it may be factual for care.data per se, but it is misleading about what of our personal data is extracted and used without our consent or awareness. Most of us don't know that the PDS extracts name at all.

Being cagey does not build trust. Incomplete explanation of uses would surely not meet the ICO's fair processing requirements under data protection either. And future uses remain unexplained.

For care.data this is the unknown.

NHS England is yet to publish any defined future use and scope change process, though its plan is clearly mapped:

[care.data timeline]

 

There must be a process for notifying patients of changes to what will be extracted, or to who will be given access to use it: a change process. A basic building block for fair processing. Not a back door.

It needs to address: how a change is identified; who will be notified, within what time frame before the extraction; how training and access changes will be made; and how patients will be informed of the change in what may be extracted or who may be using it, and given the right to change their opt in / opt out selection. The law requires fair processing BEFORE the change happens.
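As a sketch of the rule such a change process must enforce, the following is purely hypothetical (the class, the notice period, and the dates are all my own invention): extraction under a changed scope is only permissible once patients have been notified and given time to revise their choice.

```python
from datetime import date, timedelta

# Hypothetical sketch of a fair-processing change workflow. A scope change
# must be notified to patients, with time to revise their opt-out, BEFORE
# any extraction under the new scope. Names and periods are illustrative.
NOTICE_PERIOD = timedelta(weeks=8)  # assumed notice period, for illustration

class ScopeChange:
    def __init__(self, description, notified_on=None):
        self.description = description
        self.notified_on = notified_on  # date patients were informed

    def extraction_allowed(self, on):
        """Extraction under the new scope only after notice plus the period."""
        if self.notified_on is None:
            return False  # no notification: no fair processing, no extraction
        return on >= self.notified_on + NOTICE_PERIOD

change = ScopeChange("add social care data", notified_on=date(2015, 3, 1))
print(change.extraction_allowed(date(2015, 3, 15)))  # too soon: False
print(change.extraction_allowed(date(2015, 6, 1)))   # after the period: True
```

The design point is simply that the check runs before extraction, not after: notification is a precondition, not an afterthought.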

We patients should also be made aware what impact this choice has on data already extracted, and that nothing will be deleted from our history, even if it's clearly a mistake. How does that affect reports?

Communication is impossible whilst the content and scope are moving.

I've been banging on, quite frankly, about scope since March.

This is what needs done. Pull over, and get the fixes done.

> Don’t roll out any comms in a pathfinder yet. They’re not ready.

> First sort out the remaining substance so you know what it is that materials are communicating.  What, who, why, when, how?

The IIGOP report lists clearly all that needs done and how to measure their success: it’s not communications, it’s content.

The final technical, security and purposes pieces still need resolved: practical questions on opt out; legislation to make sure the opt out really is robust, and that the so-called 'one strike and out' isn't just a verbal assurance but actually happens; and future access defined beyond the pathfinder (who will have access at and outside the new secure lab), not only for the pilot, but for the future.

Get the definition of scope limited so as to meet fair processing, and get the future scope change communication process ironed out.

How will patients be communicated to not only now, not in a pathfinder, but for every change that happens in the future which has a fair processing requirement?

Only then can the programme start to truly address change and communications with meaningful messages. Until then, it’s PR.

Once you know what you’re saying, how to say it becomes easy.

If it’s not proving easy to do well, we need to ask why.

>>>References>>>

1. 'You said, we did' NHS England presentation

2. IIGOP report into care.data

3. Pharmacists to access DWP data: an example of a scope change in who accesses data and why, which fails fair processing without a change process in place to communicate it

>>>>>>>>>

For anyone interested in considering the current materials in detail, see below: this doesn’t address the posters shared in the Manchester event or what is missing, but many of the messages are the same as in the ‘you said, we did’ and it’s a start.

>>>>>>>>>

Addendum:

1. The “co-production” approach to materials

2. Some feedback on the high level ‘you said, we did’ document

3. What do communications require to improve from those before?

4. Future change: control of scope change for linkage and access

5. Hard questions

 

1. The “co-production” approach to materials

The IIGOP report on care.data outlined in December 2014 asked a very sound question on page 8:

“What are the implications of using locally developed communications material (“co-production”) for subsequent national rollout ?”
The Programme is developing a “co-production” approach to initial GP and patient-facing material, based on feedback from the care.data “listening period” and from local events and formal research.
“The intent is to ensure that there is local ownership of material used to communicate with professionals and patients in the Pathfinder stage.”
To ask a basic question of change management: what's in it for them?
It’s unclear to what level of detail the national materials will go, and how much local sites will create.

 

If I were at CCG or GP level and responsible for 'local ownership' of communications from this national programme, I'd be asking myself why I am expected to reinvent the wheel. I'd want to use national standards as far as possible.

Why should local organisations have to produce or design materials which should be communicating the intent of a programme whose purpose is to be identical for every one of the 62 million in England registered with a GP? Let’s hope the materials are national.

What benefit will a local site see in designing its own materials? It will cost time and money. Where is the benefit for the patients in each practice, for the GPs, and for the programme?

Is it too cynical to ask whether NHS England lacks the resources to do this well and deliver it ready-made?

If so, I would urge a rethink at national level, because in terms of time and people's effort this multiplied duplication will be a costly alternative.

It also runs the risk of costly mistakes in accuracy and inconsistency.

There appears to date to be no plan for how future changes will be communicated. This must be addressed before the pathfinder and in any current communication, and all local sites need the same answer, because the new decisions on extraction will be made at national level.

2. Some feedback on the high level ‘you said, we did’ document:

page 9: "present the benefits": this fails to do so. That is, however, not a failing of this presentation; there is simply still no adequate cost-benefit document available in the public domain.

page 11: "keep data safe": the secure lab is mentioned, a great step forward compared with HES access, and it states analysts will only access data there in the pathfinder. But what about after that?

page 13: "explain the opt out clearly": "You can opt out at any time. Just talk to your GP Practice." > I have, but as far as I know my data is still released by the HSCIC from HES and the wider secondary collections of data, which I did not know were extracted and did not consent to being used for secondary purposes. The opt out doesn't appear to actually work. Please let me know if that's a misunderstanding on my part; I'd be delighted to hear it is functional.

page 15: "legislative changes": the biggest concern patients raise, over and over again, is sharing data beyond their direct care with commercial companies and for non-NHS purposes. This has not been excluded. There is no way round that, no matter how it is worded, and it is made harder by the fact that data was released from HES in July to Experian for use in Mosaic. If that fits the definition, then the definition is loose.

The one-strike-and-out is not mentioned in the materials, although it was discussed on Nov 26th in Manchester. When will the legislation actually happen?

Both this and the opt out are still not on a robust legal basis; much verbal assurance has been given on "legislative changes", but such assurances are meaningless if not enacted.

page 17: "access safeguards": the new audit trail is an excellent step, but it doesn't help patients know whether OUR data was used; it's generic. We need some sort of personal audit trail of our consent, showing how it is respected in what data is released, to whom, when, and why. The emphasis on 'only with legal access' is overdone, as Section 251 has been used to approve data access for years without patient knowledge or consent. If it is meant to be reassuring, it is somewhat misleading; data is shared much more widely than patients know. To answer the questions asked at the listening feedback events, there needs to be an explanation of how the loop will be closed, to feed the information back, and how it will be of concrete benefit.
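The 'personal audit trail' idea could be as simple as a per-patient log of releases, checked against that patient's stated objection. A hypothetical sketch (none of these names reflect any real HSCIC system):

```python
# Hypothetical sketch of a per-patient consent audit trail: every release
# is recorded (what, to whom, when, why), and a patient objection blocks
# any release. All class and field names here are my own invention.
class ConsentAudit:
    def __init__(self, opted_out=False):
        self.opted_out = opted_out
        self.releases = []  # what was released, to whom, when, and why

    def record_release(self, dataset, recipient, when, purpose):
        if self.opted_out:
            raise PermissionError("patient objection on file: release blocked")
        self.releases.append(
            {"dataset": dataset, "recipient": recipient,
             "when": when, "purpose": purpose}
        )

# A patient who has not objected can see every release made of their data.
patient = ConsentAudit(opted_out=False)
patient.record_release("HES", "research lab", "2015-02-01", "approved study")
print(len(patient.releases))  # prints: 1

# A patient who has objected has that objection respected, and can see so.
objector = ConsentAudit(opted_out=True)
try:
    objector.record_release("HES", "re-user", "2015-02-01", "commercial")
except PermissionError as e:
    print(e)  # prints: patient objection on file: release blocked
```

The design choice worth noting is that the consent check and the audit record live in the same place, so the trail a patient sees is, by construction, the same one the release decision used.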

And in general:

Either "this will not affect the care you receive" or it will. Both sentences cannot be true. Either way, there should be no coercion of participation:

“If you decide to opt out it won’t affect the care and treatment you receive. However, if significant amounts of people do opt out, we won’t be able to collect enough information to help us improve NHS services across the nation.”
Agreement must, in usual medical environments, be given voluntarily and freely, without pressure or undue influence being exerted on the person either to accept or to refuse.

3. What do communications require to improve from those before?

a. Lessons Learned for improvement:

The point of the pause was to facilitate the changes and improvements needed in the programme, whose flaws were the reason to stop in February. All the questions need shared so that all the CCGs can benefit from all the learning. If all the flaws are not discussed openly, how can they be fixed? Not only being fixed, but being seen to be fixed, would be productive and useful for the programme. [The IIGOP report on care.data outlined in December 2014 covers these.]

b. Consistency:

Raw feedback will be vital for CCGs and GP practices to have. It has not been released, and the 'you said, we did' is a very high level aggregate of what was already clear last February. The detailed questions asked since then are what should be shared, to give all involved the information to be able to understand, and to have the answers to hand, consistently.

This way they will be properly prepared for the questions they may get in any pilot rollout. If questions have already been asked in one place, the exact same answer should be reproduced in another.

c. Time-saving:

If the same question has already been asked at a national or regional event, why make the local level search for the same answer again? Multiplied many times over, this would be costly and pointless.

d. Accuracy:

Communications aren't always delivered correctly. They can be open to misinterpretation, or the comms team can simply get facts wrong. That would fail data protection requirements and fail to protect GPs. How will accuracy be assured if this is done at local level, how will it be measured, and by whom?

The IIGOP report asked: “What are the success criteria for the Pathfinders? How will we know what has worked and what has not? “

I know from my own experience that either the communications team or consultants can misunderstand the facts, or something can easily become lost in translation, from the technical theory to the tangible explanation.

4. Future change: Control of scope change for linkage and access

Current communications may address the current pathfinder extraction, but they are not fit for purpose for a rollout which is intended to be long term and ever changing.

So what exactly is it piloting? A "mini" approach? If so, to what purpose? Or is it just hoping to get X amount of data in, done and dusted, as "a start"?

If the pathfinder patients are only told a sub-set of information in a pilot rollout, we should ask:

a. why? Is this in order to make the idea sound more appealing?

b. how will it be ensured that their consent, or lack of objection, is fully informed and therefore meets Data Protection requirements?

and finally

c. how will future changes be communicated? This must be addressed before the pathfinder and in any current communication.

For example, who gets access to data may change, so you cannot say that "access to the information collected will only be given to a limited number of approved analysts who will have to travel to a new secure data facility that the HSCIC is setting up."

Pharmacists who have access to this data for direct care, may also now be getting access to DWP data.

“the Royal Pharmaceutical Society has already said that the new measures could affect trust between patients and pharmacists.” [EHI Dec 30th 2014]

When patients signed up for the SCR at a GP practice they may not have realised it would be shared with pharmacies. When data is shared with the Department for Work and Pensions, citizens may not realise it could be shared with pharmacies. Neither was told, when signing up, that future access would allow this cross-referencing and additional access.

This is a real-life scenario that should not be glossed over in a brochure. A hoped-for ‘quick fix’ now will simply cause problems later, and if data is used inappropriately, there may not be another opportunity to win back trust.

To get it legally wrong now, would be inexcusable.

Here’s why it would be better to do no more communications now:

5. Hard questions can’t be avoided

Currently, comms still avoid the hard questions, and those are the ones people want answers for. Open questions remain unaddressed.

The raw questions asked in July at a charities’ event are here, with some post-event reshaping and responses. Note how many are unknowns.

Changes have been suggested in order to be constructive.

One attendee of a public listening event commented online in October 2014, on the NHS England CCG announcement:

“I am one of those that has tried hard to engage with you to try and make sure that people can be assured that their personal and private information will not be exploited, I feel that you have already made the decision to press ahead regardless and feel very let down.

“Please publish the findings of your listening exercise and tell people how you intend to respond to their concerns before proceeding with this.”

People have engaged and want to be involved in making this programme work better, if it has to work at all like this.

Q: Where is the simple, clear public business case for cost and benefits?

The actual raw questions have been kept unpublished for no clear purpose. It could look like avoiding answering the hard questions.

The IIGOP report captures many of them, for example on competence, capacity and processes – and the report shows there is still a need to “demonstrate that what goes on ‘under the bonnet’ of Pathfinder practice systems operates in the same way that patients are being told it does.”

When is the promised legislative change actually to happen? The opt-out is still not on a robust legal basis; much verbal assurance has been given on “legislative changes”, but those assurances are meaningless if not enacted.

It’s all about trust and that relationship, like the communication and feedback responses, has to be two-way.

The Deregulation Bill – Episode III : Regulate, what with?

Regulation, the use of regulatory powers and the authority to oversee them, are in flux in England.

Some changes will have less discussed but long-term, wide-ranging effects, such as the regulatory framework and the requirement for profit in almost all public bodies.

A significant amendment [1] appears to have been proposed by Lord Hunt of Kings Heath on 9th January 2015 to the Deregulation Bill [2], the next discussion of which seems to be provisionally scheduled for February 3rd and 5th.

The amendment proposes the removal of ten regulatory functions in health and care from the requirement to exercise the clause of considerable concern, renumbered from clause 83 to clause 88: the statutory duty to have regard to the desirability of promoting economic growth.

My last post in November on this clause was after the debate in which Lord Tunnicliffe concluded:

“if our fears come to pass, these three clauses could wreak havoc in a regulatory regime within this country.”

Later he asked:

“are these new clauses a licence for regulators to approve regulations that kill people to save money?”

Clause 88: background on the clause to ‘promote economic growth.’

Almost a year ago, in February 2014, [3] MPs had discussed this same clause in its passage in the House of Commons.

MPs were asked to support a reasoned amendment tabled by Caroline Lucas, Jonathan Edwards, John McDonnell and Jeremy Corbyn MPs.

They proposed the removal of the clause requiring regard to the desirability of economic growth, with the concern:

…”that this Bill represents a race to the bottom and an obsession with GDP growth at any cost which is not in the public interest.”

(my underlining):

[…]”the Health and Safety Executive, which is irresponsible and risks undermining their core roles; further considers that this Bill is another illustration of a Government which is embarking on a deregulatory path without due consideration of warnings, including from businesses, that effective regulation is essential to create jobs and innovation and that ripping up vital green legislation risks locking the UK into polluting industrial processes for decades to come, jeopardising future competitiveness, damaging the UK’s attractiveness for green investment, and undermining new industries.”

This clause must be reviewed thoroughly and transparently from scratch. If these ten bodies are to be considered for exclusion from the clause, there must be a detailed case for why. That automatically invites the question of what benefits justify the inclusion of the others. If this has not been made transparent to the Lords debating the clause by now, then the bill should not pass as it stands without reasonable justification.

Is there an MP or Lord who will gladly take the responsibility to say:

“I agreed to a new law, the consequences of which I was not clear, but I did not ask the questions I should have done. I ignored that Lord Tunnicliffe asked: “are these new clauses a licence for regulators to approve regulations that kill people to save money?” And I did not examine why this might be for each and every function of regulation it affects.”

On what decision criteria, and on what measures or public interest test, has this departmental area been selected for exclusion while others, such as the environment, have been omitted?

Considering the reported opinion of the Bill’s proponent Oliver Letwin MP of the NHS, it would seem wise to ask what kind of National Health Service our MPs expect to see in future under this new model of statutory requirement to seek profit.

In conclusion:

Is the bill designed to future-proof regulatory common sense or set it up for widespread failure from the start?

In the words of Lord Tunnicliffe:

“The problem is the clauses themselves. Clause 83(2) states that:

‘the person must … consider the importance for the promotion of economic growth of exercising the regulatory function in a way which ensures that … regulatory action is taken only when it is needed, and … any action is proportionate”.

“Those words by themselves seem a pretty high test for a regulator. As I tried to illustrate, our lives are made acceptable and benign by regulators acting pretty well as they do at the moment to protect us. So are these new clauses a licence for regulators to approve regulations that kill people to save money?”

It should be made very transparent which bodies will be affected, why, how the decision making in each function will be carried out, and with what – and by which ruling authority, national or local.

Clearly there is still work to be done to ensure that the implications are weighed in the public interest. That ethic seems to have been lost at the back of the vast cupboard of all that the Deregulation Bill has in store.

Alongside the changes to the sale of liqueur chocolates and weights and measures for knitting yarn we have lost something much greater in the Deregulation Bill.

However, this amendment suggests there is new hope for the proposed change to regulatory powers and their profit making: that, in fact, some significant bodies may be made exempt from this duty on a statutory footing.

Now the case should be made why any public bodies should not be.

Simply, the wider Public Interest must come first, above profit.

Perhaps when one hears calls to ignore criticism of these deregulation proposals, in this bill and in TTIP, one would do well to ask why.

Anything else could be as disastrous for society, as the Poll Tax is now accepted to have been for Margaret Thatcher.

But perhaps, some would maintain, there is still no such thing?

*********

For those with more in depth interest:

Further detail: below I continue with a review of the amendment, its wider implications at local authority level, changes in the future landscape of health and social care, and why it could have a significant negative impact on political and social trust.

This is my update on two previous posts; Part one: October 4th, Deregulation Bill Clause 47 and the back door access to journalist sources and Part two: the Deregulation Bill Clause 83 from 6th Nov with additional notes on Nov 21st.

It continues with Part four to follow: The Deregulation Bill: Part IV New Hope for Regulatory powers?

*****

The amendment

Here is what it looks like:

 Page 70, line 29, at end insert—“( )     This section does not apply to the following—
 

(a)   Care Quality Commission,

(b)   Human Tissue Authority,

(c)   Medicines and Healthcare Products Regulatory Agency,

(d)   Professional Standards Authority,

(e)   General Medical Council,

(f)   Nursing and Midwifery Council,

(g)   Health and Care Professions Council,

(h)   General Chiropractic Council,

(i)   General Dental Council,

(j)   General Pharmaceutical Council,

(k)   Human Fertilisation and Embryology Authority, and

(l)   any persons exercising a regulatory function with respect to health and care service that the Secretary of State specifies by order.

( )     An order under this section must be made by statutory instrument.

( )     A statutory instrument containing an order under this section may not be  made unless a draft has been laid before, and approved by a resolution of,  each House of Parliament.”

What would the amendment change, if it becomes law?

These exceptions are specific to healthcare, and it remains to be seen if they will be adopted.

There is also some provision to make further special cases for the health and care service more broadly, which the Secretary of State specifies by order.

This addresses some organisations in the regulation of health and care.

But it opens up the question more clearly: why should other bodies be included? Where is the benefit, and where is the cost and risk analysis?

That would be a most welcome discussion in the public interest. Some professionals and professional bodies have already flagged their concern.

The Equality and Human Rights Commission (EHRC) is one example, discussed in the last debate along with the EHRC response to it. [4]

Nov 21st update: see Column GC229 – whilst verbal assurances were made, it appears nothing changed in the Bill, and the EHRC said in response:

“While we welcome this undertaking we understand that this doesn’t mean that we’ll be removed on the face of the Bill”.

The EHRC clearly sees it as detrimental and asks for change. Will the government ride roughshod over professional opinion without transparent and thorough justification of the need for this?

If so, it seems an extraordinary dismissal of democracy.

Other bodies should take the lead from the EHRC and make their positions clear in the public domain now, or risk future backlash once the impacts become clear.

What wider impact will this amendment have?

At first the effect appears to be that a significant number of health related bodies could be freed from the duty to make a profit.

At national level this seems a welcome and sensible step.

To decide which bodies should and which should not be exempt, it must be very clear exactly what impact these changes will have.

For each body involved, an impact assessment should be drawn up: what it regulates, how and why, what would change under the Deregulation Bill and the effects of its clauses, especially clause 88, and the risks and benefits.

That would help us understand today’s position.

The next step is to understand the future implications: identify which bodies will be deregulated by it in future, and why and how they will be affected by other aspects of the bill.

However it’s not the whole story.

How these bodies perform their tasks at national level and how far down their powers reach will affect the organisations below them.

These lower branches of the organisational structure also need to be understood for any regulatory implications. It needs to be clear how each function is carried out, under what powers, and at what point the removal of the requirement would have an effect.

These ten bodies are in health and social care. The future of health seems bound to social care in Simon Stevens’ vision, with ever more physical as well as financial mergers. In an interview with the Financial Times he predicted ‘a blurring of the [lines] that exist between different public services’.

Basing my understanding on CCG meeting attendance, reading ADASS minutes and general media news, it appears pooled budgetary responsibility will call for a shift of more responsibility to local authorities.

Is it therefore logical to assume that will include responsibility for regulatory functions?

Any changes at national level, in terms of organisational structure or regulatory responsibility, will therefore have an effect at lower levels.

 

So for one of the amendment’s ten organisations – taking the Care Quality Commission as an example – it is not unthinkable that change is inevitable, regardless of whether it is in or out of this clause.

 

The CQC has come in for some criticism in recent months with media stories repeating failings. Mistakes were made, with significant media coverage, on the calculations of quality ratings of GP practices.

Questions were raised in November as to the extent of the reach of the CQC surveillance powers at practice level, reviewing individual patient medical records ‘to assess the quality of care provided by the practice’ without individual consent. Professionals on social media raised their electronic eyebrows and lamented the breach of confidentiality.

What deeper impact will this have?

What would happen, should the CQC’s powers be broken up at national level and carried out at local level instead, needs to be examined.

A body having been made exempt at national level from this commercially driven clause may find that its regulatory functions are required to comply with it again at local authority level.

The reasons why the CQC should be made exempt, would therefore be lost in the transition, unless the special orders and special provision were made before any organisational restructure.

The timing therefore of new regulations would need to become integral to any departmental organisational change at any and every level of regulatory governance.

Instead of removing ‘red tape’ and bureaucracy in this bill, I foresee it adding a burden of analysis and requirement to assess and document responsibilities; determining whether or not the clause to promote economic growth should apply or not.

Its definition is so vague, and its responsibility to be ‘proportionate’ so open, that in fact it is not assigned to anybody; which, as everybody knows, means it ends up being done by nobody.

Every time any reorganisation is planned, the impact of this regulatory clause may need to be considered, and not only in health and social care but in every aspect of regulatory function across government.

Every action a regulatory body takes is by default ‘regulatory action’. So any time the function does its job – every decision, every ruling – it would need to consider the need for economic growth and whether it needs to act at all, ensuring that:

(a) regulatory action is taken only when it is needed,

and

(b) any action taken is proportionate.

Surely this is what they do already in every decision, and therefore why make it a statutory requirement at all – for any regulatory body?

If we don’t need it, why write it in? And if we do need it, what precisely is it intended to do, how, and why?

I would encourage anyone who has not yet done so to have a good look over the contents of the bill. It’s like an end-of-year sale, and there is definitely something in there for everyone. The likelihood is high that some unforeseen damage will be done to the public interest in the government’s rush to get it through in this term, akin to a Black Friday panic. The bills lined up to rush through parliament’s last-minute doors seem to be queueing in droves.

For bodies which already have regulatory functions today in health and social care at local authority level, the hoped-for reduction in harm through this amendment affecting their national-level body could fail to materialise.

The high-level health and social care bodies may get “let off” the duty in explicit terms in the bill, but if the function is performed at another level, “on the ground”, the requirement will in effect still apply under the radar.
  
Here is at least a starting point to go deeper into who regulates what at local authority level. [6] Imagine each and every regulatory function trying to consider the importance for the promotion of economic growth of exercising the regulatory function in a way which ensures that —

(a) regulatory action is taken only when it is needed,

and

(b) any action taken is proportionate.

How, as another example, will the Local Government Ombudsman make a profit but not put that before the people it serves?
Its role is managing complaints about councils and some other authorities and organisations, including education admissions appeal panels and adult social care providers. How does one justify exploiting that for profit?

 

With purdah and the general election drawing near,  this may be a question with an unpredictable answer for many organisations if their future structural model is uncertain.

 

The backdrop

 

There are various other bills in progress to do with regulation which involve communications and data and, by implication, potentially journalists’ sources. They are also affected by clause 47 in the Deregulation Bill, which the NUJ protested in 2014 [more in my next post].
A press free from political control and undue regulation is something to be held dear, and indeed Guido Fawkes has this week experienced attempts to control it, by the Electoral Commission:

 

“Guido has no intention of registering with the Electoral Commission or reporting a penny of spending or anything else to them. This authoritarian law is a nonsense. If you read the guidance it should apply to newspapers. We haven’t just rejected statutory control of the printed press by one regulator for political control of digital media by another.”

Here we arrive at the nub of the issue: what is to be deregulated and why and by whom are fundamental to understand what effects these changes will require, and the demands the duty for economic growth will create.

I question: can this dramatic change really be a wise and thoroughly thought-out course of action, when the only certainty in the affected organisations’ governance duties is that in fewer than five months, it may all change?
Had all the background and assessments been done already, one could understand pressing on to complete it. But the fact that this significant amendment has been proposed now surely shows that an adequate cost-benefit and risk assessment does not exist. Does it not exist only for these ten, or for all?

 

All sorts of areas of public interest are affected, with questions being asked on everything from private tenancy changes to the very Electoral Commission itself.

 

In the run-up to the election, will it be asked to become a profit-driven entity, instead of prioritising its key focus, the regulation of our democratic processes?

 

“These roles and responsibilities outline much of the work we do in order to meet our objectives of:

  • well-run elections, referendums and electoral registration
  • transparency in party and election finance, with high levels of compliance”

How will the Electoral Commission maintain neutrality if profit must drive its function as the regulator of political funding and spending?

That decision could have an almighty and lasting effect on public confidence and our trust, in the wake of the MP expenses scandals.

Without a publicly available, clear cost-benefit analysis, the overwhelming drive for profit in every sector of UK regulatory reach remains at best unclear. Evidence of the intended benefits, or whether they will even create any efficiencies, never mind public gain, is lacking.

At worst, “are these new clauses a licence for regulators to approve regulations that kill people to save money?”

****

Key references:

[1] Proposed amendment by Lord Hunt of Kings Heath in the Deregulation bill.

[2] The Deregulation Bill

[3] Hansard, February 3 2014, MPs propose removal of clause

[4] Hansard, November 20th 2014, EHRC comments included in Lords’ debate

[5] Public Health functions under Local Authority

[6]  Local Authority regulatory functions

********

List of The National Regulators – the ten bodies  above are those explicitly mentioned in Lord Hunt of King’s Heath’s amendment:

Animal Health and Veterinary Laboratories Agency (AHVLA)

Animals in Science Regulation Unit

Architects Registration Board (ARB)

British Hallmarking Council (BHC)

Care Quality Commission (CQC)

Charity Commission for England and Wales

Civil Aviation Authority (CAA)

Claims Management Regulation Unit

Coal Authority

Companies House

Competition Commission

Professional Standards Authority for Health and Social Care (PSA)

Disclosure and Barring Service (DBS)

Drinking Water Inspectorate (DWI)

Driver and Vehicle Licensing Agency (DVLA)

Driving Standards Agency (DSA)

Employment Agency Standards Inspectorate (EAS)

English Heritage (EH)

Environment Agency

Equality and Human Rights Commission

Financial Reporting Council (FRC)

Fish Health Inspectorate (FHI), Centre for Environment, Fisheries and Aquaculture Science (Cefas)

Food and environment research agency (plant and bee health) and (Plant Variety and Seeds)

Food Standards Agency (FSA)

Forestry Commission

Gambling Commission

Gangmasters Licensing Authority (GLA)

General Medical Council

General Chiropractic Council

General Dental Council

General Pharmaceutical Council

Health and Safety Executive (HSE)

Higher Education Funding Council for England (HEFCE)

Highways Agency (HA)

HM Revenue and Customs (Money Laundering Regulations and National Minimum Wage)

Homes & Communities Agency (HCA)

Human Fertilisation and Embryology Authority (HFEA)

Human Tissue Authority (HTA)

Information Commissioner’s Office (ICO)

Insolvency Service including Insolvency Practitioner Unit

Intellectual Property Office (IPO)

Legal Services Board (LSB)

Marine Management Organisation (MMO)

Maritime and Coastguard Agency (MCA)

Medicines and Healthcare Products Regulatory Agency (MHRA)

Monitor

National Measurement Office (NMO)

Natural England

Nursing and Midwifery Council

Office of Communications

Office for Fair Access (OFFA)

Office for Nuclear Regulation (ONR)

Office for Standards in Education, Children’s Services and Skills (OFSTED)

Office of Fair Trading

OFQUAL

Office of Rail Regulation (ORR)

Office of the Regulator of Community Interest Companies

OFGEM

Pensions Regulator

Rural Payments Agency (RPA)

Security Industry Authority (SIA)

Senior Traffic Commissioner

Sports Grounds Safety Authority (SGSA)

Trinity House Lighthouse Service (THLS)

UK Anti-Doping (UKAD)

Vehicle and Operator Services Agency (VOSA)

Vehicle Certification Agency (VCA)

Veterinary Medicines Directorate (VMD)

***

Please feel free to comment below or find me on twitter @TheABB