
Datasharing, lawmaking and ethics: power, practice and public policy

“Lawmaking is the Wire, not Schoolhouse Rock. It’s about blood and war and power, not evidence and argument and policy.”

"We can't trust the regulators," they say. "We need to be able to investigate the data for ourselves." Technology seems to provide the perfect solution. Just put it all online – people can go through the data while trusting no one. There's just one problem. If you can't trust the regulators, what makes you think you can trust the data?

Extracts from The Boy Who Could Change the World: The Writings of Aaron Swartz. Chapter: ‘When is Technology Useful?’, June 2009.

The question keeps getting asked: is the concept of ethics obsolete in Big Data?

I’ve come to some conclusions about why ‘Big Data’ use keeps pushing the boundaries of what many people find acceptable, and yet the people doing the research, the regulators and lawmakers often express surprise at negative reactions. Some even express disdain for public opinion, dismissing it as ignorant, as not ‘understanding the benefits’, as yet to be convinced. And I’ve decided why I think what is considered ‘ethical’ in data science does not meet public expectation.

It’s not about people.

Researchers using large datasets often have a foundation in data science, applied computing or maths, and don’t see data as people. It’s only data. Creating patterns, correlations, and analyses of individual-level data are not seen as research involving human subjects.

This is embodied in the countless research ethics reviews I have read in the last year in which the question is asked: does the research involve people? The answer given is invariably ‘no’.

And these data analysts, using, let’s say, health data, are not working in a discipline founded on ethical principles, in contrast with the medical world the data come from.

The public feels differently about information that is about them, and may be known only to them or select professionals. The values that we as the public attach to our data, and our expectations of its handling, may reflect the expectations we have of the handling of us as the people connected to it. We see our data as all about us.

The values that are therefore put on data, and on how it can and should be used, can be at odds with one another; the public perception is not reciprocated by the researchers. This may be especially true if researchers are using data which have been de-identified, although they may not be anonymous.

New legislation on the horizon, the Better Use of Data in Government, intends to fill the [loop]hole between what was legal to share in the past and what some want to exploit today, and emphasises a gap between the uses of data by public interest, academic researchers, and uses by government actors. The former by and large incorporate privacy and anonymisation techniques by design; the latter are designed for applied use of identifiable data.

Government departments and public bodies want to identify and track people who are somehow misaligned with the values of the system; either through fraud, debt, Troubled Families, or owing Student Loans. All highly sensitive subjects. But their ethical data science framework will not treat them as individuals, but only as data subjects. Or as groups who share certain characteristics.

The system again intrinsically fails to see these uses of data as being about individuals, but sees them as categories of people – “fraud” “debt” “Troubled families.” It is designed to profile people.

Services that weren’t built for people, but for government processes, result in datasets used in research that aren’t well designed for research. So we now see attempts to shoehorn historical practices into data use by modern data science practitioners, with policy that is shortsighted.

We can’t afford for these things to be so off axis, if civil service thinking is exploring “potential game-changers such as virtual reality for citizens in the autism spectrum, biometrics to reduce fraud, and data science and machine-learning to automate decisions.”

In an organisation such as DWP this must be really well designed, since “the scale at which we operate is unprecedented: with 800 locations and 85,000 colleagues, we’re larger than most retail operations.”

The power to affect individual lives through poor technology is vast, and some impacts seem to be being badly ignored. The ‘real time earnings’ database, intended to improve the accuracy of benefit payments, was widely agreed to have been harmful to some individuals through the Universal Credit scheme, with delayed payments meaning families at foodbanks, and contributing to worse.

“We believe execution is the major job of every business leader,” is perhaps not the best wording when it comes to DWP data uses.

What accountability will be built in, by design?

I’ve been thinking recently about drawing a social ecological model of personal data empowerment or control. Thinking about visualisation of wants, gaps and consent models, to show rather than tell policy makers where these gaps exist in public perception and expectations, policy and practice. If anyone knows of one on data, please shout. I think it might be helpful.

But the data *is* all about people

Regardless of whether they are in front of you or numbers on a screen, big or small datasets using data about real lives are data about people. And that triggers a need to treat the data with the same ethical approach you would apply to people involved face-to-face.

Researchers need to stop treating data about people as meaningless data, because that’s not how people think about their own data being used. Not only that, but if the whole point of your big data research is to have impact, your data outcomes will change lives.

Tosh, I know some say. But, I have argued, the reason is that the applications of the data science / research / policy findings / impact of immigration in education review / [insert purposes of the data user’s choosing] are designed to have an impact on people. Often the research is done on people without their knowledge or consent. And while most people say that is OK where it’s public interest research, the possibilities are outstripping what the public has expressed as acceptable, and few seem to care.

Evidence from public engagement and ethics all says that hidden pigeon-holing, profiling, is unacceptable. Data Protection law has special requirements for it, on automated decisions. ‘Profiling’ is now clearly defined under Article 4 of the GDPR as “any form of automated processing of personal data consisting of using those data to evaluate certain personal aspects relating to a natural person, in particular to analyse or predict aspects concerning that natural person’s performance at work, economic situation, health, personal preferences, interests, reliability, behaviour, location or movements.”
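To make the Article 4 definition concrete, here is a minimal, entirely hypothetical sketch of what such profiling looks like in practice: the function name, inputs and weights are all invented for illustration, but the shape – automated processing of personal data to evaluate someone’s economic situation and behaviour – is exactly what the definition describes.

```python
# Hypothetical illustration of GDPR Article 4 'profiling': automated
# processing of personal data to evaluate personal aspects of a person.
# All names and weights below are invented for illustration only.
def credit_risk_profile(person: dict) -> str:
    """Scores a data subject on behaviour and economic situation -
    an automated evaluation, with no human in the loop."""
    score = 0
    score += 2 if person["missed_payments"] > 1 else 0
    score += 1 if person["postcode_deprivation_rank"] > 7 else 0
    score += 1 if person["employment_months"] < 6 else 0
    return "refer" if score >= 2 else "approve"

# The person never sees this evaluation happen, yet it decides an outcome.
print(credit_risk_profile(
    {"missed_payments": 2, "postcode_deprivation_rank": 8, "employment_months": 24}
))  # refer
```

Nothing here needs a name or address to run, which is precisely the point: profiling operates on attributes, not identities.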

Using big datasets for research that ‘isn’t interested in individuals’ may still intend to create results profiling groups for applied policing, or to discriminate, by making knowledge available by location. The data may have been de-identified, but in application they are no longer anonymous.

Big Data research that results in profiling groups may be intended for good, for applied health policy impacts; improving a particular ethnic minority’s access to services, for example, may be the very point of the research.

Then look at the voting process changes in North Carolina and see how that same data, the same research knowledge might be applied to exclude, to restrict rights, and to disempower.

Is it possible to have ethical oversight that can protect good data use and protect people’s rights if they conflict with the policy purposes?

The “clear legal basis” is not enough for public trust

Data use can be legal and still be unethical, harmful and shortsighted in many ways, both for the impacts on research – in terms of people withholding data, falsifying data, and avoiding the system so as not to give in data – and for the lives it will touch.

What education has to learn from health is whether it will permit the uses by ‘others’ outside education to jeopardise the collection of school data intended in the best interests of children, not the system. In England it must start to analyse what is needed vs wanted. What is necessary and proportionate and justifies maintaining named data indefinitely, exposed to changing scope.

In health, the most recent Caldicott review suggests scope change by design – that is a red line for many: “For that reason the Review recommends that, in due course, the opt-out should not apply to all flows of information into the HSCIC. This requires careful consideration with the primary care community.”

The community spoke out already, and strongly, in spring and summer 2014: there must be an absolute right to confidentiality to protect patients’ trust in the system. Scope that ‘sounds’ like it might sneakily change in future will be a death knell to public interest research, because repeated trust erosion will be fatal.

Laws change to allow scope change without informing people whose data are being used for different purposes

Regulators must be seen to be trusted if the data they regulate are to be trustworthy. Laws and regulators that plan scope for the future watering down of public protection water down public trust from today. Unethical policy and practice will not be saved by pseudo-data-science ethics.

Will those decisions in private political rooms be worth the public cost to research, to policy, and to the lives it will ultimately affect?

What happens when the ethical black holes in policy, lawmaking and practice collide?

At the last UK HealthCamp towards the end of the day, when we discussed the hard things, the topic inevitably moved swiftly to consent, to building big databases, public perception, and why anyone would think there is potential for abuse, when clearly the intended use is good.

The answer came back from one of the participants, “OK now it’s the time to say. Because, Nazis.” Meaning, let’s learn from history.

Given the state of UK politics, Go Home van policies, restaurant raids, the possibility of Trump getting access to UK sensitive data of all sorts from across the Atlantic, given recent policy effects on the rights of the disabled and others, I wonder if we would hear the gentle laughter in the room in answer to the same question today.

With what is reported as Whitehall’s digital leadership sharp change today, the future of digital in government services and policy and lawmaking does indeed seem to be more “about blood and war and power,” than “evidence and argument and policy“.

The concept of ethics in datasharing using public data in the UK is far from becoming obsolete. It has yet to begin.

We have ethical black holes in big data research, in big data policy, and in big data practices in England. The conflicts between public interest research and government uses of population-wide datasets, the gaps between how the public perceives the use of our data and how the data are actually used, and the tensions in policy and practice are all there.

We are simply waiting for the Big Bang. Whether it will be creative, or destructive we are yet to feel.

*****

image credit: LIGO – graphical visualisation of black holes on the discovery of gravitational waves

References:

Report: Caldicott review – National Data Guardian for Health and Care Review of Data Security, Consent and Opt-Outs 2016

Report: The OneWay Mirror: Public attitudes to commercial access to health data

Royal Statistical Society Survey carried out by Ipsos MORI: The Data Trust Deficit

Gotta know it all? Pokémon GO, privacy and behavioural research

I caught my first Pokémon and I liked it. Well, OK, someone else handed me a phone and insisted I have a go. Turns out my curve ball is pretty good. Pokémon GO is enabling all sorts of new discoveries.

Discoveries reportedly including a dead man, robbery, picking up new friends, and scrapes and bruises. While players are out hunting anime in augmented reality, enjoying the novelty, and discovering interesting fun facts about their vicinity, Pokémon GO is gathering a lot of data. It’s influencing human activity in ways that other games can only envy, taking in-game interaction to a whole new level.

And it’s popular.

But what is it learning about us as we do it?

This week questions have been asked about the depth of interaction that the app gets by accessing users’ log in credentials.

What I would like to know is what access goes in the other direction?

Google, heavily invested in AI and Machine intelligence research, has “learning systems placed at the core of interactive services in a fast changing and sometimes adversarial environment, combinations of techniques including deep learning and statistical models need to be combined with ideas from control and game theory.”

The app, which is free to download, has raised concerns over suggestions the app could access a user’s entire Google account, including email and passwords. Then it seemed it couldn’t. But Niantic is reported to have made changes to permissions to limit access to basic profile information anyway.

If Niantic gets access to data owned by Google through its use of Google log-in credentials, does Niantic’s investor, Google’s Alphabet, get the reverse – user data from the Google log-in interaction with the app – and if so, what does Google learn through the interaction?

Who gets access to what data and why?

Brian Crecente writes that Apple, Google and Niantic are likely making more on Pokémon Go than Nintendo, with 30 percent of revenue from in-app purchases on their online stores.

Next stop is to make money from marketing deals between Niantic and the offline stores used as in-game focal points, gyms and more, according to Bryan Menegus at Gizmodo, who reported Redditors had discovered decompiled code in the Android and iOS versions of Pokémon Go earlier this week “that indicated a potential sponsorship deal with global burger chain McDonald’s.”

The logical progression of this is that the offline store partners, i.e. McDonald’s and friends, will be making money from players – the people who get led to their shops, restaurants and cafes, where players will hang out longer than at the Pokéstop, because the human interaction with other humans, the battles between your collected creatures, and teamwork are at the heart of the game. Since you can’t visit gyms until you are level 5 and have chosen a team, players are building up profiles over time and getting social in real life. This is location data that may build up patterns about the players.

This evening the two players that I spoke to were already real-life friends on their way home from work (that now takes at least an hour longer every evening) and they’re finding the real-life location facts quite fun, including that thing they pass on the bus every day, and umm, the Scientology centre. Well, more about that later**.

Every player I spotted looking at the phone with that finger-flick action gave themselves away with shared wry smiles. All thirty-something men. There is possibly something of a legacy in this, they said, since the initial Pokémon game released 20 years ago is drawing players who were tweens then.

Since the app is online and open to all, children can play too. What this might mean for them in the offline world is something the NSPCC picked up on here before the UK launch. Its focus of concern is the physical safety of young players, citing the risk of misuse of in-game lures. I am not sure how much of an increased risk this is compared with existing scenarios, and whether children will be increasingly unsupervised or not. It’s not a totally new concept. Players of all ages must be mindful of where they are playing**. People getting together in the small hours of the night has generated some stories which, for now, are mostly fun. (Go Red Team.) Others are worried about hacking. And it raises all sorts of questions if private and public space has become a Pokéstop.

While the NSPCC includes considerations on the approach to privacy in a recent, more general review of apps, it hasn’t yet mentioned the less obvious considerations of privacy and ethics in Pokémon GO: encouraging anyone, but particularly children, out of their home or protected environments and into commercial settings with the explicit aim of targeting their spending. This is big business.

Privacy in Pokémon GO

I think we are yet to see a really transparent discussion of the broader privacy implications of the game because the combination of multiple privacy policies involved is less than transparent. They are long, they seem complete, but are they meaningful?

We can’t see how they interact.

Google has crowdsourced the collection of real-time traffic data via mobile phones. Geolocation data from Google Maps using GPS data, as well as network provider data, seem necessary to display the street data to players. Apparently you can download and use the maps offline, since Pokémon GO uses the Google Maps API. Google goes to “great lengths to make sure that imagery is useful, and reflects the world our users explore.” In building a Google virtual reality copy of the real world, how data are also collected, and will be used, about all of us who live in it is a little woolly to the public.

U.S. Senator Al Franken is apparently already asking Niantic these questions. He points out that Pokémon GO has indicated it shares de-identified and aggregate data with other third parties for a multitude of purposes, but does not describe the purposes for which Pokémon GO would share or sell those data.

It’s widely recognised that anonymisation in many cases fails, so passing on only anonymised data may sound reassuring but fail in reality. Stripping out what are considered individual personal identifiers, in data protection terms, can still leave individuals with unique characteristics, or leave people profiled as groups.
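A minimal sketch, using invented records, shows why stripping direct identifiers is not enough: the remaining “quasi-identifiers” – here postcode district, birth year and sex – can still single a person out. This is the standard k-anonymity idea; the data and field names below are hypothetical.

```python
# Sketch: de-identified is not the same as anonymous. Names are stripped,
# yet a unique combination of quasi-identifiers can still re-identify someone.
from collections import Counter

records = [
    {"postcode": "SW1A", "birth_year": 1959, "sex": "F", "diagnosis": "asthma"},
    {"postcode": "SW1A", "birth_year": 1959, "sex": "F", "diagnosis": "diabetes"},
    {"postcode": "NE65", "birth_year": 1947, "sex": "M", "diagnosis": "angina"},
]

def k_anonymity(rows, quasi_ids):
    """Smallest group size over the quasi-identifier combinations.
    k = 1 means at least one record is unique, hence re-identifiable."""
    groups = Counter(tuple(r[q] for q in quasi_ids) for r in rows)
    return min(groups.values())

k = k_anonymity(records, ["postcode", "birth_year", "sex"])
print(k)  # 1 - the NE65 record is unique despite carrying no name
```

In a real population-scale dataset the same check routinely finds many k = 1 records, which is the gap between the legal label ‘de-identified’ and the public’s understanding of ‘anonymous’.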

Opt-out, he feels, is inadequate as a consent model for the personal and geolocational data that the app is collecting and passing to others in the U.S.

While the app provider would, I’m sure, argue that the UK privacy model respects the European opt-in requirement, I would be surprised if many have read it. Privacy policies fail.

Poor practices must be challenged if we are to preserve the integrity of controlling the use of our data and knowledge about ourselves. Being aware of to whom we have ceded control of marketing to us, or of influencing how we interact with our environment, is at least a step towards not blindly giving up control of free choice.

The Pokémon GO permissions – “for the purpose of performing services on our behalf”, “third party service providers to work with us to administer and provide the Services” and “also use location information to improve and personalize our Services for you (or your authorized child)” – are so broad that they could mean almost anything. They can also be changed without any notice period. They are therefore pretty meaningless. But it’s the third parties’ connection, the data collection in passing, that is completely hidden from players.

If we are ever to use privacy policies as meaningful tools to enable consent, then they must be transparent to show how a chain of permissions between companies connect their services.

Otherwise they are no more than get-out-of-jail-free cards for the companies that trade our data behind the scenes, if we were ever to claim for its misuse. Data collectors must improve transparency.

Behavioural tracking and trust

Covert data collection and interaction is not conducive to user trust, whether through a failure to communicate by design or not.

Combining location data and behavioural data to measure footfall is described as “the holy grail for retailers and landlords alike”, and it is valuable. “Pavement opportunity” data may be sent anonymously, but if their analysis and storage provide ways to pitch to people – even without knowing who they are individually – or to groups of people, it is discriminatory and potentially invisibly predatory. The pedestrian, or the player, Jo Public, is a commercial opportunity.

Pokémon GO has potential to connect the opportunity for profit makers with our pockets like never before. But they’re not alone.

Who else is getting our location data that we didn’t sign up for sharing “in 81 towns and cities across Great Britain”?

Whether footfall outside the shops or packaged as a game that gets us inside them, public interest researchers and commercial companies alike both risk losing our trust if we feel used as pieces in a game that we didn’t knowingly sign up to. It’s creepy.

For children the ethical implications are even greater.

There are obligations to meet higher legal and ethical standards when processing children’s data and presenting them marketing. Parental consent requirements fail children for a range of reasons.

So far, the UK has said it will implement the EU GDPR. Clear and affirmative consent is needed. Parental consent will be required for the processing of personal data of children under age 16. EU Member States may lower the age requiring parental consent to 13, so what that will mean for children here in the UK is unknown.

The ethics of product placement and marketing rules to children of all ages go out the window however, when the whole game or programme is one long animated advert. On children’s television and YouTube, content producers have turned brand product placement into programmes: My Little Pony, Barbie, Playmobil and many more.

Alice Webb, Director of BBC Children’s and BBC North, looked at some of these challenges, as the BBC considers how to deliver content for children whilst adapting to technological advances, in this LSE blog and in the publication of a new policy brief about families and ‘screen time’ by Alicia Blum-Ross and Sonia Livingstone.

So is this augmented reality any different from other platforms?

Yes because you can’t play the game without accepting the use of the maps and by default some sacrifice of your privacy settings.

Yes, because the ethics and implications of putting kids not simply in front of a screen that pitches products to them, but physically into the place where they can consume products – if the McDonald’s story is correct and a taster of what will follow – are huge.

Boundaries between platforms and people

Blum-Ross says, “To young people, the boundaries and distinctions that have traditionally been established between genres, platforms and devices mean nothing; ditto the reasoning behind the watershed system with its roots in decisions about suitability of content.”

She’s right. And if those boundaries and distinctions mean nothing to providers, then we must have that honest conversation with urgency. With our contrived consent, walking and running and driving without coercion, we are being packaged up and delivered right to the door of for-profit firms, paying for the game with our privacy. Smart cities are exploiting street sensors to do the same.

Free will is at the very heart of who we are: “The ability to choose between different possible courses of action. It is closely linked to the concepts of responsibility, praise, guilt, sin, and other judgments which apply only to actions that are freely chosen.” Free choice of where we shop, what we buy and who we interact with is open to influence. Influence that is not entirely transparent presents an opportunity for hidden manipulation. While the NSPCC might be worried about the risk of rare physical threat, the potential for influencing all children’s behaviour, both positive and negative, reaches everyone.

Some stories of how behaviour is affected are heartbreakingly positive. And I met and chatted with complete strangers who shared the joy of something new and a mutual curiosity about the game. Pokémon GO is clearly a lot of fun. It’s also unclear on much more.

I would like to explicitly understand if Pokémon GO is gift packaging behavioural research by piggybacking on the Google platforms that underpin it, and providing linked data to Google or third parties.

Fishing for frequent Pokémon encourages players to ‘check in’ and keep that behaviour tracking live. 4pm caught a Krabby in the closet at work. 6pm another Krabby. Yup, still at work. 6.32pm Pidgey on the street outside ThatGreenCoffeeShop. Monday to Friday.
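That fictional check-in diary is all a tracker needs. A minimal sketch, with invented place names and timestamps, shows how trivially a daily routine – and a likely workplace – falls out of a handful of timestamped location events:

```python
# Hypothetical sketch: inferring a routine from timestamped check-ins,
# the kind a location-based game logs in passing. Data are invented.
from collections import Counter
from datetime import datetime

checkins = [
    ("2016-07-11 16:00", "OfficeBlock"),   # Monday afternoon
    ("2016-07-11 17:55", "OfficeBlock"),
    ("2016-07-11 18:32", "CoffeeShop"),    # after work, same street
    ("2016-07-12 16:05", "OfficeBlock"),   # Tuesday, same pattern
    ("2016-07-12 18:31", "CoffeeShop"),
]

def likely_workplace(events):
    """Most frequent location seen on weekdays during working hours (9-18)."""
    counts = Counter(
        place for ts, place in events
        if (dt := datetime.strptime(ts, "%Y-%m-%d %H:%M")).weekday() < 5
        and 9 <= dt.hour < 18
    )
    return counts.most_common(1)[0][0]

print(likely_workplace(checkins))  # OfficeBlock
```

The same few lines, pointed at evening events instead, would guess a home neighbourhood; no identity, consent dialogue or content of play is needed, only the timestamps and locations generated as a side effect of the game.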

The Google privacy policies, changed in the last year, require ten clicks for opt-out and, in part, the download of an add-on. Google has our contacts, calendar events, web searches and health data, has invested in our genetics, and has all the ‘things that make you “you”’. They have our history, and are collecting our present. Machine intelligence work on prediction is the future. For now, perhaps that will be pinging you with a ‘buy one get one free’ voucher at 6.20, or LCD adverts shifting as you drive back home.

Pokémon GO doesn’t have to include what data Google collects in its privacy policy. It’s in Google’s privacy policy. And who really read that when it came out months ago, or knows what it means in combination with new apps and games we connect it with today? Tracking and linking data on geolocation, behavioural patterns, footfall, whose other phones are close by, who we contact, and potentially even our spend from Google Wallet.

Have Google and friends of Niantic gotta know it all?

EU do the Hokey Cokey. In out, In out, shake it all about.

The IN and the OUT circles have formed and Boris is standing in the middle.

Speculation as to whether he means to be on the No team agrees that he doesn’t really mean no.

His comments on staying in have, in the past, more consistently pointed to his head saying staying in is better for business.

Some are suggesting that his stance is neither in nor out, but ‘an unconvincing third way‘: no doesn’t mean no to staying in, but no to Cameron’s deal, and would in fact mean a second vote along the lines of Dominic Cummings’.

If so, is this breathtaking arrogance and a U-turn of unforeseeable magnitude? Had our PM not said before the last GE that he planned on stepping down before the end of the next Parliament, you could think so. But this way they cannot lose.

This is in fact bloody brilliant positioning by the whole party.

A yes vote underpins Cameron’s re-negotiation as ‘the right thing to do’, best for business and his own statesmanship while showing that we’re not losing sovereignty because staying in is on our terms.

Renegotiating our relationship with the EU was a key Conservative election promise.

This pacifies the majority of that part of the population which wants out of some of the EU ‘controlling our country’ and of being beholden to EU law, but it keeps us stable and financially secure.

The hardline Out campaigners are seen as a bag of all-sorts that few are taking that seriously. But then comes Boris.

So now there is some weight in the out circle and if the country votes ‘No’ a way to manage the outcome with a ready made leader in waiting. But significantly, it’s not a consistent call for Out across the group. Boris is not for spinning in the same clear ‘out’ direction as the Galloway group.

Boris can keep a foot in the circle, saying his heart is pro-In and really wants In, but on better terms. He can lead a future party for Outers and Inners and, whatever the outcome, be seen to welcome all. Quite a gentleman’s agreement, perhaps.

His Out just means out of some things, not others. Given all his past positioning and role as Mayor in the City of London, out wouldn’t mean wanting to risk any of the financial and business related bits.

So what does that leave? Pay attention in his speech to the three long paragraphs on the Charter of Fundamental Human Rights.

His rambling explanation indirectly explained quite brilliantly the bits he wants ‘out’ to mean. Out means in the Boris-controlled circle, only getting out from those parts of EU directives that the party players don’t like. The bits when they get told what to do, or told off for doing something wrong, or not playing nicely with the group.

The human rights rulings and oversight from the CJEU, or views which are not aligned with the ECHR, for example.

As Joshua Rozenberg wrote on sovereignty, “Human rights reform has been inextricably intertwined with renegotiating the UK’s membership of the EU. And it is all the government’s fault.”

Rozenberg writes that Mr Gove told the Lords constitution committee in December that David Cameron asked him whether “we should use the British Bill of Rights in order to create a constitutional long stop […] and whether the UK Supreme Court should be that body.”

“Our top judges were relabelled a “Supreme Court” not long ago; they’ve been urged to assert themselves against the European Court of Human Rights, and are already doing so against EU law”, commented Carl Gardner elsewhere.

The Gang of Six cabinet ministers are known for their anti-EU disaffection, most often attached to human rights – Michael Gove, Iain Duncan Smith, Chris Grayling, Theresa Villiers, Priti Patel and John Whittingdale – plus a further 20 junior ministers and potentially dozens of backbenchers.

We can therefore expect the Out campaign to present situations in which British ‘sovereignty’ was undermined by court rulings that some perceive as silly or seriously flawed.

Every case in which a British court ever convicted someone and was overturned ‘by Europe’ that appeared nonsensical will be wheeled out by current justice Secretary Mr Gove.

Every tougher ‘terrorist’-type case, in which human rights denied by a UK ruling were upheld, might be in the more ‘extreme’ remit of the former justice secretary, mentioned whenever Grayling makes his case for Out, especially where opinions may conflict with interpretations of the EU Charter.

Priti Patel has tough views on crime and punishment, reportedly in favour of the death penalty.

IDS isn’t famous for a generous application of disability rights.

John Whittingdale gave his views on the present relationship with the EU and CJEU here in debate in 2015 and said (22:10) he was profoundly “concerned the CJEU is writing laws which we consider to be against our national interest.”

Data protection and privacy are about to get new EU legislation that will strengthen some aspects of citizens’ data rights – things like the right to be informed what information is stored about us, or to have mistakes corrected.

Don’t forget after all that Mr Gove is the Education SoS who signed off giving away the confidential personal data of now 20 million children to commercial third parties from the National Pupil Database. Clearly not an Article 8 fan.

We are told that we are being over reactive to our loss of rights to privacy. Over generous in protections to people who don’t deserve it. Ignoring that rights are universal and indivisible, we are encouraged to see them as something that must be earned. As such, something which may or may not be respected. And therefore can be removed.

Convince the majority of that, and legislation underpinning our rights will be easier to take away without enough mass outcry that will make a difference.

To be clear, a no vote would make no actual legal difference, “Leaving the EU (if that’s what the people vote for) is NOT at all inconsistent with the United Kingdom’s position as a signatory to the European Convention on Human Rights (ECHR), a creature of the COUNCIL of EUROPE and NOT the European Union.” [ObiterJ]

But by conflating ‘the single market’, ‘the Lisbon Treaty’, and the ‘ability to vindicate people’s rights under the 55-clause “Charter of Fundamental Human Rights”, Boris has made the no vote again equate conflated things: European Union membership = loss of sovereignty = need to reduce the control or influence of all organisations seen as ‘European’ (even if like the ECHR it’s to do with the Council of Europe Convention signed post WWII and long before EU membership) and all because we are a signatory to a package deal.

Boris has reportedly no love of ‘lefty academics’ standing up for international and human rights laws and their uppity lawyers in the habit of “vindicating the rights of their clients.”

Boris will bob in and out of both the IN group for business and the OUT group for sovereignty, trying not to fall out with anyone too much, and giving serious statesmanship to the injustices inflicted on the UK. There will be banter and backbiting, but party views will be put ahead of personalities.

And the public? What we vote won’t really matter. I think we’ll be persuaded to be IN, or to get a two-step Out-In.

Either way it will give the relevant party leader, present or future, the mandate to do what he wants. Our engagement is optional.

Like the General Election, the people’s majority viewed as a ‘mandate’ seems to have become a little confused with sign-off to dictate a singular agenda, rather than to represent a real majority. It can hardly be otherwise, since the majority didn’t vote for the government that we have.

In this EU Referendum, No won’t mean No. It’ll mean a second vote, to be able to split the package of no-to-all-things into a no-to-some-things, wrapped up in layers of ‘sovereignty’ discussion. Unpack them, and those things will be, for the most part, human rights things. How they will then be handled at a later date is utterly unclear, but the mandate will have been received.

Imagine if Boris can persuade enough of the undecideds that he is less bonkers than some of the EU rulings on rights. He’ll perhaps get an Out mandate, possibly meaning a second vote just to be sure: splitting off the parts everyone obviously wants to protect, the UK business interests, and allowing the government to negotiate an opt-out from legislation protecting human rights. Things that may appear to make more people dependent on the state, and contrary to the ideology of shrinking state support.

A long-promised review of the British Human Rights Act 1998 will inevitably follow, and only makes sense if we are first made exempt from the European umbrella.

Perhaps we will hear over the next four months more about what that might mean.

Either way, the Out group will I’m sure take the opportunity to air their views, and demand a shake-up wherever human rights laws are out of line with the shape of the future UK nation they wish to see us become.

Some suggest Boris has made a decision that will cost him his political career. I disagree. I think it’s incredibly clever. Not a conspiracy, simply clever party planning to make every outcome a win for the good of the party and the good of the nation,  and a nod to Boris as future leader in any case. After all, he didn’t actually say he wanted #Brexit, just reform.

It’s not all about Boris, but is staging party politics at its best, and simultaneously sets the scene for future change in the human rights debate.

Breaking up is hard to do. Restructuring education in England.

This Valentine’s I was thinking about the restructuring of education in England and its wide ranging effects. It’s all about the break up.

The US EdTech market is very keen to break into the UK, and our front door is open.

We have adopted the model of Teach First, partnered with Teach for America, while some worry we do not ask “What is education for?”

Now we hear the next chair of Ofsted is to be sought from the US, someone who is renowned as “the scourge of the unions.”

Should we wonder how long until the management of schools themselves is US-sourced?

The education system in England has been broken up in recent years into manageable parcels  – for private organisations, schools within schools, charity arms of commercial companies, and multi-school chains to take over – in effect, recent governments have made reforms that have dismantled state education as I knew it.

Just as the future vision of education outlined in Direct Democracy, the 2005 publication co-authored by Michael Gove, said: “The first thing to do is to make existing state schools genuinely independent of the state.”

Free schools, touted as giving parents the ultimate in choice, are in effect another way to nod approval to the outsourcing of the state into private hands, and into big chains. Despite seeing the model fail spectacularly abroad, the government seems set on the same here.

Academies, the route that finagles private corporations into running public education, are the preferred model, says Mr Cameron. While there are no plans to force schools to become academies, the legislation currently in ping-pong under the theme of coasting schools enables just that. The Secretary of State can impose academisation – albeit only on schools Ofsted has labelled ‘failing’.

What ‘fails’ sometimes appears to be a school that staff and parents cannot understand as anything less than good, but small. While small can be what parents want, small pupil-teacher ratios mean higher costs per pupil. But the direction of growth is towards ‘big is better’.

“There are now 87 primary schools with more than 800 pupils, up from 77 in 2014 and 58 in 2013. The number of infants in classes above the limit of 30 pupils has increased again – with 100,800 pupils in these over-sized classes, an increase of 8% compared with 2014.” [BBC]

All this restructuring creates costs about which the Department wants to be less than transparent, and of which it has lost track.

If only we could see that these new structures raised standards. But, “while some chains have clearly raised attainment, others achieve worse outcomes creating huge disparities within the academy sector.”

If not delivering better results for children, then what is the goal?

A Valentine’s view of Public Service Delivery: the Big Break up

Breaking up the State system, once perhaps unthinkable, is possible through the creation of ‘acceptable’ public-private partnerships (as opposed to outright privatisation per se). Schools become academies through a range of providers and different pathways, at least to start with, and as some fail, the most successful become the market leaders in an oligopoly. Ultimately perhaps, this could become a near monopoly. Delivering ‘better’. Perhaps a new model, a new beginning: a new provider offering salvation from the flood of ‘failing’ schools, coming to the State’s rescue.

In order to achieve this entry to the market by outsiders, you must first remove conditions seen as restrictive, giving more ‘freedom’ to providers: to cut corners and make efficiency savings on things like food standards, the required curriculum, and the number of staff, or their pay.

And what if, as a result, staff leave, or are hard to recruit?

Convincing people that “tech” and “digital” will deliver cash savings and teach required skills through educational machine learning is key if staff costs are to be reduced – which, in times of austerity when all else has been cut, is the only budget left to slash.

Providers of self-teaching systems are convincing in their arguments that tech is the solution.

Sadly, I remember when a similar thing was tried on paper. My first year of GCSE maths, aged 13-14, was ‘taught’ at our secondary comp by working through booklets in a series that we self-selected from the workbench in the classroom, then picking up the master marking-copy once done. Many of the boys didn’t need long to work out that the first step was an unnecessary waste of time. The teacher had no role in the classroom. We were bored to bits. By the final week at the end of the year, they sellotaped the teacher to his chair.

I kid you not.

Teachers are so much more than knowledge transfer tools, and yet some today seem to consider them replaceable by technology.

The US is ahead of us in this model, which has grown hand-in-hand with commercialism in schools. Many parents are unhappy.

So is the DfE setting us up for future heartbreak if it wants us to go down the US route of more MOOCs, more tech, and less funding and fewer staff? Where’s the cost benefit risk analysis and transparency?

We risk losing the best of what is human from the classroom if we remove the values teachers model and inspire. Unions, teachers and educationalists are, I am sure, more than aware of all these cumulative changes. However, the wider public seems little engaged.

For anyone ‘in education’ these changes will all be self-evident, and their balance of risks and benefits a matter of experience and political persuasion. As a parent, I’ve only come to understand these changes through researching how our pupils’ personal and school data have been commercialised: given away from the National Pupil Database without our consent since legislation changed in 2013, and the Higher Education student and staff data sold.

Will more legislative change be needed to keep our private data accessible in public services operating in an increasingly privately-run delivery model? And who will oversee that?

The Education Market is sometimes referred to as ‘The Wild West’. Is it getting a sheriff?

The news that the next chair of Ofsted is to be sought from the US did set alarm bells ringing for some in the press, who fear US standards and US-led organisations in British schools.

“The scourge of unions” means not supportive of staff-based power, and in health our junior doctors have clocked exactly what breaking their ‘union’ bargaining power is all about. So who is driving all this change in education today?

Some education providers might be seen as individuals profiting from the State break-up. Some were accused of ‘questionable practices’. Oversight has been lacking, others said. Margaret Hodge in 2014 was reported to have said: “It is just wrong to hand money to a company in which you have a financial interest if you are a trustee.”

I wonder if she has an opinion on a lead non-executive board member at the Department for Education also being the director of one of the biggest school chains? Or on the ex-Minister now employed by the same chain? Or on that Minister’s campaign being funded by the same director? Why this register of interests is not transparent is a wonder.

It could appear to an outsider that the private-public revolving door is well oiled with sweetheart deals.

Are the reforms begun by Mr Gove simply to be executed to their end goal, whatever that may be, through Nicky Morgan, or is she driving her own new policies?

If Ofsted were to become US-experience led, will the Wild West be tamed, or will US providers be invited to join the action, reshaping a new frontier? What is the end game?

Breaking up is not hard to do, but in whose best interest is it?

We need only look to health to see the similar pattern.

The structures are freed up, and boundaries opened up (if you meet the criteria), in the name of ‘choice’. The organisational barriers to break-up are removed in the name of ‘direct accountability’. And as for enabling plans through more ‘business intelligence’ gathered from data sharing – well, those plans abound.

Done well, new efficient systems and structures might bring public benefits, and the right technology can certainly bring great things. But have we first understood what made the old system less efficient, if indeed it was, and where are the baselines to look back on?

Where is the transparency of the end goal and what’s the price the Department is prepared to pay in order to reach it?

Is reform in education transparent in its ideology, and in how its success is being measured, if not by improved attainment?

The results of change can also be damaging. In health we see failing systems and staff shortages, and their knock-on effects on patient care. In schools, these failures damage children’s start in life; it’s not just a ‘system’.

Can we assess if and how these reforms are changing the right things for the right reasons? Where is the transparency of what problems we are trying to solve, to assess what solutions work?

How is change impact for good and bad being measured, with what values embedded, with what oversight, and with whose best interests at its heart?

2005’s Direct Democracy could be read as a blueprint for co-author Mr Gove’s education reforms less than a decade later.

Debate over the restructuring of education and its marketisation seems to have bypassed most of us in the public, in a way health has not.

Underperformance as measured by new and often hard to discern criteria, means takeover at unprecedented pace.

And what does this mean for our most vulnerable children? SEN children are not required to be offered places by academies. The 2005 plans co-authored by Mr Gove also included: “killing the government’s inclusion policy stone dead,” without an alternative.

Is this the direction of travel our teachers and society supports?

What happens when breakups happen and relationship goals fail?

Who picks up the pieces? I fear the state is paying heavily for the break up deals, investing heavily in new relationships, and yet will pay again for failure. And so will our teaching staff, and children.

While Mr Hunt is taking all the heat right now for his part in writing Direct Democracy and its proposals to privatise health – set against the current health reforms and restructuring of junior doctors’ contracts – we should perhaps also look to his co-author Mr Gove, and ask to better understand the current impact of his recent education reforms, compare them with what he proposed in 2005, and prepare for the expected outcomes of change before it happens (see p74).

One outcome was that failure was to be encouraged in this new system, and Sweden held up as an exemplary model:

“Liberating state schools would also allow the all-important freedom to fail.”

As Anita Kettunen, principal of JB Akersberga in Sweden reportedly said when the free schools chain funded by a private equity firm failed:

“if you’re going to have a system where you have a market, you have to be ready for this.”

Breaking up can be hard to do. Failure hurts. Are we ready for this?
******

 

Abbreviated on Feb 18th.

 

Thoughts since #UKHC15. UK health datasharing.

The world you will release your technology into is the world you are familiar with, which is already of the past, based on old data.

How can you design tools and systems fit for the future? And for all?

For my 100th post, and the first of 2016, here is a summary of some of my thoughts prompted by #ukhc15 – several grains of thought related to UK health data that have been growing for some time.

1000 words on “Hard things: identity, data sharing and consent.” The fun run version.

Do we confuse hard with complex? Hard does not have to mean difficult. Some things seem to be harder than necessary, because of politics. I’ve found this hard to write. Where to start?

The search to capture solutions has been elusive.

The starting line: Identity

Then my first thoughts on identity got taken care of by Vinay Gupta in this post, better than I could. (If you want a long read about identity, you might want to get a hot drink like I did and read and re-read. It says it’ll take an hour. It took me several, in absorption and thinking time. And worth it.)

That leaves data sharing and consent. Both of which I have written many of my other 99 posts about in the last year. So what’s new?

Why are we doing this: why aren’t we there yet?

It still feels very much that the thinking on ‘digital’ in many parts of the health service and broader government is ‘we need to do something’. The why is missing, and therefore achieving and measuring success is hard.

Often we start with a good idea and set about finding a solution for how to achieve it. But if the ‘why’ behind the idea is shaky to start with, the solution may falter as soon as it gets difficult. No one seems to know what #paperless actually means in practice.

So why try to change things? Fixing problems, rather than coming up with good ideas, is another way to think of it, as suggested at #ukhc15. It was a meet-up for people who want to make things better, usually for others; sometimes that involves improving the systems they work with directly, or support others in using.

I no longer work on systems introductions or enhancement processes, although I have a lay role in research and admin data. But regular readers know most of the last two years has been all about the data: care.data.

More often than not, in #ukhc15 discussions that focused on “the data”, I would try to bring people back to thinking about what the change is trying to solve, what it wants to “make better”, and why.

There’s a broad tendency to simply think more data = better. Not true, and I’ll show a case study later as to why. We must question why.

Why doesn’t everyone volunteer, or why do some not want to join in?

Very many people who have spoken with me over the last two years have shared concrete concerns over the plans to share GP data, and they do not get heard. They did not see a need to share their identifiable personal confidential data, or see why truly anonymous data would not be sufficient, for health planning for example.

Homeless men, and women at risk; people from the travelling community; those with disabilities; patients with stigmatising conditions; minorities; children; sexual orientation – not to mention the lawyers or agencies representing them. Or the 11 million of our adult population not online. Few of whom we spoke about. Few of whom we heard from at #UKHC15. Yet put together, these individuals not only make up a significant number of people, but account for a disproportionately high share of the highest demands on our health and social care services.

The inverse care law appears magnified in its potential when applied to digital, and should magnify the importance of thinking about access. How will care.data make things better for them, and how will the risks be mitigated? And are those costs being properly assessed, when there is no assessment of the current care.data business case and, since 2012 at least, seemingly no serious effort to look at alternatives?

The finish line? We can’t see what it looks like yet.

The #ukhc15 event was well run, and I liked the spontaneity of people braver than me who were keen to lead sessions, and did it well. As someone who is white, living in a ‘nice’ area, I am privileged. It was a privilege to spend a day at #UKHC15, packed with people who clearly think about hard things all the time. People who want to make things better. People who were welcoming to nervous first-timers at an ‘un’conference, over a shared lunch.

I hope the voices of those who can’t attend these events, and outside London, are equally accounted for in all government 2016 datasharing plans.

This may be the last chance after years of similar consultations have failed to deliver workable, consensual public data sharing policies.

We have vast streams of population-wide data stored in the UK, about which the population is largely ignorant. But while some of the data may be from 25 years ago, whatever is designed today is going to need to think long term: not ‘how do we solve what we know’, but ‘how do we design solutions that will work for what we don’t’.

Transparency here will be paramount to trust, if future decisions are made for us, or those we make for ourselves are ‘influenced’ by algorithms, machine learning and ‘mindspace’ work.

As Thurgood Marshall said,

“Our whole constitutional heritage rebels at the thought of giving government the power to control men’s minds.”

Control over who we are, and who the system thinks we are, becomes a whole new level of discussion if we are being told how to make a decision, especially where the decision steers us toward a direction of public policy based on political choice. If pensions are not being properly funded, then not allocating taxes differently to fund them is a choice the current government has made, while the DWP seeks to influence our decisions, to make us save more in private pensions.

And how about, in data discussions, we make an effort to start talking a little more clearly in the same terms – and stop packaging ‘sharing’ as if it were something voluntary in population-wide compulsory policy.

It’s done to us, not with us, in far too many areas of government we do not see. Perhaps this consultation might change that, but it’s the nth such consultation, and I want to be convinced this one is intent on real change. It’s only open for a few weeks, and this meet-up for discussion appears to be something only organised in London.

I hope we’ll hear commitment to real change in support of people and the uses of our personal data by the state in the new #UkDigiStrategy, not simply more blue-sky thinking and drinking the ‘datasharing’ kool-aid. We’ve been talking in the UK for far too long about getting this right.

Let’s see the government serious about making it happen. Not for government, but in the public interest, in a respectful and ethical partnership with people, and not find changes forced upon us.

No other foundation will be fit for a future in which care.data, the phenotype data, is to be the basis for an NHS so totally personalised.

If you want a longer read, read on below for my ten things in detail.

Comment welcome.

########

Hard things: The marathon version, below.

Access to school pupil personal data by third parties is changing

The Department for Education in England and Wales [DfE] has lost control of who can access our children’s identifiable school records, by giving individual and sensitive personal data out to a range of third parties since government changed policy in 2012. It now looks like they’re panicking about how to fix it.

Applicants wanting children’s identifiable and/or sensitive personal data from the National Pupil Database must now first apply for the lowest level of criminal record check, a basic DBS check, as part of the access process.

Schools Week wrote about it and asked for comment on the change [1] (as discussed by Owen in his blog [2] and our tweets).

At first glance, it sounds like a great idea, but what real difference will this make to who can receive 8 million school pupils’ data?

Yes, you did read that right.

The National Pupil Database gives away the personal data of eight million children, aged 2-19. Gives it away outside its own protection, because users get sent raw data, to their own desks. [3]

It would be good to know that the people receiving your child’s data hadn’t ever been cautioned or convicted for something related to children in their past, right?

Unfortunately, this DBS check won’t tell the Department for Education (DfE) that – because it’s the basic £25 DBS check [4], not the full version.

So this change seems less about keeping children’s personal data safe than about being seen to do something. Anything. Anything but the thing that needs to be done. Which is to keep the data secure.

Why is this not a brilliant solution? 

Moving towards the principle of keeping the data more secure is right, but in practice the DBS check is only useful if it makes data safer, by stopping people who pose a risk of data misuse from receiving data. So how will this DBS check achieve that? It’s not designed for people who handle data. It’s designed for people working with children.

There is plenty of evidence, often in the news, of data inappropriately used for commercial purposes, often through inappropriate storage and sharing of data as well as malicious breaches. I am not aware – and refer to this paper [5] – of risks realised through malicious misuse of data used for academic purposes in safe settings. Though mistakes do happen, through inappropriate processes, and through human error and misjudgement.

However, it is not necessary to have a background check for its own sake. It is necessary to know that any users handle children’s data securely and appropriately, and with transparent oversight. There is no suggestion at all that people at TalkTalk were abusing data, but their customers’ data were not secure, and those data held in trust are now being misused.

That risk is the harm that is likely to affect a high number of individuals if bulk personal data are not securely managed. Measures to make it so must be proportionate to that risk. [6]

Coming back to what this will mean for individual applicants and its purpose: Basic Disclosure contains only convictions considered unspent under The Rehabilitation of Offenders Act 1974. [7]

The absence of a criminal record does not mean data are securely stored or appropriately used by the recipient.

The absence of a criminal record does not mean data will not be forwarded to another undisclosed recipient, nor that there would be any way for the DfE to ever know it happened.

The absence of a criminal record showing up on the basic DBS check does not even prove that the person has no previous conviction related to misuse of people or of data. And any conviction you might consider ‘relevant’ to children, for example, may have expired.


So for these reasons, I disagree that the decision to have a basic DBS check is worthwhile. Why? Because it’s effectively meaningless, and it doesn’t solve the problem, which is this:

Anyone can apply for 8m children’s personal data, and as long as they meet some purposes and application criteria, they get sent sensitive and identifiable children’s data to their own setting. And they do. [8]

‘Anyone’ whom the legislation, designed in 2009, defines as a prescribed person or researcher has come to include journalists, for example. Like BBC Newsnight, or Fleet Street papers. Is it right that journalists can access my children’s data, but as pupils and parents we cannot, and we’re not even informed? Clearly not.

It would be foolish to be reassured by this DBS check. The DfE is kidding itself if it thinks this is a workable or useful solution.

This step is simply a tick box and it won’t stop the DfE regularly giving away the records of eight million children’s individual level and sensitive data.

What problem is this trying to solve and how will it achieve it?

Before panicking to implement a change, the DfE should first answer:

  • who will administer and store potentially sensitive records of criminal convictions, even if unrelated to data?
  • what implications does this have for other government departments handling individual personal data?
  • why are 8m children’s personal and sensitive data given away ‘into the wild’ beyond DfE oversight in the first place?

Until the DfE properly controls the individual personal data flowing out from the NPD – from multiple locations, in raw form – and its governance, it makes little material difference whether the named user is shown to have, or not have, a previous criminal record. [9] Because the DfE has no idea if they are the only person who uses it.

The last line from the DfE in the article is interesting: “it is entirely right that we continue to make sure that those who have access to it have undergone the necessary background checks.”

Continue? From not doing it before? That is tantamount to a denial of change, to avoid scrutiny of the past and the status quo. They have no idea who has “access” to our children’s data today after they have released it, except on paper and trust, as there’s no audit process. [10]

If this is an indicator of the transparency and type of wording the DfE wants to use to communicate with schools, parents and pupils, I am concerned. Instead we need to see full transparency, an assessment of privacy impact, and a public consultation on coordinated changes.

Further, if I were an applicant, I’d be concerned that DfE is currently handling sensitive pupil data poorly, and wants to collect more of mine.

In summary: because of the change in Government policy in 2012, and the way in which it is carried out in practice, the Department for Education in England and Wales [DfE] has lost control of who can access our 8m children’s identifiable school records. Our children deserve proper control of their personal data, and proper communication about who can access it and why.

Discovering through FOI [11] the sensitivity and volume of identifiable data that journalists are being given access to shocked me. Discovering that schools and parents have no idea about it did not.

This is what must change.

 

*********

If you have questions or concerns about the National Pupil Database or your own experience, or your child’s data used in schools, please feel free to get in touch, and let’s see if we can make this better to use our data well, with informed public support and public engagement.

********

References:
[1] National Pupil Database: How to apply: https://www.gov.uk/guidance/national-pupil-database-apply-for-a-data-extract

[2] Blogpost: http://mapgubbins.tumblr.com/post/132538209345/no-more-fast-track-access-to-the-national-pupil

[3] Which third parties have received data since 2012 (Tier 1 and 2: identifiable, individual and/or sensitive): release register https://www.gov.uk/government/publications/national-pupil-database-requests-received

[4] The Basic statement content http://www.disclosurescotland.co.uk/disclosureinformation/index.htm

[5] Effective Researcher management: 2009 T. Desai (London School of Economics) and F. Ritchie (Office for National Statistics), United Kingdom http://www.unece.org/fileadmin/DAM/stats/documents/ece/ces/ge.46/2009/wp.15.e.pdf

[6] TalkTalk is not the only recent significant data breach of public trust. An online pharmacy that sold details of more than 20,000 customers to marketing companies has been fined £130,000 https://ico.org.uk/action-weve-taken/enforcement/pharmacy2u-ltd/

[7] Guidance on the Rehabilitation of Offenders Act 1974 https://www.gov.uk/government/uploads/system/uploads/attachment_data/file/299916/rehabilitation-of-offenders-guidance.pdf

[8] the August 2014 NPD application from BBC Newsnight https://www.whatdotheyknow.com/request/293030/response/723407/attach/10/BBC%20Newsnight.pdf

[9] CPS Guidelines for offences involving children https://www.sentencingcouncil.org.uk/wp-content/uploads/Final_Sexual_Offences_Definitive_Guideline_content_web1.pdf

[10] FOI request https://www.whatdotheyknow.com/request/pupil_data_application_approvals#outgoing-482241

[11] #saveFOI – I found out exactly how many requests had been fast tracked and not scrutinised by the data panel via a Freedom of Information Request, as well as which fields journalists were getting access to. The importance of public access to this kind of information is a reason to stand up for FOI  http://www.pressgazette.co.uk/press-gazette-launches-petition-stop-charges-foi-requests-which-would-be-tax-journalism

 

Digital revolution by design: infrastructures and the fruits of knowledge

Since the beginning of time and the story of the Garden of Eden, man has found a way to share knowledge and its power.

Modern digital tools have become the everyday way to access knowledge for many across the world, giving quick access to information and sharing power more fairly.

In this third part of my thoughts on digital revolution by design, triggered by the #kfdigi15 event on June 16-17, I’ve been considering some of the constructs we have built; those we accept and those that could be changed, given the chance, to build a better digital future.

Not only the physical constructions, the often networked infrastructures, but intangible infrastructures of principles and power, co-dependencies around a physical system; the legal and ethical infrastructures of ownership, governance and accountability.

Our personal data flow in systems behind the screens, at the end of our fingertips. Controlled in frameworks designed by providers and manufacturers, government and commercial agencies.

Increasingly in digital discussions we hear that the data subject, the citizen, will control their own data.

But if it is on terms and conditions set by others, how much control is real, and how much of the talk of a consenting citizen is only a fig leaf behind which any real control is still held by the developer or organisation providing the service?

When data are used, they are turned into knowledge: business intelligence that adds value and aids informed decision making, whether by human or machine.

How much knowledge is too much knowledge for the Internet of Things to build about its users? As Chris Matyszczyk wrote:

“We have all agreed to this. We click on ‘I agree’ with no thought of consequences, only of our convenience.”

Is not knowing what we have agreed to our fault, or the responsibility of the provider who’d rather we didn’t know?

Citizens’ rights are undermined in unethical interactions if we are exploited by easy one-click access and exchange our wealth of data at unseen cost. Can it be regulated to promote, not stifle, innovation?

How can we get those rights back and how will ‘doing the right thing’ help shape and control the digital future we all want?

The infrastructures we live inside

As Andrew Chitty says in this HSJ article: “People live more mobile and transient lives and, as a result, expect more flexible, integrated, informed health services.”

To manage that, do we need to know how systems work, how sharing works, and trust the functionality of what we are not being told and don’t see behind the screens?

At the personal level, whether we sign up for the social network, use a platform for free email, or connect our home and ourselves in the Internet of Things, we each exchange our personal data with varying degrees of willingness. Yet there is often no alternative if we want to use the tool.

As more social and consensual ‘let the user decide’ models are introduced, we hear it’s all about the user in control, but the reality is that users still have to know what they sign up for.

In new models of platform identity sign-on, and tools that track and mine to the nth degree the personal data we share with the system, the paternalistic models of the past and the new models of personal control and social sharing are merging.

Take a Fitbit as an example. It requires a named account and data sharing with the central app provider. You can choose whether or not to enable ‘social sharing’ with nominated friends whom you want to share your boasts or failures with. You can opt out of only that part.

I fear we are seeing the creation of a Leviathan-sized monster that will be impossible to control and just as scary as today’s paternalistic data mismanagement. Some data held by the provider and invisibly shared with third parties beyond our control, some we share with friends, and some stored only on our device.

While data are shared with third parties without our active knowledge, the same issue threatens to derail consumer products as well as commercial ventures at national scale, and with them the public interest: loss of trust in what is done behind the settings.

Society has somehow accepted privacy lost as the default setting. It has become something we have to demand and defend.

“If there is one persistent concern about personal technology that nearly everybody expresses, it is privacy. In eleven of the twelve countries surveyed, with India the only exception, respondents say that technology’s effect on privacy was mostly negative.” [Microsoft survey 2015, of 12,002 internet users]

There’s one part of that I disagree with. It’s not technology itself that affects privacy, but the decisions of its designers and developers. People, not technology, choose how privacy is designed and regulated.

Citizens have vastly differing levels of knowledge about how data are used and how best to interact with technology. But if they are told they own their data, then the decision-making framework should be theirs too.

By giving consumers the impression of control, the shock is going to be all the greater if a breach should ever reveal where fitness wearable users slept and with whom, at what address, and were active for how long. Could a divorce case demand it?

Fitbit users have already found their data used by police and in the courtroom – probably not what they expected when they signed up to a better health tool. Others may see benefits from uses that could, by default, harm those excluded from accessing the tool.

Some at org level still seem to find this hard to understand but it is simple:
No trust = no data = no knowledge for commercial, public or personal use and it will restrict the very innovation you want to drive.

Google Gmail users have to make ten or more clicks to restrict all ads and information sharing through their privacy and ad account settings. The default is ad tailoring and data mining. Many don’t even know the settings can be changed, and it’s not intuitive how.

Firms need to consider their own reputational risk if users feel these policies are not explicit and amount to exploitation. Those caught ‘cheating’ users can get a very public slap on the wrist.

Let the data subjects rule, but on whose terms and conditions?

The question every citizen signing up to digital agreements should ask is: what is in the small print, and how will I know if it changes? Fair processing should offer data protection, but it isn’t effective.

If you don’t have access to information, you make decisions based on a lack of information or misinformation. Decisions which may not be in your own best interest or that of others. Others can exploit that.

And realistically and fairly, organisations can’t expect citizens to read pages and pages of Ts&Cs. In addition, we don’t know what we don’t know. Information that is missing can be as vital to understand as that provided. ‘Third parties’ sharing – who exactly does that mean?

The concept of an informed citizenry is crucial to informed decision making but it must be within a framework of reasonable expectation.

How do we grow the fruits of knowledge in a digital future?

Real cash investment is needed now for a well-designed digital future, robust for cybersecurity, supporting enforceable governance and oversight. Collaboration on standards and thorough change plans. I’m sure there is much more, but this is a start.

Figurative investment is needed in educating citizens about the technology that should serve us, not imprison us in constructs we do not understand but cannot live without.

We must avoid the chaos and harm and wasted opportunity of designing massive state-run programmes in which people do not want to participate or cannot participate due to barriers of access to tools. Avoid a Babel of digital blasphemy in which the only wise solution might be to knock it down and start again.

Our legislators and regulators must take up their roles to get data use, and digital contract terms and conditions right for citizens, with simplicity and oversight. In doing so they will enable better protection against risks for commercial and non-profit orgs, while putting data subjects first.

To achieve greatness in a digital future we need: ‘people speaking the same language, then nothing they plan to do will be impossible for them’.

Ethics. It’s more than just a county east of London.

Let’s challenge decision makers to plant the best of what is human at the heart of the technology revolution: doing the right thing.

And from data, we will see the fruits of knowledge flourish.

******

1. Digital revolution by design: building for change and people
2. Digital revolution by design: barriers by design
3. Digital revolution by design: infrastructures and the fruits of knowledge
4. Digital revolution by design: infrastructures and the world we want

The Economic Value of Data vs the Public Good? [1] care.data, Concerns and the cost of Consent

They say ‘every little helps’.  care.data needs every little it can get.

In my new lay member role on the ADRN panel, I read submissions for research requests for any ethical concerns that may be reflected in wider public opinion.

The driving force for sharing administrative data research is non-commercial, with benefits to be gained for the public good.

So how do we quantify the public good, and ‘in the public interest’?

Is there alignment between the ideology of government, the drivers of policy [for health, such as the commissioning body NHS England] and the citizens of the country on what constitutes ‘the public good’?

There is public good to be gained, for example, from social and health data seen as a knowledge base, by using it in ‘bona fide’ research, often through linking with other data to broaden insights.

Insight that might result in improving medicines, health applications, and services. Social benefits that should help improve lives, to benefit society.

Although social benefits may be less tangible, they are no harder for the public to grasp than economic ones. And often a no-brainer, as long as confidentiality and personal control are not disregarded.

When it comes to money making from our data the public is less happy. The economic value of data raises more questions on use.

There is economic benefit to extract from data as a knowledge base to inform decision making, being cost efficient and investing wisely. Saving money.

And there is measurable economic public good in terms of income tax from individuals and corporations who make a profit by using the data as a basis from which to create tools or other knowledge. Making money for the public good through indirect sales.

Then there is economic benefit from data trading as a commodity. Direct sales.

In all of these considerations, how does what the public feels and their range of opinions, get taken into account in the public good cost and benefit accounting?

Do we have a consistent and developed understanding of ‘the public interest’ and how it is shifting to fit public expectation and use?

Public concern

“The importance of building and maintaining trust and confidence among all stakeholder groups concerned – including researchers, institutions, ethical review boards and research participants – as a basis for effective data sharing cannot be overstated.”  [Wellcome blog, April 2015]

If something is jeopardising that public good it is in the public interest to say so, and for the right reasons.

The loss of public trust in data sharing measured by public feeling in 2014 is a threat to data used in the public interest, so what are we doing to fix it and are care.data lessons being learned?

The three biggest concerns voiced by the public at care.data listening events[1] were repeatedly about commercial companies’ use, and re-use of data, third parties accessing data for unknown purposes and the resultant loss of confidentiality.

 Question from Leicester: “Are we saying there will be only clinical use of the data – no marketing, no insurance, no profit making? This is our data.” [NHS Open Day, June 2014]

While people are happy for the state to use their data without active consent for bona fide research, they are not for commercial purposes.

Much of the debate and upset caused by the revelations of how our hospital episode statistics were managed in the past centred on the sense of loss of ownership. And with that, the inability to consent to who uses it. This despite acknowledgment that patients own their data.

Significant concern centres on use of the information gleaned from data that patients consider commercial exploitation. For use segmenting the insurance markets. For consumer market research. Using data for individual targeting. And its utter lack of governance.

There is also concern about data being directly sold or exchanged as a commodity.

These concerns were raised meeting after meeting in the 2014 care.data “listening process.”

To read in Private Eye that commercially sensitive projects were discussed in various meetings between NHS England and supermarket giant Tesco throughout 2014 [2] by the Patients and Information Director, responsible for care.data, is therefore all the more surprising.

They may of course be quite unrelated.

But when transparency is the mother of trust, it’s perhaps a surprising liaison to maintain while ‘listening’ to care.data concerns.

It could appear that greater confidentiality was given to the sensitivity of commercial meetings than citizens’ sensitive data.

Consent package deals may be a costly mistake

People are much more aware since care.data a year ago, that unknown third parties may access data without our consent.

Consent around secondary NHS data sharing and in wider fora is no longer an inconvenient ethical dilemma best left on the shelf, as it has been for the last 25 years in secondary use, dusted off in the care.data crisis. [3]

Consent is front and centre in the latest EU data protection discussions [4] in which consent may become a requirement for all research purposes.

How that may affect social science and health research use, its pros and cons [5] remain to be seen.

However, in principle consent has always been required and good practice in applied medicine, despite the caveat for data used in medical research. As a general rule: “An intervention in the health field may only be carried out after the person concerned has given free and informed consent to it”. But this is consent for your care. The assumption that information is shared for direct care, while looking after you during medical treatment itself, does not cause concern.

It is increasingly assumed in discussions I have heard [at CCG and other public meetings] that, because patients have given implied consent to sharing their information for their care, the same data may be shared for other purposes. It may not, and it is those secondary purposes that the public has asked, at care.data events, to see split up and differentiated.

Research uses are secondary uses, and consent to those purposes cannot ethically be assumed. However, legal gateways, which permit access to that data for clearly defined secondary purposes, may make such data sharing lawful.

With that legal assumption comes, for the majority of people, polls and dialogue show [though not for everyone 6b], a degree of automatic support for bona fide research in the public interest. But it is not a blanket for all secondary uses by any means, and it is this blanket assumption which has damaged trust.

So if data use in research assumes consent, and any panel is the proxy for personal decision making, the panel must consider the public voice and public interest in its decision making.

So what does the public want?

In those cases where there is no practicable alternative [to consent], there is still pressure to respect patient privacy and to meet reasonable expectations regarding use. The stated ambition of the CAG, for example, is to only advise disclosure in those circumstances where there is reason to think patients would agree it to be reasonable.

Whether or not active, rather than implied, consent becomes a requirement for research purposes without differentiation between kinds, the public already has different expectations of, and trust in, different users.

The biggest challenge for championing the benefits of research in the public good, may be to avoid being lumped in with commercial marketing research for private profit.

The latter’s misuse of data is an underlying cause of the mistrust now around data sharing [6]. It’s been a high price to pay for public health research and others delayed since the Partridge audit.

Consent package deals mean that the public cannot choose how data are used in which kinds of research, and if not happy with one kind, may refuse permission for the other.

By denying any differentiation between direct, indirect, economic and social value derived from data uses, the public may choose to deny all researchers access to all their personal data.

That may be costly to the public good, for public health and in broader research.
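The package-deal problem can be sketched in code. This is an illustrative model only: the purpose names and the `ConsentRecord` class are hypothetical constructs for the example, not any real NHS or care.data data structure.

```python
# Illustrative sketch only: purpose names and ConsentRecord are hypothetical,
# not any real NHS or care.data structure.
from dataclasses import dataclass, field

PURPOSES = ("direct_care", "public_research", "commercial_research", "commercial_sale")

@dataclass
class ConsentRecord:
    # Per-purpose permissions: direct care is assumed; secondary uses are not.
    allowed: dict = field(
        default_factory=lambda: {p: (p == "direct_care") for p in PURPOSES}
    )

    def permit(self, purpose: str) -> bool:
        return self.allowed.get(purpose, False)

# Differentiated consent: a patient can support bona fide public research
# while still refusing commercial uses.
granular = ConsentRecord()
granular.allowed["public_research"] = True

# Bundled consent: one flag covers every secondary use, so refusing
# commercial exploitation also withdraws data from public research.
def bundled_permit(opted_in: bool, purpose: str) -> bool:
    return purpose == "direct_care" or opted_in
```

Under the granular model, `granular.permit("public_research")` is true while `granular.permit("commercial_sale")` stays false; under the bundled model, `bundled_permit(False, "public_research")` is false, which is exactly the cost to research of the package deal.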

A public good which takes profit into account for private companies and the state, must not be at the expense of public feeling, reasonable expectations and ethical good practice.

A state which allows profit for private companies to harm the perception of good practice by research in the public interest has lost its principles and priorities. And lost sight of the public interest.

Understanding if the public, the research community and government have differing views on what role economic value plays in the public good matters.

It matters when we discuss how we should best protect and approach it moving towards a changing EU legal framework.

“If the law relating to health research is to be better harmonised through the passing of a Regulation (rather than the existing Directive 95/46/EC), then we need a much better developed understanding of ‘the public interest’ than is currently offered by law.”  [M Taylor, “Information Governance as a Force for Good? Lessons to be Learnt from Care.data”, (2014) 11:1 SCRIPTed 1]

In the words of Dr Mark Taylor, “we need to do this better.”

How? I took a look at some of this in more detail:

Part two: The Economic Value of Data vs the Public Good? [2] Pay-for-privacy and Defining Purposes.

Part three: The Economic Value of Data vs the Public Good? [3] The value of public voice.

Update note: A version of these three posts was combined into an opinion piece – care.data: ‘The Value of Data versus the Public Interest?’ published on StatsLife on June 3rd 2015.

****

image via Tesco media

 

[1] care.data listening event questions: https://jenpersson.com/pathfinder/

[2] Private Eye – on Tesco / NHS England commercial meetings https://twitter.com/medConfidential/status/593819474807148546

[3] HSCIC audit and programme for change www.hscic.gov.uk/article/4780/HSCIC-learns-lessons-of-the-past-with-immediate-programme-for-change

[4] EU data protection discussion http://www.digitalhealth.net/news/EHI/9934/eu-ministers-back-data-privacy-changes

[5] Joint statement on EU Data Protection proposals http://www.wellcome.ac.uk/stellent/groups/corporatesite/@policy_communications/documents/web_document/WTP055584.pdf

[6] Ipsos MORI research with the Royal Statistical Society into the Trust deficit with lessons for policy makers https://www.ipsos-mori.com/researchpublications/researcharchive/3422/New-research-finds-data-trust-deficit-with-lessons-for-policymakers.aspx

[6b] The ‘Dialogue on Data’ Ipsos MORI research 2014 https://www.ipsos-mori.com/researchpublications/publications/1652/Dialogue-on-Data.aspx – commissioned by the Economic and Social Research Council (ESRC) and the Office for National Statistics (ONS) to conduct a public dialogue examining the public’s views on using linked administrative data for research purposes,

[7] AdExchanger January 2015 http://adexchanger.com/data-driven-thinking/the-newest-asset-class-data/

[8] Tesco clubcard data sale https://jenpersson.com/public_data_in_private_hands/  / Computing 14.01.2015 – article by Sooraj Shah: http://www.computing.co.uk/ctg/feature/2390197/what-does-tescos-sale-of-dunnhumby-mean-for-its-data-strategy

[9] Direct Marketing 2013 http://www.dmnews.com/tesco-every-little-bit-of-customer-data-helps/article/317823/

[10] Personalisation in health data plans http://www.england.nhs.uk/iscg/wp-content/uploads/sites/4/2014/01/ISCG-Paper-Ref-ISCG-009-002-Adult-Social-Care-Informatics.pdf

[11] Tim Kelsey Keynote speech at Strata November 2013 https://www.youtube.com/watch?v=s8HCbXsC4z8

[12] Forbes: Illumina CEO on the US$20bn DNA market http://www.forbes.com/sites/luketimmerman/2015/04/29/qa-with-jay-flatley-ceo-of-illumina-the-genomics-company-pursuing-a-20b-market/

 

Wearables: patients will ‘essentially manage their data as they wish’. What will this mean for diagnostics, treatment and research and why should we care? [#NHSWDP 3]

 

Consent to data sharing appears to be a new choice firmly available on the NHS England patient menu, if patient ownership of our own records is clearly acknowledged as ‘the operating principle legally’.

Simon Stevens had just said in his keynote speech:

“..smartphones; […] the single most important health treatment and diagnostic tool at our disposal over the coming decade and beyond ” Simon Stevens, March 18 2015.

Tim Kelsey, Director Patients and Information, NHS England, then talked about consent in the Q&A:

“We now acknowledge the patient’s ownership of the record […] essentially, it’s always been implied, it’s still not absolutely explicit but it is the operating principle now legally for the NHS.

“So, let’s get back to consent and what it means for clinical professionals, because we are going to move to a place where people will make those decisions as they currently do with wearable devices, and other kinds of mobile, and we need to get to a point where people can plug their wearable device into their medical record, and essentially manage their data as they wish.

“It is essentially, their data.”

How this principle has been applied in the past, is being now, and how it may change matters, as it will affect many other areas.

Our personal health data is the business intelligence of the health industry’s future.

Some parts of that industry will say we don’t share enough data. Or don’t use it in the right way.  For wearables designed as medical devices, it will be vital to do so.

But before some launch into polemics on the rights and wrongs of blanket ‘data sharing’, we should be careful what types of data we mean, and for what purposes they are extracted. It matters when discussing consent and sharing.

We should be clear to separate consent to data sharing for direct treatment from consent for secondary purposes other than care (although Mr Kelsey hinted at a conflation of the two in a later comment). The promised opt-out from sharing for secondary uses is pending legal change. At least that’s what we’ve been told.

Given that patient data from hospitals and a range of NHS health settings today are used for secondary purposes without consent – despite the political acknowledgement that patients have an opt-out – this sounded a bold new statement, and contrasted with his past stance.

Primary care data extraction for secondary uses, in the care.data programme, was not intended to be consensual. Will it become so?

Its plan so far has an assumed-consent, opt-out model, despite professional calls from some, such as at the BMA ARM, to move to an opt-in model, and the acknowledged risk of harm that it will do to patient trust.

The NHS England Privacy Assessment said: ‘The extraction of personal confidential data from providers without consent carries the risk that patients may lose trust in the confidential nature of the health service.’

A year into the launch, in January 2014, a national communications plan should have met the need for fair processing, but another year on, in March 2015, there is a postcode-lottery, pilot approach.

If, in principle, data sharing is to be decided by consensual active choice, as it “is the operating principle now legally for the NHS”, then why not now, for care.data, and for all?

When will the promised choice be enacted to withhold data from secondary uses and sharing with third parties beyond the HSCIC?

“we are going to move to a place where people will make those decisions as they currently do with wearable devices” [Widening digital participation, at the King’s Fund March 2015]

So when will we see this ‘move’ and what will it mean?

Why plan to continue to extract more data under the ‘old’ assumption principle, if ownership of data is now with the individual?

And who is to make the move first – NHS patients or NHS patriarchy – if patients use wearables before the NHS is geared up to them?

Looking back or forward thinking?

Last year’s programme has become outdated not only in principle but in digital best practice, if top-down dictatorship is out and the individual is now to “manage their data as they wish.”

What might happen in the next two years, in the scope of the Five Year Forward Plan or indeed by 2020?

This shift in data creation, sharing and acknowledged ownership may mean epic change for expectations and access.

It will mean that people’s choices around data sharing, from patients and healthy controls, need to be considered early on in research and projects. Engagement, communication and involvement will be all about trust.

For the ‘worried well’, wearables could ‘provide digital “nudges” that will empower us to live healthier and better lives‘ or perhaps not.

What understanding have we yet, of the big picture of what this may mean and where apps fit into the wider digital NHS application and beyond?

Patients right to choose

The right to information and decision-making responsibility is shifting towards the patient in other applied areas of care.

But what data will patients truly choose to apply and what to share, manipulate or delete? Who will use wearables and who will not, and how will that affect the access to and delivery of care?

What data will citizens choose to share in future and how will it affect the decision making by their clinician, the NHS as an organisation, research, public health, the state, and the individual?

Selective deletion could change a clinical history and clinician’s view.

Selective accuracy in terms of false measurements [think diabetes], or in medication, could kill people quickly.
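One defence against that risk is a plausibility check before patient-supplied readings enter a record. The sketch below is illustrative only: the function name and the numeric bounds are assumptions made for the example, not clinical reference ranges, and any real system would need clinically validated rules and an audit trail.

```python
# Illustrative sketch only: the bounds are made-up example values,
# not clinical reference ranges.
def flag_implausible(readings, low=2.0, high=30.0):
    """Return blood-glucose readings (mmol/L) falling outside an assumed
    plausible range, for clinician review rather than silent acceptance."""
    return [r for r in readings if not (low <= r <= high)]

suspect = flag_implausible([5.4, 6.1, 0.0, 48.0])
```

Here `suspect` would contain the 0.0 and 48.0 readings, so a clinician sees that the self-reported series may have been edited or mis-measured, instead of the system trusting it blindly.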

How are apps to be regulated? Will only NHS ‘approved’ apps be licensed for use in the NHS and made available to choose from and what happens to patients’ data who use a non-approved app?

How will any of their data be accessed and applied in primary care?

Knowledge is used to make choices and inform decisions. Individuals make choices about their own lives, clinicians make decisions for and with their patients in their service provision, organisations make choices about their business model which may include where to profit.

Our personal health data is the business intelligence of the health industry’s future.

Who holds the balance of power in that future delivery model for healthcare in England is going to be an ongoing debate of epic proportions, but it will likely change in drips rather than a flood.

It has already begun. Lobbyists and companies who want access to data are apparently asking for significant changes to be made in the access to micro data held at the ONS. EU laws are changing.

The players who hold data, will hold knowledge, will hold power.

If the NHS were a Monopoly board, data intermediaries would be some of the wealthiest sites, but the value they create from publicly funded NHS data should belong in the community chest.

If consent is to be with the individual for all purposes other than direct care, then all data sharing bodies and users had best set their expectations accordingly. Patients will need to make wise decisions, for themselves and in the public interest.

Projects for research and sharing must design trust and security into plans from the start or risk failure through lack of participants.

It’s enormously exciting.  I suspect some apps will be rather well hyped and deflate quickly if not effective. Others might be truly useful. Others may kill us.

As twitter might say, what a time to be alive.

Digital opportunity for engaging citizens, as far as apps and data sharing go, is not only about how the NHS will engage citizens, but about how citizens will engage with what the NHS offers.

Consent it seems will one day be king.

Will there or won’t there be a wearables revolution?

Will we be offered or choose digital ‘wellness tools’ or medically approved apps? Will we trust them for diagnostics and treatment? Or will few become more than a fad for the worried well?

Control for the individual over their own data, and choice to make their own decisions of what to store, share or deny, may rule in practice as well as theory.

That practice will need to differentiate between purposes for direct clinical care and secondary uses, as it does today, and be supported and protected in legislation, protecting patient trust.

“We are going to move to a place where people will make those decisions as they currently do with wearable devices, and other kinds of mobile, and we need to get to a point where people can plug their wearable device into their medical record, and essentially manage their data as they wish.”

However, as ‘choice’ was the buzzword for NHS care in recent years – conflated with increasing the use of private providers – will consent be abused to mean a shift of responsibility from the state to the individual, with caveats for how it could affect care?

With that shift in responsibility for decision making, as with personalized budgets, will we also see a shift in responsibility for payment choices from state to citizen?

Will our lifestyle choices in one area exclude choice in another?

Could app data of unhealthy purchases from the supermarket, or refusal to share our health data, one day be seen as refusal of care and a reason to decline it? Mr Kelsey hinted at this last question in the meeting.

Add a population stratified by risk groups into the mix, and we have lots of legitimate questions to ask on the future vision of the NHS.

He went on to say:

“we have got some very significant challenges to explore in our minds, and we need to do, quite urgently from a legal and ethical perspective, around the advent of machine learning, and …artificial intelligence capable of handling data at a scale which we don’t currently do […] .

“I happen to be the person responsible in the NHS for the 100K genomes programme[…]. We are on the edge of a new kind of medicine, where we can also look at the interaction of all your molecules, as they bounce around your DNA. […]

“The point is, the principle is, it’s the patient’s data and they must make decisions about who uses it and what they mash it up with.”

How well that is managed will determine who citizens will choose to engage and share data with, inside and outside our future NHS.

Simon Stevens, earlier at the event, had acknowledged a fundamental power shift he sees as necessary:

“This has got to be central about what the redesign of care looks like, with a fundamental power shift actually, in the way in which services are produced and co-produced.”

That could affect everyone in the NHS, with or without a wearables revolution.

These are challenges the public is not yet discussing and we’re already late to the party.

We’re all invited. What will you be wearing?

********
[Previous: part one here #NHSWDP 1  – From the event “Digital Participation and Health Literacy: Opportunities for engaging citizens” held at the King’s Fund, London, March 18, 2015]

[Previous: part two #NHSWDP 2: Smartphones: the single most important health treatment & diagnostic tool at our disposal]

********

Apple ResearchKit: http://techcrunch.com/2015/03/09/apple-introduces-researchkit-turning-iphones-into-medical-diagnostic-devices/#lZOCiR:UwOp
Digital nudges – the Tyranny of the Should by Maneesh Juneja http://maneeshjuneja.com/blog/2015/3/2/the-tyranny-of-the-should
