Category Archives: politics

Information society services: Children in the GDPR, Digital Economy Bill & Digital Strategy

In preparation for the General Data Protection Regulation (GDPR), there must be an active UK policy decision in the coming months about children and the Internet – the provision of ‘Information Society Services’. From May 25, 2018, the age of consent for online services aimed at children will be 16 by default, unless UK law is made to lower it.

Age verification for online information services in the GDPR will mean capturing parent-child relationships. This could mean a parent’s email or credit card unless other choices are made. What will that mean for children’s access to services, and for privacy? It is likely to offer companies an opportunity for a data grab, and to mean privacy loss for the public, as more data about family relationships will be created and collected than the content provider would otherwise get.

Our interactions create a blended identity of online and offline attributes which, as I suggested in a previous post, creates synthesised versions of our selves and raises questions about data privacy and security.

The goal may be to protect the physical child. The outcome will simultaneously expose children and parents, through increased personal data collection, to risks we would not otherwise face. Increasing the data collected increases the associated risks of loss, theft, and harm to identity integrity. How will legislation balance these risks against rights to participation?

The UK government has various work in progress before then that could address these questions.

But will they?

As Sonia Livingstone wrote in the post on the LSE media blog about what to expect from the GDPR and its online challenges for children:

“Now the UK, along with other Member States, has until May 2018 to get its house in order”.

What will that order look like?

The Digital Strategy and Ed Tech

The Digital Strategy commits to changes in National Pupil Data management. That is, changes in the handling and secondary uses of data collected from pupils in the school census, like using it for national research and planning.

It also means giving data to commercial companies and the press: companies such as private tutor-pupil matching services and data intermediaries, and journalists at the Times and the Telegraph.

Access to NPD via the ONS VML would mean safe data use, in safe settings, by safe (trained and accredited) users.

Sensitive data will no longer be seen by users for secondary uses outside safe settings. It remains to be seen how the DfE intends to interpret ‘sensitive’, and whether that is the DPA 1998 term or the lay term meaning ‘identifying’, as it should be.

However, a grey area on privacy and security remains in the “Data Exchange” which will enable EdTech products to “talk to each other”.

The aim of changes in data access is to ensure that children’s data integrity and identity are secure. Let’s hope the intention that “at all times, the need to preserve appropriate privacy and security will remain paramount and will be non-negotiable” applies across all closed pupil data, and not only to that which may be made available via the VML.

This strategy is still far from clear or set in place.

The Digital Strategy and consumer data rights

The Digital Strategy commits, under the heading of “Unlocking the power of data in the UK economy and improving public confidence in its use”, to the implementation of the General Data Protection Regulation by May 2018. The Strategy frames this as a business issue, labelling data “a global commodity”, and as such its handling is framed solely as a requirement needed to ensure “that our businesses can continue to compete and communicate effectively around the world” and that adoption “will ensure a shared and higher standard of protection for consumers and their data.”

As far as children go, the GDPR is far more about the protection of children as people. It focuses on returning control over children’s own identity, and the ability to revoke control by others, rather than on consumer rights.

That said, there are data rights issues which are also consumer issues, and product safety failures posing real risk of harm.

Neither the Digital Economy Bill nor the Digital Strategy addresses these rights and security issues with any meaningful effect, particularly those posed by the Internet of Things.

In fact, the chapter on the Internet of Things and Smart Infrastructure [9/19] singularly misses out anything on security and safety:

“We want the UK to remain an international leader in R&D and adoption of IoT. We are funding research and innovation through the three year, £30 million IoT UK Programme.”

There was much more thoughtful detail in the 2014 Blackett Review on the IoT to which I was signposted today after yesterday’s post.

If it’s not scary enough for the public to think that their sex secrets and devices are hackable, perhaps it will kill public trust in connected devices more when they find strangers talking to their children through a baby monitor or toy. [BEUC campaign report on #Toyfail]

“The internet-connected toys ‘My Friend Cayla’ and ‘i-Que’ fail miserably when it comes to safeguarding basic consumer rights, security, and privacy. Both toys are sold widely in the EU.”

Digital skills and training in the strategy don’t touch on any form of change management plan for existing working sectors in which we expect machine learning and AI to change the job market. This is something the digital and industrial strategies must address hand in glove.

The tactics and training providers listed sound super, but there does not appear to be an aspirational strategy hidden between the lines.

The Digital Economy Bill and citizens’ data rights

While the rest of Europe has recognised in this legislation that a future-thinking digital world without boundaries needs future thinking on data protection, and citizens empowered with better control of identity, the UK government appears intent on taking ours away.

To take only one example for children: in Cabinet Office-led meetings, the Digital Economy Bill was explicit about use for identifying and tracking individuals labelled under “Troubled Families” and interventions with them. Why, when consent is required to work directly with people, that consent is being ignored in order to access their information is baffling, and in conflict with both the spirit and letter of the GDPR. Students and applicants will see their personal data sent to the Student Loans Company without their consent or knowledge. This overrides the current consent model in place at UCAS.

It is baffling that the government is relentlessly pursuing the Digital Economy Bill’s data copying clauses, which remove confidentiality by default and, through Chapter 2 (the opening of the Civil Registry), will release our identities in birth, marriage and death data for third-party use without consent and without any safeguards in the bill.

Government has not only excluded important aspects of Parliamentary scrutiny in the bill, it is trying to introduce “almost untrammeled powers” (paragraph 21), that will “very significantly broaden the scope for the sharing of information” and “specified persons” which applies “whether the service provider concerned is in the public sector or is a charity or a commercial organisation” and non-specific purposes for which the information may be disclosed or used. [Reference: Scrutiny committee comments]

Future changes need future joined up thinking

While it is important to learn from the past, I worry that the effort some social scientists put into looking backwards is not matched by enthusiasm to look ahead and make active recommendations for a better future.

Society appears to have its eyes wide shut to the risks of coercive control and nudge as research among academics and government departments moves in the direction of predictive data analysis.

Uses of administrative big data and publicly available social media data, for example in research and statistics, need further new regulation in practice and policy, but instead the Digital Economy Bill looks only at how more data can be got out of departmental silos.

A certain intransigence about data sharing with researchers from government departments is understandable. What’s the incentive for DWP to release data showing its policy may kill people?

Westminster may fear it has more to lose from data releases, and does not seek out the political capital to be had from good news.

The ethics of data science are applied patchily at best in government, and inconsistently in academic expectations.

Some researchers have identified this, but there seems little will to act:

 “It will no longer be possible to assume that secondary data use is ethically unproblematic.”

[Data Horizons: New forms of Data for Social Research, Elliot, M., Purdam, K., Mackey, E., School of Social Sciences, The University Of Manchester, 2013.]

Research and legislation alike seem hell-bent on the low-hanging fruit but miss out the really hard things. What meaningful benefit comes from spending millions of pounds on exploiting these personal data, and opening our identities to risk, just to find out whether course X means people are employed in tax bracket Y five years later, versus course Z where everyone ends up a self-employed artist? What ethics will be applied to the outcomes of those questions, and why?

And while government is busy joining up children’s education data throughout their lifetimes, from age 2 across school, FE and HE, into their HMRC and DWP interactions, there is no public plan in the Digital Strategy for the employment market of the coming 10 to 20 years, when many believe, as do these authors in Scientific American, that “around half of today’s jobs will be threatened by algorithms. 40% of today’s top 500 companies will have vanished in a decade.”

What benefit is there in knowing only what was, or in plans around workforce and digital skills that list ad hoc tactics but no strategy?

We must safeguard jobs and societal needs, but just teaching people to code is not a solution to a fundamental gap: what our purpose will be, and the place of people in a world-leading tech nation after Brexit. We are going to have fewer talented people from across the world staying on after completing academic studies, because they’re not coming at all.

There may be investment in A.I. but where is the investment in good data practices around automation and machine learning in the Digital Economy Bill?

To do this Digital Strategy well, we need joined up thinking.

Improving online safety for children in The Green Paper on Children’s Internet Safety should mean one thing:

Children should be able to use online services without being used and abused by them.

This article arrived on my Twitter timeline via a number of people. Doteveryone CEO Rachel Coldicutt summed up various strands of thought I started to hear hints of last month at #CPDP2017 in Brussels:

“As designers and engineers, we’ve contributed to a post-thought world. In 2017, it’s time to start making people think again.

“We need to find new ways of putting friction and thoughtfulness back into the products we make.” [Glanceable truthiness, 30.1.2017]

Let’s keep the human in discussions about technology, and people first in our products.

All too often in technology and even privacy discussions, people have become ‘consumers’ and ‘customers’ instead of people.

The Digital Strategy may seek to unlock “the power of data in the UK economy” but policy and legislation must put equal if not more emphasis on “improving public confidence in its use” if that long term opportunity is to be achieved.

And in technology discussions about AI and algorithms we hear very little about people at all. Discussions I hear seem siloed instead into three camps: the academics, the designers and developers, and the politicians and policy makers. And then comes the lowest circle, ‘the public’ and ‘society’.

It is therefore unsurprising that human rights have fallen down the ranking of importance in some areas of technology development.

It’s time to get this house in order.

Information. Society. Services. Children in the Internet of Things.

In this post, I think out loud about what improving online safety for children in The Green Paper on Children’s Internet Safety means ahead of the General Data Protection Regulation in 2018. Children should be able to use online services without being used and abused by them. If this regulation and other UK Government policy and strategy are to be meaningful for children, I think we need to completely rethink the State approach to what data privacy means in the Internet of Things.
[listen on soundcloud]


Children in the Internet of Things

In 1979 Star Trek: The Motion Picture created a striking image of A.I. as Commander Decker merged with V’Ger and the artificial copy of Lieutenant Ilia, blending human and computer intelligence and creating an integrated, synthesised form of life.

Ten years later, Sir Tim Berners-Lee wrote his proposal and created the world wide web, designing the way for people to share and access knowledge with each other through networks of computers.

In the 90s my parents described using the Internet as spending time ‘on the computer’, and going online meant from a fixed phone point.

Today the wireless computers in our homes, pockets and school bags have built-in added functionality enabling us to do other things with them at the same time: make toast, play a game, make a phone call. We live in the Internet of Things.

Although we talk about it as if it were an environment of inanimate appliances, it would be more accurate to think of the interconnected web of information that these things capture, create and share about our interactions 24/7, as vibrant snapshots of our lives, labelled with retrievable tags, and stored within the Internet.

Data about every moment of how and when we use an appliance, is captured at a rapid rate, or measured by smart meters, and shared within a network of computers. Computers that not only capture data but create, analyse and exchange new data about the people using them and how they interact with the appliance.

In this environment, children’s lives in the Internet of Things no longer involve a conscious choice to go online. Using the Internet is no longer about going online, but being online. The web knows us. In using the web, we become part of the web.

To the computers that gather their data, our children have simply become extensions of the things they use, about which data is gathered and sold by the companies who make and sell those things. Things whose makers can even choose who uses them, or not, and how. In the Internet of Things, children have become things of the Internet.

A child’s use of a smart hairbrush becomes part of the company’s knowledge base of how the hairbrush works. A child’s voice is captured and becomes part of the database for the training and development of the doll or robot they play with.

Our biometrics, measurements of the unique physical parts of our identities, provide a further example of the offline self recently incorporated physically into banking services. Over 1 million UK children’s biometrics are estimated to be used in school canteens and library services through, often compulsory, fingerprinting.

Our interactions create a blended identity of online and offline attributes.

The web has created synthesised versions of our selves.

I say synthesised, not synthetic, because our online self is blended with our real self, and ‘synthetic’ gives the impression of being less real. If you take my own children’s everyday life as an example, there is no ‘real’ life that is without a digital self. The two are inseparable. And we might have multiple versions.

Our synthesised self is not only about our interactions with appliances and what we do, but who we know and how we think based on how we take decisions.

Data is created and captured not only about how we live, but where we live. These online data can be further linked with data about our behaviours offline, generated from trillions of sensors and physical network interactions with our portable devices. Our synthesised self is tracked from real-life geolocations. In cities surrounded by sensors, under pavements, in buildings, with cameras mapping and tracking everywhere we go, our behaviours are converted into data and stored inside an overarching network of cloud computers, so that our online lives take on a life of their own.

Data about us, whether uniquely identifiable on its own or not, is created and collected both actively and passively. Online site visits record IP addresses, and linked platform log-ins can even extract friends lists without consent or any affirmative action from those friends.

Using a tool like Privacy Badger from the EFF gives you some insight into how many sites create new data about online behaviour once that synthesised self logs in, and then track your synthesised self across the Internet: how you move from page to page, with what referring and exit pages and URLs, what adverts you click on or ignore, platform types, number of clicks, cookies, invisible on-page gifs and web beacons. Data that computers see, interpret and act on better than we do.
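To make that concrete, here is a minimal, hypothetical sketch (not any real tracker’s code, and the field names are illustrative) of the kind of record a single page visit can yield, built only from data a browser routinely sends with each request:

```python
# Hypothetical sketch of a per-page-view tracking record. Everything below
# comes from data the browser volunteers with each request; no form is filled in.
def build_tracking_record(headers, page_url, referrer):
    """Assemble the kinds of fields described above from one page visit."""
    return {
        "page": page_url,
        "referrer": referrer,                             # the page you came from
        "user_agent": headers.get("User-Agent", ""),      # browser and platform type
        "cookies": headers.get("Cookie", ""),             # persistent tracking identifiers
        "ip": headers.get("X-Forwarded-For", "unknown"),  # approximate location
    }

record = build_tracking_record(
    {"User-Agent": "Mozilla/5.0", "Cookie": "uid=abc123"},
    page_url="https://example.com/article",
    referrer="https://search.example/?q=news",
)
```

Multiply one such record by every page on which the same third-party script is embedded, and the cross-site profile described above emerges without the user ever knowingly handing over anything.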

Those synthesised identities are tracked online, just as we move about a shopping mall offline.

Sir Tim Berners-Lee said this week there is a need to put “a fair level of data control back in the hands of people.” It is not merely a need but vital to our future flourishing, even our very survival. Data control is not about protecting a list of information or facts about ourselves and our identity for its own sake; it is about choosing who can exert influence and control over our life, our choices, and the future of democracy.

And while today that who may be companies, it is increasingly A.I. itself that has a degree of control over our lives, as decisions are machine made.

Understanding how the Internet uses people

We get the service, the web gets our identity and our behaviours. And in what is in effect a hidden slave trade, they get access to use our synthesised selves in secret, and forever.

This grasp of what the Internet is, what the web is, is key to getting a rounded view of children’s online safety. Namely, we need to get away from the sole focus of online safeguarding as about children’s use of the web, and also look at how the web uses children.

Online services use children to:

  • mine, exchange, repackage, and trade profile data, offline behavioural data (location, likes), and invisible Internet-use behavioural data (cookies, website analytics)
  • extend marketing influence in human decision-making earlier in life, even before children carry payment cards of their own,
  • enjoy the insights of parent-child relationships connected by an email account, sometimes a credit card, used as age verification or in online payments.

What are the risks?

Exploitation of identity and behavioural tracking not only puts our synthesised child at risk of exploitation, it puts our real-life child’s future adult identity and data integrity at risk. If we cannot know who holds the keys to our digital identity, how can we trust that systems and services will be fair to us, will not discriminate or defraud, or will not make errors that we cannot understand in order to correct them?

Leaks, losses and hacks abound, and manufacturers are slow to respond. Software that monitors children can also be used in coercive control. Organisations whose data are insecure can be held to ransom. Children’s products should do what we expect them to do and nothing more; there should be “no surprises” in how data are used.

Companies tailor and target their marketing activity to those identity profiles. Our data is sold on in secret without consent to data brokers we never see, who in turn sell us on to others who monitor, track and target our synthesised selves every time we show up at their sites, in a never-ending cycle.

And from exploiting the knowledge of our synthesised self, decisions are made by companies that target their audience, selecting which search results or adverts to show us, or hide, on which network sites and how often, to actively and quite invisibly nudge our behaviours.

Nudge misuse is one of the greatest threats to our autonomy, and with it democratic control of the society we live in. Who decides on the “choice architecture” that may shape another’s decisions and actions, and on what ethical basis? So once asked these authors, who now seem to want to be the decision makers themselves.

Thinking about Sir Tim Berners-Lee’s comments today on things that threaten the web, including how to address the loss of control over our personal data, we must frame it not as a user-led loss of control, but as autonomy taken by others: by developers, by product sellers, and by the biggest ‘nudge controllers’, the Internet giants themselves.

Loss of identity is near impossible to reclaim. Our synthesised selves are sold into unending data slavery and we seem powerless to stop it. Our autonomy and with it our self worth, seem diminished.

How can we protect children better online?

Safeguarding must include ending the data slavery of our synthesised self. I think of six things policy shapers need in order to tackle it.

  1. Understanding what ‘online’ and the Internet mean and how the web works – i.e. what data does a visit to a web page collect about the user and what happens to that data?
  2. Threat models and risk must go beyond the usual real-life protection issues, to those posed by undermining citizens’ autonomy, loss of public trust, loss of control over our identity, and misuse of nudge. Some of these are intrinsic to the current web business model or to government policy, unseen by site users and underestimated.
  3. On user regulation (age verification / filtering) we must confront the fact that, as a stand-alone step, it will not create a better online experience for the user: it will not prevent the misuse of our synthesised selves, and may increase risks. Regulation of misuse must shift the point of responsibility.
  4. Meaningful training in data privacy and its role in children’s safeguarding must be mandatory for anyone in contact with children
  5. Siloed thinking must go. Forward thinking must join the dots across Departments into a cohesive, inclusive digital strategy, and that doesn’t just mean ‘let’s join all of the data, all of the time’
  6. Respect our synthesised selves. Data slavery includes government misuse and must end if we respect children’s rights.

In the words of James T. Kirk, “the human adventure is just beginning.”

When our synthesised self is an inseparable blend of offline and online identity, every child is a synthesised child. And they are people. It is vital that government realises its obligation to protect rights to privacy, provision and participation under the Convention on the Rights of the Child, and addresses our children’s real online life.

Governments, policy makers, and commercial companies must not use children’s offline safety as an excuse in a binary trade off to infringe on those digital rights or ignore risk and harm to the synthesised self in law, policy, and practice.

If future society is to thrive we must do all that is technologically possible to safeguard the best of what makes us human in this blend; our free will.


Part 2 follows with thoughts specific to the upcoming regulations, the Digital Economy Bill and the Digital Strategy.

References:

[1] Internet of things WEF film, starting from 19:30

“What do an umbrella, a shark, a houseplant, the brake pads in a mining truck and a smoke detector all have in common?  They can all be connected online, and in this example, in this WEF film, they are.

“By 2024 more than 50% of home Internet traffic will be used by appliances and devices, rather than just for communication and entertainment…The IoT raises huge questions on privacy and security, that have to be addressed by government, corporations and consumers.”

[2] The government has today announced a “major new drive on internet safety”  [The Register, Martin, A. 27.02.2017]

[3] GDPR page 38 footnote (1) indicates the definition of Information Society Services as laid out in the Directive (EU) 2015/1535 of the European Parliament and of the Council of 9 September 2015 laying down a procedure for the provision of information in the field of technical regulations and of rules on Information Society services (OJ L 241, 17.9.2015, p. 1 and Annex 1)

image source: Startrek.com

A vanquished ghost returns as details of distress required in NHS opt out

It seems the ugly ghosts of care.data past were alive and well at NHS Digital this Christmas.

Old-style thinking, the top-down patriarchal ‘no one who uses a public service should be allowed to opt out of sharing their records, nor can people rely on their record being anonymised’, that you thought was vanquished, has returned with a vengeance.

The Secretary of State for Health, Jeremy Hunt, has reportedly done a U-turn on opt out of the transfer of our medical records to third parties without consent.

That backtracks on what he said in Parliament on January 25th, 2014 on opt out of anonymous data transfers, despite the right to object in the NHS constitution [1].

So what’s the solution? If the new opt out methods aren’t working, then back to the old ones and making Section 10 requests? But it seems the Information Centre isn’t keen on making that work either.

All the data the HSCIC holds is sensitive, and as such its release risks significant harm or distress to patients [2], so it shouldn’t be difficult to tell them to cease and desist when it comes to data about you.

But how is NHS Digital responding to people who make the effort to write directly?

Someone who “got a very unhelpful reply” is being made to jump through hoops.

If anyone asks that their hospital data should not be used in any format and passed to third parties, that’s surely for them to decide.

Let’s take the case study of a woman who spoke to me during the whole care.data debacle, who had been let down by the records system after rape. Her subsequent NHS records about her mental health care were inaccurate, and had led to her being denied the benefit of private health insurance at a new job.

Would she have to detail why selling her medical records would cause her distress? What level of detail is fair and who decides? The whole point is, you want to keep info confidential.

Should you have to state what you fear? “I fear future distress from what you might do to me”? Once you lose control of data, it’s gone. Based on past planning secrecy and ideas for the future, like mashing up health data with retail loyalty cards as suggested at Strata in November 2013 [from 16:00] [2], no wonder people are sceptical.

Given the long list of commercial companies, charities, think tanks and others to whom passing out our sensitive data puts us at risk, and given the Information Centre’s past record, HSCIC might be grateful it has only opt out requests to deal with, and not millions of medical ethics court summonses. So far.

HSCIC / NHS Digital has extracted our identifiable records and given them away, including for commercial product use, and continues to give them away without informing us. We’ve accepted Ministers’ statements that a solution would be found. Two years on, patience wears thin.

“Without that external trust, we risk losing our public mandate and then cannot offer the vital insights that quality healthcare requires.”

— Sir Nick Partridge, on publication of the audit report into 10% of the 3,059 releases made by the HSCIC between 2005 and 2013

— Andy Williams said, “We want people to be certain their choices will be followed.”

Jeremy Hunt said everyone should be able to opt out of having their anonymised data used. David Cameron did too when the plan was announced in 2012.

In 2014 the public was told there should be no more surprises. This latest response is not only a surprise but enormously disrespectful.

When you’re trying to rebuild trust, assuming we accept that is the aim, you can’t say one thing and do another. Perhaps the Department for Health doesn’t like the public’s answer on what the public wants from opt out, but that doesn’t make the DH view right.

Perhaps NHS Digital doesn’t want to deal with lots of individual opt out requests; that doesn’t make its refusal right.

Kingsley Manning recognised in July 2014 that the Information Centre “had made big mistakes over the last 10 years”, and that there was “a once-in-a-generation chance to get it right.”

I didn’t think I’d have to move into the next one before they fix it.

The recent round of 2016 public feedback was the same as for care.data 1.0: respect nuanced opt outs and you will have all the identifiable public interest research data you want. Solutions must be better for other uses, opt out requests must be respected without distressing patients further in the process, and anonymous must mean anonymous.

Pseudonymised data requests that go through the DARS process, so that a Data Sharing Framework Contract and Data Sharing Agreement are in place, are considered compliant with the ICO code of practice. Fine, but they are not anonymous. If DARS is still giving my family’s data to Experian, Harvey Walsh, and co, despite opt out, I’ll be furious.
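A toy example makes the distinction plain. Pseudonymisation replaces an identifier with a consistent token (the salted hash below is a hypothetical scheme for illustration, not how DARS actually works), so records about the same person still link across releases; true anonymisation would break that link:

```python
import hashlib

def pseudonymise(nhs_number: str, salt: str) -> str:
    """Replace an identifier with a stable token. The same input always
    yields the same token, so linkage, and with it re-identification
    risk, survives the transformation."""
    return hashlib.sha256((salt + nhs_number).encode()).hexdigest()[:12]

# Two datasets released under the same scheme...
hospital_token = pseudonymise("943 476 5919", "release-2017")
gp_token = pseudonymise("943 476 5919", "release-2017")

# ...still point at the same person: pseudonymised, not anonymous.
assert hospital_token == gp_token
```

Because the token is stable, anyone holding two pseudonymised releases can join them, and anyone who learns one (token, identity) pair can name the person in every release.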

The [Caldicott 2] Review Panel found “that commissioners do not need dispensation from confidentiality, human rights & data protection law.”

Neither do our politicians, their policies or ALBs.


[1] https://www.england.nhs.uk/ourwork/tsd/ig/ig-fair-process/further-info-gps/

“A patient can object to their confidential personal information from being disclosed out of the GP Practice and/or from being shared onwards by the HSCIC for non-direct care purposes (secondary purposes).”

[2] Minimum Mandatory Measures http://www.nationalarchives.gov.uk/documents/information-management/cross-govt-actions.pdf p7

The perfect storm: three bills that will destroy student data privacy in England

Lords have voiced criticism and concern at plans for ‘free market’ universities that will prioritise competition over collaboration and private interests over social good. But while both Houses have identified the institutional effects, they are yet to discuss the effects on individuals of a bill in which “too much power is concentrated in the centre”.

The Higher Education and Research Bill sucks personal data into the centre, as well as power. It creates an authoritarian panopticon of the people within the higher and further education systems. Section 1, parts 72-74, creates risks but offers no safeguards.

Applicants’ and students’ personal data is being shifted into a top-down management model, at the same time as the horizontal safeguards for its distribution are to be scrapped.

Through deregulation and the building of a centralised framework, these bills will weaken the purposes for which personal data are collected, and weaken existing requirements on consent for the uses to which the data may be put at national level. Without amendments, every student who enters this system will find their personal data used at the discretion of any future Secretary of State for Education, without safeguards or oversight, and forever. Goodbye privacy.

Part of the data extraction plans are for use in public interest research, in safe settings, with published purpose, governance, and benefit. These are well intentioned, and this year’s intake of students will have had to accept that use as part of the service, via the privacy policy.

But in addition, and separately, the Bill will permit data to be used at the discretion of the Secretary of State, which waters down and removes the nuances of consent, agreed today when applicants sign up to UCAS, over what data may or may not be used.

Applicants today are told in the privacy policy that they can consent separately to sharing their data with the Student Loans Company, for example. This Bill will remove that right when it permits all applicant data to be used by the State.

This removal of today’s consent process denies all students their rights to decide who may use their personal data beyond the purposes for which they permit its sharing.

And it explicitly overrides the express wishes registered by the 28,000 applicants, 66% of respondents to a 2015 UCAS survey, who said as an example, that they should be asked before any data was provided to third parties for student loan applications (or even that their data should never be provided for this).

Not only can future purposes be changed without limitation; combined with other legislation, namely the Digital Economy Bill in the Lords at the same time right now, this shift will pass personal data, together with DWP data and in connection with HMRC data, expressly to the Student Loans Company.

In just this one example, the Higher Education and Research Bill is being used as a man in the middle. But it will enable use of all the data for broad purposes, and if those expand in future, we’ll never know.

This change, far from making more data available to public interest research, shifts the balance of power between state and citizen and undermines the very fabric of its source of knowledge; the creation and collection of personal data.

Further, a number of amendments have been proposed in the Lords to clause 9 (the transparency duty) which raise more detailed privacy issues for all prospective students, concerns UCAS shares.

Why this lack of privacy by design is damaging

This shift takes away our control, and gives it to the State at the very time when ‘take back control’ is in vogue. These bills are building a foundation for a data Brexit.

If the public do not trust who will use their data and why, or are told that when they provide data they must waive any rights to its future control, they will withhold or fake data. 8% of applicants even said it would put them off applying through UCAS at all.

And without future limitation, what might be imposed is unknown.

This shortsightedness will ultimately damage data integrity, and the damage won’t come from education data in the Higher Education Bill alone. The Higher Education and Research Bill is just one of three bills sweeping through Parliament right now which together build a cumulative anti-privacy storm, whether labelled overtly as data sharing legislation or hidden in tucked-away clauses.

The Technical and Further Education Bill – Part 3

In addition to the entirely new applicant datasets moving from UCAS to the DfE under clauses 73 and 74 of the Higher Education and Research Bill, apprentice and FE student data already held by the Secretary of State for Education will see potentially broader use under the changed purposes of Part 3 of the Technical and Further Education Bill.

Unlike the Higher Education and Research Bill, it may not fundamentally change how the State gathers information on further education, but it has the potential to change how that information is used.

The change is a generalisation of purposes. Currently, subsection 1 of section 54 refers to “purposes of the exercise of any of the functions of the Secretary of State under Part 4 of the Apprenticeships, Skills, Children and Learning Act 2009”.

Therefore, the government argues, “it would not hold good in circumstances where certain further education functions were transferred from the Secretary of State to some combined authorities in England, which is due to happen in 2018.”

This is why clause 38 will amend that wording to “purposes connected with further education”.

Whatever the details of the reason, the purposes are broader.

Again, combined with the Digital Economy Bill’s open-ended purposes, it means the Secretary of State could agree to pass these data on to every other government department, a range of public bodies, and some private organisations.

The TFE Bill is at Report stage in the House of Commons on January 9, 2017.

What could possibly go wrong?

These loose purposes, with no future restrictions, no definition of the third parties the data could be given to or why, and no clear duty to consult the public or Parliament on future scope changes, are a repeat of similar legislative changes which have resulted in poor data practices using school pupil data in England, ages 2-19, since 2000.

Policy makers should consider whether the intent of these three bills is to give out identifiable, individual-level, confidential data of young people under 18 for commercial use without their consent, or to give journalists and charities access. Should it mean unfettered access by government departments and agencies, such as police and Home Office Removals Casework teams, without any transparent register of access, any oversight, or any accountability?

These are today’s uses by third parties of school children’s individual, identifiable and sensitive data from the National Pupil Database.

Uses of data not as statistics, but as named individuals, for interventions in individual lives.

If Home Secretaries past and present have put international students at the centre of plans to cut migration to the tens of thousands, and government refuses to take student numbers out of migration figures, despite their being seen as irrelevant to the substance of the numbers debate, this should be deeply worrying.

Will the MOU between the DfE and the Home Office Removals Casework team be a model for access to all student data held at the Department for Education, even all areas of public administrative data?

Hoping that the data transfers to the Home Office won’t result in the deportation of thousands we would not predict today may be naive.

Under the new open wording, the Secretary of State for Education might even decide to sell the nation’s entire Technical and Further Education student data to Trump University for the purposes of its ‘research’, to target marketing at UK students or institutions that may be potential US post-grad applicants. The Secretary of State will have the data simply because she “may require [it] for purposes connected with further education.”

And to think US buyers or others would not be interested is too late.

In 2015 Stanford University made a request of the National Pupil Database for both academic staff and students’ data. It was rejected. We know this only from the third party release register. Without any duty to publish requests, approved users or purposes of data release, where is the oversight for use of these other datasets?

If these are not the intended purposes of these three bills, and if there should be any limitation on purposes of use and future scope change, then safeguards and oversight need to be built into the face of the bills, to ensure data privacy is protected and to avoid repeating the same mistakes.

Hoping that the decision is always going to be, ‘they wouldn’t approve a request like that’, is not enough to protect millions of students’ privacy.

The three bills are a perfect privacy storm

As other Europeans seek to strengthen the fundamental rights of their citizens to take back control of their personal data under the GDPR coming into force in May 2018, the UK government is pre-emptively undermining ours in these three bills.

Young people, and data dependent institutions, are asking for solutions to show what personal data is held where, and used by whom, for what purposes. That buys into the benefit message that builds trust: showing that what you said you’d do with my data is what you did with my data. [1] [2]

Reality is that in post-truth politics it seems anything goes, on both sides of the Pond. So how will we trust what our data is used for?

2015-16 advice from the cross-party Science and Technology Committee suggested data privacy is unsatisfactory, “to be left unaddressed by Government and without a clear public-policy position set out”. We hear the need for data privacy debated around the use of consumer data, social media, and age verification. It is necessary to secure the public trust needed for long-term public benefit, and for the economic value derived from data to be achieved.

But the British government seems intent on shortsighted legislation which does entirely the opposite for its own use: in the Higher Education Bill, the Technical and Further Education Bill and in the Digital Economy Bill.

These bills share what Baroness Chakrabarti said of the Higher Education Bill in its Lords second reading on the 6th December, “quite an achievement for a policy to combine both unnecessary authoritarianism with dangerous degrees of deregulation.”

Unchecked, these Bills create the conditions needed for a catastrophic failure of public trust. They shift ever more personal data away from personal control, into the centralised control of the Secretary of State, for unclear purposes and use by undefined third parties. They jeopardise the collection and integrity of public administrative data.

To future-proof the immediate integrity of student personal data collection and use, the DfE’s reputation, and public and professional trust in DfE political leadership, action must be taken on safeguards and oversight, and should consider:

  • Transparency register: a public record of access, purposes, and benefits to be achieved from use
  • Subject Access Requests: providing the public with ways to access copies of their own data
  • Consent procedures for collection should be strengthened, and must not say one thing and do another
  • The ability to withdraw consent from secondary purposes should be built in by design, looking to the GDPR from 2018
  • The legislative purpose of intended current use by the Secretary of State, and its boundaries, should be made clear
  • Future purpose and scope changes should require consultation - data collected today must not be used quite differently tomorrow without scrutiny and the ability to opt out (i.e. population-wide registries of religion, ethnicity, disability)
  • A review or sunset clause
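To make the first point concrete: a transparency register is, at its simplest, a structured public log. The sketch below is entirely hypothetical (the field names and example values are mine, not drawn from any bill or from the National Pupil Database release register), but it shows the kind of information each entry would need to capture for meaningful oversight:

```python
# Hypothetical sketch of one entry in a public transparency register.
# Field names and values are illustrative only.
from dataclasses import dataclass, field, asdict
from typing import List

@dataclass
class RegisterEntry:
    requester: str          # named organisation requesting the data
    dataset: str            # which dataset was requested
    purpose: str            # stated purpose of use
    legal_basis: str        # statutory basis claimed for the release
    identifiable: bool      # individual-level identifiable data, or aggregates?
    approved: bool          # outcome of the request
    public_benefit: str     # benefit to be achieved from use
    data_items: List[str] = field(default_factory=list)

# Example (hypothetical) entry, publishable as a row in an open register.
entry = RegisterEntry(
    requester="Example University",
    dataset="National Pupil Database",
    purpose="Longitudinal research on attainment",
    legal_basis="(to be specified by the releasing department)",
    identifiable=False,
    approved=True,
    public_benefit="Published, peer-reviewed findings",
    data_items=["year_group", "attainment_band"],
)

print(asdict(entry))
```

Publishing every entry, approved or rejected, is what turns a release process into accountable oversight.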

If the legislation in these three bills pass without amendment, the potential damage to privacy will be lasting.


[1] http://www.parliament.uk/business/publications/written-questions-answers-statements/written-question/Commons/2016-07-15/42942/  Parliamentary written question 42942 on the collection of pupil nationality data in the school census starting in September 2016:   “what limitations will be placed by her Department on disclosure of such information to (a) other government departments?”

Schools Minister Nick Gibb responded on July 25th 2016:

“These new data items will provide valuable statistical information on the characteristics of these groups of children […] The data will be collected solely for internal Departmental use for the analytical, statistical and research purposes described above. There are currently no plans to share the data with other government Departments.”

[2] December 15, publication of the MOU between the Home Office Casework Removals Team and the DfE reveals that the previous agreement “did state that DfE would provide nationality information to the Home Office”, but that this was changed “following discussions” between the two departments. http://schoolsweek.co.uk/dfe-had-agreement-to-share-pupil-nationality-data-with-home-office/

The agreement was changed on 7th October 2016 to not pass nationality data over. It makes no mention of not using the data within the DfE for the same purposes.

Data for Policy: Ten takeaways from the conference

The knowledge and thinking on changing technology, the understanding of the computing experts and those familiar with data, must not stay within conference rooms and paywalls.

What role do data and policy play in a world of post-truth politics and press? How will young people become better informed for their future?

The Data for Policy conference this week brought together some of the leading names in academia and a range of technologists, government representatives, people from the European Commission and other global organisations, think tanks, civil society groups, companies, and individuals interested in data and statistics. Beyond the UK, speakers came from several other countries in Europe, from the US, South America and Australia.

The schedule was ambitious and wide-ranging in topics. There was brilliant thinking and applications of ideas. Theoretical and methodological discussions were outnumbered by the presentations that included practical applications or work in real-life scenarios using social science data, humanitarian data, urban planning, public population-wide administrative data from health, finance, documenting sexual violence and more. This was good.

We heard about lots of opportunities and applied projects where large datasets are being used to improve the world. But while I always come away from these events having learned something, and encouraged to learn more about the things I didn’t, I do wonder if the biggest challenges in data and policy aren’t still the simplest.

No matter how much information we have, we must use it wisely. I’ve captured ten takeaways, things I would like to see follow. This may not have been the forum for it.

Ten takeaways on Data-for-Policy

1. Getting beyond the Bubble

All this knowledge must reach beyond the bubble of academia, beyond a select few white male experts in well-off parts of the world, and get into the hands and heads of the many. Ways to do this must include reducing the cost of, or changing the pathways to, academic print access. Event and conference fees are also a barrier to many.

2. Context of accessibility and control

There was little discussion of the importance of context. The nuance of most of these subjects was too much for the length of the sessions, but I didn’t hear a single session mention the threats to data access and trust in data collection posed by surveillance, state censorship, restriction of access to data or information systems, or the editorial control of knowledge and news by Facebook and co. There was no discussion of the influence of machine manipulators: how bots change news or numbers and create fictitious followings.

Policy makers and the public are influenced by the media, post-truth or not. Policy makers in the UK government recently wrote, in response to challenge over a Statutory Instrument, that if Mumsnet wasn’t kicking up a fuss then they believed the majority of the public were happy. How are policy makers being influenced by press or social media statistics without oversight, or regulation for their accuracy?

Increasing data and technology literacy in policy makers is going to go far beyond improving an understanding of data science.

3. Them and Us

I feel a growing disconnect between those ‘in the know’ and those in ‘the public’. Perhaps that is a side-effect of my own growing understanding of how policy is made, but it goes wider. Those who talked about ‘the public’ did so without mentioning that attendees are all part of that public. Big data are often our data. We are the public.

Vast parts of the population feel left behind already by policy and government decision-making; divided by income, Internet access, housing, life opportunities, and the ability to realise our dreams.

How policy makers address this gulf in the short and long term both matter as a foundation for what data infrastructure we have access to, how well trusted it is, whose data are included and who is left out of access to the information or decision-making using it.

Researchers prevented from accessing data held by government departments, which perhaps fear it will be used to criticise rather than help improve the policy of the day, may be limiting our true picture of some of this divide and its solutions.

Equally, where data are used to implement top-down policy without public involvement, it seems a shame to ignore public opinion. I would like to have asked: does GDS, in its land survey work searching for free school sites, include surveys asking people whether they want a free school in their area at all?

4. There is no neutral

Global trust in politics is in tatters. Trust in the media is as bad. Neither appear to be interested across the world in doing much to restore their integrity.

All the wisdom in the world could not convince a majority in the 23rd June referendum that the UK should remain in the European Union. This unspoken context was perhaps an aside to most of the subjects of the conference, which went beyond the UK, but we cannot ignore that the UK is deep in political crisis on the world stage, and at home the Opposition seems to have gone into a tailspin.

What role do data and evidence have in post-truth politics?

It was clear in discussion that if I mentioned technology and policy in a political context, eyes started to glaze over. Politics should not interfere with the public interest, but it does, and cannot be ignored. In fact it is short-term political terms and the need for long-term vision that are perhaps most at odds in making good data policy plans.

The concept of public good is not uncomplicated. It is made more complex still if you factor in changes over time, and you cannot ignore that Trump or Turkey are not fictitious backdrops when considering who decides what the public good and policy priorities should be.

Researchers’ role in shaping public good is not only about being ethical in their own research, but about having the vision to put safeguards in place for how the knowledge they create is used.

5. Ethics is our problem, but who has the solution?

While many speakers touched on the common themes of ethics and privacy in data collection and analytics, saying this is going to be one of our greatest challenges, few addressed how, and who is taking responsibility and accountability for, making it happen in ways that are not left to big business and profit-making decision-takers.

It appears ethics played a more central role last year. A year later, we now have two new ethical bodies in the UK, at the UK Statistics Authority and at the Turing Institute. How they will influence the wider ethics issues in data science remains to be seen.

Legislation and policy are not keeping pace with the purchasing power or potential of the big players, the Googles and Amazons and Microsofts, and a government that sees anything resulting in economic growth as good, is unlikely to be willing to regulate it.

How technology can be used and how it should be used still seems a far off debate that no one is willing to take on and hold policy makers to account for. Implementing legislation and policy underpinned with ethics must serve as a framework for giving individuals insight into how decisions about them were reached by machines, or the imbalance of power that commercial companies and state agencies have in our lives that comes from insights through privacy invasion.

6. Inclusion and bias

Clearly this is one event in a world of many events that address similar themes, but I do hope that the unequal balance in representation across the many diverse aspects of being human is being addressed elsewhere. A wider audience must be inclusive. The talk by Jim Waldo on retaining data accuracy while preserving privacy was interesting, as it showed how de-identified data can create bias in results if the de-identified data are very different from the original. Gaps in data, especially in big population data which excludes certain communities, weren’t something I heard discussed as much.
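The point about de-identification distorting results can be shown with a toy example (the numbers and banding scheme here are mine, not from the talk): generalising exact values into bands, a common de-identification step, shifts even a simple statistic like the mean away from the original data’s.

```python
# Toy illustration: de-identification by generalisation can bias statistics.
# Hypothetical ages; each exact age is replaced by its 10-year band midpoint.
ages = [21, 22, 23, 24, 58, 59, 61, 62]

def generalise(age, band=10):
    """Replace an exact age with the midpoint of its 10-year band."""
    lower = (age // band) * band
    return lower + band / 2

original_mean = sum(ages) / len(ages)                       # 41.25
deidentified_mean = sum(generalise(a) for a in ages) / len(ages)  # 42.5

print(original_mean, deidentified_mean)
```

The shift here is small, but on skewed real-world distributions, or with heavier suppression and generalisation, analyses run on the de-identified release can diverge meaningfully from the truth.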

7. Commercial data sources

Government and governmental organisations appear to be starting to give significant weight to the use of commercial data and social media data sources. I guess any data seen as ‘freely available’ that can be mined seems valuable. I wonder, however, how this will shape the picture of our populations, with what measures of validity, and whether the data are comparable and offer reproducibility.

These questions will matter in shaping policy and what governments know about the public. And equally, they must consider those communities, whether in the UK or in other countries, that are not represented in these datasets, and how these gaps bias decision-making.

8. Data is not a panacea for policy making

Overall my take away is the important role that data scientists have to remind policy makers that data is only information. Nothing new. We may be able to access different sources of data in different ways, and process it faster or differently from the past, but we cannot rely on data of itself to solve the universal problems of the human condition. Data must be of good integrity to be useful and valuable. Data must be only one part of the library of resources to be used in planning policy. The limitations of data must also be understood. The uncertainties and unknowns can be just as important as evidence.

9. Trust and transparency

Regulation and oversight matter but cannot be the only solutions offered to concerns about shaping what is possible to do versus what should be done. Talking about protecting trust is not enough. Organisations must become more trustworthy if trust levels are to change; through better privacy policies, through secure data portability and rights to revoke consent and delete outdated data.

10. Young people and involvement in their future

What inspired me most were the younger attendees presenting posters, especially the PhD student using data to provide evidence of sexual violence in El Salvador and their passion for improving lives.

We are still not talking about how to protect and promote privacy in the Internet of Things, where sensors on every street corner in Smart Cities gather data about where we have been, what we buy and who we are with. Even our children’s toys send data to others.

I’m still as determined to convince policy makers that young people’s data privacy and digital self-awareness must be prioritised.

Highlighting the policy and practice failings in the niche area of the National Pupil Database serves to gather ideas from others on how policy and practice could be better. 20 million school children’s records is not a bad place to start to make data practice better.

The questions that seem hardest to move forward are the simplest: how to involve everyone in what data and policy may bring for the future, and not leave out certain communities through carelessness.

If the public is not encouraged to understand how our own personal data are collected and used, how can we expect to grow great data scientists of the future? What uses of data put good uses at risk?

And we must make sure we don’t miss other things, while data takes up the time and focus of today’s policy makers and great minds alike.


The illusion that might cheat us: ethical data science vision and practice

This blog post is also available as an audio file on soundcloud.


Anaïs Nin wrote in her 1946 diary of the dangers she saw in the growth of technology: expanding our potential for connectivity through machines, but diminishing our genuine connectedness as people. She could hardly have been more contemporary for today:

“This is the illusion that might cheat us of being in touch deeply with the one breathing next to us. The dangerous time when mechanical voices, radios, telephone, take the place of human intimacies, and the concept of being in touch with millions brings a greater and greater poverty in intimacy and human vision.”
[Extract from volume IV 1944-1947]

Echoes from over 70 years ago, can be heard in the more recent comments of entrepreneur Elon Musk. Both are concerned with simulation, a lack of connection between the perceived, and reality, and the jeopardy this presents for humanity. But both also have a dream. A dream based on the positive potential society has.

How will we use our potential?

Data is the connection we all have between us as humans and what machines and their masters know about us. The values with which those masters underpin their machine design will determine the effect that the machines, and the knowledge they deliver, have on society.

In seeking ever greater personalisation, a wider dragnet of data is putting together ever more detailed pieces of information about an individual person. At the same time data science is becoming ever more impersonal in how we treat people as individuals. We risk losing sight of how we respect and treat the very people whom the work should benefit.

Nin grasped the risk that a wider reach can mean more superficial depth. Facebook might be a model today for the large circle of friends you might gather, but how few you trust with confidences, with personal knowledge about your own personal life, and the privilege it is when someone chooses to entrust that knowledge to you. Machine data mining increasingly tries to get an understanding of depth, and may also add new layers of meaning through profiling, comparing our characteristics with others in risk stratification.

Data science, research using data, is often talked about as if it is something separate from using information from individual people. Yet it is all about exploiting those confidences.

Today, as the reach of what it is possible for a few people in institutions to gather about most of the public has grown, whether in scientific research or in surveillance of different kinds, we hear experts repeatedly talk of the risk of losing the valuable part: the knowledge, the insights that benefit us as a society if we can act upon them.

We might know more, but do we know any better? To use a well known quote from her contemporary, T S Eliot, ‘Where is the wisdom we have lost in knowledge? Where is the knowledge we have lost in information?’

What can humans achieve? We don’t yet know our own limits. What don’t we yet know? We have future priorities we aren’t yet aware of.

To be able to explore the best of what Nin saw as ‘human vision’ and Musk sees in technology, the benefits we have from our connectivity (our collaboration, our shared learning) need to be driven with an element of humility: accepting values that shape the boundaries of what we should do, while constantly evolving with what we could do.

The essence of this applied risk is that technology could harm you, more than it helps you. How do we avoid this and develop instead the best of what human vision makes possible? Can we also exceed our own expectations of today, to advance in moral progress?


OkCupid and Google DeepMind: Happily ever after? Purposes and ethics in datasharing

This blog post is also available as an audio file on soundcloud.


What constitutes the public interest must be set in a universally fair and transparent ethics framework if the benefits of research, whether in social science, health, education or more, are to be realised. That framework will provide a strategy for getting the prerequisite success factors right, ensuring research in the public interest is not only fit for the future, but thrives. There has been a climate change in consent. We need to stop talking about barriers that prevent data sharing and start talking about the boundaries within which we can share.

What is the purpose for which I provide my personal data?

‘We use math to get you dates’, says OkCupid’s tagline.

That’s the purpose of the site. It’s the reason people log in and create a profile, enter their personal data and post it online for others who are looking for dates to see. The purpose, is to get a date.

When over 68K OkCupid users registered for the site to find dates, they didn’t sign up to have their identifiable data used and published in ‘a very large dataset’ and onwardly re-used by anyone with unregistered access. The users’ data were extracted “without the express prior consent of the user […].”

Whether the registration consent purposes are compatible with the purposes to which the researcher put the data should be a simple enough question. Are the research purposes what the person signed up to, or would they be surprised to find out their data were used like this?

Questions the “OkCupid data snatcher”, now self-confessed ‘non-academic’ researcher, thought unimportant to consider.

But it appears in the last month, he has been in good company.

Google DeepMind, and the Royal Free, big players who do know how to handle data and consent well, paid too little attention to the very same question of purposes.

The boundaries of how the users of OkCupid had chosen to reveal information and to whom, have not been respected in this project.

Nor were these boundaries respected by the Royal Free London trust that gave out patient data for use by Google DeepMind with changing explanations, without clear purposes or permission.

The legal boundaries in these recent stories appear unclear or to have been ignored. The privacy boundaries deemed irrelevant. Regulatory oversight lacking.

The respectful ethical boundaries of consent to purposes, and of autonomy, have indisputably broken down, whether by a commercial organisation, a public body, or a lone ‘researcher’.

Research purposes

The crux of data access decisions is purposes. What question is the research to address – what is the purpose for which the data will be used? The intent by Kirkegaard was to test:

“the relationship of cognitive ability to religious beliefs and political interest/participation…”

In this case the question appears intended rather as a test of the data, not the data opened up to answer the question. While methodological studies matter, given the care and attention [or self-stated lack thereof] given to its extraction, and any attempt to be representative and fair, it would appear this is not the point of this study either.

The data doesn’t include profiles identified as heterosexual male, because ‘the scraper was’. It is also unknown how many users hide their profiles, “so the 99.7% figure [identifying as binary male or female] should be cautiously interpreted.”

“Furthermore, due to the way we sampled the data from the site, it is not even representative of the users on the site, because users who answered more questions are overrepresented.” [sic]

The paper goes on to say photos were not gathered because they would have taken up a lot of storage space and could be done in a future scraping, and

“other data were not collected because we forgot to include them in the scraper.”

The data are knowingly of poor quality, inaccurate and incomplete. The project cannot be repeated, as ‘the scraping tool no longer works’. The ethical and peer review process is unclear, and the research purpose is at best vague. We can certainly give someone the benefit of the doubt and say the intent appears to have been entirely benevolent, even if it is not clear what the intent was. I think it was clearly misplaced and foolish, but not malevolent.
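The overrepresentation the paper itself concedes (“users who answered more questions are overrepresented”) is a classic sampling-bias problem. A small simulation, with entirely hypothetical numbers, shows how a scrape whose inclusion probability grows with user activity inflates any trait correlated with that activity:

```python
# Sampling-bias sketch (hypothetical numbers): if inclusion probability is
# proportional to activity, traits correlated with activity are overstated.
import random

random.seed(0)

# Hypothetical population of 1,000 users: a trait score artificially
# correlated with the number of profile questions each user answered.
population = [{"questions": q, "trait": q / 100}
              for q in range(1, 101) for _ in range(10)]

true_mean = sum(u["trait"] for u in population) / len(population)  # 0.505

# Scrape-style sample: chance of being scraped proportional to activity.
weights = [u["questions"] for u in population]
sample = random.choices(population, weights=weights, k=2000)
sample_mean = sum(u["trait"] for u in sample) / len(sample)

print(round(true_mean, 3), round(sample_mean, 3))  # sample mean is inflated
```

Any conclusion drawn from such a sample describes the heavy answerers, not the site’s users, let alone the wider population.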

The trouble is, it’s not enough to say, “don’t be evil.” These actions have consequences.

When the researcher asserts in his paper that, “the lack of data sharing probably slows down the progress of science immensely because other researchers would use the data if they could,”  in part he is right.

Google and the Royal Free have tried more eloquently to say the same thing. It’s not research, it’s direct care, in effect, ignore that people are no longer our patients and we’re using historical data without re-consent. We know what we’re doing, we’re the good guys.

However the principles are the same, whether it’s a lone project or global giant. And they’re both wildly wrong as well. More people must take this on board. It’s the reason the public interest needs the Dame Fiona Caldicott review published sooner rather than later.

Just because there is a boundary to data sharing in place, does not mean it is a barrier to be ignored or overcome. Like the registration step to the OkCupid site, consent and the right to opt out of medical research in England and Wales is there for a reason.

We’re desperate to build public trust in UK research right now. So to assert that the lack of data sharing slows down the progress of science is misplaced, when it was getting ‘sharing’ wrong that caused the lack of trust in the first place, and that harms research.

A climate change in consent

There has been a climate change in public attitude to consent since care.data, clouded by the smoke and mirrors of state surveillance. It cannot be ignored. The EU GDPR supports it. Researchers may not like change, but there needs to be a corresponding adjustment in expectations and practice.

Without change, there will be no change. Public trust is low. As technology advances, if we continue to see commercial companies get this wrong, public trust will continue to falter unless broken things get fixed. Change for the better is possible. But it has to come from companies, institutions, and the people within them.

Like climate change, you may deny it if you choose to. But some things are inevitable and unavoidably true.

There is strong support for public interest research but that is not to be taken for granted. Public bodies should defend research from being sunk by commercial misappropriation if they want to future-proof public interest research.

The purposes for which people gave consent are the boundaries within which you have permission to use data; within those limits, they give you freedom to use the data. Purposes and consent are not barriers to be overcome.

If research is to win back public trust, developing a future-proofed, robust ethical framework for data science must be a priority today.

Commercial companies must overcome the low levels of public trust they have generated to date if they ask us to ‘trust us because we’re not evil‘. If you can’t rule out the use of data for other purposes, it’s not helping. If you delay independent oversight, it’s not helping.

This case study, and by contrast the recent Google DeepMind episode, demonstrate the urgency with which common expectations, oversight of applied ethics in research, who gets to decide what is ‘in the public interest’, and public engagement in data science must be worked out and made a priority, in the UK and beyond.

Boundaries in the best interest of the subject and the user

Society needs research in the public interest. We need good decisions made on what will be funded and what will not be, on what will influence public policy, and on where attention for change is needed.

To do this ethically, we all need to agree what is fair use of personal data, when data are closed and when they are open, what are direct and what are secondary uses, and how advances in technology are used when they present both opportunities for benefit and risks of harm to individuals, to society and to research as a whole.

The benefits of research are at risk of being compromised for the sake of arrogance, greed, or misjudgement, no matter the intent. Those benefits cannot come at any cost, or disregard public concern, or the price will be trust in all research itself.

In discussing this with social science and medical researchers, I realise not everyone agrees. For some, using deidentified data in trusted third-party settings poses such a low privacy risk that they feel the public should have no say in whether their data are used in research, as long as it’s ‘in the public interest’.

The DeepMind researchers and the Royal Free were confident that, even using identifiable data, this was the “right” thing to do, without consent.

In the Cabinet Office datasharing consultation, the parts that will open up national registries and share identifiable data more widely and with commercial companies are likewise presented as all the “right” thing to do, without consent.

How can researchers, society and government understand what good ethics in data science looks like, as technology permits ever more invasive or covert data mining and the current approach becomes desperately outdated?

Who decides where those boundaries lie?

“It’s research Jim, but not as we know it.” This is one aspect of data use that ethical reviewers, whether independents or commercial organisations, will need to deal with as we advance the debate on data science in the UK. Google said their work was not research. Is ‘OkCupid’ research?

If this research and data publication proves anything at all, and can offer lessons to learn from, it is perhaps these three things:

Who is accredited as a researcher or ‘prescribed person’ matters, if we are considering new datasharing legislation and, for example, who the UK government is granting access to millions of children’s personal data today. Your idea of a ‘prescribed person’ may not be the same as the rest of the public’s.

Researchers and ethics committees need to adjust to the climate change of public consent. Purposes must be respected in research particularly when sharing sensitive, identifiable data, and there should be no assumptions made that differ from the original purposes when users give consent.

Data ethics and laws are desperately behind data science technology. Governments, institutions, civil society, and society as a whole need to reach a common vision and leadership on how to manage these challenges. Who defines these boundaries that matter?

How do we move forward towards better use of data?

Our data and technology are taking on a life of their own, in space, which is another frontier, and in time, as data gathered in the past might be used for quite different purposes today.

The public are being left behind in the game-changing decisions made by those who deem they know best about the world we want to live in. We need a say in what shape society wants that to take, particularly for our children as it is their future we are deciding now.

How about an ethical framework for datasharing that supports a transparent public interest, which tries to build a little kinder, less discriminating, more just world, where hope is stronger than fear?

Working with people, with consent, with public support and transparent oversight shouldn’t be too much to ask. Perhaps it is naive, but I believe that with an independent ethical driver behind good decision-making, we could get closer to datasharing like that.

That would bring ‘Better use of data in government’.

Purposes and consent are not barriers to be overcome. Within these, shaped by a strong ethical framework, good data sharing practices can tackle some of the real challenges that hinder ‘good use of data’: training, understanding data protection law, communications, accountability and intra-organisational trust. More data sharing alone won’t fix these structural weaknesses in current UK datasharing, which are the really tough barriers to good practice.

How our public data will be used in the public interest will not be a destination with a well-defined happy ending, but a long-term process; it needs to be consensual, with a clear path to setting out together and achieving collaborative solutions.

While we are all different, I believe that society shares for the most part, commonalities in what we accept as good, and fair, and what we believe is important. The family sitting next to me have just counted out their money and bought an ice cream to share, and the staff gave them two. The little girl is beaming. It seems that even when things are difficult, there is always hope things can be better. And there is always love.

Even if some might give it a bad name.

********

img credit: flickr/sofi01/ Beauty and The Beast  under creative commons

On the Boundaries of Being Human and Big Data

Atlas, the robot created by Boston Dynamics, won hearts and minds this week as it stoically survived man being mean. Our collective human response was an emotional defence of the machine, and criticism of its unfair treatment by its tester.

Some on Twitter recalled the incident of Lord of The Flies style bullying by children in Japan that led the programmers to create an algorithm for ‘abuse avoidance’.

The concepts of fairness and of decision making algorithms for ‘abuse avoidance’ are interesting from perspectives of data mining, AI and the wider access to and use of tech in general, and in health specifically.

If the decision to avoid abuse can be taken out of an individual’s human hands and based on unfathomable amounts of big data, where are its limits when applied to human behaviour and activity?

When it is decided that an individual’s decision-making capability is impaired or has been forfeited, their consent may be revoked in their best interest.

Who has oversight of the boundaries of what is acceptable for one person, or for an organisation, to decide what is in someone else’s best interest, or indeed, the public interest?

Where these boundaries overlap – personal abuse avoidance, individual best interest and the public interest – and how society manages them, with what oversight, is yet to be widely debated.

The public will shortly be given the opportunity to respond to plans for the expansion of administrative datasharing in England through consultation.

We must get involved, and it must be the start of a debate and dialogue, not simply a tick-box on a done deal, if data derived from us are to be used as a platform for the future to “achieve great results for the NHS and everyone who depends on it.”

Administering applied “abuse avoidance” and Restraining Abilities

Administrative uses and secondary research using the public’s personal data are applied not only in health, but across the board of public bodies, including big plans for tech in the justice system.

An example in the news this week of applied tech and its restraint on human behaviour was ankle monitors. One type was abandoned by the MOJ at a cost of £23m on the same day that more funding for transdermal tags was announced in London.

The use of this technology as a monitoring tool should not of itself be a punishment. It is said compliance is not intended to affect the dignity of individuals who are being monitored, but through the collection of personal and health data it will ensure the deprivation of alcohol – avoiding its abuse for a person’s own good and in the public interest. Is it fair?

Abstinence orders might be applied to those convicted of crimes such as assault, being drunk and disorderly and drunk driving.

We’re yet to see much discussion of how these varying degrees of integration of tech with the human body, and human enhancement through robotic elements in our human lives, will happen.

How will the boundaries of what is possible and desirable be determined and by whom with what oversight?

What else might be considered as harmful as alcohol to individuals and to society? Drugs? Nicotine? Excess sugar?

As we wonder about the ethics of how humanoids will act and the aesthetics of how human they look, I wonder how humane are we being, in all our ‘public’ tech design and deployment?

Umberto Eco who died on Friday wrote in ‘The birth of ethics’ that there are universal ideas on constraints, effectively that people should not harm other people, through deprivation, restrictions or psychological torture. And that we should not impose anything on others that “diminishes or stifles our capacity to think.”

How will we as a society collectively agree what that should look like, how far some can impose on others, without consent?

Enhancing the Boundaries of Being Human

Technology might be used to impose bodily boundaries on some people, but tech can also be used for the enhancement of others, as in an example retweeted this week: the brilliant Angel Giuffria’s arm.

While the technology in this case is literally hands-on in its application, increasingly it is not the technology itself but the data that it creates or captures which enables action through data-based decision making.

Robots that are tiny may be given big responsibilities to monitor and report massive amounts of data. What if we could swallow them?

Data, if analysed and understood, become knowledge.

Knowledge can be used to inform decisions and take action.

So where are the boundaries of what data may be extracted,  information collated, and applied as individual interventions?

Defining the Boundaries of “in the Public Interest”

Where are boundaries of what data may be created, stored, and linked to create a detailed picture about us as individuals, if the purpose is determined to be in the public interest?

Who decides which purposes are in the public interest? What qualifies as research purposes? Who qualifies as meeting the criteria of ‘researcher’?

How far can research and interventions go without consent?

Should security services and law enforcement agencies always be entitled to get access to individuals’ data ‘in the public interest’?

That’s something Apple is currently testing in the US.

Should research bodies always be entitled to get access to individuals’ data ‘in the public interest’?

That’s something care.data tried and failed to assume the public supported, and it has yet to be re-tested. Re-testing is impossible before the opt-out promised over two years ago, in March 2014, is respected.

The question of how much data research bodies may be ‘entitled to’ will be tested again in the datasharing consultation in the UK.

Data already gathered may be used in research differently from how we consented to their use at collection. How this changes over time, and its potential for scope creep, can be seen in education. Pupil data have gone from the passive collection of names, to being given out to third parties, to use in national surveys, so far.

And what of the future?

Where is the boundary between access and use of data not in enforcement of acts already committed but in their prediction and prevention?

If you believe there should be an assumption of law enforcement access to data when data are used for prediction and prevention, what about health?

Should there be any difference between researchers’ access to data when data are used for past analysis and for use in prediction?

If ethics define the boundary between what is acceptable and where actions by one person may impose something on another that “diminishes or stifles our capacity to think” – that takes away our decision making capacity – that nudges behaviour, or acts on behaviour that has not yet happened, who decides what is ethical?

How does a public that is poorly informed about current data practices, become well enough informed to participate in the debate of how data management should be designed today for their future?

How Deeply Mined should our Personal Data be?

The application of technology, non-specific but not yet AI, was also announced this week in the Google DeepMind work in the NHS.

The co-founder of its first key launch app provided a report that established the operating framework for the Behavioural Insights Team set up by Prime Minister David Cameron.

A number of highly respected public figures have been engaged to act in the public interest as unpaid Independent Reviewers of Google DeepMind Health. It will be interesting to see what their role is and how transparent its workings and public engagement will be.

The recent consultation on the NHS gave overwhelming feedback that the public does not support the direction of current NHS change. Even having removed all responses associated with ‘lefty’ campaigns, the concerns listed on page 11 are consistent, including a request that the Government “should end further involvement of the private sector in healthcare”. It appears from the response that this engagement exercise will feed little into practice.

The strength of feeling should however be a clear message to new projects that people are passionate that equal access to healthcare for all matters and that the public wants to be informed and have their voices heard.

How will public involvement be ensured as complexity increases in these healthcare add-ons and changing technology?

Will Google DeepMind pave the way to a new approach to health research? A combination of ‘nudge’ behavioural insights, advanced neural networks, Big Data and technology is powerful. How will that power be used?

I was recently told that if new research is not pushing the boundaries of what is possible and permissible then it may not be worth doing, as it’s probably been done before.

Should anything that is new that becomes possible be realised?

I wonder how the balance will be weighted in requests for patient data and their application, in such a high profile project.

Will NHS Research Ethics Committees turn down in-house research proposals in hospitals that benefit the institution or advance its reputation? Will the HSCIC ever feel able to say no to data use by Google DeepMind?

Ethics committees safeguard the rights, safety, dignity and well-being of research participants, independently of research sponsors, whereas these representatives are not all independent of commercial supporters. Nor has the group claimed to be an ethics panel. But oversight is certainly needed.

The boundaries of ownership between what is seen to benefit commercial and state interests in modern health investment are perhaps more than blurred to an untrained eye. Genomics England – the government’s flagship programme giving commercial access to the genomes of 100,000 people – stockholding companies, data analytics companies, genome analytics companies, genome collection and human tissue research, and commercial and academic research often share directors, working partnerships and funders. That’s perhaps unsurprising given such a specialist small world.

It’s exciting to think of the possibilities if, “through a focus on patient outcomes, effective oversight, and the highest ethical principles, we can achieve great results for the NHS and everyone who depends on it.”

Where will an ageing society go, if medics can successfully treat more cancer for example? What diseases will be prioritised and others left behind in what is economically most viable to prevent? How much investment will be made in diseases of the poor or in countries where governments cannot afford to fund programmes?

What will we die from instead? What happens when some causes of ‘preventative death’ are deemed more socially acceptable than others? Where might prevention become socially enforced through nudging behaviour into new socially acceptable or ethical norms?

Don’t be Evil

Given the leading edge of the company and its curiosity-by-design to see how far “can we” will reach, “don’t be evil” may be very important. But “be good” might be better. Where is that boundary?

The boundaries of what ‘being human’ means and how Big Data will decide and influence that, are unclear and changing. How will the law and regulation keep up and society be engaged in support?

Data principles such as fairness, keeping data accurate, complete and up to date, not collecting excessive data, and retaining data no longer than necessary for the purpose, are being widely ignored or exempted under the banner of ‘research’.

Can data use retain a principled approach despite this? And if we accept commercial users profiting from public data, will those principles from academic research remain in practice?

Exempt from the obligation to give a copy of personal data to an individual on request if data are for ‘research’ purposes, data about us and our children are extracted and stored ‘without us’. Forever. That means in a future that we cannot see, but that Google DeepMind, among others, is designing.

Lay understanding, and that of many clinical professionals, is likely to be left far behind if advanced technologies and the use of big-data decision-making algorithms are hidden in black boxes.

Public transparency of the use of our data and future planned purposes are needed to create trust that these purposes are wise.

Data are increasingly linked and more valuable when identifiable.

Any organisation that wants to future-proof its reputational risk will make sure data collection and use today is with consent, since the outcomes derived in future are likely to be interventions for individuals or society. Catching up on consent will be hard unless it is designed in now.

A Dialogue on the Boundaries of Being Human and Big Data

Where the commercial, personal, and public interests are blurred, the highest ethical principles are going to be needed to ensure ‘abuse avoidance’ in the use of new technology, in increased data linkage and resultant data use in research of many different kinds.

How we as a society achieve the benefits of tech and datasharing, and where its boundaries lie in “the public interest”, needs public debate to co-design the direction we collectively want to take.

Once that is over, change needs to be supported by a method of oversight that is responsive to new technology, data use, and their challenges.

What a channel for ongoing public dialogue, challenge and potentially recourse might look like, should be part of that debate.

EU do the Hokey Cokey. In out, In out, shake it all about.

The IN and the OUT circles have formed and Boris is standing in the middle.

Speculation as to whether he means to be on the No team agrees that he doesn’t really mean no.

His past comments on staying in have more consistently pointed to his head saying that staying in is better for business.

Some are suggesting that his stance is neither in nor out, but ‘an unconvincing third way‘: no doesn’t mean no to staying in, but no to Cameron’s deal, and would in fact mean a second vote along the lines proposed by Dominic Cummings.

If so, is this breathtaking arrogance and a u-turn of unforeseeable magnitude? Had our PM not said before the last GE that he planned on stepping down before the end of the next Parliament, you could think so. But this way they cannot lose.

This is in fact bloody brilliant positioning by the whole party.

A yes vote underpins Cameron’s re-negotiation as ‘the right thing to do’, best for business and his own statesmanship while showing that we’re not losing sovereignty because staying in is on our terms.

Renegotiating our relationship with the EU was a key Conservative election promise.

This pacifies the majority of that part of the population who want out of some of the EU ‘controlling our country’ and being beholden to EU law, but keeps us stable and financially secure.

The hardline Out campaigners are seen as a bag of all-sorts that few are taking seriously. But then comes Boris.

So now there is some weight in the Out circle and, if the country votes ‘No’, a way to manage the outcome with a ready-made leader in waiting. But significantly, it’s not a consistent call for Out across the group. Boris is not spinning in the same clear ‘out’ direction as the Galloway group.

Boris can keep a foot in the circle saying his heart is pro-In and really wants In, but on better terms. He can lead a future party for Outers and Inners and, whatever the outcome, be seen to welcome all. Quite a gentleman’s agreement perhaps.

His Out just means out of some things, not others. Given all his past positioning and role as Mayor in the City of London, out wouldn’t mean wanting to risk any of the financial and business related bits.

So what does that leave?  Pay attention in his speech to the three long paragraphs on the Charter of Fundamental Human Rights.

His rambling explanation indirectly, and quite brilliantly, explained the bits he wants ‘out’ to mean. Out means being in the Boris-controlled circle, only getting out from those parts of EU directives that the party players don’t like: the bits where they get told what to do, or told off for doing something wrong, or for not playing nicely with the group.

For example, the human rights rulings and oversight of the CJEU, or views which are not aligned with the ECHR.

As Joshua Rozenberg wrote on sovereignty, “Human rights reform has been inextricably intertwined with renegotiating the UK’s membership of the EU. And it is all the government’s fault.”

Rozenberg writes that Mr Gove told the Lords constitution committee in December that David Cameron asked him whether “we should use the British Bill of Rights in order to create a constitutional long stop […] and whether the UK Supreme Court should be that body.”

“Our top judges were relabelled a “Supreme Court” not long ago; they’ve been urged to assert themselves against the European Court of Human Rights, and are already doing so against EU law”, commented Carl Gardner elsewhere.

The Gang of Six cabinet ministers are known for their anti-EU disaffection, most often attached to human rights: Michael Gove, Iain Duncan Smith, Chris Grayling, Theresa Villiers, Priti Patel and John Whittingdale, plus a further 20 junior ministers and potentially dozens of backbenchers.

We can therefore expect the Out campaign to present situations in which British ‘sovereignty’ was undermined by court rulings that some perceive as silly or seriously flawed.

Every case that appeared nonsensical, in which a British court convicted someone and was overturned ‘by Europe’, will be wheeled out by current Justice Secretary Mr Gove.

Every tougher ‘terrorist’-type case, in which human rights denied by a UK ruling were upheld, might be in the more ‘extreme’ remit of the former Justice Secretary, mentioned whenever Grayling makes his case for Out, especially where opinions may conflict with interpretations of the EU Charter.

Priti Patel has tough views on crime and punishment, reportedly being in favour of the death penalty.

IDS isn’t famous for a generous application of disability rights.

John Whittingdale gave his views on the present relationship with the EU and CJEU here in debate in 2015 and said (22:10) he was profoundly “concerned the CJEU is writing laws which we consider to be against our national interest.”

Data protection and privacy are about to get a new EU regulation that will strengthen some aspects of citizens’ data rights: things like the right to be informed what information is stored about us, or to have mistakes corrected.

Don’t forget, after all, that Mr Gove is the Education SoS who signed off giving away the confidential personal data of what is now 20 million children to commercial third parties from the National Pupil Database. Clearly not an Article 8 fan.

We are told that we are being over-reactive to our loss of rights to privacy, and over-generous in protections to people who don’t deserve them. Ignoring that rights are universal and indivisible, we are encouraged to see them as something that must be earned, and as such, something which may or may not be respected. And therefore can be removed.

Convince the majority of that, and legislation underpinning our rights will be easier to take away without enough mass outcry to make a difference.

To be clear, a no vote would make no actual legal difference, “Leaving the EU (if that’s what the people vote for) is NOT at all inconsistent with the United Kingdom’s position as a signatory to the European Convention on Human Rights (ECHR), a creature of the COUNCIL of EUROPE and NOT the European Union.” [ObiterJ]

But by conflating ‘the single market’, ‘the Lisbon Treaty’, and the ability to vindicate people’s rights under the 55-clause ‘Charter of Fundamental Human Rights’, Boris has made the no vote again equate conflated things: European Union membership = loss of sovereignty = a need to reduce the control or influence of all organisations seen as ‘European’ (even if, like the ECHR, they belong to the Council of Europe Convention signed post-WWII and long before EU membership), and all because we are a signatory to a package deal.

Boris has reportedly no love of ‘lefty academics’ standing up for international and human rights laws and their uppity lawyers in the habit of “vindicating the rights of their clients.”

Boris will bob in and out of both the IN group for business and the OUT group for sovereignty, trying not to fall out with anyone too much and giving serious Statesmanship to the injustices inflicted on the UK. There will be banter and back biting, but the party views will be put ahead of personalities.

And the public? What we vote won’t really matter. I think we’ll be persuaded to be IN, or to get a two-step Out-In.

Either way it will give the relevant party leader, present or future, the mandate to do what he wants. Our engagement is optional.

Like the General Election, the people’s majority viewed as a ‘mandate’ seems to have become a little confused with sign-off to dictate a singular directive, rather than to represent a real majority. It cannot be the latter, since the majority didn’t vote for the government that we have.

In this EU Referendum, No won’t mean No. It’ll mean a second vote, to be able to split the package of no-to-all-things into a no-to-some-things wrapped up in layers of ‘sovereignty’ discussion. Unpack them, and those things will be, for the most part, human rights things. How they will then be handled at a later date is utterly unclear, but the mandate will have been received.

Imagine if Boris can persuade enough of the undecideds that he is less bonkers than some of the EU rulings on rights. He’ll perhaps get an Out mandate, possibly meaning a second vote just to be sure, splitting off the parts everyone obviously wants to protect, the UK business interests, and allowing the government to negotiate an opt-out from legislation protecting human rights: things that may appear to make more people dependent on the state, contrary to the ideology of shrinking state support.

A long-promised review of the British Human Rights Act 1998 will inevitably follow, and only makes sense if we are first made exempt from the European umbrella.

Perhaps we will hear over the next four months more about what that might mean.

Either way, the Out group will I’m sure take the opportunity to air their views and demand a shake-up of where human rights laws are out of line with the shape of the future UK nation they wish to see us become.

Some suggest Boris has made a decision that will cost him his political career. I disagree. I think it’s incredibly clever. Not a conspiracy, simply clever party planning to make every outcome a win for the good of the party and the good of the nation,  and a nod to Boris as future leader in any case. After all, he didn’t actually say he wanted #Brexit, just reform.

It’s not all about Boris, but is staging party politics at its best, and simultaneously sets the scene for future change in the human rights debate.

Destination smart-cities: design, desire and democracy (Part four)

Who is using all this Big Data? What decisions are being made on the back of it that we never see?

In everyday life and in the press, it often seems that the general public does not understand data, and can easily be told things which we then misinterpret.

There are tools on social media influencing public discussions and leading conversations in a different direction from the one they had taken, and they operate without regulation.

It is perhaps meaningful that pro-reform Wellington School last week opted out of one of the greatest uses of Big Data sharing in the UK: league tables. Citing their failures, the school decided they were “in fact, a key driver for poor educational practice.”

Most often we cannot tell from the data provided whether they show what we are told Big Data should be telling us. And we can’t tell if the data are accurate, genuine and reliable.

Yet big companies are making big money selling the dream that Big Data is the key to decision making. Cumulatively, through a lack of skills to spot inaccuracy, and an inability to do the necessary interpretation, we’re being misled by what we find in Big Data.

Being misled is devastating for public trust, as the botched beginnings of care.data showed in 2014. Trust has come to be understood as vital for a future based on datasharing. Public involvement in how our data are used in Big Data in the future is needed in order to trust they are used well. And interpreting those data well is vital. Those lessons of the past and present must be learned, and not forgotten.

It’s time to invest some time in thinking about safeguarding trust in the future, in the unknown, and the unseen.

We need to be told which private companies like Cinven and FFT have copies of datasets like HES, the entire 62m national hospital records, or the NPD, our entire schools database population of 20 million, or even just its current cohort of 8+ million.

If the public is to trust the government and public bodies to use our data well, we need to know exactly how those data are used today, and what future plans others have for our personal data.

When we talk about public bodies sharing data they hold for administrative purposes, do we know which private companies this may mean in reality?

The UK government has big plans for big data sharing, sharing across all public bodies, some tailored for individual interventions.

While there are interesting opportunities for public benefit from at-scale systems, the public benefit is at risk not only from lack of trust in how systems gather data and use them, but that interoperability gets lost in market competition.

Openness and transparency can be absent in public-private partnerships until things go wrong. Given the scale of smart-cities, we must have more than hope that data management and security will not be one of those things.

But how will we know if new plans are designed well, or not?

Who exactly holds and manages those data and where is the oversight of how they are being used?

Using Big Data to be predictive and personal

How do we define “best use of data” in “public services” right across the board, in a world in which boundaries between private and public in the provision of services have become increasingly blurred?

UK researchers and police are already analysing big data for predictive factors at postcode level for those at risk of harm, for example in combining health and education data.

What has grown across the Atlantic is now spreading here. When I lived there I could already see some of what is deeply flawed.

When a system has been as institutionally racist in its policing and equity of punishment as it is in the US, years of cumulative data bias translate into ‘heat lists’ and mean that “communities of color will be systematically penalized by any risk assessment tool that uses criminal history as a legitimate criterion.”

How can we ensure British policing does not pursue flawed predictive policies and methodologies, without seeing them?

What transparency is there in our use of predictive prisons and justice data?

What oversight will there be of the planned increase in the use of satellite tags, and of biometric access in prisons?

What policies can we have in place to hold data-driven decision-making processes accountable?

What tools do we need to seek redress for decisions made using flawed algorithms that are apparently indisputable?

Is government truly committed to being open and talking about how far the nudge unit work is incorporated into any government predictive data use? If not, why not?

There is a need for a broad debate on the direction of big data and predictive technology, and whether the public understands and wants it. If we don’t understand, it’s time someone explained it.

If I can’t opt out of O2 picking up my travel data ad infinitum on the Tube, I will opt out of their business model and try to find a less invasive provider. If I can’t opt out of EE picking up my personal data as I move around Hyde Park, it won’t be them.

Most people just want to be left alone and their space is personal.

A public consultation on smart-technology, and its growth into public space and effect on privacy could be insightful.

Feed me Seymour?

With the encroachment of integrated smart technology over our cities – our roads, our parking, our shopping, our parks, our classrooms, our TV and our entertainment, even our children’s toys – surveillance and information sharing from systems we cannot see start defining what others may view, or decide, about us behind the scenes in everything we do.

As it expands city-wide, it will need to be watched closely if data are to be open for public benefit but not invade privacy, when “the data stored in this infrastructure won’t be confidential.”

If the destination of digital in all parts of our lives is smart-cities then we have to collectively decide, what do we want, what do we design, and how do we keep it democratic?

What price is our freedom to decide how far its growth should reach into public space and private lives?

The cost of smart cities to individuals and the public is not what it costs in investment made by private conglomerates.

Already the cost of smart technology is privacy inside our homes, our finances, and autonomy of decision making.

Facebook and social media may run algorithms we never see that influence our mood or decision making. Influencing that decision making is significant enough when it’s done through advertising encouraging us to decide which sausages to buy for our kids’ tea.

It is even more significant when you’re talking about influencing voting.

Whoever influences most voters wins an election. If we can’t see the technology behind the influence, have we also lost sight of how democracy is decided? The power behind the mechanics of the cogs of Whitehall may weaken inexplicably as computer-driven decisions from the tech companies’ hidden tools take hold.

What opportunity and risk to “every part of government” does ever expanding digital bring?

The design and development of smart technology that makes decisions for us and about us lies in the hands of large private corporations, not government.

That means the public-interest values that could be built in by design, and their protection and oversight, are currently outside our control.

There is no disincentive for companies that have taken private information that is none of their business and, quite literally, made it their business, to stop collecting ever more data about us. It is outside our control.

We must plan by design for the values we hope for, and for ethics, to be embedded in systems, in policies, and in public planning and oversight of service provision by all providers; and ensure that a fair framework of values is used when giving permission to private providers who operate in public spaces.

We must plan for transparency and interoperability.

We must plan by-design for the safe use of data that does not choke creativity and innovation but both protects and champions privacy as a fundamental building block of trust for these new relationships between providers of private and public services, private and public things, in private and public space.

If “digital is changing how we deliver every part of government,” and we want to “harness the best of digital and technology, and the best use of data to improve public services right across the board” then we must see integration in the planning of policy and its application.

Across the board “the best use of data” must truly value privacy, and enable us to keep our autonomy as individuals.

Without this, the cost of smart cities growing unchecked will be an ever-growing transfer of power to the funders behind corporations and campaign politics.

The ultimate price of this loss of privacy will be democracy itself.

****

This is the conclusion to a four-part set of thoughts: on smart technology and data from the Sprint16 session (part one); “Smart systems and Public Services” (part two); the design and development of smart technology making “The Best Use of Data”, in a UK company case study (part three); and this part four, “The Best Use of Data” used in predictions and the future.