Can new datasharing laws win social legitimacy, public trust and support without public engagement?

I’ve been struck by stories I’ve heard on the datasharing consultation, on data science, and on data infrastructures as part of ‘government as a platform’ (#GaaPFuture) in recent weeks. The audio recorded by the Royal Statistical Society on March 17th is excellent, and there were some good questions asked.

There were even questions from insurance-backed panellists about opening up more data for commercial users, calls for journalists to be treated as accredited researchers, and calls to include health data sharing. These are three things that some stakeholders, all users of data, feel are missing from the consultation – and possibly among those with the most widespread public concern and lowest levels of public trust. [1]

What I feel is missing from the consultation discussions:

  1. a representative range of independent public voices
  2. a compelling story of need – why tailored public services benefit the citizens from whom data are taken, not only the data users
  3. the impacts we expect to see in local government
  4. any cost/risk/benefit assessment of those impacts, including for citizens
  5. how the changes will be independently evaluated, given that some are to be reviewed

The Royal Statistical Society and the ODI have good summaries of their thoughts here, geared more towards the statistical and research aspects of data, infrastructure and the consultation.

I focus on the other strands, those that use identifiable data for targeted interventions: tailored public services, debt, fraud, and energy companies’ use. I think we talk too little of people, and of real needs.

Why the State wants more datasharing is not yet a compelling story, and the public need and benefit seem weak.

So far, the creation of new data intermediaries to give copies of our personal data to other public bodies – and let’s be clear that this often means via commercial representatives like G4S, Atos, management consultancies and more – has yet to convince me of a true public need for the people, versus wants from parts of the State.

What the consultation hopes to achieve is new powers in law, to give increased data sharing increased legal authority. However, this alone will not bring about the social legitimacy of datasharing that the consultation appears to seek through ‘open policy making’.

Legitimacy is badly needed if there is to be public and professional support for change and for increased use of our personal data held by the State; that support is missing today, as care.data starkly exposed. [2]

The gap between Social Legitimacy and the Law

Almost 8 months ago now, before I knew about the datasharing consultation work-in-progress, I suggested to BIS that there was an opportunity for the UK to drive excellence in public involvement in the use of public data by getting real engagement, through pro-active consent.

The carrot for this is achieving the goal that government wants – greater legal clarity, and the use of a significant number of consented people’s personal data for a complex range of secondary uses, as a secondary benefit.

It was ignored.

If some feel entitled to the right to infringe on citizens’ privacy through a new legal gateway because they believe the public benefit outweighs private rights, then they must also take on the increased risk of doing so, and a responsibility to do so safely. It is, in principle, a slippery slope. Any new safeguards and ethics for how this will be done are, however, unclear in those data strands which are for targeted individual interventions – especially if predictive.

Upcoming discussions on codes of practice [which have still to be shared] should demonstrate how this is to happen in practice, but codes are not sufficient. Laws which enable will be pushed to the borderline of what is legal, and beyond what is ethical.

In England, who would have thought that the 2013 changes permitting individual children’s data to be given to third parties [3] for educational purposes would mean giving highly sensitive, identifiable data to journalists without pupils’ or parents’ consent? The wording allows it. It is legal. However, it fails the Data Protection Act’s requirement of fair processing. Above all, it lacks social legitimacy and common sense.

In Scotland, there is current anger over the intrusive ‘named person’ laws, which lack both professional and public support and intrude on privacy. The concerns raised there should be lessons for England.

Common sense says laws must take into account social legitimacy.

We have been told at the open policy meetings that this change will not remove the need for informed consent. To be informed means creating the opportunity for proper communications, and knowing how you can use the service without coercion – i.e. not having to consent to secondary data uses in order to get the service – and knowing you can withdraw consent at any later date. How will that be offered, with practical ways of achieving the removal of data after sharing?
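What would honouring a later withdrawal actually require? As a thought experiment only – nothing like this is described in the consultation, and every name and structure here is hypothetical – it implies that each onward share is recorded against the consent it relied on, so that revocation can be propagated to every recipient. A minimal sketch in Python:

```python
from dataclasses import dataclass, field

@dataclass
class ConsentRecord:
    citizen_id: str
    purpose: str                    # the secondary use consented to
    shared_with: list = field(default_factory=list)  # onward recipients
    active: bool = True

class ConsentLedger:
    """Hypothetical registry: records each onward share so that a
    withdrawal can be propagated to every past recipient."""
    def __init__(self):
        self._records = {}

    def grant(self, citizen_id, purpose):
        self._records[(citizen_id, purpose)] = ConsentRecord(citizen_id, purpose)

    def share(self, citizen_id, purpose, recipient):
        rec = self._records[(citizen_id, purpose)]
        if not rec.active:
            raise PermissionError("consent withdrawn; sharing not permitted")
        rec.shared_with.append(recipient)

    def withdraw(self, citizen_id, purpose):
        rec = self._records[(citizen_id, purpose)]
        rec.active = False
        # every past recipient must now be told to delete their copy
        return [f"notify {r}: delete data for {rec.citizen_id}" for r in rec.shared_with]

ledger = ConsentLedger()
ledger.grant("p-001", "research")
ledger.share("p-001", "research", "Body A")
ledger.share("p-001", "research", "Body B")
print(ledger.withdraw("p-001", "research"))
# ['notify Body A: delete data for p-001', 'notify Body B: delete data for p-001']
```

The point of the sketch is the final step: withdrawal is only meaningful if the list of past recipients exists, and if each of them can be compelled to act on the deletion notice.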

The stick for change is the legal duty to fair processing that the recent 2015 CJEU ruling reiterated [4]. This is not just a nice-to-have, but State bodies’ responsibility to inform citizens when their personal data are used for purposes other than those for which the data were initially consented and given. New legislation will not remove this legal duty.

How will it be achieved without public engagement?

Engagement is not PR

Failure to act on what you hear from listening to the public is costly.

Engagement is not done *to* people; don’t think ‘explain why we need the data and its public benefit’ will work. Policy makers must engage with fears, and not seek to dismiss or diminish them, but acknowledge and mitigate them by designing technically acceptable solutions. Solutions that enable data sharing within a strong framework of privacy and ethics, not ones that see these concepts as barriers. Solutions that have social legitimacy because people support them.

Mr Hunt’s promised February 2014 opt-out of anonymised data being used in health research has yet to be put in place, and has had immeasurable costs in delayed public research and in public trust.

How long before people consider suing the DH, as data controller, for misuse? From where does the arrogance stem that decides to ignore the legal rights, moral rights and opinions of more people than ever voted for the Minister responsible for its delay?

This attitude is what failed care.data, and the harm is ongoing, to public trust and to confidence in researchers’ continued access to data.

The same failure was pointed out by the public members of the tiny Genomics England public engagement meeting two years ago, in March 2014, called to respond to concerns over the lack of engagement and the potential harm to existing research. The comms lead suggested that the new model of commercialisation of the human genome in England, to be embedded in the NHS by 2017 as standard clinical practice, was like steam trains in Victorian England opening up the country to new commercial markets. The analogy was felt by the lay attendees to be, and I quote, ‘ridiculous.’

Exploiting confidential personal data for the public good needs good two-way engagement if it is to win support, and what is said and agreed must be acted on if it is to be trustworthy.

Policy makers must take into account broad public opinion, and that is unlikely to be submitted to a Parliamentary consultation. (Personally, I first knew such processes existed only when care.data was brought before the Select Committee in 2014.) We already know what many in the public think about sharing their confidential data from the work on care.data, and from the objections to third-party access and to lack of consent. Just because some policy makers don’t like what was said doesn’t make that public opinion any less valid.

We must bring to the table the public voice from recent public engagement work on administrative datasharing [5], the voice of the non-research community, and the voice not of the stakeholders who will use the data, but of the ‘data subjects’ – the public whose data are to be used.

Policy Making must be built on Public Trust

Open policy making is not open just because it says it is. Who has been invited, who has participated, and how their views actually make a difference to content and implementation is what matters.

Adding controversial ideas at the last minute is terrible engagement; it makes the process less trustworthy and diminishes its legitimacy.

This last-minute change suggests some datasharing will be dictated despite critical views in the policy making, and without any public engagement. If so, we should ask policy makers: on what mandate?

Democracy depends on social legitimacy. Once you lose public trust, it is not easy to restore.

Can new datasharing laws win social legitimacy, public trust and support without public engagement?

In my next post I’ll look at some of the public engagement work done on datasharing to date, and think about ethics in how data are applied.

*************

References:

[1] The Royal Statistical Society data trust deficit

[2] “The social licence for research: why care.data ran into trouble,” by Carter et al.

[3] FAQs: Campaign for safe and ethical National Pupil Data

[4] CJEU Bara 2015 Ruling – fair processing between public bodies

[5] Public Dialogues using Administrative data (ESRC / ADRN)

img credit: flickr.com/photos/internetarchivebookimages/

A data sharing fairytale (3): transformation and impact

Part three: It is vital that the data sharing consultation is not seen in a silo, or even a set of silos each particular to its own stakeholder. To do it justice and ensure the questions that should be asked are answered, we must look instead at the whole story and the background setting. And we must ask each stakeholder, what does your happy ending look like?

Parts one and two, to follow, address public engagement and ethics; this part focuses on current national data practice, tailored public services, and the local impact of the change and transformation that will result.

What is your happy ending?

This data sharing consultation is gradually revealing to me how disjointed government appears in practice and strategy. Our digital future – a society that is more inclusive and more just, supported by better uses of technology and data in ‘dot everyone’ – will not happen if they cannot first join the dots across all of Cabinet thinking and good practice, and align policies that are out of step with each other.

Last Thursday night’s “Government as a Platform Future” panel discussion (#GaaPFuture) took me back to memories of my old job, working in business implementations of process and cutting-edge systems. Our finest hour was showing leadership that success would depend on neither. Success was down to local change management and communications, because change is about people, not the tech.

‘People’ in this data sharing consultation means the public, the staff of local government public bodies, and the people working at the national stakeholders – the UKSA (statistics strand), ADRN (de-identified research strand), Home Office (GRO strand), DWP (fraud and debt strands) and DECC (energy) – as well as staff at the national driver, the Cabinet Office.

I’ve attended two of the 2016 datasharing meetings, and am interested from three points of view: because I am directly involved in the de-identified data strand, because I campaign for privacy, and because I believe in public engagement.

As for engagement with civil society: after almost two years of involvement on three projects, and an almost ten-month pause in between, the projects had suddenly become six in 2016, so the most sensitive strands of the datasharing legislation have been the least openly discussed.

At the end of the first 2016 meeting, I asked one question.

How will local change management be handled and the consultation tailored to local organisations’ understanding and expectations of its outcome?

Why? Because a top-down data extraction programme from all public services opens up personal data as business intelligence at national level, from all local services’ interactions with citizens. Or at least, from those parts they have collected or may collect in future.

That means a change in how the process works today. Global business intelligence extractions are designed to make processes more efficient through reductions in current delivery, yet it is hard to see concrete public benefits for citizens that would differ from today. So why make this change in practice?

What it might mean, for example, is enabling the collection of all citizens’ debt information into one place, which would allow the service to centralise the chasing of debt and enforce its collection, outsourced to a single national commercial provider.
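To make that scenario concrete – and this is purely illustrative of the hypothetical above, not a description of any actual system; the bodies, identifiers and amounts are all invented – centralising debt data is, mechanically, just a merge of per-body records on a shared citizen identifier:

```python
from collections import defaultdict

# Invented per-body debt records: (citizen_id, amount_owed)
council_tax = [("C123", 420.00), ("C456", 80.50)]
housing_rent = [("C123", 1250.00)]
benefit_overpayments = [("C789", 310.25), ("C123", 95.00)]

def centralise(*sources):
    """Merge debt records from many public bodies into one
    national view, keyed on a shared citizen identifier."""
    totals = defaultdict(float)
    for source in sources:
        for citizen_id, amount in source:
            totals[citizen_id] += amount
    return dict(totals)

# One total per citizen, ready to hand to a single collection provider.
print(centralise(council_tax, housing_rent, benefit_overpayments))
# {'C123': 1765.0, 'C456': 80.5, 'C789': 310.25}
```

The simplicity of the merge is the point: once a shared identifier exists, the technical barrier to this kind of aggregation is trivial, and the only real constraints left are legal and ethical ones.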

So what does the future look like from the top? What is the happy ending for each strand that will be achieved, should this legislation be passed? What will success for each set of plans look like?

What will we stop doing, what will we start doing differently and how will services concretely change from today, the current state, to the future?

Most importantly, to understand its implications for citizens and staff, we should ask how this transformation will be managed well enough to see the benefits we are told it will deliver.

Can we avoid being left holding a pumpkin after the glitter of ‘use more shiny tech’ and the government’s love affair with the promises of Big Data wear off?

Look into the local future

Those on the panel at the GDS meeting this week with a vision of the future – the new local government model enabled by GaaP – also identified that there are implications for the potential loss of local jobs, and that “turkeys won’t vote for Christmas”. So who is packaging this change to make it successfully deliverable?

If we can’t be told easily in consultation, then it is not a clear enough policy to deliver. If there is a clear end-state, then we should ask what the implications in practice are going to be.

It is vital that the data sharing consultation is not seen in a silo, or even a set of silos each particular to its own stakeholder – as simply about copying datasets to share them more widely – but that we look instead at the whole story and the background setting.

The Tailored Reviews: public bodies guidance suggests massive reform of local government, looking for additional savings, cuts to back-office functions, and commercial plans. It asks: “What workforce reductions have already been agreed for the body? Is there potential to go further? Are these linked to digital savings referenced earlier?”

Options include ‘abolish, move out of central government, commercial model, bring in-house, merge with another body.’

So where is local government public bodies’ engagement with change management plans in the datasharing consultation as a change process? Does it not exist?

I asked at the end of the first datasharing meeting in January and everyone looked a bit blank. A question ‘to take away’ turned into nothing.

Yet to make this work, the buy-in of local public bodies is vital. So why skirt round this issue in local government, if there are plans to address it properly?

If there are none, then with all the data in the world public service delivery will not be improved, because the friction is not interference by consent or privacy issues, but working practices.

If the idea is to avoid this ‘friction’ by removing it, then where is the change management plan for public services and our public staff?

Trust depends on transparency

John Pullinger, our National Statistician, also said this week that on datasharing we need a social charter on data to develop trust.

Trust can only be built between public and state if the organisations, and all the people in them, are trustworthy.

To implement process change successfully, the people involved in the affected organisations – the staff – must trust that change will mean positive improvement, with the risks explained.

For the public, the defined levels of data access, privacy protection and scope limitation that this new consultation will permit in practice are clearly going to be vital to establish if the public is to trust its purposes.

The consultation does not do this; there is no draft code of conduct yet, and no one is willing to define ‘research’ or ‘public interest’.

Public interest models, or a ‘charter’ for the collection and use of research data in health, concluded that for ethical purposes time also mattered: benefits must be specific, measurable, attainable, relevant and time-bound. So let’s talk about the intended end state that is to be achieved from these changes, and identify how their benefits are to meet those objectives – change without an intended end state will almost never be successful; if you don’t know what it looks like, you can’t steer towards it.

For public trust, that means scope boundaries. Sharing now, under today’s laws and ethics, is only fully meaningful if we trust that today’s governance, ethics and safeguards will change in future to the benefit of the citizen, not towards ever greater powers for the state at the expense of the individual. Where is scope defined?

There is very little information about where the limits would be on what data could not be shared, or on when it would not be possible to share without explicit consent. Permissive powers put the onus onto the data controller to share, and given that ‘a new law says you should share’ would become the mantra, this is likely to mean less individual accountability. Where are those lines to be drawn, to support the staff and the public, the data user and the data subject?

So to summarise, so far I have six key questions:

  • What does your happy ending look like for each data strand?
  • How will bad practices which conflict with the current consultation proposals be stopped?
  • How will the ongoing balance of use of data for government purposes, privacy and information rights be decided and by whom?
  • In what context will the ethical principles be shaped today?
  • How will the transformation from the current to that future end state be supported, paid for and delivered?
  • Who will oversee new policies and ensure good data science practices, protection and ethics are applied in practice?

This datasharing consultation is not entirely about something new, but about expansion of what is done already. And in some places what is done already is done very badly.

How will the old stories and new be reconciled?

Wearing my privacy and public engagement hats, here’s an idea.

Perhaps before the central State starts collecting more, sharing more, and using more of our personal data for ‘tailored public services’ and more, the government should ask for a data amnesty?

It’s time to draw a line under bad practice. Clear out the ethics drawers of bad historical practice, and start again with a fresh chapter. Because current practices are not future-proof, and covering them up in the language of ‘better data ethics’ will fail.

The consultation assures us that: “These proposals are not about selling public or personal data, collecting new data from citizens or weakening the Data Protection Act 1998.”

However, government does already sell our personal data, from at least BIS. How will these contradictory positions across all Departments be resolved?

The left hand gives out de-identified data in safe settings for public benefit research, while the right hand gives out over 10 million records to the Telegraph and The Times without parental or schools’ consent. Only in la-la land are these both considered ethical.

Will somebody at the data sharing meeting please ask, “when will this stop?” It is wrong. These are our individual children’s identifiable personal data. Stop giving them away to press and charities and commercial users without informed consent. It’s ludicrous. Yet it is real.

Policy makers should provide an assurance there are plans for this to change as part of this consultation.

Without it, the consultation line about commercial use is at best disingenuous, at worst a bare-cheeked lie.

“These powers will also ensure we can improve the safe handling of citizen data by bringing consistency and improved safeguards to the way it is handled.”

Will it? Show me how and I might believe it.

Privacy, it was said at the RSS event, is the biggest concern in this consultation:

“includes proposals to expand the use of appropriate and ethical data science techniques to help tailor interventions to the public”

“also to start fixing government’s data infrastructure to better support public services.”

The techniques need to be outlined – what do they mean? – and practices fixed now, because many stand on shaky legal ground. These privacy issues have accumulated under governments of different parties over the last ten years, so the problems are non-partisan, but they need practical fixes.

Today, less-than-transparent international agreements push ‘very far-reaching chapters on the liberalisation of data trading’ while, according to the European Court of Justice, these practices lack a solid legal basis.

Today our government already gives our children’s personal data to commercial third parties and sells our higher education data without informed consent, while the DfE and BIS both know they fail fair processing, and know its potential consequences: the European Court reaffirmed in 2015 that “persons whose personal data are subject to transfer and processing between two public administrative bodies must be informed in advance”, in its Judgment in Case C-201/14.

In a time that actively cultivates universal public fear, it is time for individuals to be brave and ask the awkward questions, because you either solve them up front or hit the problems later. The child who stood up and said the Emperor had no clothes was right.

What’s missing?

The consultation conversation will only be genuine once the policy makers acknowledge and address solutions regarding:

  1. those data practices that are currently unethical and must change
  2. how the tailored public services datasharing legislation will shape the delivery of government services’ infrastructure and staffing, as well as the service to the individual member of the public.

If we start by understanding what the happy ending looks like, we are much more likely to arrive there, and to know how to measure success.

The datasharing consultation engagement, the ethics of data science, and the impact on data infrastructures as part of ‘government as a platform’ need to be seen as one whole, joined-up story if we are each to consider what success, for us as stakeholders, looks like.

We need to call out current data failings and things that are missing, to get them fixed.

Without a strong, consistent ethical framework you risk three things:

  1. data misuse and loss of public trust
  2. data non-use, because your staff don’t trust they’re doing it right
  3. data becoming a toxic asset

The upcoming meetings should address this and ask practically:

  1. How the codes of conduct, and ethics, are to be shaped, and by whom, if outwith the consultation?
  2. What is planned to manage and pay for the future changes in our data infrastructures, i.e. the models of local government delivery?
  3. What is the happy ending that each data strand wants to achieve through this and how will the success criteria be measured?

Public benefit is supposed to be at the heart of this change. For UK statistics and for academic public benefit research, the benefits are clear.

For some of the other strands, local public benefits that outweigh the privacy risks and do not jeopardise public trust seem like magical unicorns dancing in the land far, far away of centralised government; hard to imagine, and even harder to capture.

*****

Part one: A data sharing fairytale: Engagement
Part two: A data sharing fairytale: Ethics
Part three: A data sharing fairytale: Impact (this post)

Tailored public bodies review: Feb 2016

img credit: Hermann Vogel illustration ‘Cinderella’

On the Boundaries of Being Human and Big Data

Atlas, the robot created by Boston Dynamics, won hearts and minds this week as it stoically survived man being mean. Our collective human response was an emotional defence of the machine, and criticism of its unfair treatment by its tester.

Some on Twitter recalled the incident of Lord of the Flies-style bullying of a robot by children in Japan, which led the programmers to create an algorithm for ‘abuse avoidance’.

The concepts of fairness and of decision making algorithms for ‘abuse avoidance’ are interesting from perspectives of data mining, AI and the wider access to and use of tech in general, and in health specifically.

If the decision to avoid abuse can be taken out of an individual’s human hands and based on unfathomable amounts of big data, where are its limits when applied to human behaviour and activity?

When it is decided that an individual’s decision-making capability is impaired or has been forfeited, their consent may be revoked in their best interest.

Who has oversight of the boundaries of what is acceptable for one person, or for an organisation, to decide what is in someone else’s best interest, or indeed, the public interest?

Where these boundaries overlap – personal abuse avoidance, individual best interest and the public interest – and how society manages them, with what oversight, is yet to be widely debated.

The public will shortly be given the opportunity to respond to plans for the expansion of administrative datasharing in England through consultation.

We must get involved, and it must be the start of a debate and dialogue, not simply a tick-box on a done deal, if data derived from us are to be used as a platform for the future, to “achieve great results for the NHS and everyone who depends on it.”

Administering applied “abuse avoidance” and Restraining Abilities

Administrative uses and secondary research using the public’s personal data are applied not only in health, but across the board of public bodies, including big plans for tech in the justice system.

An example in the news this week of applied tech and its restraint on human behaviour was ankle monitors: one type was abandoned by the MOJ at a cost of £23m on the same day that more funding for transdermal tags was announced in London.

The use of this technology as a monitoring tool should not of itself be a punishment. It is said that compliance is not intended to affect the dignity of the individuals being monitored, but that through the collection of personal and health data it will ensure the deprivation of alcohol – avoiding its abuse for a person’s own good and in the public interest. Is it fair?

Abstinence orders might be applied to those convicted of crimes such as assault, being drunk and disorderly and drunk driving.

We’re yet to see much discussion of how these varying degrees of integration of tech with the human body, and of human enhancement, will happen through robotic elements in our human lives.

How will the boundaries of what is possible and desirable be determined and by whom with what oversight?

What else might be considered as harmful as alcohol to individuals and to society? Drugs? Nicotine? Excess sugar?

As we wonder about the ethics of how humanoids will act and the aesthetics of how human they look, I wonder how humane are we being, in all our ‘public’ tech design and deployment?

Umberto Eco, who died on Friday, wrote in ‘The birth of ethics’ that there are universal ideas on constraints – effectively, that people should not harm other people through deprivation, restrictions or psychological torture – and that we should not impose anything on others that “diminishes or stifles our capacity to think.”

How will we as a society collectively agree what that should look like, how far some can impose on others, without consent?

Enhancing the Boundaries of Being Human

Technology might be used to impose bodily boundaries on some people, but tech can also be used for the enhancement of others – as in the brilliant Angel Giuffria’s arm, retweeted this week.

While the technology in this case is literally hands-on in its application, increasingly it is not the technology itself but the data that it creates or captures which enables action through data-based decision making.

Robots that are tiny may be given big responsibilities to monitor and report massive amounts of data. What if we could swallow them?

Data, if analysed and understood, become knowledge.

Knowledge can be used to inform decisions and take action.

So where are the boundaries of what data may be extracted,  information collated, and applied as individual interventions?

Defining the Boundaries of “in the Public Interest”

Where are boundaries of what data may be created, stored, and linked to create a detailed picture about us as individuals, if the purpose is determined to be in the public interest?

Who decides which purposes are in the public interest? What qualifies as research purposes? Who qualifies as meeting the criteria of ‘researcher’?

How far can research and interventions go without consent?

Should security services and law enforcement agencies always be entitled to get access to individuals’ data ‘in the public interest’?

That’s something Apple is currently testing in the US.

Should research bodies always be entitled to get access to individuals’ data ‘in the public interest’?

That’s something care.data assumed the public supported, failed on, and has yet to re-test – impossible before respecting the opt-out that was promised over two years ago, in March 2014.

The question of how much data research bodies may be ‘entitled to’ will be tested again in the UK datasharing consultation.

How data already gathered are used in research may differ from what we consented to at collection. How this changes over time, and its potential for scope creep, can be seen in education: pupil data have gone from passive collection of names, to use in national surveys, to being given out to third parties – so far.

And what of the future?

Where is the boundary between access and use of data not in enforcement of acts already committed but in their prediction and prevention?

If you believe there should be an assumption of law enforcement access to data when data are used for prediction and prevention, what about health?

Should there be any difference between researchers’ access to data when data are used for past analysis and for use in prediction?

If ethics define the boundary between what is acceptable and where actions by one person may impose something on another that “diminishes or stifles our capacity to think” – that takes away our decision-making capacity, that nudges behaviour, or that acts on behaviour which has not yet happened – who decides what is ethical?

How does a public that is poorly informed about current data practices, become well enough informed to participate in the debate of how data management should be designed today for their future?

How Deeply Mined should our Personal Data be?

The application of technology – unspecified, but not yet AI – was also announced this week in the Google DeepMind work in the NHS.

The co-founder of its first key launch app provided a report that established the operating framework for the Behavioural Insights Team set up by Prime Minister David Cameron.

A number of highly respected public figures have been engaged to act in the public interest as unpaid Independent Reviewers of Google DeepMind Health. It will be interesting to see what their role is and how transparent its workings and public engagement will be.

The recent consultation on the NHS gave overwhelming feedback that the public does not support the direction of current NHS change. Even having removed all responses associated with ‘lefty’ campaigns, the concerns listed on page 11 are consistent, including a request that the Government “should end further involvement of the private sector in healthcare”. It appears from the response that this engagement exercise will feed little into practice.

The strength of feeling should however be a clear message to new projects that people are passionate that equal access to healthcare for all matters and that the public wants to be informed and have their voices heard.

How will public involvement be ensured as complexity increases in these healthcare add-ons and changing technology?

Will Google DeepMind pave the way to a new approach to health research? A combination of ‘nudge’ behavioural insights, advanced neural networks, Big Data and technology is powerful. How will that power be used?

I was recently told that if new research is not pushing the boundaries of what is possible and permissible then it may not be worth doing, as it’s probably been done before.

Should anything that is new that becomes possible be realised?

I wonder how the balance will be weighted in requests for patient data and their application, in such a high profile project.

Will NHS Research Ethics Committees turn down in-house research proposals in hospitals that benefit the institution or advance its reputation? Will the HSCIC ever feel able to say no to data use by Google DeepMind?

Ethics committees safeguard the rights, safety, dignity and well-being of research participants independently of research sponsors, whereas these representatives are not all independent of commercial supporters. And the panel has not claimed it is trying to be an ethics committee. But oversight is certainly needed.

The boundaries of ownership between what is seen to benefit commercial and state interests in modern health investment are perhaps more than blurred to an untrained eye. In and around Genomics England – the government’s flagship programme giving commercial access to the genomes of 100K people – stockholding companies, data analytics companies, genome analytics companies, genome collection and human tissue research, commercial and academic research, often share directors, working partnerships and funders. That’s perhaps unsurprising given such a specialist small world.

It’s exciting to think of the possibilities if, “through a focus on patient outcomes, effective oversight, and the highest ethical principles, we can achieve great results for the NHS and everyone who depends on it.”

Where will an ageing society go, if medics can successfully treat more cancer for example? What diseases will be prioritised and others left behind in what is economically most viable to prevent? How much investment will be made in diseases of the poor or in countries where governments cannot afford to fund programmes?

What will we die from instead? What happens when some causes of ‘preventative death’ are deemed more socially acceptable than others? Where might prevention become socially enforced through nudging behaviour into new socially acceptable or ethical norms?

Don’t be Evil

Given the leading edge of the company and its curiosity-by-design to see how far “can we” will reach, “don’t be evil” may be very important. But “be good” might be better. Where is that boundary?

The boundaries of what ‘being human’ means and how Big Data will decide and influence that, are unclear and changing. How will the law and regulation keep up and society be engaged in support?

Data principles – such as fairness; keeping data accurate, complete and up to date; ensuring data are not excessive; and retaining data for no longer than necessary for the purpose – are being widely ignored or exempted under the banner of ‘research’.

Can data use retain a principled approach despite this? And if we accept commercial users profiting from public data, will those principles from academic research remain in practice?

Exempt from the obligation to give a copy of personal data to an individual on request if the data are for ‘research’ purposes, data about us and our children are extracted and stored ‘without us’. Forever. That means into a future that we cannot see, but that Google DeepMind, among others, is designing.

Lay understanding, and that of many clinical professionals, is likely to be left far behind if advanced technologies and the use of big data decision-making algorithms are hidden in black boxes.

Public transparency about the use of our data, and about future planned purposes, is needed to create trust that these purposes are wise.

Data are increasingly linked and more valuable when identifiable.

Any organisation that wants to future-proof its reputational risk will make sure data collection and use today is with consent, since the outcomes derived in future are likely to be interventions for individuals or society. Catching up on consent will be hard unless it is designed in now.

A Dialogue on the Boundaries of Being Human and Big Data

Where the commercial, personal, and public interests are blurred, the highest ethical principles are going to be needed to ensure ‘abuse avoidance’ in the use of new technology, in increased data linkage and resultant data use in research of many different kinds.

How we as a society achieve the benefits of tech and datasharing, and where its boundaries lie in “the public interest”, needs public debate to co-design the direction we collectively want to take.

Once that is over, change needs to be supported by a method of oversight that is responsive to new technology, data use, and their challenges.

What a channel for ongoing public dialogue, challenge and potentially recourse might look like, should be part of that debate.

EU do the Hokey Cokey. In out, In out, shake it all about.

The IN and the OUT circles have formed and Boris is standing in the middle.

Speculation as to whether he means to be on the No team has settled on agreement that he doesn’t really mean no.

His past comments on staying in have more consistently pointed to his head saying that staying in is better for business.

Some are suggesting that his stance is neither in nor out, but ‘an unconvincing third way’: no doesn’t mean no to staying in, but no to Cameron’s deal, and would in fact mean a second vote along the lines of Dominic Cummings’ proposal.

If so, is this breathtaking arrogance and a U-turn of unforeseeable magnitude? Had our PM not said before the last GE that he planned on stepping down before the end of the next Parliament, you could think so. But this way they cannot lose.

This is in fact bloody brilliant positioning by the whole party.

A yes vote underpins Cameron’s re-negotiation as ‘the right thing to do’, best for business and his own statesmanship while showing that we’re not losing sovereignty because staying in is on our terms.

Renegotiating our relationship with the EU was a key Conservative election promise.

This pacifies the majority of that part of the population which wants out of some of the EU ‘controlling our country’ and our being beholden to EU law, but keeps us stable and financially secure.

The hardline Out campaigners are seen as a bag of all-sorts that few take seriously. But then comes Boris.

So now there is some weight in the Out circle and, if the country votes No, a way to manage the outcome with a ready-made leader in waiting. But significantly, it’s not a consistent call for Out across the group. Boris is not spinning in the same clear ‘out’ direction as the Galloway group.

Boris can keep a foot in the circle, saying his heart is pro-In and really wants In, but on better terms. He can lead a future party for Outers and Inners and, whatever the outcome, be seen to welcome all. Quite a gentleman’s agreement, perhaps.

His Out just means out of some things, not others. Given all his past positioning and his role as Mayor in the City of London, out wouldn’t mean wanting to risk any of the financial and business-related bits.

So what does that leave?  Pay attention in his speech to the three long paragraphs on the Charter of Fundamental Human Rights.

His rambling explanation indirectly conveyed quite brilliantly the bits he wants ‘out’ to mean. Out means, in the Boris-controlled circle, only getting out from those parts of EU directives that the party players don’t like: the bits where they get told what to do, or told off for doing something wrong, or for not playing nicely with the group.

The human rights rulings and oversight from the CJEU, for example, or views which are not aligned with the ECHR.

As Joshua Rozenberg wrote on sovereignty, “Human rights reform has been inextricably intertwined with renegotiating the UK’s membership of the EU. And it is all the government’s fault.”

Rozenberg writes that Mr Gove told the Lords constitution committee in December that David Cameron asked him whether “we should use the British Bill of Rights in order to create a constitutional long stop […] and whether the UK Supreme Court should be that body.”

“Our top judges were relabelled a “Supreme Court” not long ago; they’ve been urged to assert themselves against the European Court of Human Rights, and are already doing so against EU law”, commented Carl Gardner elsewhere.

The Gang of Six cabinet ministers are known for their anti-EU disaffection, most often attached to human rights – Michael Gove, Iain Duncan Smith, Chris Grayling, Theresa Villiers, Priti Patel and John Whittingdale – plus a further 20 junior ministers and potentially dozens of backbenchers.

We can therefore expect the Out campaign to present situations in which British ‘sovereignty’ was undermined by court rulings that some perceive as silly or seriously flawed.

Every nonsensical-seeming case in which a British court convicted someone and was overturned ‘by Europe’ will be wheeled out by the current Justice Secretary, Mr Gove.

Every tougher ‘terrorist’-type case, in which human rights denied by a UK ruling were upheld, might be in the more ‘extreme’ remit of the former Justice Secretary, mentioned whenever Grayling makes his case for Out, especially where opinions may conflict with interpretations of the EU Charter.

Priti Patel has tough views on crime and punishment, reportedly being in favour of the death penalty.

IDS isn’t famous for a generous application of disability rights.

John Whittingdale gave his views on the present relationship with the EU and CJEU here in debate in 2015 and said (22:10) he was profoundly “concerned the CJEU is writing laws which we consider to be against our national interest.”

Data protection and privacy are about to get new EU legislation that will strengthen some aspects of citizens’ data rights – things like the right to be informed what information is stored about us, or to have mistakes corrected.

Don’t forget, after all, that Mr Gove is the Education Secretary of State who signed off giving away the confidential personal data of what is now 20 million children from the National Pupil Database to commercial third parties. Clearly not an Article 8 fan.

We are told that we are being over-reactive to our loss of rights to privacy. Over-generous in protections to people who don’t deserve them. Ignoring that rights are universal and indivisible, we are encouraged to see them as something that must be earned. As such, something which may or may not be respected. And therefore can be removed.

Convince the majority of that, and legislation underpinning our rights will be easier to take away without enough mass outcry that will make a difference.

To be clear, a no vote would make no actual legal difference, “Leaving the EU (if that’s what the people vote for) is NOT at all inconsistent with the United Kingdom’s position as a signatory to the European Convention on Human Rights (ECHR), a creature of the COUNCIL of EUROPE and NOT the European Union.” [ObiterJ]

But by conflating ‘the single market’, ‘the Lisbon Treaty’, and the ‘ability to vindicate people’s rights under the 55-clause “Charter of Fundamental Human Rights”’, Boris has again made the No vote equate conflated things: European Union membership = loss of sovereignty = the need to reduce the control or influence of all organisations seen as ‘European’ (even if, like the ECHR, they stem from the Council of Europe Convention signed post-WWII, long before EU membership) – and all because we are a signatory to a package deal.

Boris has reportedly no love of ‘lefty academics’ standing up for international and human rights laws and their uppity lawyers in the habit of “vindicating the rights of their clients.”

Boris will bob in and out of both the IN group for business and the OUT group for sovereignty, trying not to fall out with anyone too much and lending serious statesmanship to the injustices inflicted on the UK. There will be banter and backbiting, but party views will be put ahead of personalities.

And the public? What we vote won’t really matter. I think we’ll be persuaded to be IN, or to get a two-step Out-In.

Either way it will give the relevant party leader, present or future, the mandate to do what he wants. Our engagement is optional.

Like the General Election, the people’s majority viewed as a ‘mandate’ seems to have become a little confused with sign-off to dictate a singular directive, rather than represent a real majority. It cannot do anything but this, since the majority didn’t vote for the government that we have.

In this EU referendum, No won’t mean No. It’ll mean a second vote, to be able to split the package of no-to-all-things into a no-to-some-things, wrapped up in layers of ‘sovereignty’ discussion. Unpack them, and those things will be, for the most part, human rights things. How they will then be handled at a later date is utterly unclear, but the mandate will have been received.

Imagine if Boris can persuade enough of the undecideds that he is less bonkers than some of the EU rulings on rights. He’ll perhaps get an Out mandate, possibly meaning a second vote just to be sure, splitting off the parts everyone obviously wants to protect – the UK business interests – and allowing the government to negotiate an opt-out from legislation protecting human rights: things that may appear to make more people dependent on the state, and so run contrary to the ideology of shrinking state support.

A long-promised review of the British Human Rights Act 1998 will inevitably follow, and only makes sense if we are first made exempt from the European umbrella.

Perhaps we will hear over the next four months more about what that might mean.

Either way, the Out group will, I’m sure, take the opportunity to air their views and demand a shake-up wherever human rights laws are out of line with the shape of the future UK nation they wish to see us become.

Some suggest Boris has made a decision that will cost him his political career. I disagree. I think it’s incredibly clever. Not a conspiracy, simply clever party planning to make every outcome a win for the good of the party and the good of the nation,  and a nod to Boris as future leader in any case. After all, he didn’t actually say he wanted #Brexit, just reform.

It’s not all about Boris, but is staging party politics at its best, and simultaneously sets the scene for future change in the human rights debate.

Destination smart-cities: design, desire and democracy (Part four)

Who is using all this Big Data? What decisions are being made on the back of it that we never see?

In everyday life and in the press, it often seems that the general public does not understand data, and can easily be told things which we misinterpret.

There are tools in social media influencing public discussions and leading conversations in a different direction from the one they had taken, and they operate without regulation.

It is perhaps meaningful that pro-reform Wellington School last week opted out of one of the greatest uses of Big Data sharing in the UK: league tables. Citing their failures, it decided they were “in fact, a key driver for poor educational practice.”

Most often we cannot tell, from the data provided, whether those Big Data show what we are told they should be telling us. And we can’t tell if the data are accurate, genuine and reliable.

Yet big companies are making big money selling the dream that Big Data is the key to decision making. Cumulatively, through lack of the skills to spot inaccuracy and inability to do the necessary interpretation, we’re being misled by what we find in Big Data.

Being misled is devastating for public trust, as the botched beginnings of care.data showed in 2014. Trust has come to be understood as vital for a future based on datasharing. Public involvement in how we are used in Big Data in the future needs to include how our data are used, in order to trust that they are used well. And interpreting those data well is vital. Those lessons of the past and present must be learned, and not forgotten.

It’s time to invest some time in thinking about safeguarding trust in the future, in the unknown, and the unseen.

We need to be told which private companies like Cinven and FFT have copies of datasets like HES, the entire 62m national hospital records, or the NPD, our entire schools database population of 20 million, or even just its current cohort of 8+ million.

If the public is to trust the government and public bodies to use our data well, we need to know exactly how those data are used today and all these future plans that others have for our personal data.

When we talk about public bodies sharing data they hold for administrative purposes, do we know which private companies this may mean in reality?

The UK government has big plans for big data sharing, sharing across all public bodies, some tailored for individual interventions.

While there are interesting opportunities for public benefit from at-scale systems, that public benefit is at risk not only from lack of trust in how systems gather and use data, but also from interoperability getting lost in market competition.

Openness and transparency can be absent in public-private partnerships until things go wrong. Given the scale of smart-cities, we must have more than hope that data management and security will not be one of those things.

But how will we know if new plans are designed well, or not?

Who exactly holds and manages those data and where is the oversight of how they are being used?

Using Big Data to be predictive and personal

How do we define “best use of data” in “public services” right across the board, in a world in which the boundaries between private and public in the provision of services have become increasingly blurred?

UK researchers and police are already analysing big data for predictive factors at postcode level for those at risk or harm, for example in combining health and education data.

What has grown across the Atlantic is now spreading here. When I lived there I could already see some of what is deeply flawed.

When a system has been as institutionally and systemically racist in its policing and equity of punishment as it is in the US, years of cumulative data bias translate into ‘heat lists’ and mean that “communities of color will be systematically penalized by any risk assessment tool that uses criminal history as a legitimate criterion.”
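The mechanism is worth spelling out. In this sketch – with invented rates and weights purely for illustration – two groups have identical underlying behaviour, but one has historically been policed, and therefore arrested, at twice the rate; a risk tool that counts prior arrests reproduces that bias and feeds it forward:

```python
# Illustrative only: invented rates and weights, not real data.
underlying_offence_rate = 0.10        # identical for both groups
arrest_rate = {"A": 0.5, "B": 1.0}    # chance an offence leads to arrest

def expected_priors(group, years=10):
    """Expected arrest count accumulated over `years`."""
    return underlying_offence_rate * arrest_rate[group] * years

def risk_score(prior_arrests):
    """A naive tool that treats criminal history as a neutral input."""
    return 1.0 + 2.0 * prior_arrests  # arbitrary weights

for group in ("A", "B"):
    priors = expected_priors(group)
    print(group, round(priors, 2), round(risk_score(priors), 2))
# A 0.5 2.0
# B 1.0 3.0   <- same behaviour, double the 'risk'
```

Higher scores invite more scrutiny, which produces more arrests, which raises the next score: that feedback loop is what turns historical bias into a ‘heat list’.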

How can we ensure British policing does not pursue flawed predictive policies and methodologies, without seeing them?

What transparency is there in our use of predictive prisons and justice data?

What oversight will the planned increase in the use of satellite tags, and of biometric access in prisons, have?

What policies can we have in place to hold data-driven decision-making processes accountable?

What tools do we need to seek redress for decisions made using flawed algorithms that are apparently indisputable?

Is government truly committed to being open and talking about how far the nudge unit work is incorporated into any government predictive data use? If not, why not?

There is a need for a broad debate on the direction of big data and predictive technology, and on whether the public understands and wants it. If we don’t understand, it’s time someone explained it.

If I can’t opt out of O2 picking up my travel data ad infinitum on the Tube, I will opt out of their business model and try to find a less invasive provider. If I can’t opt out of EE picking up my personal data as I move around Hyde park, it won’t be them.

Most people just want to be left alone and their space is personal.

A public consultation on smart-technology, and its growth into public space and effect on privacy could be insightful.

Feed me Seymour?

With the encroachment of integrated smart technology over our cities – our roads, our parking, our shopping, our parks, our classrooms, our TV and our entertainment, even our children’s toys – surveillance and the sharing of information from systems we cannot see start defining what others may view, or decide, about us, behind the scenes in everything we do.

As it expands city-wide, it will need to be watched closely if data are to be open for public benefit but not invade privacy, given that “The data stored in this infrastructure won’t be confidential.”

If the destination of digital in all parts of our lives is smart-cities then we have to collectively decide, what do we want, what do we design, and how do we keep it democratic?

What price is our freedom to decide how far its growth should reach into public space and private lives?

The cost of smart cities to individuals and the public is not what it costs in investment made by private conglomerates.

Already the cost of smart technology is privacy inside our homes, our finances, and autonomy of decision making.

Facebook and social media may run algorithms we never see that influence our mood or decision making. Influencing that decision making is significant enough when it’s done through advertising encouraging us to decide which sausages to buy for our kids’ tea.

It is even more significant when you’re talking about influencing voting.

Whoever influences most voters wins an election. If we can’t see the technology behind the influence, have we also lost sight of how democracy is decided? The power behind the mechanics of the cogs of Whitehall may weaken inexplicably as computer-driven decision making from the tech companies’ hidden tools takes hold.

What opportunity and risk to “every part of government” does ever expanding digital bring?

The design and development of smart technology that makes decisions for us and about us lies in the hands of large private corporations, not government.

That means the public-interest values that could be built in by design, and their protection and oversight, are currently outside our control.

There is no disincentive for companies that have taken private information that is none of their business – and, quite literally, made it their business – not to collect ever more data about us. It is outside our control.

We must plan, by design, for the values we hope for – for ethics – to be embedded in systems and policies, embedded in public planning and in the oversight of service provision by all providers. And we must ensure a fair framework of values is used when giving permission to private providers who operate in public spaces.

We must plan for transparency and interoperability.

We must plan by-design for the safe use of data that does not choke creativity and innovation but both protects and champions privacy as a fundamental building block of trust for these new relationships between providers of private and public services, private and public things, in private and public space.

If “digital is changing how we deliver every part of government,” and we want to “harness the best of digital and technology, and the best use of data to improve public services right across the board” then we must see integration in the planning of policy and its application.

Across the board “the best use of data” must truly value privacy, and enable us to keep our autonomy as individuals.

Without this, the cost of smart cities growing unchecked will be an ever-growing transfer of power to the funders behind corporations and campaign politics.

The ultimate price of this loss of privacy, will be democracy itself.

****

This is the conclusion to a four-part set of thoughts: on smart technology and data from the Sprint16 session (part one); “Smart systems and Public Services”, in more depth (part two); the design and development of smart technology making “The Best Use of Data”, looking at today in a UK company case study (part three); and this, part four, on “The Best Use of Data” used in predictions and the future.

Destination smart-cities: design, desire and democracy (Part three)

Smart Technology we have now: A UK Case Study

In places today, where climate surveillance sensors are used to predict and decide which smog-days cars should be banned from cities, automatic number-plate recognition (ANPR) can identify cars driving on the wrong days and send automatic penalties.
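As a sketch of the mechanics only – the rule, plates and locations below are invented, not any real scheme – the enforcement step is a simple check of each plate read against the day’s restriction:

```python
from datetime import date

def banned_today(plate: str, day: date) -> bool:
    """Invented odd/even rule: plates ending in an odd digit may only
    drive on odd days of the month, even-ending plates on even days."""
    last_digit = int(plate[-1])
    return (last_digit % 2) != (day.day % 2)

# A day's ANPR reads: (plate, camera location) - invented examples
reads = [("AB12 CD5", "Ring Road N"), ("XY34 FG2", "Ring Road S")]

smog_day = date(2016, 3, 8)  # an even day: odd-ending plates are banned
penalties = [(plate, loc) for plate, loc in reads if banned_today(plate, smog_day)]
print(penalties)  # [('AB12 CD5', 'Ring Road N')]
```

The point is how little is needed: once continuous plate reads exist, any policy expressible as a rule over plate, time and place can be enforced automatically – which is why the scope of what is collected matters more than the first purpose it is collected for.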

Similarly, ANPR technology is used in our UK tunnels and congestion charging systems. One British company encouraging the installation of ANPR in India is the same provider of a most significant part of our British public administrative data and surveillance software in a range of sectors.

About themselves that company says:

“Northgate Public Services has a unique experience of delivering ANPR software to all Home Office police forces. We developed and managed the NADC, the mission critical solution providing continuous surveillance of the UK’s road network.  The NADC is integrated with other databases, including the Police National Computer, and supports more than 30 million reads a day across the country.”

30 million snapshots a day, from ‘continuous surveillance of the UK’s road network‘. That surprised me. That’s more than half the population of England, not all of whom drive. 30 million, every day. It’s massive, unreasonable, and risks backlash.

Northgate Public Services’ clients also include 80% of UK water companies, as well as many other energy and utility suppliers.

And in the social housing market they stretch to debt collection, or ‘income management’.

So who, I wondered, is this company that owns all this data-driven access to our homes, our roads, our utilities, life insurance, hospital records and registries, half of all UK calls to emergency services?

Northgate Information Solutions announced the sale of its Public Services division in December 2014 to the private equity firm Cinven. Cinven also owns a 62% shareholding in the UK private healthcare provider Spire, with all sorts of influence given its active share of services and markets.

Not only does this private equity firm hold this vast range of data systems across a wide range of sectors, but it’s making decisions about how our public policies and money are being driven.

Using health screening data, they’re even making decisions that affect our future and our behaviour, and affect our private lives: “software provides the information and tools that housing officers need to proactively support residents, such as sending emails, letters or rent reminders by SMS and freeing up time for face-to-face support.”

Of their ANPR systems, Northgate says the data should be even more widely used “to turn CONNECT: ANPR into a critical source of intelligence for proactive policing.”

If the company were to start to ‘proactively’ use all the data it owns across these sectors, we should be asking: is ‘smart’ sensible and safe?

Where is the boundary between proactive and predictive? Or public and private?

Where do companies draw the line between public and personal space?

The public services provided by the company seem to encroach into our private lives in many ways. In Northgate’s own words, “It’s also deeply personal.”

Who’s driving decision making is clear. The source of their decision making is data. And it’s data about us.

Already today, whether data are collected proactively by companies as with ANPR, or through managing data we give them with consent for a direct administrative purpose, private companies are the guardians of massive amounts of our personal and public data.

What is shocking to me is how data collected in one area of public services are also used for entirely different secondary purposes, without informed consent or even an FYI, for example in schools.

If we don’t know which companies manage our data, how can we trust that it is looked after well and that we are told if things go wrong?

Steps must be taken in administrative personal data security, transparency and public engagement to shore up public trust, the foundation for future datasharing as part of the critical infrastructure of any future strategy, public or commercial. Strategy must include more than the minimum transparency of the processing of our data, and real public involvement, if ‘digital citizenship’ is to be meaningful.

How would our understanding of data improve if anyone using personal data were required to put in place clear public statements about their collection, use and analysis of data? If the principles of data protection were actually upheld, in particular that individuals should be informed? How would our understanding improve, especially as regards automated decision making and monitoring technology? Not ninety-page privacy policies. Plain English. If you need ninety pages, you’re doing too much with my data.
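As a sketch of what plain English could look like in practice, the whole of a useful public statement might fit in a dozen lines. The structure and field names below are entirely my own invention, not any existing standard:

    # A hypothetical, minimal data-use statement: short enough to read in
    # seconds, structured enough to publish and compare. Invented example.
    data_use_statement = {
        "who we are": "Example Services Ltd",
        "what we collect": ["name", "postcode", "usage records"],
        "why we collect it": "to run the service you signed up for",
        "who we share it with": [],  # an empty list means: nobody
        "automated decisions made": "none",
        "how long we keep it": "12 months after you leave",
        "how to object or ask questions": "privacy@example.com",
    }

    for item, value in data_use_statement.items():
        print(f"{item}: {value}")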

Independent privacy impact assessments should be mandatory, and published before data are collected or shared with any party other than the one to which they were given for a specific purpose. Extensions broadening that purpose should require consultation and consent. If the data are collected in a street, then make that fact public, in plain sight.

Above all, planning committees in local government, in policy making and in practical application, need to think of data, and its ethical implications, in every public decision they make. We need more robust decision-making in the face of corporate data grabs: to keep data collected in public space safe, and to keep some of it private.

How much less fun is a summer’s picnic spent smooching, if you feel watched? How much more anxious will we make our children if they’re never allowed any time to themselves, and every word they type on a school computer is monitored?

How much individual creativity and innovation does that stifle? We are effectively censoring children before they have written a word.

Large corporations have played historically significant and often shadowy roles in surveillance that retrospectively were seen as unethical.

We should consider, sooner rather than later, whether corporations such as BAE Systems, Siemens and the IMSs of the world act in ways worthy of our trust, given their massive reach into our lives, with little transparency and oversight.

“Big data is big opportunity but Government should tackle misuse”

The Select Committee warned in its recent report on Big Data that distrust arising from concerns about privacy and security is often well-founded and must be resolved by industry and Government.

If ‘digital’ means smart technology is used in “every part of government” in future, as announced at #Sprint16, what will be the effects of the involvement and influence of these massive corporations on democracy itself?

******

I thought about this more in depth in Part one here, and in “Smart systems and Public Services” here (part two), and continue after this by looking at “The Best Use of Data” used in predictions and the Future (part four).

Destination smart-cities: design, desire and democracy (Part two)

Smart cities: private reach in public space and personal lives

Smart-cities are growing in the UK through private investment and encroachment on public space. They are being built by design at home, and supported by UK money abroad, with enormous expansion plans in India for example, in almost 100 cities.

With this rapid expansion of “smart” technology not only within our living rooms but across our living space, and indeed all areas of life, how do we ensure equitable service delivery (what citizens generally want, as demonstrated by the strength of feeling about the NHS) continues in public ownership, when the boundary in current policy between public and private corporate ownership is ever more blurred?

How can we know, and plan by-design, that the values we hope for are good values, and that they will be embedded in systems, in policies and in planning? Values that most people really care about. How do we ensure “smart” does not ultimately mean less good? That “smart” does not, in the end, mean less human?

Economic benefits seem to be the key driver in current government thinking around technology – more efficient = costs less.

While technology progressing towards replacing repetitive work may be positive, how will we accommodate those whose skills will no longer be needed? In particular there is a gendered aspect, and an effect on the more vulnerable in the workforce, since it is women and other minorities who work disproportionately in our part-time, low-skill jobs. Even jobs we think of as intrinsically human, such as carers, mainly held by women, are being trialled for outsourcing to, or assistance by, technology. These robots monitor people in their own homes, and reduce staffing levels and care home occupancy. We’ll no doubt hear how good it is that we need fewer carers because, after all, we have a shortage of care staff. We’ll find out whether it is positive for the cared-for, or whether they find it less ‘human'[e]. How will we measure those costs?

The ideal future of us all therefore having more leisure time sounds fab, but if we can’t afford it, we won’t be spending more of our time employed in leisure. Some think we’ll simply be unemployed. And more people live in the slums of Calcutta than in Soho.

One of the greatest benefits of technology is how more connected the world can be, but will it also be more equitable?

There are benefits in remote sensors monitoring changes in the atmosphere that dictate when cars should be taken off the roads on smog-days, or indicators when asthma risk-factors are high.

Crowdsourcing information about things which are broken, like fix-my-street, or lifts that are out of order, is invaluable in cities for wheelchair users.

Innovative thinking and building things through technology can create things which solve simple problems and add value to the person using the tool.

But what of the people that cannot afford data, cannot be included in the skilled workforce, or will not navigate apps on a phone?

How the technology dis-incentivises the person using it affects not only their disappointment with the tool, but service delivery, and potentially wider still, even societal exclusion or stigma. These were the findings of the e-red book in Glasgow, explained at the Digital event in health held at the King’s Fund in summer 2015.

Further along the scale of systems and potential for negative user experience, how do we expect citizens to react to finding punishments handed out by unseen monitoring systems, finding out our behaviour was ‘nudged’, or finding decisions taken about us, without us?

And what oversight and system of redress is there for people using these systems, or for those whose inaccurate data are used in a system and cause injustice?

And wider still: while we encourage big money spent on big data in our part of the world, how is it contributing to solving problems for the millions for whom it will never matter? Digital and social media make our one connected world increasingly transparent, with ever less excuse for closing our eyes.

Approximately 15 million girls worldwide are married each year – that’s one girl, aged under 18, married off against her will every two seconds. [Huff Post, 2015]

Tinder-type apps are luxury optional extras for many in the world.

Without embedding values and oversight into some of what we do through digital tools implemented by private corporations for profit, ‘smart’ could mean less fair, less inclusive, less kind. Less global.

If digital becomes a destination, and how much it is implemented is seen as a measure of success, then measuring how “smart” we become risks losing sight of technology as solutions, and as steps towards solving real problems for real people.

We need to be both clever and sensible, in our ‘smart’.

Are public oversight and regulation built in, to make ‘smart’ also safe?

If there were public consultation on how a “smart” society should look, would we all agree on whether and how we want it?

Thinking globally, we need to ask if we are prioritising the wrong problems. Are we creating more tech for problems we have already invented solutions for, in places where governments are willing to spend on them? And will it, in those places, make society more connected across class and improve it for all, or enhance the lives of the ‘haves’ by having more, while the ‘have-nots’ are excluded?

Does it matter how smart your TV gets, or carer, or car, if you cannot afford any of these convenient add-ons to Life v1.1?

As we are ever more connected, we are a global society, and being ‘smart’ in one area may be reckless if at the expense or ignorance of another.

People need to Understand what “Smart” means

“Consistent with the wider global discourse on ‘smart’ cities, in India urban problems are constructed in specific ways to facilitate the adoption of “smart hi-tech solutions”. ‘Smart’ is thus likely to mean technocratic and centralized, undergirded by alliances between the Indian government and hi-technology corporations.”  [Saurabh Arora, Senior Lecturer in Technology and Innovation for Development at SPRU]

Those investing in both countries are often the same large corporations. Very often, venture capitalists.

Systems designed and owned by private companies provide the information technology infrastructure that is ‘the basis for providing essential services to residents. There are many technological platforms involved, including but not limited to automated sensor networks and data centres.’

What happens when the commercial and the public interest conflict, and who decides that they do?

Decision making, Mining and Value

Massive amounts of data generated are being mined for making predictions and decisions, and for influencing public policy: in effect, using Big Data for research purposes.

Using population-wide datasets for social and economic research today, is done in safe settings, using deidentified data, in the public interest, and has independent analysis of the risks and benefits of projects as part of the data access process.

Each project goes before an ethics committee review to assess its considerations for privacy, and not only whether the project can be done, but whether it should be done, before it comes for central review.

Similarly, our smart cities need ethics committee review, assessing the privacy impact and potential of projects before smart technology is commissioned or approved. Not only assessing whether they are feasible, and that we ‘can’ do it, but whether we ‘should’. Not only assessing the use of the data generated by the projects, but assessing the ethical and privacy implications of the technology implementation itself.

The Committee recommendations on Big Data recently proposed that a ‘Council of Data Ethics’ should be created to explicitly address these consent and trust issues head on. But how?

Unseen smart-technology continues to grow unchecked often taking root in the cracks between public-private partnerships.

We keep hearing about Big Data improving public services but that “public” data is often held by private companies. In fact our personal data for public administration has been widely outsourced to private companies of which we have little oversight.

We’re told we paid the price in terms of skills and are catching up.

But if we simply roll forward in first gear into the connected city that sees all, we may find we arrive at a destination that was neither designed nor desired by the majority.

We may find that the “revolution, not evolution” hoped for in digital services will be of the unwanted kind, if companies keep pushing for more and more data without the individual’s consent, and without our collective public buy-in to decisions made about data use.

Having written all this, I’ve now read the Royal Statistical Society’s publication, which eloquently summarises their recent work and thinking. But I wonder how we tie all this into practical application.

How we do governance and regulation is tied tightly to the practicality of public-private relationships, but also to deciding what society should look like. That is ultimately what our collective and policy decisions about what smart cities should be, and may do, are defining.

I don’t think we are yet addressing in depth the complexity of regulation and governance that would be sufficient to make Big Data and public spaces safe, because companies say too much regulation risks choking off innovation and creativity.

But that risk need not be realised, if it is managed well.

Rather, we must see action quickly to manage the application of smart technology in a thoughtful way, because if we do not, very soon we’ll have lost any say in how our service providers deliver.

*******

I began my thoughts about this in Part one, on smart technology and data from the Sprint16 session, and after this (Part two) continue by looking at the design and development of smart technology making “The Best Use of Data” in a UK company case study (Part three), and at “The Best Use of Data” used in predictions and the Future (Part four).

Destination smart-cities: design, desire and democracy (Part one)

When I drop my children at school in the morning I usually tell them three things: “Be kind. Have fun. Make good choices.”

I’ve been thinking recently about what a positive and sustainable future for them might look like. What will England be in 10 years?

The #Sprint16 snippets I read talk about how: “Digital is changing how we deliver every part of government,” and “harnessing the best of digital and technology, and the best use of data to improve public services right across the board.”

From that three things jumped out at me:

  • The first is that the “best use of data” in government’s opinion may conflict with that of the citizen.
  • The second, is how to define “public services” right across the board in a world in which boundaries between private and public in the provision of services have become increasingly blurred.
  • And the third is the power of tech to offer both opportunity and risk if used in “every part of government”, and its effects on access to, involvement in, and the long-term future of, democracy.

What’s the story so far?

In my experience so far of trying to be a digital citizen “across the board” I’ve seen a few systems come and go. I still have my little floppy paper Government Gateway card, navy blue with yellow and white stripes. I suspect it is obsolete. I was a registered Healthspace user, and used it twice. It too, obsolete. I tested my GP online service. It was a mixed experience.

These user experiences are shaping how I interact with new platforms and my expectations of organisations, and I will be interested to see what the next iteration, nhs alpha, offers.

How platforms and organisations interact with me, and my data, is however increasingly assumed without consent. This involves new data collection: not only using data from administrative or commercial settings to which I have agreed, but new scooping up of personal data all around us in “smart city” applications.

Just having these digital applications will be of no benefit in itself; all the disadvantages of surveillance for its own sake will be realised.

So how do we know that all these data collected are used – and by whom? How do we ensure that all the tracking actually gets turned into knowledge about pedestrian and traffic workflow to make streets and roads safer and smoother in their operation, to make street lighting more efficient, or the environment better to breathe in and enjoy? And that we don’t just gift private providers tonnes of valuable data which they simply pass on to others for profit?

Because without making things better, this Internet-of-Things will be a one-way ticket to power in the hands of providers, and a loss of control and quality of life. We’ll work around it: buying a separate SIM card for trips into London, avoiding certain parks or bridges, managing our FitBits to the nth degree under a pseudonym. But being left no choice but to opt out of places, or of the latest technology, is also tedious. If we want to buy a smart TV to access films on demand, but don’t want it to pass surveillance or tracking information back to the company, how can we find out with ease which products offer that choice?

Companies have taken private information that is none of their business, and quite literally, made it their business.

The consumer technology hijack of “smart” to always mean marketing surveillance creates a divide between those who will comply for convenience and pay the price in their privacy, and those who prize privacy highly enough to take steps that are less convenient, but less compromised.

But even when we want the latter, it can be so hard to find out how, that people feel powerless and give in to the easy option on offer.

The system of governance and oversight we have today, managing how our personal data are processed by providers of public and private services, in both public and private space, is insufficient to meet the values most people reasonably expect: to be able to live their lives without interference.

We’re busy playing catch up with managing processing and use, when many people would like to be able to control collection.

The Best use of Data: Today

My experience of how the government wants to ‘best use data’ is that until 2013 I assumed the State was responsible with it.

I feel bitterly let down.

care.data taught me that the State thinks my personal data and privacy are something to exploit, and “the best use of my data” for them, may be quite at odds with what individuals expect. My trust in the use of my health data by government has been low ever since. Saying one thing and doing another, isn’t making it more trustworthy.

I found out in 2014 how my children’s personal data are commercially exploited and given to third parties, including the press, outside safe settings, by the Department for Education. Now my trust is at rock bottom. I tried to take a look at what the National Pupil Database stores on my own children and was refused a subject access request; meanwhile the commercial sector and Fleet Street press are given not only identifiable data, but ‘highly sensitive’ data. This just seems plain wrong in terms of security, transparency and respect for the person.

The attitude that there is an entitlement of the State to individuals’ personal data has to go.

The State has pinched 20 million children’s privacy without asking. Tut tut indeed. [see Very British Problems for a translation]

And while I support the use of public administrative data in deidentified form in safe settings, it’s not to be expected that anything goes. The feeling of entitlement to access our personal data for purposes other than those for which we consented is growing, as it stretches to commercial sector data. However, suggesting that public feeling measured from work with 0.0001% of the population (roughly 65 people in a country of 65 million) amounts to “wide public support for the use and re-use of private sector data for social research” seems tenuous.

Even so, comments even in that tiny population suggested, “many participants were taken by surprise at the extent and size of data collection by the private sector” and some “felt that such data capture was frequently unwarranted.” “The principal concerns about the private sector stem from the sheer volume of data collected with and without consent from individuals and the profits being made from linking data and selling data sets.”

The Best use of Data: The Future

Young people, despite seniors often saying “they don’t care about privacy”, are leaving social media in search of greater privacy.

These things cannot be ignored if the call for digital transformation between the State and the citizen is genuine, because try to do it to us and it will fail. Change must be done with us. And ethically.

And not “ethics” as in ‘how to’, but the ethics of “should we”. Qualified, transparent evaluation, as done in other research areas: not an add-on, but integral to every project, looking at issues such as:

  • whether participation is voluntary, opt-out or covert
  • how participants can get and give informed consent
  • accessibility to information about the collection and its use
  • whether small numbers, particularly of vulnerable people, are included
  • identifiable data collection or disclosure
  • arrangements for dealing with disclosures of harm and recourse
  • and how the population that will bear the risks of participating in the research is likely to benefit from the knowledge derived from the research or not.

Ethics is not about getting away with using personal data in ways that won’t get caught or hauled over the coals by civil society.

It’s balancing risk and benefit in the public interest, and not always favouring the majority, but doing what is right and fair.

We hear a lot at the moment about how the government sees lives shaped by digital skills, but too little of their vision for what living will look and feel like in the smart cities of the future.

My starting question is: how does government hope society will live there, and is it up to them to design it? If not, who is? Because these smart-city systems are not designing themselves. You’ve heard of the Stepford Wives. What do we do if we do not want to live like Milton Keynes Man?

I hope that the world my children will inherit will be more just, more inclusive and kinder than it is today, with a more sustainable climate to support food and livelihoods. Will ‘smart’ help or hinder?

What is rarely discussed in technology debates is how the service should look, regardless of the technology. Instead the technology, assumed to be inevitable, becomes the centre of service delivery.

I’d like first to understand the central and local government vision for “public services” provision for people of the future. What does it mean for everyday services like schools and health, and how does it balance security and our freedoms?

Because without thinking about how and who provides those services for people, there is a hole in the discussion of “the best use of data” and their improvement “right across the board”.

The UK government has big plans for big data sharing, sharing across all public bodies, some tailored for individual interventions.

While there are interesting opportunities for public benefit from at-scale systems, the public benefit is at risk not only from lack of trust in how systems gather and use data, but also from the risk that interoperability of services, and the freedom for citizens to change provider, get lost in market competition.

Openness and transparency can be absent in public-private partnerships until things go wrong. Given the scale of smart-cities, we must have more than hope that data management and security will not be one of those things.

How will we know if new plans are designed well, or not?

When I look at my children’s future and how our current government digital decision making may affect it, I wonder if their future will be more or less kind. More or less fun.

Will they be left with the autonomy to make good choices of their own?

The hassle of feeling watched all the time, by everything that we own, in every place we go, of having to check that every checkbox has a reasonable privacy setting, has a cumulative cost in our time and our anxieties.

Smart technology has invaded not only our public space and our private space, but has nudged into our head space.

I for one have had enough already. For my kids I want better. Technology should mean progress for people, not tyranny.

Living in smart cities, connected in the Internet-of-Things, run on their collective Big Data and paid for by commercial corporate providers, threatens not only their private lives and well-being, their individual and independent lives, but ultimately independent and democratic government as we know it.

*****

This is the start of a four-part set of thoughts: beginnings with smart technology and data, triggered by the Sprint16 session (part one). I think about this more in depth in “Smart systems and Public Services” (Part two) here, and the design and development of smart technology making “The Best Use of Data”, looking at today in a UK company case study (Part three), before thoughts on “The Best Use of Data” used in predictions and the Future (Part four).

Breaking up is hard to do. Restructuring education in England.

This Valentine’s I was thinking about the restructuring of education in England and its wide ranging effects. It’s all about the break up.

The US EdTech market is very keen to break into the UK, and our front door is open.

We have adopted the model of Teach First, partnered with Teach For America, while some worry we do not ask “What is education for?”

Now we hear the next chair of Ofsted is to be sought from the US, someone who is renowned as “the scourge of the unions.”

Should we wonder how long until the management of schools themselves is US-sourced?

The education system in England has been broken up in recent years into manageable parcels  – for private organisations, schools within schools, charity arms of commercial companies, and multi-school chains to take over – in effect, recent governments have made reforms that have dismantled state education as I knew it.

Just as the future vision of education outlined in 2005’s Direct Democracy, co-authored by Michael Gove, said: “The first thing to do is to make existing state schools genuinely independent of the state.”

Free schools, touted as giving parents the ultimate in choice, are in effect another way to nod approval to the outsourcing of the state into private hands, and into big chains. Despite seeing the model fail spectacularly abroad, the government seems set on the same here.

Academies, the route that finagles private corporations into running public education, are the preferred model, says Mr Cameron. While there are no plans to force schools to become academies, the legislation currently in ping-pong under the theme of coasting schools enables just that. The Secretary of State can impose academisation, albeit only on schools Ofsted has labelled ‘failing’.

What ‘fails’ sometimes appears to be a school that staff and parents cannot understand as anything less than good, but small. While small can be what parents want, small pupil-teacher ratios mean higher per-pupil costs. But the direction of growth is towards ‘big is better’.

“There are now 87 primary schools with more than 800 pupils, up from 77 in 2014 and 58 in 2013. The number of infants in classes above the limit of 30 pupils has increased again – with 100,800 pupils in these over-sized classes, an increase of 8% compared with 2014.” [BBC]

All this restructuring creates costs about which the Department wants to be less than transparent, and of which it has lost track.

If only we could see that these new structures raised standards. But “while some chains have clearly raised attainment, others achieve worse outcomes creating huge disparities within the academy sector.”

If not delivering better results for children, then what is the goal?

A Valentine’s view of Public Service Delivery: the Big Break up

Breaking up the State system, once perhaps unthinkable, is made possible through the creation of ‘acceptable’ public-private partnerships (as opposed to outright privatisation per se). Schools become academies through a range of providers and different pathways, at least to start with, and as others fail, the most successful become the market leaders in an oligopoly. Ultimately perhaps, this could become a near monopoly. Delivering ‘better’. Perhaps a new model, a new beginning, a new provider offering salvation from the flood of ‘failing’ schools, coming to the State’s rescue.

In order to achieve this entry to the market by outsiders, you must first remove conditions seen as restrictive, giving more ‘freedom’ to providers: to cut corners and make efficiency savings on things like food standards, the required curriculum, and numbers of staff, or their pay.

And what if, as a result, staff leave, or are hard to recruit?

Convincing people that “tech” and “digital” will deliver cash savings and teach required skills through educational machine learning is key if staff costs are to be reduced, which in times of austerity and if all else has been cut, is the only budget left to slash.

Self-taught systems’ providers are convincing in their arguments that tech is the solution.

Sadly, I remember when a similar thing was tried on paper. My first year of GCSE maths, aged 13-14, was ‘taught’ at our secondary comp by working through booklets in a series that we self-selected from the workbench in the classroom. Then we picked up the master marking-copy once done. Many of the boys didn’t need long to work out that the first step was an unnecessary waste of time. The teacher had no role in the classroom. We were bored to bits. In the final week at the end of the year, they sellotaped the teacher to his chair.

I kid you not.

Teachers are so much more than knowledge transfer tools, and yet some today seem to consider them replaceable by technology.

The US is ahead of us in this model, which has grown hand-in-hand with commercialism in schools. Many parents are unhappy.

So is the DfE setting us up for future heartbreak if it wants us to go down the US route of more MOOCs, more tech, and less funding and fewer staff? Where’s the cost benefit risk analysis and transparency?

We risk losing the best of what is human from the classroom if we remove the values teachers model and inspire. Unions, teachers and educationalists are, I am sure, more than aware of all these cumulative changes. However, the wider public seems little engaged.

For anyone ‘in education’ these changes will all be self-evident, and their balance of risks and benefits a matter of experience and political persuasion. As a parent, I’ve only come to understand these changes through researching how our pupils’ personal and school data have been commercialised and given away from the National Pupil Database without our consent, since legislation changed in 2013; and how Higher Education student and staff data have been sold.

Will more legislative change be needed to keep our private data accessible in public services operating in an increasingly privately-run delivery model? And who will oversee that?

The Education Market is sometimes referred to as ‘The Wild West’. Is it getting a sheriff?

The news that the next chair of Ofsted is to be sought from the US did set alarm bells ringing for some in the press, who fear US standards and US-led organisations in British schools.

“The scourge of the unions” means not supportive of staff-based power, and in health our junior doctors have clocked exactly what breaking their ‘union’ bargaining power is all about. So who is driving all this change in education today?

Some education providers might be seen as individuals profiting from the State break-up. Some were accused of ‘questionable practices‘. Oversight has been lacking, others said. Margaret Hodge in 2014 was reported to have said: “It is just wrong to hand money to a company in which you have a financial interest if you are a trustee.”

I wonder if she has an opinion on a lead non-executive board member at the Department for Education also being the director of one of the biggest school chains? Or the ex-Minister now employed by the same chain? Or that his campaign was funded by the same director? Why this register of interests is not transparent is a wonder.

It could appear to an outsider that the private-public revolving door is well oiled with sweetheart deals.

Are the reforms begun by Mr Gove simply to be executed through Nicky Morgan until their end goal, whatever that may be, or is she driving her own new policies?

If Ofsted were to become US-experience-led, will the Wild West be tamed, or will US providers be invited to join the action, reshaping a new frontier? What is the end game?

Breaking up is not hard to do, but in whose best interest is it?

We need only look to health to see a similar pattern.

The structures are freed up, and boundaries opened up (if you meet the criteria), in the name of ‘choice’. The organisational barriers to break-up are removed in the name of ‘direct accountability’. And as for enabling plans through more ‘business intelligence’ gathered from data sharing, well, those plans abound.

Done well, new efficient systems and structures might bring public benefits, and the right technology can certainly bring great things. But have we first understood what made the old less efficient, if indeed it was, and where are the baselines to look back on?

Where is the transparency of the end goal and what’s the price the Department is prepared to pay in order to reach it?

Is reform in education transparent in its ideology, and in how its success is being measured, if not by improved attainment?

The results of change can also be damaging. In health we see failing systems and staff shortages, and their knock-on effects on patient care. In schools, these failures damage children’s start in life; it’s not just a ‘system’.

Can we assess if and how these reforms are changing the right things for the right reasons? Where is the transparency of what problems we are trying to solve, to assess what solutions work?

How is change impact for good and bad being measured, with what values embedded, with what oversight, and with whose best interests at its heart?

2005’s Direct Democracy could be read as a blueprint for co-author Mr Gove’s education reforms less than a decade later.

Debate over the restructuring of education and its marketisation seems to have bypassed most of us in the public, in a way health has not.

Underperformance, as measured by new and often hard-to-discern criteria, means takeover at unprecedented pace.

And what does this mean for our most vulnerable children? SEN children are not required to be offered places by academies. The 2005 plans co-authored by Mr Gove also included: “killing the government’s inclusion policy stone dead,” without an alternative.

Is this the direction of travel our teachers and society supports?

What happens when breakups happen and relationship goals fail?

Who picks up the pieces? I fear the state is paying heavily for the break up deals, investing heavily in new relationships, and yet will pay again for failure. And so will our teaching staff, and children.

While Mr Hunt is taking all the heat right now for his part in writing Direct Democracy and its proposals to privatise health, set against the current health reforms and the restructuring of junior doctors’ contracts, we should perhaps also look to his co-author Mr Gove, and ask to better understand the current impact of his recent education reforms, compare them with what he proposed in 2005, and prepare for the expected outcomes of change before it happens (see p74).

One outcome was that failure was to be encouraged in this new system, with Sweden held up as an exemplary model:

“Liberating state schools would also allow the all-important freedom to fail.”

As Anita Kettunen, principal of JB Akersberga in Sweden reportedly said when the free schools chain funded by a private equity firm failed:

“if you’re going to have a system where you have a market, you have to be ready for this.”

Breaking up can be hard to do. Failure hurts. Are we ready for this?
******

 

Abbreviated on Feb 18th.

 

The front door to our children’s personal data in schools

“EdTech UK will be a pro-active organisation building and accelerating a vibrant education and learning technology sector and leading new developments with our founding partners. It will also be a front door to government, educators, companies and investors from Britain and globally.”

Ian Fordham, CEO, EdTech UK

This front door is a gateway to access our children’s personal data, and through it some companies are coming into our schools and homes and taking our data without asking. And with that, our children lose control over their safeguarded digital identity. Forever.

Companies are all “committed to customer privacy” in those privacy policies which exist at all. Typically, however, this means they also share your information with ‘our affiliates, our licensors, our agents, our distributors and our suppliers’, and their circles are wide and often in perpetuity. Many simply don’t have a published policy.

Where do they store any data produced in the web session? Who may access it and use it for what purposes? Or how may they use the personal data associated with staff signing up with payment details?

According to research from London & Partners, championed by Boris Johnson, Martha Lane-Fox and others in EdTech, education is one of the fastest growing tech sectors in Britain and is worth £45bn globally; a number set to reach a staggering £129bn by 2020. And perhaps the EdTech diagrams in US dollars show where the UK plans to draw companies from. If you build it, they will come.

The enthusiasm of some US EdTech-type entrepreneurs I have met, or listened to speak, is akin to religious fervour. Such is their drive for tech, however, that they appear to forget that education is all about the child. Individual children. Not cohorts, or workforces. And even when they do remember, it can be sincerely said, but lacks substance when you examine their policies in practice.

How is the DfE measuring the cost and benefit of tech and its applications in education?

Is anyone willing to say that not all tech is good tech, and not every application a wise application? That because every child is unique, no app is one-size-fits-all?

My 7-year-old once got so caught up in the game and the mastery of an app her class was prescribed for homework that she couldn’t master the maths, and it harmed her confidence. (Imagine something like clicking on the two correct sheep with numbers stamped on them, that together add up to 12, before they fall off and die.)

She has no problem with maths. Nor doing sums under pressure. She told me happily today she’d come joint second in a speed tables test. That particular app style simply doesn’t suit her.

I wonder if other children and parents find the same and if so, how would we know if these apps do more harm than good?

Nearly 300,000 young people in Britain have an anxiety disorder, according to the Royal College of Psychiatrists. Feeling watched all the time, on- and offline, is unlikely to make anxiety any better.

How can the public and parents know that edTech which comes into the home with their children, is behaviourally sound?

How can the public and parents know that edTech which affects their children, is ethically sound in both security and application?

Where is the measured realism in the providers’ and policy makers’ fervour, when both seek to marketise EdTech and our personal data for the good of the economy, and ‘in the public interest’?

Just because we can, does not always mean we should. Simply because data linkage is feasible, even if it brings public benefit, cannot point blank mean it will always be in our best interest.

In whose best Interest is it anyway?

Right now, I’m not convinced that the digital policies at the heart of the Department for Education, the EdTech drivers, or many providers have our children’s best interests at heart at all. It’s all about the economy; when talking, if at all, about children using the technology, many talk only of ‘preparing the workforce’.

Are children and parents asked to consent at individual level to the terms and conditions of the company and told what data will be extracted from the school systems about their child? Or do schools simply sign up their children and parents en masse, seeing it as part of their homework management system?

How much ‘real’ personal data they use varies. Some use only pseudo-IDs assigned by the teacher. Others log, store and share everything children do assigned to their ID or real email address, store performance over time, and provide personalised reports of results.
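The pseudo-ID approach is not hard to do well. As a minimal sketch, assuming a keyed hash where the key stays with the school (my own illustration; I am not claiming any provider actually works this way), a stable pseudonym lets a company track progress over time without ever holding the child’s real identity:

    import hashlib
    import hmac

    def pseudo_id(pupil_identifier: str, school_secret: bytes) -> str:
        """Derive a stable pseudonym from a pupil identifier.

        The same pupil always maps to the same pseudonym, so progress can
        be tracked over time; but without the secret key, held by the
        school and never the provider, the real identity cannot be
        recovered from the pseudonym.
        """
        digest = hmac.new(school_secret, pupil_identifier.encode(), hashlib.sha256)
        return digest.hexdigest()[:12]

    print(pseudo_id("jane.doe@school.example", b"school-only-secret-key"))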

Teachers and schools have a vital role to play in understanding data ethics and privacy to get this right, yet speaking to many, it doesn’t seem something they feel well equipped to do. Parents aren’t always asked. But shouldn’t schools always have to ask before giving data to a commercial third party, other than in an ’emergency’ situation?

I love tech. My children love making lego robots move with code. Or driving drones with bananas. Or animation. Technology offers opportunity for application in and outside schools for children that are fascinating, and worthy, and of benefit.

If, however, parents are to protect children’s digital identity for the future, and to be able to hand over control and integrity of their personal data to them as adults, we must better accommodate children’s data privacy in this 2016 gold rush for EdTech.

Pupils and parents need to be assured their software is both educationally and ethically sound.  Who defines those standards?

Who is in charge of Driving, Miss Morgan?

Microsoft’s vice-president of worldwide education recently opened the BETT exhibition, praised teachers for using technology to achieve amazing things in the classroom, and urged innovators to “join hands as a global community in driving this change”.

While there is a case to say no exposure to technology in today’s teaching would be neglectful, there is a stronger duty to ensure exposure to technology is positive and inclusive, not harmful.

Who regulates that?

We are on the edge of an explosion of tech and children’s personal data ‘sharing’ with third parties in education.

Where is its oversight?

The community of parents and children is at real risk of being completely left out of these decisions, and exploited.

The upcoming “safeguarding” policies online are a joke if the DfE tells us loudly to safeguard children’s identity out front, and quietly gives their personal data away for cash round the back.

The front door to our children’s data “for government, educators, companies and investors from Britain and globally” is wide open.

Behind the scenes in pupil data privacy, it’s a bit of a mess. And these policy makers and providers forgot to ask first if they could come in.

If we build it, would you come?

My question now is, if we could build something better on pupil data privacy AND better data use, what would it look like?

Could we build an assessment model of the collection, use and release of data in schools that could benefit pupils and parents, AND educational establishments and providers?

This could be a step towards future-proofing public trust, which will be vital for companies who want a foot in the door of EdTech: design an ethical framework for digital decision making and a practical data model for use in education.

Educationally and ethically sound.
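As a strawman of what the practical data-model half might start from, here is my own sketch: the fields are invented and the pass/fail bar deliberately over-simple, but an assessment record per product could begin like this:

    from dataclasses import dataclass, field

    # A strawman record for assessing one EdTech product before a school
    # signs up. My own sketch of a starting point, not any agreed standard.
    @dataclass
    class EdTechPrivacyAssessment:
        product: str
        data_collected: list[str]
        identifiers_used: str    # "none", "pseudo-ID" or "real identity"
        consent_model: str       # "individual opt-in", "school-wide", ...
        third_party_sharing: list[str]
        retention: str
        issues: list[str] = field(default_factory=list)

        def passes(self) -> bool:
            # The simplest possible bar: no real identities without
            # individual opt-in consent, and no onward sharing at all.
            if (self.identifiers_used == "real identity"
                    and self.consent_model != "individual opt-in"):
                self.issues.append("real identities without individual consent")
            if self.third_party_sharing:
                self.issues.append("data shared onward with third parties")
            return not self.issues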

If providers, policy makers and schools at group Trust level could come together with data protection and privacy civil society experts, to shape a toolkit for assessing privacy impact, ensuring safeguarding and freedoms, enabling safe data flow, and designing the cybersecurity that protects children’s privacy, which is lacking today, designing for tomorrow: would you come?

Which door will we choose?

*******

image credit: @ Ben Buschfeld Wikipedia

*added February 13th: Ofsted Chair sought from US

Thinking to some purpose