Category Archives: change

Views on a National AI strategy

Today was the APPG AI Evidence Meeting – The National AI Strategy: How should it look? Here are some of my personal views and takeaways.

Do the regulators have the skills and competency to hold organisations to account for what they are doing? asked Roger Taylor, former Chair of Ofqual, the exams regulator, as he began the panel discussion, chaired by Lord Clement-Jones.

A good question was followed by another.

What are we trying to do with AI? asked Andrew Strait, Associate Director of Research Partnerships at the Ada Lovelace Institute and formerly of DeepMind and Google. The goal of a strategy should not be to have more AI for the sake of having more AI, he said, but an articulation of values and goals. (I’d suggest the government may in fact be in favour of exactly that, more AI for its own sake, where its application is seen as a growth market.) And interestingly he suggested that the Scottish strategy has a more values-based model, built around concepts such as fairness. [I had, it seems, wrongly assumed that the *national* AI strategy to come would include all of the UK.]

The arguments on fairness are well worn in AI discussion and getting old. And yet they still too often fail to ask whether these tools are accurate, or even work at all. Look at the education sector and one company’s product, ClassCharts, which claimed AI as its USP for years, until the ICO found in 2020 that the company didn’t actually use any AI at all. If company claims are not honest, or not accurate, then they’re not fair to anyone, never mind fair across everyone.

Fairness is still too often thought of in terms of explainability of a computer algorithm, not the entire process it operates in. As I wrote back in 2019, “yes we need fairness accountability and transparency. But we need those human qualities to reach across thinking beyond computer code. We need to restore humanity to automated systems and it has to be re-instated across whole processes.”

Strait went on to say that safe and effective AI would be something people can trust. And he asked the important question: who gets to define what a harm is? He rightly identified that the harm identified by the developer of a tool may be very different from the harms identified by the people affected by it. (No one on the panel attempted to define or limit what AI is, in these discussions.) He suggested that the carbon footprint from AI may counteract the benefit of applying AI in the pursuit of climate-change goals. “The world we want to create with AI” was a very interesting position, and I’d have liked to hear him address what he meant by that, who is “we”, and any assumptions within it.

Lord Clement-Jones asked him about some of the work that Ada Lovelace had done on harms such as facial recognition, and also asked whether some sector technologies are so high risk that they must be regulated. Strait suggested that we lack adequate understanding of what harms are — I’d suggest academia and civil society have done plenty of work on identifying those; it has just too often been ignored until after the harm is done and there are legal challenges. Strait also suggested he thought the Online Harms agenda was ‘a fantastic example’ of both horizontal and vertical regulation. [Hmm, let’s see. Many people would contest that, and we’ll see what the Queen’s Speech brings.]

Maria Axente then went on to talk about children and AI. Her focus was on big platforms, but she also mentioned a range of other application areas. She spoke of the data governance work going on at UNICEF. She covered the need to drive awareness of the risks of AI for children, and digital literacy; the potential for limitations on child development; the exacerbation of the digital divide; and risks in public spaces, but also hoped-for opportunities. She suggested that the AI strategy may therefore be the place for including children.

This of course was something I would want to discuss at more length, but in summary the last decade of Westminster policy affecting children, even the Children’s Commissioner’s most recent Big Ask survey, bypasses the question of children’s *rights* completely. If the national AI strategy by contrast would address rights [the foundation upon which data laws are built] and create the mechanisms in public sector interactions with children that would enable them to be told if and how their data is being used (in AI systems or otherwise), and to exercise the choices that public engagement time and time again says is what people want, then that would be a *huge* and positive step forward for effective data practice across the public sector and for the use of AI. Otherwise I see a risk that a strategy on AI and children will ignore children as rights holders across a full range of rights in the digital environment, focus only on the role of AI in child protection, a key DCMS export aim, and ignore the invasive nature of safety tech tools, and their harms.

Next Dr Jim Weatherall from AstraZeneca tied together leveraging “the UK unique strengths of the NHS” and “data collected there”, wanting a close knitting together of the national AI strategy and the national data strategy, so that healthcare, life sciences and the biomedical sector can become “an international renowned asset.” He’d like to see students doing data science modules in their studies, and international access to talent to work for AZ.

Lord Clement-Jones then asked him how to engender public trust in data use. Weatherall said a number of false starts in the past are hindering progress, but that he saw the way forward was data trusts and citizen juries.

His answer ignores the most obvious solution: respect existing law and human rights, using data only in ways that people want and have given their permission for. Then show them that you did that, and nothing more. In short, what medConfidential first proposed in 2014: the creation of data usage reports.

The infrastructure for managing personal data controls in the public sector, as well as its private partners, must be the basic building block for any national AI strategy. Views from public engagement work, polls, and outreach have not changed significantly since those done in 2013-14, but ask for the same things over and over again: respect for ‘red lines’, and to have control and choice. Won’t government please make it happen?

If the government fails to put in place those foundations, whatever strategy it builds will fall in the same ways they have done to date, as care.data did, by assuming it was acceptable to use data in the way that the government wanted, without a social licence, in the name of “innovation”. Those aims were championed by companies such as Dr Foster, which profited from reusing personal data from the public sector, in a “hole and corner deal” as described by the chairman of the House of Commons committee of public accounts in 2006. Such deals put industry and “innovation” ahead of what the public want in terms of ‘red lines’ for acceptable re-uses of their own personal data, and for data re-used in the public interest versus for commercial profit. And “The Department of Health failed in its duty to be open to parliament and the taxpayer.” That openness and accountability are still missing nearly ten years on, in the scope creep of national datasets and commercial reuse, and in expanding data policies and research programmes.

I disagree with the suggestion made that Data Trusts will somehow be more empowering to everyone than the mechanisms we have today for data management. I believe Data Trusts will further stratify those who are included and those excluded, benefit those who have the capacity to participate, and disadvantage those who cannot choose. They are also a figleaf of acceptability that doesn’t solve the core challenge. Citizen juries cannot do more than give a straw poll. Every person whose data is used is entitled to rights in law, and the views of a jury or Trust cannot speak for everyone or override those rights protected in law.

Tabitha Goldstaub spoke next and outlined some of what the AI Council Roadmap had published. She suggested looking at removing barriers to best support the AI start-up community.

As I wrote when the roadmap report was published, there are basics missing in government’s own practice that could be solved. It had an ambition to “Lead the development of data governance options and its uses. The UK should lead in developing appropriate standards to frame the future governance of data,” but the Roadmap largely ignored the governance infrastructures that already exist. One can only read into that a desire to change and redesign what those standards are.

I believe that there should be no need to change the governance of data, but instead to make today’s rights exercisable and deliver the enforcement that makes existing governance actionable. Any genuine “barriers” to data use in data protection law are designed as protections for people; the people the public sector, its staff and these arm’s-length bodies are supposed to serve.

Blaming AI and algorithms, blaming lack of clarity in the law, blaming “barriers” is often avoidance of one thing: human accountability. Accountability for ignorance of the law or lack of consistent application. Accountability for bad policy, bad data and bad applications of tools is a human responsibility. Systems you choose to apply to human lives affect people, sometimes forever and in the most harmful ways, so those human decisions must be accountable.

I believe that some simple changes in practice when it comes to public administrative data could bring huge steps forward there (a rough sketch of what a published register entry might look like follows the list):

  1. An audit of existing public admin data held, by national and local government, and consistent published registers of databases and algorithms / AI / ML currently in use.
  2. Identify the lawful basis for each set of data processes, their earliest record dates and content.
  3. Publish the resulting ROPA (Record of Processing Activities) and storage limitations.
  4. Assign accountable owners to databases, tools and the registers.
  5. Sort out how you will communicate with people whose data you process unlawfully, in order to meet the law, or stop processing it.
  6. And above all, publish a timeline for data quality processes and show that you understand how the degradation of data accuracy and quality affects the rights and responsibilities in law that change over time as a result.
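As a concrete illustration only, and not any existing government schema, here is a minimal sketch in Python of what one published register entry covering the steps above might hold. All field and function names are my own assumptions.

```python
from dataclasses import dataclass, field
from datetime import date
from typing import List

# Hypothetical sketch of one entry in a published register of databases and
# algorithms / AI / ML in use by a public body. Field names are illustrative
# assumptions, not an established standard.
@dataclass
class RegisterEntry:
    name: str                       # database or tool in use (step 1)
    controller: str                 # the public body acting as data controller
    accountable_owner: str          # named role accountable for this entry (step 4)
    lawful_basis: str               # lawful basis for the processing (step 2)
    earliest_record: date           # earliest record held (step 2)
    storage_limit_years: int        # published storage limitation (step 3)
    uses_automated_decisions: bool  # whether algorithms / AI / ML are applied (step 1)
    data_categories: List[str] = field(default_factory=list)

    def overdue_for_review(self, today: date) -> bool:
        # Flags entries whose oldest records exceed the stated storage limit,
        # i.e. candidates for the 'communicate or stop processing' step (step 5).
        return (today.year - self.earliest_record.year) > self.storage_limit_years
```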

Goldstaub went on to say, on ethics and inclusion, that if it’s not diverse, it’s not ethical. Perhaps the next panel itself and similar events could take a lesson from that, as such APPG panel events are not as diverse as they could or should be themselves. Some of the biggest harms in the use of AI are after all for those in communities least represented, and panels like this tend to ignore lived reality.

The Rt Rev Croft then wrapped up the introductory talks on that more human note, and by exploding some myths. He importantly talked about the consequences he expects of the increasing use of AI and its deployment in ‘the future of work’, for example, and its effects on our humanity. He proposed five topics for inclusion in the strategy and suggested it is essential to engage a wide cross-section of society. And most importantly to ask: what is this doing to us as people?

There were then some of the usual audience questions asked on AI, transparency, garbage-in garbage-out, challenges of high risk assessment, and agreements or opposition to the EU AI regulation.

What frustrates me most in these discussions is that the technology is an assumed given, and the bias that gives to the discussion is itself ignored. A holistic national AI strategy should be asking whether and why AI at all. What are the consequences of this focus on AI, and what policy-making oxygen and capacity does it take away from other areas of what government could or should be doing? The questioner who asks how adaptive learning could use AI for better learning in education fails to ask what good learning looks like, and whether and how adaptive tools, analogue or digital, fit into that at all.

I would have liked to ask panelists whether they agree that proposals for public engagement and digital literacy distract from the lack of human accountability for bad policy decisions that use machine-made support. Taking examples from 2020 alone, three applications of algorithms and data in the public sector were challenged by civil society because of their harms: the Home Office dropping its racist visa algorithm, the court finding DWP’s Universal Credit decisions ‘irrational and unlawful’, and the “mutant algorithm” of the summer 2020 exams. Digital literacy does nothing to help people in those situations. What AI has done is to increase the speed and scale of the harms caused by harmful policy, such as the ‘Hostile Environment’, which is harmful by design.

Any Roadmap, AI Council recommendations, and any national strategy, if serious about what good looks like, must answer how those harms would be prevented in the public sector *before* tools are applied. It’s not about the tech, AI or not, but misuse of power. If the strategy or a Roadmap or ethics code fails to state how it would prevent such harms, then it isn’t serious about ethics in AI, but is ethics-washing its aims under the guise of saying the right thing.

One unspoken problem right now is the focus on the strategy solely for the delivery of a pre-determined tool (AI). Who cares what the tool is? Public sector data comes from the relationship between people and the provision of public services by government at various levels, and its AI strategy seems to have lost sight of that.

What good would look like in five years would be an end to siloed discussion of AI as if it were a desirable silver bullet with mythical numbers of ‘economic growth’ as a result; instead AI would be treated like any other tech, and its role in end-to-end processes or service delivery would be discussed proportionately. Panelists would stop suggesting that the GDPR is hard to understand or that people cannot apply it. Almost all of the same principles in UK data laws have applied for over twenty years. And regardless of the GDPR, Convention 108 applies to the UK post-Brexit unchanged, including the associated Council of Europe Guidelines on AI, data protection, privacy and profiling.

Data laws. AI regulation. Profiling. Codes of Practice on children, online safety or biometrics and emotional or gait recognition. There *are* gaps in data protection law when it comes to biometric data not used for unique identification purposes. But much of this is already rolled into other law and regulation for the purposes of upholding human rights and the rule of law. The challenge in the UK is often not having the law, but its lack of enforcement. There are concerns in civil society that the DCMS is seeking to weaken core ICO duties even further. Recent government, council and think tank roadmaps talk of the UK leading on new data governance, but in reality they simply want to see established laws rewritten to be less favourable to rights. To be less favourable towards people.

Data laws are *human* rights-based laws. We will never get a workable UK national data strategy or national AI strategy if government continues to ignore the very fabric of what they are to be built on. Policy failures will be repeated over and over until a strategy supports people to exercise their rights and have them respected.

Imagine if the next APPG on AI asked what human rights-respecting practice and policy would look like, and what infrastructure the government would need to fund or build to make it happen. In public-private sector areas (like edTech). Or in the justice system, health, welfare, children’s social care. What could that Roadmap look like, how could we make it happen, and over what timeframe? Strategies that could win public trust *and* get the sectoral wins the government and industry are looking for. Then we might actually move forwards on getting a functional strategy that would work for delivering public services, and on where both AI and data fit into that.

A fresh start for edtech? Maybe. But I wouldn’t start from here.

In 1924 the Hibbert Journal published what is accepted as the first printed copy of a well-known joke.

A genial Irishman, cutting peat in the wilds of Connemara, was once asked by a pedestrian Englishman to direct him on his way to Letterfrack. With the wonted enthusiasm of his race the Irishman flung himself into the problem and, taking the wayfarer to the top of a hill commanding a wide prospect of bogs, lakes, and mountains, proceeded to give him, with more eloquence than precision, a copious account of the route to be taken. He then concluded as follows: ‘Tis the divil’s own country, sorr, to find your way in. But a gintleman with a face like your honour’s can’t miss the road; though, if it was meself that was going to Letterfrack, faith, I wouldn’t start from here.’

Ty Goddard asked some sensible questions in TES on April 4 on the UK edTech strategy, under the overarching question, ‘A fresh start for edtech? Maybe. But the road is bumpy.’

We’d hope so, since he’s on the DfE edTech board and aims “to accelerate the edtech sector in Britain and globally.”

“The questions now being asked are whether you can protect learning at a time of national emergency? Can you truly connect educators working from home with their pupils?”

and he rightly noted that,

“One problem schools are now attempting to overcome is that many lack the infrastructure, experience and training to use digital resources to support a wholesale move to online teaching at short notice.”

He calls for “bold investment and co-ordination across Whitehall led by Downing Street to really set a sprint towards super-fast connectivity to schools, pupils’ homes and investment in actual devices for students. The Department for Education, too, has done much to think through our recent national edtech strategy – now it needs to own and explain it.”

But ‘own and explain it’ is the same problematic starting point that care.data had in the NHS in 2014. And we know how that went.

The edTech demands and drive for the UK are not a communications issue. Nor are they simply problems of infrastructure, or the age-old idea of shipping suitable tech at scale. The ‘fresh start’ isn’t going to be what anyone wants, least of all the edTech evangelists, if we start from where they are.

Demonstrating certain programmes, platforms, and products to promote to others and drive adoption is ‘the divil’s own country‘.

The UK edTech strategy in effect avoided online learning, and the reasons for that were not public knowledge but likely well founded. Such products are mostly unevidenced, and often any available research comes from the companies themselves, their partners and promoter think tanks, or related, self-interested bodies.

I’ve not seen anyone yet talk about the disadvantage and deprivation that come from not issuing course curriculum standard textbooks to every child. Why on earth can secondary schools not afford to give each child their textbook to take home? A darn sight cheaper than tech, independent of data costs, and a guide to exactly what the exams will demand. Should we not seek to champion the most appropriate and equitable learning solutions, in addition to, rather than exclusively, the digital ones? GCSE children I support(ed) in foreign languages each improved once they had written materials. Getting out Chromebooks, by contrast, simply interfered in the process and wasted valuable classroom time.

Technology can deliver the most vital communications, at speed and scale. It can support admin, expand learning and level the playing field through accessible tools. But done wrongly, it makes things worse than they would be without it.

Its procurement must assess any potential harmful consequences and safeguard against them, and not accept short term benefits, at the cost of long term harm. It should be safe, fair, and transparent.

“Responsible technology is no longer a nice thing to do to look good, it’s becoming a fundamental pillar of corporate business models. In a post-Cambridge Analytica world, consumers are demanding better technology and more transparency. Companies that do create those services are the ones that will have a better, brighter future.”

Kriti Sharma, VP of AI, Sage, (Doteveryone 2019 event, Responsible Technology)

The hype of ‘edTech’ achievement in the classroom so far, far outweighs the evidence of delivery. Neil Selwyn, Professor in the Faculty of Education, Monash University, Australia, writing in the Impact magazine of the Chartered College in January 2019 summed up:

“the impacts of technology use on teaching and learning remain uncertain. Andreas Schleicher – the OECD’s director of education – caused some upset in 2015 when suggesting that ICT has negligible impact on classrooms. Yet he was simply voicing what many teachers have long known: good technology use in education is very tricky to pin down.”

That won’t stop edTech being a mainstay of the UK export strategy post-Brexit, whenever that may now be. But let’s be very clear that if the Department wants to be a world leader it shouldn’t promote products whose founders were last most notably interviewing fellow students online about their porn preferences. Or who are based in offshore organisations with very odd financial structures. Do your due diligence. Work with reputable people and organisations and build a trustworthy network of trustworthy products framed by the rule of law, that is rights-respecting and appropriate to children. But don’t start with the products.

Above all build a strategy for education, for administrative support, for respecting rights, and for teaching in which tools that may or may not be technology-based add value; but don’t start with the product promotion.

To date the aims are to serve two masters: our children’s education, and the UK edTech export strategy. You can serve both if you’re prepared to do the proper groundwork, but that groundwork is lacking right now. What is certain is that if you get it wrong for UK children, the other will inevitably fail.

Covid19 must not be misused to direct our national edTech strategy. I wouldn’t start from here isn’t a joke, it’s a national call for change.

Here are ten reasons where, why, and how to start instead.

1. The national edTech strategy board should start by demonstrating what it wants to see from others, with full transparency of its members, aims, terms of reference, partners and meeting minutes. There should be no need to use FOI to ask for them. There are much more sensitive subjects that operate in the open. It unfortunately emulates other DfE strategy, and the UK edTech network, which has an in-crowd and long-standing controlling members. Both would be the richer for transparency and openness.

2. Stop bigging up the ‘Big Three’ and doing their market monopolisation for them, unless you want people to see you simply as promoting your friends’-on-the-board/foundation/ethics committee’s products. Yes, “many [educational settings] lack the infrastructure”, but that should never mean encouraging ownership and delivery by only closed commercial partners. That is the route to losing control of your state education curriculum, staff training and (e)quality, its delivery, risk management, data, and cost control.

3. Start with designing for fairness in public sector systems. Minimum acceptable ethical standards could be framed around, for example, accessibility, design, and restrictions on commercial exploitation and in-product advertising. This needs to be in place first, before fitting products ‘on top’ of an existing unfair and imbalanced system, to avoid embedding disadvantage and the commodification of children in education even further.

5. Accessibility and internet access are a social justice issue. Again, as we’ve argued at defenddigitalme for some time, these come *before* promoting products on top of the delivery systems:

  • Accessibility standards for all products used in state education should be defined and made compulsory in procurement processes, to ensure access for all and reduce digital exclusion.
  • All schools must be able to connect to high-speed broadband services to ensure equality of access and participation in the educational, economic, cultural and social opportunities of the world wide web.
  • Ensure a substantial improvement in support available to public and school library networks. CILIP has pointed to CIPFA figures of a net reduction of 178 libraries in England between 2009-10 and 2014-15.

6. Core national education infrastructure must be put on the national risk register, as we’ve argued for previously at defenddigitalme (see 6.6). Dependencies such as MS Office 365, major cashless payment systems, and Google for Education all need to be assessed, as part of planning for regular and exceptional delivery of education. We currently operate in the dark. And it should be unthinkable that companies get seats at the national UK edTech strategy table without full transparency over questions on their practices, policy and meeting the rule of law.

7. Shift the power balance back to schools and families, where they can trust an approved procurement route, and children and legal guardians can trust school staff to only be working with suppliers that are not overstepping the boundaries of lawful processing. Incorporate (1) the Recommendation CM/Rec(2018)7 of the Committee of Ministers to member States on Guidelines to respect, protect and fulfil the rights of the child in the digital environment and (2) respect the UN General comment No. 16 (2013) on State obligations regarding the impact of the business sector on children’s rights, across the education and wider public sector.

8. Start with teacher training. Why on earth is the national strategy all about products, when it should be starting with people?

  • Introduce data protection and pupil privacy into basic teacher training, to support a rights-respecting environment in policy and practice, using edTech and broader data processing, to give staff the clarity, consistency and confidence in applying the high standards they need.
  • Ensure ongoing training is available and accessible to all staff for continuous professional development.
  • A focus on people, not products, will deliver the fundamental basics needed for good tech use.

9. Safe data by design and default. I’m tired of hearing from CEOs of companies that claim to be social entrepreneurs, or non-profit, or teachers who’ve designed apps, how well intentioned their products are. Show me instead. Meet the requirements of the rule of law.

  • Local systems must stop shipping out (often sensitive) pupil data at scale and speed to companies, and instead stay in control of terms and conditions and data purposes, and ban uses for product development, for example.
  • Companies must stop using pupil data for their own purposes for profit, or to make inferences about autism or dyslexia for example; if that’s not the stated product aim, it’s likely unlawful.
  • Stop national pupil data distribution for third-party reuse. Start safe access instead.  And get the Home Office out of education.
  • Establish fair and independent oversight mechanisms of national pupil data, so that transparency and trust are consistently maintained across the public sector, and throughout the chain of data use, from collection to the end of its life cycle, including annual data usage reports for each child (a rough sketch of what such a report might hold follows below).
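Purely as an illustration of the idea, and not any existing scheme, here is a minimal sketch in Python of the kind of structure an annual data usage report for a child might take. All names and fields are my own assumptions.

```python
from dataclasses import dataclass, field
from datetime import date
from typing import List

# Hypothetical sketch of an annual data usage report, in the spirit of
# medConfidential's 2014 proposal. Fields are illustrative assumptions only.
@dataclass
class DataUse:
    dataset: str        # e.g. a national pupil dataset
    recipient: str      # organisation that received or accessed the data
    purpose: str        # stated purpose of the use
    lawful_basis: str   # basis relied on for that use
    date_of_use: date

@dataclass
class AnnualDataUsageReport:
    subject_reference: str   # pseudonymous reference, not a real identifier
    year: int
    uses: List[DataUse] = field(default_factory=list)

    def summary(self) -> str:
        # A plain-language line a family could actually read.
        return f"{len(self.uses)} recorded uses of this child's data in {self.year}."
```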

10. We need a law that works for children’s rights. Develop a legislative framework for the fair use of a child’s digital footprint from the classroom for direct educational and administrative purposes at local level, including commercial acceptable use policies. Build the national edTech strategy with a rights-based framework and lawful basis in an Education and Privacy Act. Without this, you are building on sand.

Women Leading in AI — Challenging the unaccountable and the inevitable

Notes [and my thoughts] from the Women Leading in AI launch event of the Ten Principles of Responsible AI report and recommendations, February 6, 2019.

Speakers included Ivana Bartoletti (GemServ), Jo Stevens MP, Professor Joanna J Bryson, Lord Tim Clement-Jones, Roger Taylor (Centre for Data Ethics and Innovation, Chair), Sue Daley (techUK), Reema Patel, Nuffield Foundation and Ada Lovelace Institute.

Challenging the unaccountable and the ‘inevitable’ is the title of the conclusion of the Women Leading in AI report Ten Principles of Responsible AI, launched this week, and this makes me hopeful.

“There is nothing inevitable about how we choose to use this disruptive technology. […] And there is no excuse for failing to set clear rules so that it remains accountable, fosters our civic values and allows humanity to be stronger and better.”

Ivana Bartoletti, co-founder of Women Leading in AI, began the event, hosted at the House of Commons by Jo Stevens, MP for Cardiff Central, and spoke brilliantly of why it matters right now.

Everyone’s talking about ethics, she said, but it has limitations. I agree with that. This was by contrast very much a call to action.

It was nearly impossible not to cheer, as she set out without any of the usual bullshit, the reasons why we need to stop “churning out algorithms which discriminate against women and minorities.”

Professor Joanna J Bryson took up multiple issues, such as why

  • innovation and ‘flashes in the pan’ are not sustainable, and not what we’re looking for in things that work for us [society].
  • The power dynamics of data, noting that Facebook, Google et al are global assets, and also global problems, and she flagged the UK consultation on taxation open now.
  • And that it is critical that we do not have another nation with access to all of our data.

She challenged the audience to think about the fact that inequality is higher now than it has been since World War I. That the rich are getting richer and that imbalance of not only wealth, but of the control individuals have in their own lives, is failing us all.

This big picture thinking while zooming in on detailed social, cultural, political and tech issues, fascinated me most that evening. It frustrated the man next to me apparently, who said to me at the end, ‘but they haven’t addressed anything on the technology.’

[I wondered if that summed up neatly, some of why fixing AI cannot be a male dominated debate. Because many of these issues for AI, are not of the technology, but of people and power.] 

Jo Stevens, MP for Cardiff Central, hosted the event and was candid about politicians’ level of knowledge and the need to catch up on some of what matters in the tech sector.

We grapple with the speed of tech, she said. We’re slow at doing things and tech moves quickly. It means that we have to learn quickly.

While discussing how regulation is not something AI tech companies should fear, she suggested that a constructive framework, while protecting society against some of the problems we see, is necessary and just, because self-regulation has failed.

She talked about their inquiry, which began with “fake news” and disinformation, but has grown to include:

  • wider behavioural economics,
  • how it affects democracy,
  • understanding the power of data,
  • disappointment with social media companies, who understand the power they have and fail to be accountable.

She wants to see something that changes the way big business works, in the way that employment regulation challenged exploitation of the workforce and unsafe practices in the past.

The bias (conscious or unconscious) and power imbalance have some similarity with the effects on marginalised communities — women, BAME, disabilities — and she was looking forward to seeing the proposed solutions, and welcomed the principles.

Lord Clement-Jones, as Chair of the Select Committee on Artificial Intelligence, picked up the values they had highlighted in the March 2018 report, AI in the UK: ready, willing and able?

Right now there are so many different bodies, groups in parliament and others looking at this [AI / Internet / The Digital World] he said, so it was good that the topic is timely, front and centre with a focus on women, diversity and bias.

He highlighted the importance of maintaining public trust. How do you understand bias? How do you know how algorithms are trained and understand the issues? He fessed up to being a big fan of Doteveryone and their drive for better ‘digital understanding’.

[Though sometimes this point is over-complicated by suggesting individuals must understand how the AI works, the consensus of the evening was common sense — and aligned with the Working Party 29 guidance — that data controllers must ensure they explain clearly and simply to individuals how the profiling or automated decision-making process works, and what its effect is for them.]

The way forward he said includes:

  • Designing ethics into algorithms up front.
  • Data audits need to be diverse in order to embody fairness and diversity in the AI.
  • Questions of the job market and re-skilling.
  • The enforcement of ethical frameworks.

He also asked how far bodies will act, in different debates. Deciding who decides on that is still a debate to be had.

For example, aware of the social credit agenda and scoring in China, we should avoid the same issues. He also agreed with Joanna, that international cooperation is vital, and said it is important that we are not disadvantaged in this global technology. He expected that we [the Government Office for AI] will soon promote a common set of AI ethics, at the G20.

Facial recognition and AI are examples of areas that require regulation for safe use of the tech and to weed out those using it for the wrong purposes, he suggested.

However, on regulation he held back. We need to be careful about too many regulators he said. We’ve got the ICO, FCA, CMA, OFCOM, you name it, we’ve already got it, and they risk tripping over one another. [What I thought as CDEI was created para 31.]

We [the Lords Committee] didn’t suggest yet another regulator for AI, he said and instead the CDEI should grapple with those issues and encourage ethical design in micro-targeting for example.

Roger Taylor (Chair of the CDEI), after saying it felt as if someone had left their homework on his desk with the WLinAI report, supported the idea that the WLinAI principles are important, and agreed it was time for practical things, and for working out what needs to be done.

Can our existing regulators do their job, and cover AI? he asked, suggesting new regulators will not be necessary. Bias, he rightly recognised, already exists in our laws and bodies with public obligations, and in how AI is already operating:

  • CV sorting. [problematic IMO > see Amazon, US teachers]
  • Policing.
  • Creditworthiness.

What evidence is needed, what process is required, what is needed to assure that we know how it is actually operating? Who gets to decide to know if this is fair or not? While these are complex decisions, they are ultimately not for technicians, but a decision for society, he said.

[So far so good.]

Then he made some statements which were rather more ambiguous. The standards expected of the police will not be the same as those for marketeers micro targeting adverts at you, for example.

[I wondered how and why.]

Start-up industries pay more to Google and Facebook than they do in taxes, he said.

[I wondered how and why.]

When we think about a knowledge economy, the output of our most valuable companies is increasingly ‘what is our collective truth? Do you have this diagnosis or not? Are you a good credit risk or not? Even who you think you are — your identity will be controlled by machines.’

What can we do as one country [to influence these questions on AI], in what is a global industry? He believes, a huge amount. We are active in the financial sector, the health service, education, and social care — and while we are at the mercy of large corporations, even large corporations obey the law, he said.

[Hmm, I thought, considering the Google DeepMind-Royal Free agreement that didn’t, and venture capitalists not renowned for their ethics, and yet advise on some of the current data / tech / AI boards. I am sceptical of corporate capture in UK policy making.]

The power to use systems to nudge our decisions, he suggested, is one that needs careful thought. The desire to use the tech to help make decisions is inbuilt into what is actually wrong with the technology that enables us to do so. [With this I strongly agree, and there is too little protection from nudge in data protection law.]

The real question here is, “What is OK to be owned in that kind of economy?” he asked.

This was arguably the neatest and most important question of the evening, and I vigorously agreed with him asking it, but then I worry about his conclusion in passing, that he was “very keen to hear from anyone attempting to use AI effectively, and encountering difficulties because of regulatory structures.”

[And unpopular or contradictory a view as it may be, I find it deeply ethically problematic for the Chair of the CDEI role to be held by someone who had a joint venture that commercially exploited confidential data from the NHS without public knowledge, and whose sale to the Department of Health was described by the Public Accounts Committee as a “hole and corner deal”. That was the route towards care.data, which his co-founder later led for NHS England. The company was then bought by Telstra, where Mr Kelsey went next on leaving NHS England. The whole commodification of the confidentiality of public data, without regard for public trust, is still a barrier to sustainable UK data policy.]

Sue Daley (Tech UK) agreed this year needs to be the year we see action, and the report is a call to action on issues that warrant further discussion.

  • Business wants to do the right thing, and we need to promote it.
  • We need two things — confidence and vigilance.
  • We’re not starting from scratch; she talked about GDPR as the floor, not the ceiling. A starting point.

[I’m not quite sure what she was after here, but perhaps it was the suggestion that data regulation is fundamental in AI regulation, with which I would agree.]

What is the gap that needs filling, she asked? Gap analysis is what we need next, while avoiding duplication of effort; we need to avoid complexity and duplication of work with other bodies. Some of the big, profound questions need to be answered to position the UK as the place where companies want to come.

Sue was the only speaker who went on to talk about the education system, which needs to frame what skills a generation will need for a future world, ‘to thrive in the world we are building for them.’

[The Silicon Valley-driven entrepreneur narrative that the education system is broken is not an uncontroversial position.]

She finished with the hope that young people watching BBC Icons the night before would see Alan Turing [winner of the title] and say, yes, I want to be part of that.

Listening to Reema Patel, representative of the Ada Lovelace Institute, was the reason I didn’t leave early, and missed my evening class. Everything she said resonated, and was some of the best I have heard in the recent UK debate on AI.

  • Civic engagement: the role of the public is as yet unclear, with not one homogeneous public but many publics.
  • The sense of disempowerment is important, with disconnect between policy and decisions made about people’s lives.
  • Transparency and literacy are key.
  • Accountability is vague but vital.
  • What does the social contract look like on people using data?
  • Data may not only be about an individual and under their own responsibility, but also about others, and what that means for data rights, data stewardship, and the articulation of how they connect with one another is lacking in the debate.
  • Legitimacy; If people don’t believe it is working for them, it won’t work at all.
  • Ensuring tech design is responsive to societal values.

2018 was a terrible year she thought. Let’s make 2019 better. [Yes!]


Comments from the floor and questions included Professor Noel Sharkey, who spoke about the reasons why it is urgent to act, especially where technology is unfair and unsafe and already in use. He pointed to Compass (Durham police), and predictive policing using AI and facial recognition with 5% accuracy, and said that the Met was not taking these flaws seriously. Liberty produced a strong report on it, out this week.

Caroline, from Women in AI, echoed my own comments on the need to get an urgent review in place of these technologies used with children in education and social care [in particular where used for prediction of child abuse and interventions in family life].

Joanna J Bryson added to the conversation on accountability, saying that people are not following existing software and audit protocols; someone just needs to go and see if people did the right thing.

The basic question of accountability, is to ask if any flaw is the fault of a corporation, of due diligence, or of the users of the tool? Telling people that this is the same problem as any other software, makes it much easier to find solutions to accountability.

Tim Clement-Jones asked, how many fronts can we fight on at the same time? If government has appeared to exempt itself from some of these issues, and created a weak framework for itself on handling data in the Data Protection Act — critically, he also asked, is the ICO adequately enforcing on government and public accountability, at local and national levels?

Sue Daley also reminded us that politicians need not know everything, but they need to know what the right questions are to ask: what are the effects that this has on my constituents, in employment, on my family? And while she also suggested that not using the technology could be unethical, a participant countered that it’s not the worst thing to have to slow technology down and ensure it is safe before we all go along with it.

My takeaways of the evening included that there is a very large body of women, of whom attendees were only a small part, who are thinking, building and engineering solutions to some of these societal issues embedded in policy, practice and technology. They need to be heard.

It was genuinely electric and empowering, to be in a room dominated by women, women reflecting diversity of a variety of publics, ages, and backgrounds, and who listened to one another. It was certainly something out of the ordinary.

There was a subtle but tangible tension on whether or not regulation beyond what we have today is needed.

While regulating the human behaviour that becomes encoded in AI, we need to ensure that the ethics of human behaviour, reasonable expectations and fairness are not conflated with the technology [i.e. a question of whether AI is good or bad], but with how it is designed, trained, employed and audited, and with whether it should be used at all.

This was the most effective group challenge I have heard to date to counter the usual assumed inevitability of a mythical omnipotence. Perhaps, Julia Powles, this is the beginning of a robust, bold, imaginative response.

Why there are not more women or people from minorities working in the sector was a really interesting, if short, part of the discussion. Why should young women and minorities want to go into an environment that they can see is hostile, in which they may not be heard, and where we still hold *them* responsible for making work work?

And while there were many voices lamenting the skills and education gaps, there were probably fewer who might see the solution more simply, as I do. Schools are foreshortening Key Stage 3 by a year, replacing a breadth of subjects with an earlier compulsory three-year GCSE curriculum which includes RE and PSHE, but it means that at 12, many children are having to choose to do GCSE courses in computer science / coding, or a consumer-style iMedia, or no IT at all, for the rest of their school life. This either-or content is incredibly short-sighted, and surely some blend of non-examined digital skills should be offered through to 16 to all, at least in parallel importance with RE or PSHE.

I also still wonder about all that incredibly bright and engaged people are not talking about, not solving, and missing in policy making, while caught up in AI. We need to keep thinking broadly, and keep human rights at the centre of our thinking on machines. Anaïs Nin wrote over 70 years ago about the risk that growth in technology would expand our potential for connectivity through machines, but diminish our genuine connectedness as people.

“I don’t think the [American] obsession with politics and economics has improved anything. I am tired of this constant drafting of everyone, to think only of present day events”.

And as I wrote nearly three years ago, we still seem to have no vision for sustainable public policy on data, or for establishing a social contract for its use, as Reema said, which underpins the UK AI debate. Meanwhile, the current changing national public policies in England on identity and technology are becoming catastrophic.

Challenging the unaccountable and the ‘inevitable’ in today’s technology and AI debate, is an urgent call to action.

I look forward to hearing how Women Leading in AI plan to make it happen.


References:

Women Leading in AI website: http://womenleadinginai.org/
WLinAI report: 10 Principles of Responsible AI
@WLinAI #WLinAI

image credits 
post: creative commons Mark Dodds/Flickr
event photo:  / GemServ

Can Data Trusts be trustworthy?

The Lords Select Committee report on AI in the UK in March 2018 suggested that “the Government plans to adopt the Hall-Pesenti Review recommendation that ‘data trusts’ be established to facilitate the ethical sharing of data between organisations.”

Since data distribution already happens, what difference would a Data Trust model make to ‘ethical sharing’?

A ‘set of relationships underpinned by a repeatable framework, compliant with parties’ obligations’ seems little better than what we have today, with all its problems including deeply unethical policy and practice.

The ODI set out some of the characteristics Data Trusts might have or share. As importantly, we should define what Data Trusts are not. They should not simply be a new name for pooling content and a new single distribution point. Click and collect.

But is a Data Trust little more than a new description for what goes on already? Either a physical space or legal agreements for data users to pass around the personal data from the unsuspecting, and sometimes unwilling, public. Friends-with-benefits who each bring something to the party to share with the others?

As with any communal risk, it is the standards of the weakest link, the least ethical, the one that pees in the pool, that will increase reputational risk for all who take part, and spoil it for everyone.

Importantly, the Lords AI Committee report recognised that there is an inherent risk in how the public would react to Data Trusts, because there is no social license for this new data sharing.

“Under the current proposals, individuals who have their personal data contained within these trusts would have no means by which they could make their views heard, or shape the decisions of these trusts.”

Views those keen on Data Trusts seem keen to ignore.

When the Administrative Data Research Network was set up in 2013, a new infrastructure for “deidentified” data linkage, extensive public dialogue was carried out across the UK. It concluded in a report with very similar findings to those apparent at dozens of care.data engagement events in 2014-15:

There is no public support for:

  • “Creating large databases containing many variables/data from a large number of public sector sources,
  • Establishing greater permanency of datasets,
  • Allowing administrative data to be linked with business data, or
  • Linking of passively collected administrative data, in particular geo-location data”

The other ‘red-line’ for some participants was allowing “researchers for private companies to access data, either to deliver a public service or in order to make profit. Trust in private companies’ motivations were low.”

All of the above could be central to Data Trusts. All of the above highlight that in any new push to exploit personal data, the public must not be the last to know. And until all of the above are resolved, that social-license underpinning the work will always be missing.

Take the National Pupil Database (NPD) as a case study in a Data Trust done wrong.

It is a mega-database of over 20 other datasets. Raw data has been farmed out for years under terms and conditions to third parties, including users who hold an entire copy of the database, such as the somewhat secretive and unaccountable Fischer Family Trust, and others who don’t answer to Freedom of Information requests, and whose terms are hidden under commercial confidentiality. FFT buys and benchmarks data from schools and sells it back to some; the profiling is hidden from parents and pupils, yet FFT predictive risk scoring can shape a child’s school experience from age 2. They don’t really want to answer how staff can tell if a child’s FFT profile and risk score predictions are accurate, or if they can spot errors or a wrong data input somewhere.

Even as the NPD moves towards risk reduction, its issues remain. When will children be told how data about them are used?

Is it any wonder that many people in the UK feel a resentment of institutions and organisations who feel entitled to exploit them or nudge their behaviour, and a need to ‘take back control’?

It is naïve for those working in data policy and research to think that it does not apply to them.

We already have safe infrastructures in the UK for excellent data access. What users are missing is the social license to do so.

Some of today’s data uses are ethically problematic.

No one should be talking about increasing access to public data before delivering increased public understanding. Data users must get over their fear of ‘what if the public found out’.

If your data use being on the front pages would make you nervous, maybe it’s a clue you should be doing something differently. If you don’t trust the public would support it, then perhaps it doesn’t deserve to be trusted. Respect individuals’ dignity and human rights. Stop doing stupid things that undermine everything.

Build the social license that care.data was missing. Be honest. Respect our right to know, and right to object. Build them into a public UK data strategy to be understood and be proud of.


Part 1. Ethically problematic
Ethics is dissolving into little more than a buzzword. Can we find solutions underpinned by law, and ethics, and put the person first?

Part 2. Can Data Trusts be trustworthy?
As long as data users ignore data subjects rights, Data Trusts have no social license.



Is education preparing us for the jobs of the future?

The Fabian Women, Glass Ceiling not Glass Slipper event, asked last week:

Is Education preparing us for the jobs of the future?

The panel talked about changing social and political realities. We considered the effects on employment. We began discussing how those changes should feed into education policy and practice today. It is a discussion that should be had by the public. So far, almost a year after the Referendum, the UK government is yet to say what post-Brexit Britain might look like. Without a vision, any mandate for the unknown, if voted for on June 9th, will be meaningless.

What was talked about and what should be a public debate:

  • What jobs will be needed in the future?
  • Post Brexit, what skills will we need in the UK?
  • How can the education system adapt and improve to help future generations develop skills in this ever changing landscape?
  • How do we ensure women [and anyone else] are not left behind?

Brexit is the biggest change management project I may never see.

As the State continues making and remaking laws, reforming education, and starts exiting the EU, all in parallel, technology and commercial companies won’t wait to see what the post-Brexit Britain will look like. In our state’s absence of vision, companies are shaping policy and ‘re-writing’ their own version of regulations. What implications could this have for long term public good?

What will be needed in the UK future?

A couple of sentences from Alan Penn have stuck with me all week. Loosely quoted: we’re seeing cultural identity shift across the country, due to the change in our available employment types. Traditional industries once ran in a family, with a strong sense of heritage. New jobs don’t offer that. It leaves a gap we cannot fill with “I’m a call centre worker”. And this change is unevenly felt.

There is no tangible public plan in the Digital Strategy for dealing with that change in the employment market over the coming 10 to 20 years, and what it means tied into education. It matters when many believe, as do these authors in Scientific American, that “around half of today’s jobs will be threatened by algorithms. 40% of today’s top 500 companies will have vanished in a decade.”

So what needs thought?

  • Analysis of what that regional jobs market might look like, should be a public part of the Brexit debate and these elections →
    We need to see those goals, to ensure policy can be planned for education and benchmark its progress towards achieving its aims
  • Brexit and technology will disproportionately affect different segments of the jobs market and therefore the population by age, by region, by socio-economic factors →
    Education policy must therefore address aspects of skills looking to the future towards employment in that new environment, so that we make the most of opportunities, and mitigate the harms.
  • Brexit and technology will disproportionately affect communities → What will be done to prevent social collapse in regions hardest hit by change?

Where are we starting from today?

Before we can understand the impact of change, we need to understand what the present looks like. I cannot find a map of what the English education system looks like. No one I ask seems to have one, or a firm grasp across the sector of how and where all the parts of England’s education system fit together, or their oversight and accountability. Everyone has an idea, but no one can join the dots. If you have one, please let me know.

Nothing is constant in education like change; in laws, policy and its effects in practice, so I shall start there.

1. Legislation

In retrospect it was a fatal flaw, missed in post-Referendum battles of who wrote what on the side of a bus, that no one did an assessment of education [and indeed other] ‘legislation in progress’. There should have been recommendations made on scrapping inappropriate government bills, in their entirety or in parts. New laws are now being enacted, rushed through in wash-up, that are geared to our old status quo, and we risk basing policy only on what we know from the past, because on that, we have data.

In the timeframe that Brexit will become tangible, we will feel the effects of the greatest shake up of Higher Education in 25 years. Parts of the Higher Education and Research Act, and Technical and Further Education Act are unsuited to the new order post-Brexit.

What it will do: The new HE law encourages competition between institutions, and the TFE Act centred in large part on how to manage insolvency.

What it should do: Policy needs to promote open, collaborative networks if, within a now reduced research and academic circle, scholarly communities are to thrive.

If nothing changes, we will see harm to these teaching institutions and people in them. The stance on counting foreign students in total migrant numbers, to take an example, is singularly pointless.

Even the Royal Society report on Machine Learning noted the UK approach to immigration as a potential harm to prosperity.

Local authorities cannot legally build schools under their authority today, even if needed. They must be free schools. This model has seen high turnover and closures; it is rather unstable.

Legislation has recently meant not only restructuring, but repurposing of what education [authorities] is expected to offer.

A new Statutory Instrument — The School and Early Years Finance (England) Regulations 2017 — makes music, arts and playgrounds items ‘that may be removed from maintained schools’ budget shares’.

How will this withdrawal of provision affect skills starting from the Early Years throughout young people’s education?

2. Policy

Education policy, if it continues along the grammar school path, will divide communities into the ‘passed’ and the ‘unselected’. A side effect of selective schooling (a feature or a bug, depending on your point of view) is socio-economic engineering. It builds class walls in the classroom, while others, like Fabian Women, say we should be breaking through glass ceilings. Current policy in a wider sense is creating an environment that is hostile to human integration. It creates division across the entire education system for children aged 2–19.

The curriculum is narrowing, according to staff I’ve spoken to recently, as a result of measurement focus on Progress 8, and due to funding constraints.

What effect will this have on analysis of knowledge, discernment, how to assess when computers have made a mistake or supplied misinformation, and how to apply wisdom? These are skills that today still distinguish human learning from machine learning.

What narrowing the curriculum does: Students have fewer opportunities to discover their skill set, limiting opportunities for developing social skills and cultural development, and their development as rounded, happy, human beings.

What we could do: Promote a long-term love of learning in and outside school and in communities. Reinvest in the arts, music and play, which support mental and physical health and create a culture in which people like to live as well as work. Library and community centre funding must be re-prioritised, ensuring inclusion and provision outside school for all abilities.

Austerity builds barriers of access to opportunity and skills. Children who cannot afford to pay are excluded from extra-curricular classes. We already divide our children through private and state education, into those who have better facilities and funding to enjoy and explore a fully rounded education, and those whose funding will not stretch much beyond the bare curriculum. For SEN children, that has already been stripped back further.

All the accepted current evidence says selective schooling limits social mobility and limits choice. Talk of an evidence-based profession is hard to square with a passion for grammars, an against-the-evidence policy.

Existing barriers are likely to become entrenched in twenty years. What does it do to society, if we are divided in our communities by money, or gender, or race, and feel disempowered as individuals? Are we less responsible for our actions if there’s nothing we can do about it? If others have more money, more power than us, others have more control over our lives, and “no matter what we do, we won’t pass the 11 plus”?

Without joined-up scrutiny of these policy effects across the board, we risk embedding these barriers into future planning. Today’s data are used to train “how the system should work”. If current data are what applicants in 5 years will base future expectations on, will their decisions be objective and will in-built bias be transparent?

3. Sociological effects of legislation.

It’s not only institutions that will lose autonomy in the Higher Education and Research Act.

At present, the risk to the autonomy of science and research is theoretical — but the implications for academic freedom are troubling. [Nature 538, 5 (06 October 2016)]

The Secretary of State for Education now also has new Powers of Information about individual applicants and students. Combined with the Digital Economy Act, the law can ride roughshod over students’ autonomy and consent choices. Today, they can opt out of UCAS automatically sharing their personal data with the Student Loans Company, for example. Thanks to these new powers, that’s gone.

The Act further includes the intention to make institutions release more data about course intake and results under the banner of ‘transparency’. Part of the aim is indisputably positive, to expose discrimination and inequality of all kinds. It also aims to make the £ cost-benefit return “clearer” to applicants — by showing what exams you need to get in, what you come out with, and then by joining all that personal data to the longitudinal school record, tax and welfare data, you see what the return is on your student loan. The government can also then see what your education ‘cost or benefit’ the Treasury. It is all of course much more nuanced than that, but that’s the very simplified gist.

This ‘destinations data’ is going to be a dataset we hear ever more about and has the potential to influence education policy from age 2.

Aside from the issue of personal data disclosiveness when published by institutions — we already know of individuals who could spot themselves in a current published dataset — I worry that this direction of using data for ‘advice’ is unhelpful. What if we’re looking at the wrong data upon which to base future decisions? The past doesn’t take account of Brexit or enable applicants to do so.

Researchers [and applicants, the year before they apply or start a course] will be looking at what *was*: predicted and achieved qualifying grades, make-up of the class, course results, first job earnings. What was, for other people, is at least 5 years old by the time it’s looked at. Five years is a long time out of date.

4. Change

Teachers and schools have reached saturation point in their capacity to handle change over the last 5 years. Reform has been drastic in structures and curriculum, and is ongoing in funding. There is no ongoing teacher training, and the lack of CPD take-up is exacerbated by underfunding.

Teachers are fed up with change. They want stability. But contrary to the current “strong and stable” message, the reality is that we will get anything but; we must instead manage change if we are to thrive. Politically, we will see a backlash when ‘stable’ proves undeliverable.

But teaching has not seen ‘stable’ for some time. Teachers are asking for fewer children, and more cash in the classroom. Unions talk of a focus on learning, not testing, to drive school standards. If the planned restructuring of funding happens, how will it affect staff retention?

We know schools are already reducing staff. How will this affect employment, adult and children’s skill development, their ambition, and society and economy?

Where could legislation and policy look ahead?

  • What are the big Brexit targets and barriers and when do we expect them?
  • How is the fallout from underfunding and the reduction of teaching staff expected to affect skills provision?
  • State education policy is increasingly hands-off. What is the incentive for local schools or MATs to look much beyond the short term?
  • How do local decisions ensure education is preparing their community, but also considering society, health and (elderly) social care, Post-Brexit readiness and women’s economic empowerment?
  • How does our ageing population shift in the same time frame?

How can the education system adapt?

We need to talk more about other changes in the system in parallel to Brexit; join the dots, plus the potential positive and harmful effects of technology.

Gender plays a role here too, as does mitigating discrimination of all kinds and confirmation bias; and even in the tech itself, whether AI, for example, is going to be better than us at decision-making if we teach it to be biased.

Dr Lisa Maria Mueller talked about the effects and influence of age, setting and language factors on what skills we will need, and on employment. While there are certain skill sets that computers are and will be better at than people, she argued society also needs to continue to cultivate human skills in cultural sensitivities, empathy, and understanding. We all nodded. But how?

To develop all these human skills is going to take investment. Investment in the humans that teach us. Bennie Kara, Assistant Headteacher in London, spoke about school cuts and how they will affect children’s futures.

The future of England’s education must be geared to a world in which knowledge and facts are ubiquitous, and more readily available online than at any other time. And access to learning must be inclusive. That means including SEN and low-income families, the unskilled, everyone. As we become more internationally remote, we must put safeguards in place if we are to support thriving communities.

Policy and legislation must also preserve and respect human dignity in a changing work environment, and review not only what work is on offer, but *how*; the kinds of contracts and jobs available.

Where might practice need to adapt now?

  • Re-consider curriculum content with its focus on facts. Will success risk being measured based on out of date knowledge, and a measure of recall? Are these skills in growing or dwindling need?
  • Knowledge focus must place value on analysis, discernment, and application of facts that computers will learn and recall better than us. Much of that learning happens outside school.
  • Opportunities have been cut, together with funding. We need communities brought back together, if they are not to collapse. Funding centres of local learning, restoring libraries and community centres will be essential to local skill development.

What is missing?

Although Sarah Waite spoke (in a suitably purdah-appropriate tone) about the importance of basic skills in the future labour market, we didn’t get to talking about education preparing us for the lack of jobs in the future, and what that changed labour market will look like.

What skills will *not* be needed? Who decides? If left to companies’ sponsor led steer in academies, what effects will we see in society?

Discussions of a future education model and technology seem to share a common theme: people’s scope for making autonomous choices seems reduced. But they share no positive vision.

  • Technology should empower us, but it seems to empower the State and diminish citizens’ autonomy in many of today’s policies, and in future scenarios, especially around the use of personal data and the Digital Economy.
  • Technology should enable greater collaboration, but current tech in education policy is focused too little on use on children’s own terms, and too heavily on top-down monitoring: of scoring, screen time, search terms. Further restrictions through Age Verification are coming, and may restrict access to and reduce participation in online services if not done well.
  • Infrastructure weakness is letting down skills training: University Technical Colleges (UTCs) are not popular and are failing to fill places. There is a lack of an overarching, area-wide strategic plan for pupils in which UTCs play a part. Local Authorities played an important part in regional planning, which needs to be restored to ensure joined-up local thinking.

How do we ensure women are not left behind?

The final question of the evening asked how women will be affected by Brexit and the changing job market. Part of the risk overall, the panel concluded, is related to [the lack of] equal pay. But where are the assessments of the gendered effects in the UK of:

  • community structural change, intra-family support, and the effect on demand for social care
  • tech solutions in response to a lack of human interaction and staffing shortages, including robots in the home and telecare
  • the disproportionate drop-out from work due to unpaid care roles, and the difficulty of getting back in after a break
  • the roles and types of work likely to be most affected or replaced by machine learning and robots
  • and how will women be empowered, or not, socially by technology?

In education we quickly need to respond to the known data showing where women are already being left behind. The attrition rate in teaching in England after two to three years, for example, is poor, and getting worse. What will government do to keep teachers teaching? Their value as role models is not captured in pupils’ exam results based entirely on knowledge transfer.

Our GCSEs this year go back to pure exam-based testing and remove applied coursework marking, which is likely to see lower attainment for girls than boys, say practitioners. That is likely to leave girls behind at an earlier age.

“There is compelling evidence to suggest that girls in particular may be affected by the changes — as research suggests that boys perform more confidently when assessed by exams alone.”

Jennifer Tuckett spoke about what fairness might look like for female education in the Creative Industries. From school-leaver to returning mother, and retraining older women, appreciating the effects of gender in education is intrinsic to the future jobs market.

We also need broader public understanding of the feedback loop of technology’s impacts on the process and delivery of teaching itself; and, as school management becomes increasingly important and is male-dominated, how will changes in teaching affect women disproportionately? Fact delivery and testing can be done by machine, and support the current policy direction, but can a computer create a love of learning and teach humans how to think?

“There is an opportunity for a holistic synthesis of research into gender, the effect of tech on the workplace, the effect of technology on care roles, risks and opportunities.”

Delivering education to ensure women are not left behind includes making sure that women going into education as teenagers now are not led down routes without thinking of what they want and need in future, regardless of work.

Education must adapt to changed employment markets, and the social and community effects of Brexit. If it does not, barriers will become embedded: geographical, economic, language, familial, skills, and social exclusion.

In short

In summary, what is the government’s Brexit vision? We must know what they see five, 10, and 25 years ahead, set against an understanding of the landscape as-is, in order to peg other policy to it.

With this foundation, what we know and what we estimate we don’t know yet can be planned for.

Once we know where we are going in policy, we can do a fit-gap to map how to get people there.

Estimate which skills gaps need to be filled and which do not. Where will change be hardest?

Change is not new. But there is now the potential for massive, lasting long-term economic and social damage to our young people. Government is hindered by short-term political thinking, but it has a long-term responsibility to ensure children are not mis-educated because policy and the future environment are not aligned.

We deserve public, transparent, informed debate to plan our lives.

We enter the unknown of the education triangle (Brexit, underfunding, divisive structural policy) at our peril for the next ten years and beyond, without appropriate adjustment of pre-Brexit legislation and policy plans to the new world order.

The combined negative effects on employment at scale and at pace must be assessed with urgency, not by big Tech who will profit, but with an eye on future fairness, and on public economic and social good. Academy sponsors, decision makers in curriculum choices, and schools with limited funding have no incentive to look to the wider world.

If we’re going to go it alone, we’d better be robust as a society; and that can’t be just some of us, and can’t only be about skills seen as having a tangible output.

All this discussion is framed by the premise that education’s aim is to prepare a future workforce for work, and that it is sustainable.

Policy is increasingly based on work that is measured by economic output. We must not leave out or behind those who do not, or cannot, or whose work is unmeasured yet contributes to the world.

‘The only future worth building includes everyone,’ said the Pope in a recent TED Talk.

What kind of future do you want to see yourself living in? Will we all work or will there be universal basic income? What will happen on housing, an ageing population, air pollution, prisons, free movement, migration, and health? What will keep communities together as their known world in employment, and family life, and support collapse? How will education enable children to discover their talents and passions?

Human beings are more than what we do. The sense of a country of who we are and what we stand for is about more than our employment or what we earn. And we cannot live on slogans alone.

Who we in the UK think we will be after Brexit needs real and substantial answers. What are we going to *do* and *be* in the world?

Without this vision, any mandate voted for on June 9th will be made in the dark and open to future objection writ large. ‘We’ must be inclusive, based on a consensus, not simply a ‘mandate’.

Only with clear vision for all these facets fitting together in a model of how we will grow in all senses, will we be able to answer the question, is education preparing us [all] for the jobs of the future?

More than this, we must ask if education is preparing people for the lack of jobs, for changing relationships in our communities, with each other, and with machines.

Change is coming, Brexit or not. But Brexit has exacerbated the potential to miss opportunities, embed barriers, and see negative side-effects from changes already underway in employment, in an accelerated timeframe.

If our education policy today is not gearing up to that change, we must.

Failing a generation is not what post-Brexit Britain needs

Basically Britain needs Prof. Brian Cox shaping education policy:

“If it were up to me I would increase pay and conditions and levels of responsibility and respect significantly, because it is an investment that would pay itself back many times over in the decades to come.”

Don’t use children as ‘measurement probes’ to test schools

What effect does using school exam results to reform the school system have on children? And what effect does it have on society?

Last autumn Ofqual published a report and their study on consistency of exam marking and metrics.

The report concluded that half of pupils in English Literature, as an example, are not awarded the “correct” grade on a particular exam paper, due to marking inconsistencies and the design of the tests.

Given the complexity and sensitivity of the data, Ofqual concluded, it is essential that the metrics stand up to scrutiny and that there is a very clear understanding of the meaning and application of any quality of marking. They wrote that “there are dangers that information from metrics (particularly when related to grade boundaries) could be used out of context.”

Context and accuracy are fundamental to the value of and trust in these tests. And at the moment, trust is not high in the system behind it. There must also be trust in policy behind the system.

This summer, two sets of UK school tests will come under scrutiny: GCSEs and SATs. The goalposts are moving for children and schools across the country. And it’s bad for children and bad for Britain.

Grades A*-G will be swapped for numbers 9 to 1

15-16 year olds sitting GCSEs will see their exams shift to a numerical system, scored from the highest Grade 9 down to Grade 1, with the three top grades replacing the current A and A*. The alphabetical grading system will be fully phased out by 2019.

The plans intended that roughly the same proportion of students as have achieved a Grade C will be awarded a new Grade 4 and as Schools Week reported: “There will be two GCSE pass rates in school performance tables.”

One will measure grade 5s or above, and this will be called the ‘strong’ pass rate. And the other will measure grade 4s or above, and this will be the ‘standard’ pass rate.
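As an illustration of how those two headline measures would combine in practice, here is a minimal sketch (my own, not anything published by the DfE or Ofqual) of computing the ‘standard’ and ‘strong’ pass rates from one class’s numerical grades; the class and figures below are entirely hypothetical.

```python
# Illustrative sketch only (not DfE code): computing the two headline GCSE pass
# rates described above from a list of pupils' numerical grades, where 9 is the
# highest, 1 the lowest, and 'U' is unclassified.

def pass_rates(grades):
    """Return (standard_rate, strong_rate) as percentages for one subject cohort."""
    numeric = [g for g in grades if isinstance(g, int)]  # ignore 'U' entries
    total = len(grades)
    standard = sum(1 for g in numeric if g >= 4)  # grade 4 or above = 'standard' pass
    strong = sum(1 for g in numeric if g >= 5)    # grade 5 or above = 'strong' pass
    return 100 * standard / total, 100 * strong / total

# A hypothetical class of ten results
example = [9, 7, 6, 5, 5, 4, 4, 3, 2, 'U']
print(pass_rates(example))  # -> (70.0, 50.0)
```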

Laura McInerney summed up, “in some senses, it’s not a bad idea as it will mean it is easier to see if the measures are comparable. We can check if the ‘standard’ rate is better or worse over the next few years. (This is particularly good for the DfE who have been told off by the government watchdog for fiddling about with data so much that no one can tell if anything has worked anymore).”

There’s plenty of confusion among parents about how the numerical grading system will work. The confusion you can gauge in playground conversations is also reflected nationally in a more measurable way.

Market research in a range of audiences – including businesses, head teachers, universities, colleges, parents and pupils – found that just 31 per cent of secondary school pupils and 30 per cent of parents were clear on the new numerical grading system.

So that’s a change in the GCSE grading structure. But why? If more differentiators are needed, why not add one or two more letters and shift grade boundaries? A policy need for these changes is unclear.

Machine marking is training on ten year olds

I wonder if any of the shift to numerical marking, is due in any part to a desire to move GCSEs in future to machine marking?

This year, ten and eleven year olds, children in their last year of primary school, will have their SATs tests computer marked.

That’s everything in maths and English. Not multiple choice papers or one word answers, but full written responses. If their f, b or g doesn’t look like the correct  letter in the correct place in the sentence, then it gains no marks.

Parents are concerned about children whose handwriting is awful, but their knowledge is not. How well can they hope to be assessed? If exams are increasingly machine marked out of sight, many sent to India, where is our oversight of the marking process and accuracy?

The concerns I’ve heard simply among local parents and staff seem reflected in national discussions and at the inspectorate, Ofsted. TES has reported Ofsted’s most senior officials as saying that the inspectorate is just as reluctant to use this year’s writing assessments as it was in 2016. Teachers and parents locally are united in feeling it is not accurate, not fair, and not right.

The content is also to be tougher.

How will we know what is being accurately measured and the accuracy of the metrics with content changes at the same time? How will we know if children didn’t make the mark, or if the marks were simply not awarded?

The accountability of the process is less than transparent to pupils and parents. We have little opportunity for Ofqual’s recommended scrutiny of these metrics, or the data behind the system on our kids.

Causation, correlation and why we should care

The real risk is that no one will be able to tell if there is an error, where it stems from, or whether there is a reason, should pass rates be markedly different from what was expected.

After the wide range of changes across pupil attainment, exam content, school progress scores, and their interaction and dependencies, can they all fit together and be comparable with the past at all?

If the SATs are making lots of mistakes simply due to being bad at reading ten-year-olds’ handwriting, how will we know?

Or if GCSE scores are lower, will we be able to see if it is because they have genuinely differentiated the results in a wider spread, and stretched out the fail, pass and top passes more strictly than before?

What is likely is that this year’s set of children who were expecting As and A* at GCSE, but fail to be one of the two children nationally expected to get the new grade 9, will be disappointed to feel they are not, after all, as great as they thought they were.

And next year, if you can’t be the one or two to get the top mark, will the best simply stop stretching themselves and rest a bit easier, because, whatever, you won’t get those straight top grades anyway?

Even if children would not change behaviours were they to know, the target range scoring sent to schools by third-party data processors discourages teachers from stretching those at the top.

Politicians look for positive progress, but policies are changing that will increase the number of schools deemed to have failed. Why?

Our children’s results are being used to reform the school system.

Coasting and failing schools can be compelled to become academies.

Government policy on this forced academisation was rejected by popular revolt. It appears that the government is determined that schools *will* become academies with the same fervour that they *will* re-introduce grammar schools. Both are unevidenced and unwanted. But there is a workaround.  Create evidence. Make the successful scores harder to achieve, and more will be seen to fail.

A total of 282 secondary schools in England were deemed to be failing by the government this January, as they “have not met a new set of national standards”.

It is expected that even more will attain ‘less’ this summer. Tim Leunig, Chief Analyst & Chief Scientific Adviser at the Department for Education, made a personal guess at two reaching the top mark.

The context of this GCSE ‘failure’ is the changes in how schools are measured. Children’s progress over 8 subjects, or “P8”, is being used as an accountability measure of overall school quality.

But it’s really just: “a school’s average Attainment 8 score adjusted for pupils’ Key Stage 2 attainment.” [Dave Thomson, Education Datalab]
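To make that concrete, here is a minimal sketch of the Progress 8 idea (my own simplified illustration, not the DfE’s published methodology): each pupil’s Attainment 8 score is compared with a national average for pupils with the same Key Stage 2 prior attainment, and the school’s score is the mean of the differences. The KS2 bands and figures below are hypothetical.

```python
# Simplified sketch of the Progress 8 idea (not the DfE's published methodology):
# compare each pupil's Attainment 8 score with a national average Attainment 8 for
# pupils with the same Key Stage 2 prior attainment, then average the differences.

def progress_8(pupils, national_average_a8):
    """pupils: list of (ks2_band, attainment_8) tuples for one school."""
    diffs = [a8 - national_average_a8[ks2] for ks2, a8 in pupils]
    return sum(diffs) / len(diffs)

# Hypothetical lookup and school intake
national_average_a8 = {"low": 35.0, "middle": 48.0, "high": 62.0}
school = [("middle", 52.0), ("middle", 44.0), ("high", 60.0), ("low", 38.0)]
print(round(progress_8(school, national_average_a8), 2))  # -> 0.25
```

Even in this toy version, the score depends entirely on what the ‘expected’ figures are for each prior-attainment group, which is where contextual factors come in.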

Work done by FFT Education Datalab showed that contextualising P8 scores can lead to large changes for some schools.  (Read more here and here). You cannot meaningfully compare schools with different types of intake, but it appears that the government is determined to do so. Starting ever younger if new plans go ahead.

Data is being reshaped to tell stories to fit to policy.

Shaping children’s future

What this reshaping doesn’t factor in at all, is the labelling of a generation or more, with personal failure, from age ten and up.

All this tinkering with the data, isn’t just data.

It’s tinkering badly with our kids’ sense of self, their sense of achievement and aspiration, and with that, the country’s future.

Education reform has become the aim, and it has replaced the aims of education.

Post-Brexit Britain doesn’t need policy that delivers ideology. We don’t need “to use children as ‘measurement probes’ to test schools”.

Just as we shouldn’t use children’s educational path to test their net worth or cost to the economy. Or predict it in future.

Children’s education and human value cannot be measured in data.

DeepMind or DeepMined? NHS public data, engagement and regulation repackaged

A duty of confidentiality and the regulation of medical records are as old as the hills. Public engagement on attitudes to this in the context of the NHS has been done and published by established social science and health organisations in the last three years. So why is Google DeepMind (GDM) talking about it as if it’s something new? What might assumed consent NHS-wide mean in this new context of engagement? Given the side effects for public health and medical ethics of a step-change towards assumed consent in a commercial product environment, is this ‘don’t be evil’ shift to ‘do no harm’ good enough? Has regulation failed patients?
My view from the GDM patient and public event, September 20.

Involving public and patients

Around a hundred participants joined the Google DeepMind public and patient event in September. Paul Wicks gave his view of it in the BMJ afterwards, and rightly started with the fact that the event was held in the aftermath of some difficult questions.

Surprisingly, none were addressed in the event presentations. No one mentioned data processing failings, the hospital Trust’s duty of confidentiality, or criticisms in the press earlier this year. No one talked about the 5 years of past data from across the whole hospital or monthly extracts that were being shared and had first been extracted for GDM use without consent.

I was truly taken aback by the sense of entitlement that came across. The decision by the Trust to give away confidential patient records without consent in 2015/16 was either forgotten or ignored, and until the opportunity for questions, the future model was presented unquestioningly: the model for an NHS-wide, hand-held gateway to your records that this week’s announcement embeds.

What matters on reflection is that the overall reaction to this ‘engagement’ is bigger than the one event, bigger than the concepts of tools they could hypothetically consider designing, or lack of consent for the data already used.

It’s a massive question of principle, a litmus test for future commercial users of big, even national population-wide public datasets.

Who gets a say in how our public data are used? Will the autonomy of the individual be ignored as standard, assumed unless you opt out, and forgiveness asked for with a post-haste opt-out tacked on?

Should patients just expect any hospital can now hand over all our medical histories in a free-for-all to commercial companies and their product development without asking us first?

Public and patient questions

Where data may have been used in the algorithms of the DeepMind black box, there was a black hole in addressing patient consent.

Public engagement with those who are keen to be involved, is not a replacement for individual permission from those who don’t want to be, and who expected a duty of patient-clinician confidentiality.

Tellingly, the final part of the event tried to capture our opinions on how to involve the public. Right off the bat, the first question was one of privacy. Most asked questions about issues raised to date, rather than looking to design the future. Ignoring those and retrofitting a one-size-fits-all model under the banner of ‘engagement’ won’t work until they address the concerns of those people whose data they have already used, and the breach of trust that now jeopardises people’s future willingness to be involved, not only in this project but potentially in other research.

This should have been a learning event for Google, a company that is good at learning and uses people to do it, by both human and machine.

But from their reaction to the media after this week’s announcement, it seems not all feedback or lessons learned are welcome.

Google DeepMind executives were keen to use patient case studies and had patients themselves do most of the talking, saying how important data is in treating kidney conditions and in eye care, which I respect greatly. But there was very little apparent link between their experience and Google DeepMind at all, or the products created to date.

Google DeepMind has the data from every patient in the hospital in recent years, not only patients affected by this condition and not data from the people who will be supported directly by this app.

Yet Google DeepMind say this is “direct care”, not research. It is hard for it to be direct care when you are no longer under the hospital’s care. Implied consent for the use of sensitive health data needs to be relied on in alignment with the purposes for which it was given. It must be fair and lawful.

If data users don’t get that, or won’t accept it, they should get out of healthcare and our public data right now. Or heed the advice of critical friends and get it right, to be trustworthy in future.

What’s the plan ahead?

Beneath the packaging, this came across as a pitch on why Google DeepMind should get access to paid-for-by-the-taxpayer NHS patient data. They have no clinical background or duty of care. They say they want people to be part of a rigorous process, including a public/patient panel, but it’s a process they clearly want to shape and control, and for a future commercial model. Can a public panel be truly independent, and ethical, if profit plays a role?

Of course it’s rightly exciting for healthcare to see innovation and drives towards better clinical care, but not only the intent but how it gets done matters. This matters because it’s not a one-off.

The anticipation of ‘if only we could access the whole NHS data cohort’ was tangible in the room, and what a gift it would be to commercial companies and product makers. Wrapped in heart-wrenching stories. Stories of real patients, with real lives, who genuinely want improvement for all. Who doesn’t want that? But hanging on the coat-tails of Mr Suleyman were a range of commercial companies and third-party orgs asking for the same.

In order to deliver those benefits and avoid their risks, there is a well-established framework of regulation and oversight of UK practitioners and of the use of medical records, medical devices and tools: the General Medical Council, the Health and Social Care Information Centre (now called ‘NHS Digital’), the Confidentiality Advisory Group (CAG) and more, all have roles to play.

Google DeepMind and the Trusts have stepped outwith that framework and been playing catch up not only with public involvement, but also with MHRA regulatory approval.

One of the major questions is around the invisibility of data science decisions that have direct interventions in people’s life and death.

The ethics of data sciences in which decisions are automated, requires us to “guard against dangerous assumptions that algorithms are near-perfect, or more perfect than human judgement.”  (The Opportunities and Ethics of Big Data. [1])

If Google DeepMind now plans to share their API widely who will proof their tech? Who else gets to develop something similar?

Don’t be evil 2.0

Google DeepMind appropriated ‘do no harm’ as the health event motto, echoing the once-favoured Google motto ‘don’t be evil’.

However, they really needed to address the fact that some patients’ fragile trust in their clinicians has already been harmed, before DeepMind has even run an algorithm on the data, simply because patient data was given away without patients’ permission.

A former Royal Free patient spoke to me at the event and said they were shocked to have first read in the papers that their confidential medical records had been given to Google without their knowledge. Another said his mother had been part of the cohort and has concerns. Why weren’t they properly informed? The public engagement work they should, to my mind, be doing is with the individual patients of the London hospital whose data they have already been using without their consent: explaining why they got their confidential medical records without telling them, and addressing their questions and real concerns. Not at a flash public event.

I often think in the name, they just left off the ‘e’. They are Google. We are the deep mined. That may sound flippant but it’s not the intent. It’s entirely serious. Past patient data was handed over to mine, in order to think about building a potential future tool.

There was a lot of if, future, ambition, and sweeping generalisations and ‘high-level sketches’ of what might be one day. You need moonshots to boost discovery, but losing patient trust even of a few people, cannot be a casualty we should casually accept. For the company there is no side effect. For patients, it could last a lifetime.

If you go back to the roots of health care, you could take the since misappropriated Hippocratic Oath and quote not only, as Suleyman did, “do no harm” , but the next part. “I will not play God.”

Patriarchal top down Care.data was a disastrous model of engagement that confused communication with ‘tell the public loudly and often what we want to happen, what we think best, and then disregard public opinion.’ A model that doesn’t work.

The recent public engagement event on the National Data Guardian’s work on consent models certainly appears, from the talks, to be learning those lessons. To get it wrong in commercial use will be disastrous.

The far greater risk from this misadventure is not company reputation, which seems to be Google DeepMind’s greatest concern. The risk that Google DeepMind seems prepared to take is one that comes not at its own cost, but at that of public trust in the hospitals and the NHS brand, in public health, and in its research.

Commercial misappropriation of patient data without consent could set back the restoration of public trust, and the work towards a better model that has been in progress since the care.data car crash of 2013.

You might be able to abdicate responsibility if you think you’re not the driver. But where does the buck stop for contributory failure?

All this, says Google DeepMind, is nothing new, but Google isn’t other companies, and this is a massive pilot move by a corporate giant into first appropriating and then brokering access to NHS-wide data to make an as-yet opaque private profit. And being paid by the hospital trust to do so. Creating a data-sharing access infrastructure for the Royal Free is product development, and one that had no permission to use 5 years’ worth of patient records to do so.

The care.data catastrophe may have damaged public trust and data access for public interest research for some time, but it did so doing commercial interests a massive favour. An assumption of ‘opt out’ rather than ‘opt in’ has become the NHS model. If the boundaries are changing of what is assumed under that, do the public still have no say in whether that is satisfactory? Because it’s not.

This example should highlight why an opt out model of NHS patient data is entirely unsatisfactory and cannot continue for these uses.

Should boundaries be in place?

So should boundaries be in place in the NHS before this spreads? Hell yes. If, as Mustafa said, it’s not just about developing technology but about the process, regulatory and governance landscapes, then we should be told why their existing use of patient data, intended for the Streams app development, steamrollered through the existing legal and ethical landscapes we have today. Those frameworks exist to protect patients from quacks and skullduggery.

This then becomes about the duty of the controller and rights of the patient. It comes back to what we release, not only how it is used.

Can a panel of highly respected individuals intervene to embed good ethics if plans conflict with the purpose of making money from patients? Where are the boundaries between private and public good? Where they quash consent, where are its limitations and who decides? What boundaries do hospital trusts think they have on the duty of confidentiality?

Responsibility lies with the hospitals, as the data controllers for information received through their clinicians.

What is next for Trusts? Giving an entire hospital patient database to supermarket pharmacies, because they too might make a useful tool? Mash up your health data with your loyalty card? All under assumed consent because product development is “direct care” because it’s clearly not research? Ethically it must be opt in.

App development is not using data for direct care. It is product development. Post-truth packaging won’t fly. Dressing up the donkey by simply calling it another name won’t transform it into a unicorn, no matter how much you want to believe in it.

“In some sense I recognise that we’re an exceptional company, in other senses I think it’s important to put that in the wider context and focus on the patient benefit that we’re obviously trying to deliver.” [TechCrunch, November 22]

We’ve heard the cry to focus on the benefit before. Right before care.data failed to communicate to 50m people what it was doing with their health records. Why does Google think they’re different? They don’t. They’re just another company normalising this, they say.

The hospitals meanwhile, have been very quiet.

What do patients want?

This was what Google DeepMind wanted to hear in the final 30 minutes of the event, but didn’t get to, as all the questions were about what they had done so far, and why.

There is already plenty of evidence of what the public wants on the use of their medical records, from public engagement work around NHS health data use in workshops and surveys since 2013. Public opinion is pretty clear. Many say companies should not get NHS records for commercial exploitation without consent at all (in the ESRC public dialogues on data in 2013, the Royal Statistical Society’s ‘data trust deficit with lessons for policymakers’ work with Ipsos MORI in 2014, and the Wellcome Trust One-Way Mirror work in 2016, as well of course as the NHS England care.data public engagement workshops in 2014).


All those surveys and workshops show the public have consistent levels of concern about a lack of control over who has access to their NHS data, for what purposes, with unlimited scope or duration; and commercial use of their data is a red line for many people.

A red line which this Royal Free Google DeepMind project appeared to want to wipe out, as if it had never been drawn at all.

I am sceptical that Google DeepMind has not done their research into existing public opinion on health data uses and research.

Those studies in public engagement already done by leading health and social science bodies state clearly that commercial use is a red line for some.

So why did they cross it without consent? Tell me why I should trust the hospitals to get this right with this company but trust you not to get it wrong with others. Because Google’s the good guys?

If this event, and the thinking ‘let’s get patients to front our drive towards getting more data’, sought to legitimise what they and these London hospitals are already getting wrong, I’m not sure that just ‘because we’re Google’, being big, bold and famous for creative disruption, is enough. This is a different game afoot. It will be a game-changer for patient rights to privacy if this scale of commercial product exploitation of identifiable NHS data becomes the norm, decided at will at a local level. No matter how terrific the patient benefit may be, hospitals can’t override patient rights.

If this steamrollers over consent and regulations, what next?

Regulation revolutionised, reframed or overruled

The invited speaker from Patients4Data spoke in favour of commercial exploitation as a benefit for the NHS but as Paul Wicks noted, was ‘perplexed as to why “a doctor is worried about crossing the I’s and dotting the T’s for 12 months (of regulatory approval)”.’

Appropriating public engagement is one thing. Appropriating what is seen as acceptable governance and oversight is another. If a new accepted model of regulation comes from this, we can say goodbye to the old one.  Goodbye to guaranteed patient confidentiality. Goodbye to assuming your health data are not open to commercial use.  Hello to assuming opt out of that use is good enough instead.

Trusted public regulatory and oversight frameworks exist for a reason. But they lag behind the industry and what some are doing. And if big players face no retribution for skipping around them and then being approved in hindsight, there’s not much incentive to follow the rules from the start. As TechCrunch suggested after the event, this is all “pretty standard playbook for tech firms seeking to workaround business barriers created by regulation.”

Should patients just expect any hospital can now hand over all our medical histories in a free-for-all to commercial companies without asking us first? It is for the Information Commissioner to decide whether the purposes of product design were what patients expected their data to be used for, when treated 5 years ago.

The state needs to catch up fast. The next private appropriation of the regulation of  AI collaboration oversight, has just begun. Until then, I believe civil society will not be ‘pedalling’ anything, but I hope will challenge companies cheek by jowl in any race to exploit personal confidential data and universal rights to privacy [2] by redesigning regulation on company terms.

Let’s be clear. It’s not direct care. It’s not research. It’s product development. For a product on which the commercial model is ‘I don’t know‘. How many companies enter a 5 year plan like that?

Benefit is great. But if you ignore the harm you are doing in real terms to real lives, and only fail to see it because they’ve not talked to you, ask yourself why that is, not why you don’t believe it matters.

There should be no competition in what is right for patient care and data science and product development. The goals should be the same. Safe uses of personal data in ways the public expect, with no surprises. That means consent comes first in commercial markets.


[1] Olivia Varley-Winter, Hetan Shah, ‘The opportunities and ethics of big data: practical priorities for a national Council of Data Ethics.’ Theme issue ‘The ethical impact of data science’ compiled and edited by Mariarosaria Taddeo and Luciano Floridi. [The Royal Society, Volume 374, issue 2083]

[2] Universal rights to privacy: Upcoming Data Protection legislation (GDPR), already in place and enforceable from May 25, 2018, requires additional attention to fair processing, consent, the right to revoke it, the right to access one’s own data and to seek redress for inaccurate data. “The term “child” is not defined by the GDPR. Controllers should therefore be prepared to address these requirements in notices directed at teenagers and young adults.”

The Rights of the Child: Data policy and practice about children’s confidential data will impinge on principles set out in the United Nations Convention on the Rights of the Child, Article 12, the right to express views and be heard in decisions about them and Article 16 a right to privacy and respect for a child’s family and home life if these data will be used without consent. Similar rights that are included in the common law of confidentiality.

Article 8 of the Human Rights Act 1998, incorporating the European Convention on Human Rights Article 8.1 and 8.2, provides that there shall be no interference by a public authority with the respect for private and family life that is neither necessary nor proportionate.

Judgment of the Court of Justice of the European Union in the Bara case (C‑201/14) (October 2015) reiterated the need for public bodies to legally and fairly process personal data before transferring it between themselves. Trusts need to respect this also with contractors.

The EU Charter of Fundamental Rights, Articles 7 and 8, also protects the rights of individuals to privacy and to the protection of personal data, and Article 52 protects the essence of these freedoms.

Datasharing, lawmaking and ethics: power, practice and public policy

“Lawmaking is the Wire, not Schoolhouse Rock. It’s about blood and war and power, not evidence and argument and policy.”

"We can't trust the regulators," they say. "We need to be able to investigate the data for ourselves." Technology seems to provide the perfect solution. Just put it all online - people can go through the data while trusting no one.  There's just one problem. If you can't trust the regulators, what makes you think you can trust the data?" 

Extracts from The Boy Who Could Change the World: The Writings of Aaron Swartz. Chapter: ‘When is Technology Useful? ‘ June 2009.

The question keeps getting asked, is the concept of ethics obsolete in Big Data?

I’ve come to some conclusions why ‘Big Data’ use keeps pushing the boundaries of what many people find acceptable, and yet the people doing the research, the regulators and lawmakers often express surprise at negative reactions. Some even express disdain for public opinion, dismissing it as ignorant, not ‘understanding the benefits’, yet to be convinced. I’ve decided why I think what is considered ‘ethical’ in data science does not meet public expectation.

It’s not about people.

Researchers using large datasets often have a foundation in data science, applied computing or maths, and don’t see data as people. It’s only data. Creating patterns, correlations, and analyses of individual-level data is not seen as research involving human subjects.

This is embodied in the umpteen research ethics reviews I have read in the last year in which the question is asked: does the research involve people? The answer given is invariably ‘no’.

And these data analysts using, let’s say, health data are not working in a subject founded on any ethical principle, in contrast with the medical world the data come from.

The public feels differently about the information that is about them, and may be known only to them or to select professionals. The values we as the public attach to our data, and our expectations of how it is handled, may reflect the expectations we have of how we are handled as the people connected to it. We see our data as all about us.

The values that are put on data, and on how it can and should be used, can therefore be at odds with one another; the public perception is not shared by the researchers. This may be especially true if researchers are using data which has been de-identified, although it may not be anonymous.

New legislation on the horizon, the Better Use of Data in Government measures, intends to fill the [loop]hole between what was legal to share in the past and what some want to exploit today, and it highlights a gap between the uses of data by public interest, academic researchers and the uses by government actors. The first by and large incorporate privacy and anonymisation techniques by design; the second is designed for the applied use of identifiable data.

Government departments and public bodies want to identify and track people who are somehow misaligned with the values of the system; either through fraud, debt, Troubled Families, or owing Student Loans. All highly sensitive subjects. But their ethical data science framework will not treat them as individuals, but only as data subjects. Or as groups who share certain characteristics.

The system again intrinsically fails to see these uses of data as being about individuals, but sees them as categories of people – “fraud” “debt” “Troubled families.” It is designed to profile people.

Services that weren’t built for people, but for government processes, result in datasets used in research that aren’t well designed for research. So we now see attempts to shoehorn historical practices into data use by modern data science practitioners, with policy that is shortsighted.

We can’t afford for these things to be so off axis, if civil service thinking is exploring “potential game-changers such as virtual reality for citizens in the autism spectrum, biometrics to reduce fraud, and data science and machine-learning to automate decisions.”

In an organisation such as DWP this must be really well designed since “the scale at which we operate is unprecedented: with 800 locations and 85,000  colleagues, we’re larger than most retail operations.”

The power to affect individual lives through poor technology is vast, and some impacts seem to be badly ignored. The ‘real time earnings’ database, intended to improve the accuracy of benefit payments, was widely agreed to have been harmful to some individuals through the Universal Credit scheme, with delayed payments leaving families at food banks, and contributing to worse.

“We believe execution is the major job of every business leader” is perhaps not the best wording in the context of DWP data uses.

What accountability will be built-by design?

I’ve been thinking recently about drawing a social ecological model of personal data empowerment or control. Thinking about visualisation of wants, gaps and consent models, to show rather than tell policy makers where these gaps exist in public perception and expectations, policy and practice. If anyone knows of one on data, please shout. I think it might be helpful.

But the data *is* all about people

Regardless of whether they are in front of you or numbers on a screen, big or small, datasets using data about real lives are data about people. And that triggers a need to treat the data with the same ethical approach as you would people involved face-to-face.

Researchers need to stop treating data about people as meaningless data because that’s not how people think about their own data being used. Not only that, but if the whole point of your big data research is to have impact, your data outcomes, will change lives.

Tosh, I know some say. But, as I have argued, the reason is that the applications of the data science / research / policy findings / impact of immigration in education review / [insert purposes of the data user’s choosing] are designed to have an impact on people. Often the people about whom the research is done without their knowledge or consent. And while most people say that is OK where it’s public interest research, the possibilities are outstripping what the public has expressed as acceptable, and few seem to care.

Evidence from public engagement and ethics all says that hidden pigeon-holing, profiling, is unacceptable. Data Protection law has special requirements for it, on automated decisions. ‘Profiling’ is now clearly defined under Article 4 of the GDPR as “any form of automated processing of personal data consisting of using those data to evaluate certain personal aspects relating to a natural person, in particular to analyse or predict aspects concerning that natural person’s performance at work, economic situation, health, personal preferences, interests, reliability, behaviour, location or movements.”

Research using big datasets that ‘isn’t interested in individuals’ may still intend to create results profiling groups for applied policing, or to discriminate, or to make knowledge available by location. The data may have been de-identified, but in application it is no longer anonymous.

Big Data research that results in profiling groups may be intended for good, applied health policy impacts; improving a particular ethnic minority’s access to services, for example, may be the very point of the research.

Then look at the voting process changes in North Carolina and see how that same data, the same research knowledge might be applied to exclude, to restrict rights, and to disempower.

Is it possible to have ethical oversight that can protect good data use and protect people’s rights if they conflict with the policy purposes?

The “clear legal basis” is not enough for public trust

Data use can be legal and still be unethical, harmful and shortsighted in many ways, both for the impacts on research – people withholding data, falsifying data, or avoiding the system so as not to give data at all – and for the lives it will touch.

What education has to learn from health is whether it will permit uses by ‘others’ outside education to jeopardise the collection of school data intended to be in the best interests of children, not the system. In England it must start to analyse what is needed versus what is wanted: what is necessary and proportionate, and what justifies maintaining named data indefinitely, exposed to changing scope.

In health, the most recent Caldicott review suggests scope change by design – that is a red line for many: “For that reason the Review recommends that, in due course, the opt-out should not apply to all flows of information into the HSCIC. This requires careful consideration with the primary care community.”

The community already spoke out, strongly, in spring and summer 2014, that there must be an absolute right to confidentiality to protect patients’ trust in the system. Scope that ‘sounds’ like it might sneakily change in future will be a death knell to public interest research, because repeated trust erosion will be fatal.

Laws change to allow scope change without informing people whose data are being used for different purposes

Regulators must be seen to be trusted, if the data they regulate is to be trustworthy. Laws and regulators that plan scope for the future watering down of public protection, water down public trust from today. Unethical policy and practice, will not be saved by pseudo-data-science ethics.

Will those decisions in private political rooms be worth the public cost to research, to policy, and to the lives it will ultimately affect?

What happens when the ethical black holes in policy, lawmaking and practice collide?

At the last UK HealthCamp towards the end of the day, when we discussed the hard things, the topic inevitably moved swiftly to consent, to building big databases, public perception, and why anyone would think there is potential for abuse, when clearly the intended use is good.

The answer came back from one of the participants, “OK now it’s the time to say. Because, Nazis.” Meaning, let’s learn from history.

Given the state of UK politics, Go Home van policies, restaurant raids, the possibility of Trump getting access to UK sensitive data of all sorts from across the Atlantic, given recent policy effects on the rights of the disabled and others, I wonder if we would hear the gentle laughter in the room in answer to the same question today.

With what is reported as a sharp change in Whitehall’s digital leadership today, the future of digital in government services, policy and lawmaking does indeed seem to be more “about blood and war and power” than “evidence and argument and policy”.

The concept of ethics in datasharing using public data in the UK is far from becoming obsolete. It has yet to begin.

We have ethical black holes in big data research, in big data policy, and in big data practice in England. The conflicts are there: between public interest research and government uses of population-wide datasets, between how the public perceive our data are used and how they are actually used, and in the gaps and tensions between policy and practice.

We are simply waiting for the Big Bang. Whether it will be creative or destructive, we are yet to feel.

*****

image credit: LIGO – graphical visualisation of black holes on the discovery of gravitational waves

References:

Report: Caldicott review – National Data Guardian for Health and Care Review of Data Security, Consent and Opt-Outs 2016

Report: The One-Way Mirror: Public attitudes to commercial access to health data

Royal Statistical Society Survey carried out by Ipsos MORI: The Data Trust Deficit

George and the Chinese Dragon. Public spending and the cost of dignity.

In 2005 I sat early one morning in an enormous international hotel chain’s breakfast room, in Guangzhou.

Most of the men fetching two adult breakfasts from the vast buffet wore cream-coloured chinos and button-down shirts. They sported standardised haircuts with hints of silver. Stylish women sat at impeccable tables, cradling babies in pink hats or spoon-feeding small children.

On a busy downtown street, close to the Chinese embassy, the hotel was popular with American parents-to-be.

My local colleague explained to me later that her sadness over the thousands of Chinese daughters exported from a one-child policy nation in 2005 was countered by the hope that loving foreign families were found for them.

She repeated, with dignity, party mantras and explanations drilled at school. She had a good job (but still she could not afford children). Too little land, too few schools, and healthcare too expensive. She sighed. Her eyes lit up as she looked at my bump and asked if I knew “girl or boy?” If it were a girl, she added, how beautiful she would be, with large open eyes. We laughed about the contradictory artificial stereotypes of beauty, from East and West, each nation wanting what the other did not have.

Ten years later, in 2015, British Ministers have often been drawing on China as a model for us to follow: in health, in education and for the economy. Seeking something they think we do not have. Seeking to instil ‘discipline, hard-working, economy-first’ spin.

At the recent ResearchEd conference, Nick Gibb, [1] Minister of State at the Department for Education, spent several minutes talking about the BBC documentary “Are Our Kids Tough Enough?” and the positive values of China and its education system. The programme supposedly triggered ‘a global debate’ when British pupils experienced “the harsh discipline of a Chinese classroom”.

The Global Times praised the Chancellor, Mr. Osborne, as “the first Western official in recent years who focused on business potential rather than raising a magnifying glass to the ‘human rights issue’” during his recent visit [2], when he put economic growth first.

Jeremy Hunt, Secretary of State for Health, was quoted at the party conference saying that he saw the tax credit cuts as necessary for a cultural shift. He suggested we should adopt the ‘hardworking’ character of the Chinese.

An attribute that is as artificial as it is inane.

Collective efforts over the last year or more to project ‘hard-working’ as a measure of contribution to UK society have become more concentrated in politics, especially around the election. People who are not working are undermined by statements implying that the less productive someone is for the nation, the less value they have as a person. The comments come in a sustained drip feed, from Lord Freud’s remarks a year ago that disabled workers were not worth the full wage, to Hancock’s recent revelation that the decision not to apply the new minimum wage to the under-25s from 2016 “was an active policy choice.” Mr. Hunt spoke about dignity as self-earned, not dependent on richness per se, but on being self-made.

“If that £16,500 is either a high proportion or entirely through the benefit system you are trapped. It matters if you are earning that yourself, because if you are earning it yourself you are independent and that is the first step towards self-respect.”

This choice to value some people’s work less than others’, and the acceptance of such spin, is concerning.

What values are Ministers suggesting we adopt in the relentless drive for economic growth? [3] When our Ministers ignore human rights and laud Chinese values in a bid to be seen as an accepting trading partner, I wonder at what cost to our international integrity?

Simple things we take for granted, such as unimpeded internet access, are not available in China. In Chinese society, hard work is not seen as such a positive value. It is a tolerated norm, and sometimes an imposed one at that, where parents leave their child with grandparents in the countryside and visit twice a year on leave from their city-based jobs. Our Ministers’ version of the hardworking Chinese is idyllic spin compared with reality.

China is about to launch a scheme to measure sincerity and how each citizen compares with others in terms of compliance and dissent. Using people’s social media data to determine their ‘worth’ is an ominous prospect.

Mark Kitto’s 2012 piece on why you’ll never be Chinese is a great read. I agree, “there are hundreds of well-rounded, wise Chinese people with a modern world view, people who could, and would willingly, help their motherland face the issues that are growing into state-shaking problems.”

Despite such institutional issues, Mr. Osborne appears to have an open door for deals with the Chinese state. Few people missed the announcements he made in China that HS2 will likely be built by Chinese investors, despite home-grown opposition. Ministers and EDF have reportedly agreed on a controversial £25bn development of the Hinkley Point C nuclear plant, with most of the upfront costs provided by Chinese companies, although “we are the builders.” [4]

With large parts of the UK’s utilities infrastructure founded on Chinese-sourced spending, it’s hard to see who ‘we’ are meant to be. [5] And that infrastructure is a two-way trade. Just as Chinese money has bought many of our previously publicly owned utilities, we have sold a staggeringly long list of security-related items to the Chinese state. [6]

In July 2014 the four House of Commons Select Committees “repeated their previous Recommendation that the Government should apply significantly more cautious judgements when considering arms export licence applications for goods to authoritarian regimes which might be used for internal repression.”

[image: UK to China exports]
Chris Patten, the former Hong Kong Governor, criticised Osborne’s lax attitude to human rights, but individual and collective criticism appears to go unheard.

This, perhaps, is one measure of British economic growth at all costs. Not only is Britain supplying equipment that might be used for internal repression, but the Minister appears to have adopted a singularly authoritarian attitude, and the democratic legitimacy of the Committees has been ignored. That is concerning.

The packaging of the upcoming cuts is already clear. We will find out what “hard working families” means to the Treasury. We need to work harder, like the Chinese, and through this approach we will earn our dignity. No doubt rebuilding Britain on great British values. Welfare will continue to be labelled as benefits, and with it comes a value judgement that equates economic productivity with human worth. Cutting welfare will be packaged as helping people to help themselves out of self-inflicted ‘bad’ situations, in which they have lost their self-worth or found an easy ‘lifestyle choice’.

As welfare spending is reduced, the percentage spent with big service providers has risen after reforms, and private companies profit where money was once recycled within the state system. There is a glaring gap in the evidence for some of the decisions taken.

What is next? If, for example, universal benefits such as Universal Infant Free School Meals are cut, it will take food literally from the mouths of babes: from families who cannot afford to lose hot school dinners, living in poverty but not qualifying for welfare. The policy may be flawed, because Free School Meals based on pupil premium entitlement do not cater for all who need them, but catering for none of them is not an improvement.

Ministers focus the arguments of worth and value around the individual. Doctors have been told to work harder. Schools have been told to offer more childcare to enable parents to work harder. How much harder can we really expect people to work? Is the Treasury’s vision for us all to work more to pay more taxes? It is flawed if, in adopting that political aim, the vast majority of people take home little more pay and sacrifice spare time with friends and loved ones, running their health into the ground as a result.

The Chinese have a proverb that shows a wisdom missing from Ministers’ recent comments: “Time is money, and it is difficult for one to use money to get time.”

I often remember the hotel breakfast room, and wonder how many mothers, in how many cities in China, miss their daughters, whom they could not afford to keep through fear of the potential effect. How many young men who would want women in their lives find the gender imbalance a barrier to meeting someone. How many are struggling to care for elderly parents.

Not all costs can be measured in money.

The grandmother I met on the station platform last Wednesday had looked after her grandchild for half the day and has him overnight on weekdays, so that Mum can first sleep and then work a night shift stacking shelves. That’s her daughter’s second shift of the day. She hardly sees her son. The husband works the shelf-stacking third shift to supplement his income as a mechanic.

That is a real British family.

Those parents can’t work any harder. Their family is already at breaking point. They take no state welfare.  They don’t qualify for any support.

Must we be so driven to become ‘hard working families’ that our children will barely know their parents? Are hungry pupils to make do as best they can at lunchtime? Are these the prices children must be prepared to pay if their parents work ever harder to earn enough to live, and to earn their ‘dignity’ as defined by the Secretary of State for Health?

Dignity is surely inherent in being human. Not something you earn by what you do. At the heart of human rights is the belief that everybody should be treated equally and with dignity – no matter what their circumstances.

If we adopt the Ministers’ be-like-the-Chinese mantra, and accept that human dignity is something that must be earned, we should ask now what price they have put on it.

MPs must slay the dragon of spin and demand transparency of the total welfare budget and of government spend with its delivery providers. There is a high public cost to further public spending cuts. To justify them, it is not the public who must work harder, but the Treasury: to deliver a transparent business case for what the further sacrifices of ‘hard working families’ will achieve.

 

###

[1] ResearchEd conference, Nick Gibb, Minister of State at the Department for Education

[2] New Statesman

[3] https://www.opendemocracy.net/ournhs/jen-persson/why-is-government-putting-health-watchdogs-on-leash-of-%E2%80%98promoting-economic-growth

[4] The Sun: George Osborne party conference speech with 25 mentions of builders: “We are the builders”, said Mr. Osborne.

[5] The Drum: Li Ka Shing and British investment https://www.thedrum.com/opinion/2015/01/28/meet-li-ka-shing-man-o2-his-sights-has-quietly-become-one-britains-biggest

[6] Arms exports to authoritarian regimes and countries of concern worldwide The Committees http://www.publications.parliament.uk/pa/cm201415/cmselect/cmquad/608/60805.htm#a104

 

[image: Wassily Kandinsky ca 1911, George and the Dragon]