In 2018, ethics became the new fashion in UK data circles.
The launch of the Women Leading in AI principles of responsible AI has prompted me to try to finish and post these thoughts, which have been on my mind for some time. If two parts of 1,000 words is tl;dr for you, then in summary, we need more action on:
Ethics as a route to regulatory avoidance.
Framing AI and data debates as a cost to the Economy.
Reframing the debate around imbalance of risk.
Challenging the unaccountable and the ‘inevitable’.
In 2019, the calls to push aside old wisdoms for new, for everyone to focus on the value-laden words of ‘innovation’ and ‘ethics’, appear an ever louder attempt to reframe regulation and law as barriers to business, and to ask that they be cast aside.
On Wednesday evening, at the launch of the Women Leading in AI principles of responsible AI, the chair of the CDEI said in closing that he was keen to hear from companies where, “they were attempting to use AI effectively and encountering difficulties due to regulatory structures.”
Perhaps it’s ingenious PR to make sure that what is in effect self-regulation, right across the business model, looks like it comes imposed from others, from the very bodies set up to fix it.
But as I think about in part 2, is this healthy for UK public policy and the future not of an industry sector, but a whole technology, when it comes to AI?
Framing AI and data debates as a cost to the Economy
Companies, organisations and individuals arguing against regulation are framing the debate as if it would come at a great cost to society and the economy. But we rarely hear what effect they expect on their own company, or what cost/benefit they expect for themselves. It’s disingenuous to have only part of that conversation. In fact, the AI debate would be richer were it to be included. If companies think their innovation or profits are at risk from non-use, or from regulated use, and there is risk to the national good associated with these products, we should be talking about all of that.
And in addition, we can talk about use and non-use in society. Too often, the whole debate is intangible. Show me real costs, real benefits. Real risk assessments. Real explanations that speak human. Industry should show society what’s in it for them.
You don’t want it to ‘turn out like GM crops’? Then learn their lessons on transparency and trustworthiness, and avoid the hype. And understand that sometimes there is simply tech people do not want.
Reframing the debate around imbalance of risk
And while we often hear about the imbalance of power associated with using AI, we also need to talk about the imbalance of risk.
And where company owners may see no risk from the product they assure is safe, there are intangible risks that need factored in, for example in education where a child’s learning pathway is determined by patterns of behaviour, and how tools shape individualised learning, as well as the model of education.
Companies may change business model, ownership, and move on to other sectors after failure. But with the levels of unfairness already felt in the relationship between the citizen and State — in programmes like Troubled Families, Universal Credit, Policing, and Prevent — where use of algorithms and ever larger datasets is increasing, long term harm from unaccountable failure will grow.
Society needs a rebalance of the system urgently to promote transparent fairness in interactions, including but not only those with new applications of technology.
We must find ways to reframe how this imbalance of risk is assessed, and is distributed between companies and the individual, or between companies and state and society, and enable access to meaningful redress when risks turn into harm.
If we are to do that, we need first to separate truth from hype, public good from self-interest and have a real discussion of risk across the full range from individual, to state, to society at large.
That’s not easy against a non-neutral backdrop, scant sources of unbiased evidence, and corporate capture.
Challenging the unaccountable and the ‘inevitable’.
In 2017 the Care Quality Commission published a report into online services in the NHS, and found serious concerns of unsafe and ineffective care. They have a cross-regulatory working group.
By contrast, no one appears to oversee that risk and the embedded use of automated tools involved in decision-making or decision support, in children’s services, or education. Areas where AI and cognitive behavioural science and neuroscience are already in use, without ethical approval, without parental knowledge or any transparency.
Meanwhile, as all this goes on, many academics are busy debating how to fix algorithmic bias, accountability and transparency.
Few are challenging the narrative of the ‘inevitability’ of AI.
Julia Powles and Helen Nissenbaum recently wrote that many of these current debates are an academic distraction, removed from reality. It is under appreciated how deeply these tools are already embedded in UK public policy. “Trying to “fix” A.I. distracts from the more urgent questions about the technology. It also denies us the possibility of asking: Should we be building these systems at all?”
Challenging the unaccountable and the ‘inevitable’ is the title of the conclusion of the Women Leading in AI report on principles, and makes me hopeful.
“There is nothing inevitable about how we choose to use this disruptive technology. […] And there is no excuse for failing to set clear rules so that it remains accountable, fosters our civic values and allows humanity to be stronger and better.”
“A new, a vast, and a powerful language is developed for the future use of analysis, in which to wield its truths so that these may become of more speedy and accurate practical application for the purposes of mankind than the means hitherto in our possession have rendered possible.” [on Ada Lovelace, the First Tech Visionary, The New Yorker, 2013]
What would Ada Lovelace have argued for in today’s AI debates? I think she may have used her voice not only to call for the good use of data analysis, but for her second strength: the power of her imagination.
James Ball recently wrote in The European [1]:
“It is becoming increasingly clear that the modern political war isn’t one against poverty, or against crime, or drugs, or even the tech giants – our modern political era is dominated by a war against reality.”
My overriding takeaway from three days spent at the Conservative Party Conference this week was similar. It reaffirmed the title of a school debate I lost at age 15: ‘We only believe what we want to believe.’
James writes that it is “easy to deny something that’s a few years in the future”, and that Conservatives, “especially pro-Brexit Conservatives – are sticking to that tried-and-tested formula: denying the facts, telling a story of the world as you’d like it to be, and waiting for the votes and applause to roll in.”
These positions are not confined to one party’s politics, or speeches of future hopes, but define perception of current reality.
I spent a lot of time listening to MPs. To Ministers, to Councillors, and to party members. At fringe events, in coffee queues, on the exhibition floor. I had conversations pressed against corridor walls as small press-illuminated swarms of people passed by with Johnson or Rees-Mogg at their centre.
In one panel I heard a primary school teacher deny that child poverty really exists, or affects learning in the classroom.
In another, in passing, a digital Minister suggested that Pupil Referral Units (PRU) are where most of society’s ills start, but as a Birmingham head wrote this week, “They’ll blame the housing crisis on PRUs soon!” and “for the record, there aren’t gang recruiters outside our gates.”
This is no tirade on failings of public policymakers however. While it is easy to suspect malicious intent when you are at, or feel, the sharp end of policies which do harm, success is subjective.
It is clear that an overwhelming sense of self-belief exists in those responsible, in the intent of any given policy to do good.
Where policies include technology, this is underpinned by a self-reaffirming belief in its power. Power waiting to be harnessed by government and the public sector. Even more appealing where it is sold as a cost-saving tool in cash-strapped councils. Many that have cut away human staff are now trying to use machine power to make decisions. Some of the unintended consequences of taking humans out of the process are catastrophic for human rights.
The disconnect between perception of risk, the reality of risk, and real harm, whether perceived or felt from these applied policies in real life, is not so much ‘easy to deny something that’s a few years in the future’ as Ball writes, but a denial of the reality now.
Concerningly, there is a lack of imagination of what real harms look like. There is no discussion of where these predictive policies sometimes have no positive effect, or even a negative one, and make things worse.
I’m deeply concerned that there is an unwillingness to recognise any failures in current data processing in the public sector, particularly at scale, and where it regards the well-known poor quality of administrative data. Or to be accountable for its failures.
Harms, existing harms to individuals, are perceived as outliers. Any broad sweep of harms across a policy like Universal Credit seems to be perceived as political criticism, which makes the measurable failures less meaningful, less real, and less necessary to change.
There is a worrying growing trend of finger-pointing exclusively at others’ tech failures instead. In particular, social media companies.
Imagination and mistaken ideas are reinforced where the idea is plausible, and shared. An oft-heard and self-affirming belief was repeated in many fora between policymakers, media and NGOs regarding children’s online safety: “There is no regulation online”. In fact, much that applies offline applies online. The Crown Prosecution Service Social Media Guidelines are a good place to start. [2] But no one discusses where children’s lives may be put at risk, or made less safe, through the use of state information about them.
Policymakers want data to give us certainty. But many uses of big data and new tools appear to do little more than quantify moral fears, and yet still guide real-life interventions in real lives.
Child abuse prediction, and school exclusion interventions should not be test-beds for technology the public cannot scrutinise or understand.
In one trial attempting to predict exclusion, a recent UK research project (2013-16) linked the school records of 800 children in 40 London schools with the Metropolitan Police arrest records of all the participants. It found the interventions created no benefit, and may have caused harm. [3]
“Anecdotal evidence from the EiE-L core workers indicated that in some instances schools informed students that they were enrolled on the intervention because they were the “worst kids”.”
“Keeping students in education, by providing them with an inclusive school environment, which would facilitate school bonds in the context of supportive student–teacher relationships, should be seen as a key goal for educators and policy makers in this area,” researchers suggested.
But policy makers seem intent on using systems that tick boxes, and create triggers to single people out, with quantifiable impact.
Some of these systems are known to be poor, or harmful.
When it comes to predicting and preventing child abuse, there is concern over the harms seen in the US programmes ahead of us, such as in Pittsburgh, and in Chicago, which has scrapped its programme.
The Illinois Department of Children and Family Services ended a high-profile program that used computer data mining to identify children at risk for serious injury or death after the agency’s top official called the technology unreliable, and children still died.
“We are not doing the predictive analytics because it didn’t seem to be predicting much,” DCFS Director Beverly “B.J.” Walker told the Tribune.
Many professionals in the UK share these concerns. How long will they be ignored and children be guinea pigs without transparent error rates, or recognition of the potential harmful effects?
Why on earth not? At least for these high risk projects.
How long should children be the test subjects of machine learning tools at scale, without transparent error rates, audit, or scrutiny of their systems and understanding of unintended consequences?
Is harm to any child a price you’re willing to pay to keep using these systems to perhaps identify others, while we don’t know?
Is there an acceptable positive versus negative outcome rate?
The evidence so far of AI in child abuse prediction is not clearly showing that more children are helped than harmed.
Surely it’s time to stop thinking, and demand action on this.
It doesn’t take much imagination to see the harms. Safe technology, and safe use of data, does not prevent the imagination or innovation employed for good.
Where you are willing to sacrifice certainty of human safety for the machine decision, I want someone to be accountable for why.
References
[1] James Ball, The European, Those waging war against reality are doomed to failure, October 4, 2018.
[2] Thanks to Graham Smith for the link. “Social Media – Guidelines on prosecuting cases involving communications sent via social media. The Crown Prosecution Service (CPS) , August 2018.”
[3] Obsuth, I., Sutherland, A., Cope, A. et al. J Youth Adolescence (2017) 46: 538. https://doi.org/10.1007/s10964-016-0468-4 London Education and Inclusion Project (LEIP): Results from a Cluster-Randomized Controlled Trial of an Intervention to Reduce School Exclusion and Antisocial Behavior (March 2016)
This is bothering me in current and future data protection.
When is a profile no longer personal?
I’m thinking of a class, or even a year group of school children.
If you strip off enough identifiers or aggregate data so that individuals are no longer recognisable and could not be identified from other data — directly or indirectly — that is in your control or may come into your control, you no longer process personal data.
How far does Article 4(1) go to the boundary of what is identifiable on economic, cultural or social identity?
There is a growing number of research projects using public sector data (including education but often in conjunction and linkage with various others) in which personal data are used to profile and identify sets of characteristics, with a view to intervention.
Let’s take a case study.
Exclusions and absence, poverty, ethnicity, language, SEN and health, attainment indicators, birth year and postcode areas are all profiled on individual-level data in a dataset of 100 London schools, to identify the characteristics of children more likely than others to drop out.
It’s a research project, with a view to shaping a NEET intervention program in early education (Not in Education, Employment or Training). There is no consent sought for using the education, health, probation, Police National Computer, and HMRC data like this, because it’s research, and enjoys an exemption.
Among the data collected, BAME and non-English-language students in certain home postcodes are more prevalent. The names of pupils, their DOB, and their school address have been removed.
In what is in effect a training dataset, to teach the researchers “what does a potential NEET look like?”, pupils with characteristics like those of Mohammed Jones are more prevalent than others.
It does not permit the identification of the data subject as himself, but the data know exactly what a pupil like MJ looks like.
Armed with these profiles of what potential NEETs look like, researchers now work with the 100 London schools, to give the resulting knowledge, to help teachers identify their children at risk of potential drop out, or exclusion, or of becoming a NEET.
In one London school, MJ is a perfect match for the profile. The teacher is relieved of any active judgement about who should join the program; he’s a perfect match for what to look for. He’s asked to attend a special intervention group, to identify and work on his risk factors.
The data are accurate. His profile does match. But would he have gone on to become NEET?
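To make the mechanics concrete, here is a minimal, hypothetical sketch in Python of the kind of pipeline the case study describes: strip the direct identifiers, learn a ‘profile’ of characteristics from the linked records, then flag pupils in a school who match it. Every field name, record, and the naive matching rule below are invented for illustration; none of it is taken from the actual research project.

```python
# Hypothetical illustration only: the field names, records and the naive
# matching rule are invented, not taken from any real project or dataset.

DIRECT_IDENTIFIERS = {"name", "dob", "school_address"}

def strip_identifiers(record):
    # Remove the direct identifiers; what remains still describes
    # "what a pupil like MJ looks like".
    return {k: v for k, v in record.items() if k not in DIRECT_IDENTIFIERS}

# De-identified, linked "training" records, labelled with the known outcome.
training = [
    {"poverty": True,  "language": "EAL", "absences": 25, "postcode_area": "E1",  "neet": True},
    {"poverty": False, "language": "ENG", "absences": 3,  "postcode_area": "SW4", "neet": False},
    # ... thousands more linked records
]

def build_profile(rows):
    # Characteristics more prevalent among pupils who went on to become NEET.
    neet_rows = [r for r in rows if r["neet"]]
    return {
        "poverty": True,
        "language": "EAL",
        "min_absences": min(r["absences"] for r in neet_rows),
    }

def matches(pupil, profile):
    # Naive rule: flag any pupil who shares the profile's characteristics.
    return (pupil["poverty"] == profile["poverty"]
            and pupil["language"] == profile["language"]
            and pupil["absences"] >= profile["min_absences"])

profile = build_profile(training)

# A named pupil in one school: the data are accurate and he matches the
# profile, but the match says nothing about whether he would actually
# have gone on to become NEET.
mj = {"name": "Mohammed Jones", "dob": "2003-04-01", "school_address": "(removed)",
      "poverty": True, "language": "EAL", "absences": 28, "postcode_area": "E1"}
print(matches(strip_identifiers(mj), profile))  # True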
Is this research, or was it a targeted intervention?
Are the tests for research exemptions met?
Is this profiling and automated decision-making?
If the teacher is asked to “OK” the list, but will not in practice edit it, does that make it exempt from the profiling restriction for children?
The GDPR also sets out the rules (at Article 6(4)) on factors a controller must take into account to assess whether a new processing purpose is compatible with the purpose for which the data were initially collected.
But if the processing is done only after the identifiers that could identify MJ himself, not just someone like him, are removed, does it apply?
In a world that talks about ever greater personalisation, we are in fact being treated less and less as an individual, but instead we’re constantly assessed by comparison, and probability, how we measure up against other profiles of other people built up from historical data.
Then it is used to predict what someone with a similar profile would do, and therefore by inference, what we the individual would do.
What is the difference, in reality, between having given the researchers all the education, health, probation, Police National Computer, and HMRC data, as they had it, and giving them the identifying school datasets with pupils’ named data and saying, “match them up”?
I worry that we are at great risk, in risk prediction, of not using the word ‘research’ to mean what we think it means.
And our children are unprotected from bias and unexpected consequences as a result.
First they came for the lists of lecturers. Did you speak out?
Last week Chris Heaton-Harris MP wrote to vice-chancellors to ask for a list of lecturers’ names and course content, “With particular reference to Brexit”. Academics on social media spoke out in protest. There has been little reaction, however, to a range of new laws that permit the incremental expansion of the database state on paper and in practice.
The government is building ever more sensitive lists of names and addresses, without oversight. They will have access to information about our bank accounts. They are using our admin data to create distress-by-design in a ‘hostile environment.’ They are writing laws that give away young people’s confidential data, ignoring new EU law that says children’s data merits special protections.
Earlier this year, Part 5 of the new Digital Economy Act reduced the data protection infrastructure between different government departments. This week, in discussion on the Codes of Practice, some local government data users were already asking whether safeguards can be further relaxed to permit increased access to civil registration data and use our identity data for more purposes.
Now in the Data Protection Bill, the government has included clauses in Schedule 2, to reduce our rights to question how our data are used and that will remove a right to redress where things go wrong. Clause 15 designs-in open ended possibilities of Statutory Instruments for future change.
The House of Lords Select Committee on the Constitution points out in its report on the Bill that the number and breadth of the delegated powers are “an increasingly common feature of legislation which, as we have repeatedly stated, causes considerable concern.”
Concern needs to translate into debate, better wording and safeguards to ensure Parliament maintains its role of scrutiny and where necessary constrains executive powers.
Take as case studies, three new Statutory Instruments on personal data from pupils, students, and staff. They all permit more data to be extracted from individuals and to be sent to national level:
SI 807/2017 The Education (Information About Children in Alternative Provision) (England) (Amendment) Regulations 2017
SI No. 886 The Education (Student Information) (Wales) Regulations 2017 (W. 214) and
SL(5)128 – The Education (Supply of Information about the School Workforce) (Wales) Regulations 2017
The SIs typically state that an “impact assessment has not been prepared for this Order as no impact on businesses or civil society organisations is foreseen. The impact on the public sector is minimal.” Privacy Impact Assessments are either not done, not published, or refused via FOI.
That SI should have been a warning, not a process model to repeat.
From January, thanks to yet another rushed law without debate (SI 807/2017), teen pregnancy, young offender and mental health labels will be added to children’s records for life in England’s National Pupil Database. These are on a named basis, and highly sensitive. Data from the National Pupil Database, including special needs data (SEN), are passed on for a broad range of purposes to third parties, and are also used across government in Troubled Families, shared with the National Citizen Service, and stored forever; on a named basis, all without pupils’ consent or parents’ knowledge. Without a change in policy, the young offender and pregnancy labels will be handed out too.
Near-identical wording that was used in 2012 to change the law in England, reappears in the new SI for student data in Wales.
The Wales government introduced regulations for a new student database of names, date of birth and ethnicity, home address including postcode, plus exam results. The third parties listed who will get given access to the data without asking for students’ consent, include the Student Loans Company and “persons who, for the purpose of promoting the education or well-being of students in Wales, require the information for that purpose”, in SI No. 886, the Education (Student Information) (Wales) Regulations 2017 (W. 214).
The consultation was conflated with destinations data, and while it all sounds as if it is for the right reasons, the SI is broad on purposes and prescribed persons. It received 10 responses.
Separately, a 2017 consultation on the staff data collection received 34 responses about building a national database of teachers, including names, date of birth, National Insurance numbers, ethnicity, disability, their level of Welsh language skills, training, salary and more. Unions and the Information Commissioner’s Office both asked basic questions in the consultation that remain unanswered, including who will have access. It’s now law thanks to SL(5)128 – The Education (Supply of Information about the School Workforce) (Wales) Regulations 2017. The questions are open.
While I have been assured this weekend in writing that these data will not be used for commercial purposes or immigration enforcement, any meaningful safeguards are missing.
More failings on fairness
Where are the communications to staff, students and parents? What oversight will there be? Will a register of uses be published? And why does government get to decide, without debate, that our fundamental right to privacy can be overwritten by a few lines of law? What protections will pupils, students and staff have in future over how these data will be used, and over uses expanded for other things?
This is not what people expect or find reasonable. In 2015 UCAS had 37,000 students respond to an Applicant Data Survey. 62% of applicants think sharing their personal data for research is a good thing, and 64% see personal benefits in data sharing. But over 90% of applicants say they should be asked first, regardless of whether their data is to be used for research, or other things. This SI takes away their right to control their data and their digital identity.
It’s not in young people’s best interests to be made more digitally disempowered and lose control over their digital identity. The GDPR requires data privacy by design. This approach should be binned.
Meanwhile, the Digital Economy Act codes of practice talk about fair and lawful processing as if it is a real process that actually happens.
That gap between words on paper and reality is a care.data-style catastrophe waiting to happen across every sector of public data and government. When will the public be told how data are used?
Better data must be fairer and safer in the future
The new UK Data Protection Bill is in Parliament right now, and its wording will matter. Safe data, transparent use, and independent oversight are not empty slogans to sling into the debate.
To ensure our public [personal] data are used well, we need to trust why they’re collected and see how they are used. But instead the government has drafted their own get-out-of-jail-free-card to remove all our data protection rights to know in the name of immigration investigation and enforcement, and other open ended public interest exemptions.
The pursuit of individuals and their rights under an anti-immigration rhetoric without evidence of narrow case need, in addition to all the immigration law we have, is not the public interest, but ideology.
If these exemptions become law, every one of us loses the right to ask where our data came from, or why it was used for that purpose, and loses any course of redress.
The Digital Economy Act removed some of the infrastructure protections between Departments for datasharing. These clauses will remove our rights to know where and why that data has been passed around between them.
These lines are not just words on a page. They will have real effects on real people’s lives. These new databases are lists of names, and addresses, or attach labels to our identity that last a lifetime.
Even the advocates in favour of the Database State know that if we want to have good public services, their data use must be secure and trustworthy, and we have to be able to trust staff with our data.
As the Committee sits this week to review the bill line by line, the Lords must make sure common sense sees off the scattering of substantial public interest and immigration exemptions in the Data Protection Bill. Excessive exemptions need removed, not our rights.
Otherwise we can kiss goodbye to the UK as a world leader in tech that uses our personal data, or research that uses public data. Because if the safeguards are weak, then when commercial players get it wrong in trials of selling patient data, or try to skip around the regulatory landscape asking to be treated better than everyone else and fail to comply with Data Protection law, or when government is driven to chasing children out of education, they don’t just damage their own reputation, or the potential of innovation for all; they damage public trust from everyone, and harm all data users.
Clause 15 leaves any future change open ended by Statutory Instrument. We can already see how SIs like these are used to create new national databases that can pop up at any time, without clear evidence of necessity, and without chance for proper scrutiny. We already see how data can be used, beyond reasonable expectations.
If we don’t speak out for our data privacy, the next time they want a list of names, they won’t need to ask. They’ll already know.
What it means to be human is going to be different. That was the last word of a panel of four excellent speakers, and of the sparkling wit and charm of chair Timandra Harkness, at tonight’s Turing Institute event, hosted at the British Library, on the future of data.
The first speaker, Bernie Hogan, of the Oxford Internet Institute, spoke of Facebook’s emotion experiment, and the challenges of commercial companies’ ownership and concentrations of knowledge, as well as their decisions controlling what content you get to see.
He also explained simply what an API is in human terms. Like a plug in a socket: instead of electricity you get a flow of data, but the data controller can control which data come out of the socket.
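A toy sketch of that plug-and-socket analogy, in Python with invented field names (not any real platform’s API): whatever is held behind the socket, the controller decides which fields flow out.

```python
# Toy illustration of the plug-and-socket analogy; everything here is
# invented and is not any real platform's API.

# Data held behind the socket, by the data controller.
stored_profile = {
    "user_id": 42,
    "public_posts": ["hello world"],
    "private_messages": ["..."],         # held, but never let out
    "inferred_interests": ["politics"],  # held, but never let out
}

# The controller decides which fields the socket lets out.
EXPOSED_FIELDS = {"user_id", "public_posts"}

def api_get_profile():
    # Only the permitted fields flow out; everything else stays behind the socket.
    return {k: v for k, v in stored_profile.items() if k in EXPOSED_FIELDS}

print(api_get_profile())  # {'user_id': 42, 'public_posts': ['hello world']}
```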
And he brilliantly brought in a thought: what would it mean to be able to go back in time to the Nuremberg trials, and regulate not only medical ethics, but also the data ethics of indirect and computational use of information? How would it affect today’s thinking on AI and machine learning, and where we are now?
“Available does not mean accessible, transparent does not mean accountable”
Charles from the Bureau of Investigative Journalism, who had also worked for Trinity Mirror using data analytics, introduced some of the issues that large datasets have for the public.
People rarely have the means to do any analytics well.
Even if open data are available, they are not necessarily accessible, due to the volume of data to access, the constraints of common software (such as Excel), and time constraints.
Without the facts they cannot go see a [parliamentary] representative or community group to try and solve the problem.
Local journalists often have targets for the number of stories they need to write, and target number of Internet views/hits to meet.
Putting data out there is only transparency; it is not accountability if we cannot turn information into knowledge that can benefit the public.
“Trust, is like personal privacy. Once lost, it is very hard to restore.”
Jonathan Bamford, Head of Parliamentary and Government Affairs at the ICO, took us back to why we need to control data at all. Democracy. Fairness. The balance of people’s rights, like privacy, and Freedom-of-Information, and the power of data holders. The awareness that power of authorities and companies will affect the lives of ordinary citizens. And he said that even early on there was a feeling there was a need to regulate who knows what about us.
The third generation of Data Protection law he said, is now more important than ever to manage the whole new era of technology and use of data that did not exist when previous laws were made.
But, he said, the principles stand true today. Don’t be unfair. Use data for the purposes people expect. Security of data matters. As do rights to see the data people hold about us. Make sure data are relevant, accurate, necessary and kept for a sensible amount of time.
And even if we think that technology is changing, he argued, the principles will stand, and organisations need to consider these principles before they do things, considering privacy as a fundamental human right by default, and data protection by design.
After all, we should remember the Information Commissioner herself recently said,
“privacy does not have to be the price we pay for innovation. The two can sit side by side. They must sit side by side.
It’s not always an easy partnership and, like most relationships, a lot of energy and effort is needed to make it work. But that’s what the law requires and it’s what the public expects.”
“We must not forget, evil people want to do bad things. AI needs to be audited.”
Joanna J. Bryson was brilliant in her multifaceted talk, summing up how data will affect our lives. She explained how implicit biases work, how we reason and make decisions, and how some of the ways we think show up in Internet searches. She showed in practical ways how machine learning is shaping our future in ways we cannot see. And she said that firms asserting that they are doing these things fairly and openly, and that regulation no longer fits new tech, “is just hoo-hah”.
She talked about the exciting possibilities and good use of data, but that, “we must not forget, evil people want to do bad things. AI needs to be audited.” She summed up: we will use data to predict ourselves. And she said:
“What it means to be human is going to be different.”
That is perhaps the crux of this debate. How do data and machine learning and its mining of massive datasets, and uses for ‘prediction’, affect us as individual human beings, and our humanity?
The last audience question addressed inequality. Solutions like transparency, subject access, accountability, and understanding biases and how we are used, will never be accessible to all. It needs a far greater digital understanding across all levels of society. How can society both benefit from and be involved in the future of data in public life? The conclusion was made, that we need more faith in public institutions working for people at scale.
But what happens when those institutions let people down, at scale?
The debate was less about the Future of Data in Public Life, and much more about how big data affects our personal lives. Most of the discussion was around how we understand the use of our personal information by companies and institutions, and how will we ensure democracy, fairness and equality in future.
An audience member’s question went unanswered: how do we protect ourselves from the harms we cannot see, or protect the most vulnerable, who are least able to protect themselves?
“How can we future proof data protection legislation and make sure it keeps up with innovation?”
That audience question is timely given the new Data Protection Bill. But what legislation means in practice, I am learning rapidly, can be very different from what is written down in law.
One additional tool in data privacy and rights legislation is up for discussion, right now, in the UK. If it matters to you, take action.
NGOs could be enabled to make complaints on behalf of the public under article 80 of the General Data Protection Regulation (GDPR). However, the government has excluded that right from the draft UK Data Protection Bill launched last week.
The explanatory notes state that “Paragraph 53 omits from Article 80 (representation of data subjects) the words ‘where provided for by Member State law’ from paragraphs 1 and 2” [Data Protection Bill Explanatory Notes, paragraph 681, p.84/112]. Article 80(2) gives Member States the option to provide for NGOs to take action independently on behalf of many people who may have been affected.
If you want that right, a right others will be getting in other countries in the EU, then take action. Call your MP or write to them. Ask for Article 80, the right to representation, in UK law. We need to ensure that our human rights continue to be enacted and enforceable to the maximum, if “what it means to be human is going to be different.”
For the Future of Data has never been more personal.
Bird and Bird GDPR Tracker [Shows how and where GDPR has been supplemented locally, highlighting where Member States have taken the opportunities available in the law for national variation.]
DP Bill’s new immigration exemption can put EU citizens seeking a right to remain at considerable disadvantage [09.10] re: Schedule 2, paragraph 4, new Immigration exemption.
On Adequacy: Draconian powers in EU Withdrawal Bill can negate new Data Protection law [13.09]
Queen’s Speech, and the promised “Data Protection (Exemptions from GDPR) Bill” [29.06]
defenddigitalme
Response to the Data Protection Bill debate and Green Paper on Online Strategy [11.10.2017]
Queen’s speech: safeguarding children, data, and digital rights [22.6]
Jon Baines
Serious DCMS error about consent data protection [11.08]
Eoin O’Dell
The UK’s Data Protection Bill 2017: repeals and compensation – updated: On DCMS legislating for Art 82 GDPR. [14.09]
Data Protection Bill Consultation: General Data Protection Regulation Call for Views on exemptions
New Data Protection Bill: Our planned reforms [PDF] 952KB, 30 pages
London Economics: Research and analysis to quantify benefits arising from personal data rights under the GDPR [PDF] 3.76MB, 189 pages
ESRC joint submissions on EU General Data Protection Regulation in the UK – Wellcome led multi org submission plus submission from British Academy / Erdos [link]
Minister for Digital Matt Hancock’s keynote address to the UK Internet Governance Forum, 13 September [link].
“…the Data Protection Bill, which will bring our data protection regime into the twenty first century, giving citizens more sovereignty over their data, and greater penalties for those who break the rules.
“With AI and machine learning, data use is moving fast. Good use of data isn’t just about complying with the regulations, it’s about the ethical use of data too.
“So good governance of data isn’t just about legislation – as important as that is – it’s also about establishing ethical norms and boundaries, as a society. And this is something our Digital Charter will address too.”
The Queen’s Speech promised new laws to ensure that the United Kingdom retains its world-class regime protecting personal data. And the government proposes a new digital charter to make the United Kingdom the safest place to be online for children.
Improving online safety for children should mean one thing. Children should be able to use online services without being used by them and by the people and organisations behind them. It should mean that their rights to be heard are prioritised in decisions about them.
As Sir Tim Berners-Lee is reported as saying, there is a need to work with companies to put “a fair level of data control back in the hands of people”. He rightly points out that today terms and conditions are “all or nothing”.
There is a gap in discussions that we fail to address when we think of consent to terms and conditions, or of “handing over data”. It is that this assumes these are always, and can always be, conscious acts.
For children, the question of whether accepting Ts&Cs gives them control, and whether it is meaningful, becomes even more moot. What are they agreeing to? Younger children cannot give free and informed consent. After all, most privacy policies standardly include phrases such as, “If we sell all or a portion of our business, we may transfer all of your information, including personal information, to the successor organization,” which means in effect that “accepting” a privacy policy today is effectively a blank cheque for anything tomorrow.
The GDPR requires terms and conditions to be laid out in policies that a child can understand.
The current approach to legislation around children and the Internet is heavily weighted towards protection from seen threats. The threats we need to give more attention to, are those unseen.
Our lives, as measured in our behaviours and opinions, purchases and likes, are connected by trillions of sensors. My parents may have described using the Internet as going online. Today’s online world no longer means our time is spent ‘on the computer’, but being online, all day every day. Instead of going to a desk and booting up through a long phone cable, we have wireless computers in our pockets and in our homes, with functionality built in to enable us to do other things: make a phone call, make toast, and play. In a smart city, surrounded by sensors under pavements and in buildings, and cameras tracking everywhere we go, we are living ever more inside an overarching network of cloud computers that store our data. And from all that data, decisions are made: which adverts to show us, on which network sites, what we get offered and do not, and our behaviours and our conscious decision-making may be nudged quite invisibly.
Data about us, whether uniquely identifiable or not, are all too often collected passively: IP address, linked sign-ins that extract friends lists. Some providers decide that we either accept this or do not use the thing at all. It’s part of the deal. We get the service, they get to trade our identity, like Top Trumps, behind the scenes. But we often don’t see it, and under GDPR there should be no contractual requirement as part of consent; i.e. ‘agree or don’t get the service’ is not an option.
As yet, we have not had debate in the UK what that means in concrete terms, and if we do not soon, we risk it becoming an afterthought that harms more than helps protect children’s privacy, and therefore their digital identity.
I think of five things needed by policy shapers to tackle it:
In depth understanding of what ‘online’ and the Internet mean
Consistent understanding of what threat models and risk are connected to personal data, which today are underestimated
A grasp of why data privacy training is vital to safeguarding
A willingness to confront the idea that user regulation as a stand-alone step will create a better online experience for users, when we know that perceived problems are created by providers or other site users
An end to siloed thinking that fails to be forward-thinking or to join the dots of tactics across Departments into a cohesive, inclusive strategy
If the government’s new “major new drive on internet safety” involves the world’s largest technology companies in order to make the UK the “safest place in the world for young people to go online”, then we must also ensure that these strategies and papers join things up. Above all, a technical knowledge of how the Internet works needs to join the dots of risks and benefits, in order to form a strategy that will actually make children safe, skilled, and able to see into their future.
When it comes to children, there is a further question over consent and parental spyware. Various walk-to-school apps, lauded by the former Secretary of State two years running, use spyware and can be used without a child’s consent. Guardian Gallery, which could be used to scan for nudity in photos on anyone’s phone that the ‘parent’ phone holder has access to install it on, can be made invisible on the ‘child’ phone. Imagine this in coercive relationships.
If these technologies and the online environment are not correctly assessed with regard to “online safety” threat models for all parts of our population, then they fail to address the risk for the most vulnerable who need it.
What will the GDPR really mean for online safety improvement? What will it define as online services for remuneration in the IoT? And who will be considered as children, “targeted at” or “offered to”?
An active decision is required in the UK. Will 16 remain the default age needed for consent to access Information Society Services, or will we adopt 13 which needs a legal change?
As banal as these questions sound they need close attention paid, and clarity, between now and May 25, 2018 if the UK is to be GDPR ready for providers of online services to know who and how they should treat Internet access, participation and age [parental] verification.
How will the “controller” make “reasonable efforts to verify in such cases that consent is given or authorised by the holder of parental responsibility over the child”, and “taking into consideration available technology”?
These are fundamental questions of what the Internet is and means to people today. And if the current government approach to security is anything to go by, safety will not mean what we think it will mean.
It will matter how these plans join up. Age verification was not being considered in UK law in relation to how we would derogate from GDPR even as late as October 2016, despite age verification requirements already being in the Digital Economy Bill. It shows a lack of joined-up digital thinking across our government and needs addressed with urgency to get into the next Parliamentary round.
In recent draft legislation I am yet to see the UK government address Internet rights and safety for young people as anything other than a protection issue, treating the online space in the same way as offline, irl, focused on stranger danger, and sexting.
The UK Digital Strategy commits to the implementation of the General Data Protection Regulation by May 2018, and frames it as a business issue, labelling data as “a global commodity”; as such, its handling is framed solely as a requirement to ensure “that our businesses can continue to compete and communicate effectively around the world” and that adoption “will ensure a shared and higher standard of protection for consumers and their data.”
The Digital Economy Bill, despite being a perfect vehicle for this, failed to take on children’s rights, and in particular the requirements of GDPR for consent, at all. It was clear that if we were to do any future digital transactions we needed to level up to GDPR, not drop to the lowest common denominator between it and existing laws.
It was utterly ignored. So were children’s rights to have their own views heard in the consultation to comment on the GDPR derogations for children, with little chance for involvement from young people’s organisations, and less than a month to respond.
We must now get this right in any new Digital Strategy and bill in the coming parliament.
Update received from Edmodo, VP Marketing & Adoption, June 1:
While everyone is focused on #WannaCry ransomware, it appears that a global edTech company has had a potential global data breach that few are yet talking about.
Edmodo is still claiming on its website it is, “The safest and easiest way for teachers to connect and collaborate with students, parents, and each other.” But is it true, and who verifies that safe is safe?
Edmodo data from 78 million users for sale
Matt Burgess wrote in VICE: “Education website Edmodo promises a way for “educators to connect and collaborate with students, parents, and each other”. However, 78 million of its customers have had their user account details stolen. Vice’s Motherboard reports that usernames, email addresses, and hashed passwords were taken from the service and have been put up for sale on the dark web for around $1,000 (£700).
“Data breach notification website LeakBase also has a copy of the data and provided it to Motherboard. According to LeakBase around 40 million of the accounts have email addresses connected to them. The company said it is aware of a “potential security incident” and is investigating.”
The Motherboard article by Joseph Cox says it happened last month. What has been done since? Why is there no public information or notification about the breach on the company website?
Joseph doesn’t think profile photos are at risk, unless someone can log into an account. He was given usernames, email addresses, and hashed passwords, and as far as he knows, that was all that was stolen.
“The passwords have apparently been hashed with the robust bcrypt algorithm, and a string of random characters known as a salt, meaning hackers will have a much harder time obtaining user’s actual login credentials. Not all of the records include a user email address.”
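For readers unfamiliar with the terms in that quote, here is a minimal sketch of what salted bcrypt hashing looks like, using the open-source Python bcrypt package; the password is of course invented for illustration.

```python
# Minimal sketch of salted password hashing with bcrypt; the password is
# invented for illustration.
import bcrypt  # pip install bcrypt

password = b"correct horse battery staple"

# gensalt() produces a random salt; hashpw() embeds it in the stored hash,
# so two users with the same password still end up with different hashes.
hashed = bcrypt.hashpw(password, bcrypt.gensalt())
print(hashed)  # e.g. b'$2b$12$...'

# Verification re-derives the hash using the salt stored in `hashed`.
print(bcrypt.checkpw(password, hashed))      # True
print(bcrypt.checkpw(b"guess1234", hashed))  # False
```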
So far I’ve been unable to find out from Edmodo directly. There is no telephone technical support. There is no human that can be reached dialling the headquarters telephone number.
Where’s the parental update?
No one has yet responded to say whether UK pupils and teachers’ data was among that reportedly stolen. (Update June 1, the company did respond with confirmation of UK users involved.)
While there is no mention of the other data the site holds being in the breach, details are as yet sketchy, and Edmodo holds children’s data. Where is the company assurance what was and was not stolen?
As it’s a platform log on I would want to know when parents will be told exactly what was compromised and how details have been exposed. I would want clarification if this could potentially be a weakness for further breaches of other integrated systems, or not.
Are edTech and IoT toys fit for UK children?
In 2016, more than 727,000 UK children had their information, including images, compromised following a cyber attack on VTech. These toys are sold as educational, even if targeted at an early age.
In Spring 2017, CloudPets, the maker of Internet of Things teddy bears, “smart toys”, left more than two million voice recordings from children online without any security protections, exposing children’s personal details.
As yet, UK ministers have declined our civil society recommendations to act and take steps on the public sector security of national pupil data, or on the private security of Internet-connected toys and things. The latter would be in line with Germany, for example.
It is right that the approach is considered. The UK government must take these risks seriously in an evidence based and informed way, and act, not with knee jerk reactions. But it must act.
Two months after Germany banned the Cayla doll, we still had them for sale here.
Parents are often accused of being uninformed, but we must be able to expect that our products pass a minimum standard of tech and data security testing as part of pre-sale consumer safety testing.
Parents have a responsibility to educate themselves to a reasonable level of user knowledge. But the opportunities are limited when there’s no transparency. Much of the use of a child’s personal data, and of system data’s interaction with our online behaviour, in toys, things, and even plain websites, remains hidden from most of us.
So too, the Edmodo privacy policy contained no mention of profiling or behavioural web tracking, for example. Only when this savvy parent spotted it happening did the company, it appears, respond properly to fix it. Given strict COPPA rules it is perhaps unsurprising, though it shouldn’t have happened at all.
How will the uses of these smart toys, and edTech apps be made safe, and is the government going to update regulations to do so?
Are public sector policy, practice and people, fit for managing UK children’s data privacy needs?
While these private edTech companies used directly in schools can expose children to risk, so too does public data collected in schools, being handed out to commercial companies, by government departments. Our UK government does not model good practice.
Two years on, I’m still working on asking for fixes in basic national pupil data improvement. To make safe data policy, this is far too slow.
These uses of data are not safe, and expose children to potential greater theft, loss and selling of their personal data. It must change.
Whether the government hands out children’s data to commercial companies at national level and doesn’t tell schools, or staff in schools do it directly through in-class app registrations, it is often done without consent, and without any privacy impact assessment or due diligence up front. Some send data to the US or Australia. Schools still tell parents these are ‘required’ without any choice. But have they ensured that there is an equal and adequate level of data protection offered to personal data that they extract from the SIMs?
School staff and teachers manage, collect and administer personal data daily, including signing up children as users of web accounts with technology providers. Very often they tell parents after the event, and with no choice. How can they do so and not put others at risk, if untrained in the basics of good data handling practices?
In our UK schools, just like the health system, the basics are still not being fixed, nor good practices offered to staff. Teachers in the UK get no data privacy or data protection training in their basic teacher training. That’s according to what I’ve been told so far by teacher trainers, CPD leaders, union members and teachers themselves.
Would you train fire fighters without ever letting them have hose practice?
Infrastructure is known to be exposed and under-invested, but it’s not all about the tech. Security investment must also be in people.
Systemic failures revealed this week by WannaCry are not limited to the NHS. This from George Danezis could be, with a few tweaks, copy-pasted into education. So the question is not if, but when, the same happens in education, unless it’s fixed.
“…from poor security standards in health informatics industries; poor procurement processes in health organizations; lack of liability on any of the software vendors (incl. Microsoft) for providing insecure software or devices; cost-cutting from the government on NHS cyber security with no constructive alternatives to mitigate risks; and finally the UK/US cyber-offense doctrine that inevitably leads to proliferation of cyber-weapons and their use on civilian critical infrastructures.” [Original post]
Is Education preparing us for the jobs of the future?
The panel talked about changing social and political realities. We considered the effects on employment. We began discussing how those changes should feed into education policy and practice today. It is a discussion that should be had by the public. So far, almost a year after the Referendum, the UK government is yet to say what post-Brexit Britain might look like. Without a vision, any mandate for the unknown, if voted for on June 9th, will be meaningless.
What was talked about and what should be a public debate:
What jobs will be needed in the future?
Post Brexit, what skills will we need in the UK?
How can the education system adapt and improve to help future generations develop skills in this ever changing landscape?
How do we ensure women [and anyone else] are not left behind?
Brexit is the biggest change management project I may never see.
As the State continues making and remaking laws, reforming education, and starts exiting the EU, all in parallel, technology and commercial companies won’t wait to see what the post-Brexit Britain will look like. In our state’s absence of vision, companies are shaping policy and ‘re-writing’ their own version of regulations. What implications could this have for long term public good?
What will be needed in the UK future?
A couple of sentences from Alan Penn have stuck with me all week. Loosely quoted, we’re seeing cultural identity shift across the country, due to the change of our available employment types. Traditional industries once ran in a family, with a strong sense of heritage. New jobs don’t offer that. It leaves a gap we cannot fill with “I’m a call centre worker”. And this change is unevenly felt.
There is no tangible public plan in the Digital Strategy for dealing with that change in the employment market over the coming 10 to 20 years, and what it means tied into education. It matters when many believe, as do these authors in Scientific American, that “around half of today’s jobs will be threatened by algorithms. 40% of today’s top 500 companies will have vanished in a decade.”
So what needs thought?
Analysis of what that regional jobs market might look like, should be a public part of the Brexit debate and these elections →
We need to see those goals, to ensure policy can be planned for education and benchmark its progress towards achieving its aims
Brexit and technology will disproportionately affect different segments of the jobs market and therefore the population by age, by region, by socio-economic factors →
Education policy must therefore address aspects of skills looking to the future towards employment in that new environment, so that we make the most of opportunities, and mitigate the harms.
Brexit and technology will disproportionately affect communities → What will be done to prevent social collapse in regions hardest hit by change?
Where are we starting from today?
Before we can understand the impact of change, we need to understand what the present looks like. I cannot find a map of what the English education system looks like. No one I ask seems to have one or have a firm grasp across the sector, of how and where all the parts of England’s education system fit together, or their oversight and accountability. Everyone has an idea, but no one can join the dots. If you have, please let me know.
Nothing is constant in education like change; in laws, policy and its effects in practice, so I shall start there.
1. Legislation
In retrospect it was a fatal flaw, missed in post-Referendum battles of who wrote what on the side of a bus, that no one did an assessment of education [and indeed other] ‘legislation in progress’. There should have been recommendations made on scrapping inappropriate government bills in entirety or in parts. New laws are now being enacted, rushed through in wash up, that are geared to our old status quo, and we risk basing policy only on what we know from the past, because on that, we have data.
In the timeframe that Brexit will become tangible, we will feel the effects of the greatest shake up of Higher Education in 25 years. Parts of the Higher Education and Research Act, and Technical and Further Education Act are unsuited to the new order post-Brexit.
What it will do: The new HE law encourages competition between institutions, and the TFE Act centred in large part on how to manage insolvency.
What it should do: Policy needs to promote open, collaborative networks if within a now reduced research and academic circle, scholarly communities are to thrive.
Legislation has recently not only meant restructure, but repurposing of what education [authorities] is expected to offer.
A new Statutory Instrument, The School and Early Years Finance (England) Regulations 2017, makes music, arts and playgrounds items ‘that may be removed from maintained schools’ budget shares’.
How will this withdrawal of provision affect skills starting from the Early Years throughout young people’s education?
2. Policy
Education policy, if it continues along the grammar school path, will divide communities into the ‘passed’ and the ‘unselected’. A side effect of selective schooling (a feature or a bug, depending on your point of view) is socio-economic engineering. It builds class walls in the classroom, while others, like Fabian Women, say we should be breaking through glass ceilings. Current policy in a wider sense is creating an environment that is hostile to human integration. It creates division across the entire education system for children aged 2–19.
The curriculum is narrowing, according to staff I’ve spoken to recently, as a result of measurement focus on Progress 8, and due to funding constraints.
What effect will this have on analysis of knowledge, discernment, how to assess when computers have made a mistake or supplied misinformation, and how to apply wisdom? Skills that today still distinguish human from machine learning.
What narrowing the curriculum does: Students have fewer opportunities to discover their skill set, limiting their opportunities for social and cultural development, and their growth as rounded, happy human beings.
What we could do: Promote a long-term love of learning in and outside school and in communities. Reinvest in the arts, music and play, which support mental and physical health and create a culture in which people like to live as well as work. Library and community centre funding must be re-prioritised, ensuring inclusion and provision outside school for all abilities.
Austerity builds barriers to opportunity and skills. Children whose families cannot afford them are excluded from extra-curricular classes. We already divide our children, through private and state education, into those who have better facilities and funding to enjoy and explore a fully rounded education, and those whose funding will not stretch much beyond the bare curriculum. For SEN children, that has already been stripped back further.
Existing barriers are likely to become entrenched in twenty years. What does it do to society, if we are divided in our communities by money, or gender, or race, and feel disempowered as individuals? Are we less responsible for our actions if there’s nothing we can do about it? If others have more money, more power than us, others have more control over our lives, and “no matter what we do, we won’t pass the 11 plus”?
Without joined-up scrutiny of these policy effects across the board, we risk embedding these barriers into future planning. Today’s data are used to train “how the system should work”. If current data are what applicants in 5 years will base future expectations on, will their decisions be objective and will in-built bias be transparent?
3. Sociological effects of legislation.
It’s not only institutions that will lose autonomy in the Higher Education and Research Act.
At present, the risk to the autonomy of science and research is theoretical — but the implications for academic freedom are troubling. [Nature 538, 5 (06 October 2016)]
The Secretary of State for Education now also has new Powers of Information about individual applicants and students. Today, students can opt out of UCAS automatically sharing their personal data with the Student Loans Company, for example. Thanks to these new powers, combined with the Digital Economy Act, that choice is gone, and the law can ride roughshod over students’ autonomy and consent choices.
The Act further includes the intention to make institutions release more data about course intake and results under the banner of ‘transparency’. Part of the aim is indisputably positive, to expose discrimination and inequality of all kinds. It also aims to make the £ cost-benefit return “clearer” to applicants — by showing what exams you need to get in, what you come out with, and then by joining all that personal data to the longitudinal school record, tax and welfare data, you see what the return is on your student loan. The government can also then see what your education ‘cost or benefit’ the Treasury. It is all of course much more nuanced than that, but that’s the very simplified gist.
This ‘destinations data’ is going to be a dataset we hear ever more about and has the potential to influence education policy from age 2.
Aside from the issue of personal data disclosiveness when published by institutions — we already know of individuals who could spot themselves in a currently published dataset — I worry that this direction of using data for ‘advice’ is unhelpful. What if we’re looking at the wrong data upon which to base future decisions? The past doesn’t take account of Brexit, or enable applicants to do so.
Researchers [and applicants, the year before they apply or start a course] will be looking at what *was*: predicted and achieved qualifying grades, the make-up of the class, course results, first job earnings. What was true for other people is at least 5 years old by the time it is looked at. Five years is a long time out of date.
4. Change
Teachers and schools have long since reached saturation point in their capacity to handle change over the last 5 years. Reform has been drastic in structures and curriculum, and is ongoing in funding. There is no ongoing teacher training, and the lack of CPD take-up is exacerbated by underfunding.
Teachers are fed up with change. They want stability. But contrary to the current “strong and stable” message, the reality is that ahead we will get anything but, and we must instead manage change if we are to thrive. Politically, we will see a backlash when ‘stable’ proves undeliverable.
But teaching has not seen ‘stable’ for some time. Teachers are asking for fewer children and more cash in the classroom. Unions talk of a focus on learning, not testing, to drive school standards. If the planned restructuring of funding happens, how will it affect staff retention?
We know schools are already reducing staff. How will this affect employment, adult and children’s skill development, their ambition, and society and economy?
Where could legislation and policy look ahead?
What are the big Brexit targets and barriers and when do we expect them?
How is the fall out from underfunding and reduction of teaching staff expected to affect skills provision?
State education policy is increasingly hands-off. What is the incentive for local schools or MATs to look much beyond the short term?
How do local decisions ensure education is preparing their community, while also considering society, health and (elderly) social care, post-Brexit readiness, and women’s economic empowerment?
How does our ageing population shift in the same time frame?
How can the education system adapt?
We need to talk more about the other changes happening in the system in parallel with Brexit, to join the dots, and to weigh the potential positive and harmful effects of technology.
Gender plays a role here too, as does mitigating discrimination of all kinds and confirmation bias, and, in the tech itself, whether AI, for example, is going to be better than us at decision-making if we teach it to be biased.
Dr Lisa Maria Mueller talked about the effects and influence of age, setting and language factors on what skills we will need, and employment. While there are certain skills sets that computers are and will be better at than people, she argued society also needs to continue to cultivate human skills in cultural sensitivities, empathy, and understanding. We all nodded. But how?
To develop all these human skills is going to take investment. Investment in the humans that teach us. Bennie Kara, Assistant Headteacher in London, spoke about school cuts and how they will affect children’s futures.
The future of England’s education must be geared to a world in which knowledge and facts are ubiquitous, and more readily available online than at any other time. And access to learning must be inclusive. That means including SEN and low-income families, the unskilled, everyone. As we become more internationally remote, we must put safeguards in place if we are to support thriving communities.
Policy and legislation must also preserve and respect human dignity in a changing work environment, and review not only what work is on offer, but *how*: the kinds of contracts and jobs available.
Where might practice need to adapt now?
Re-consider curriculum content and its focus on facts. Will success risk being measured on out-of-date knowledge and a measure of recall? Are these skills in growing or dwindling need?
Knowledge focus must place value on analysis, discernment, and application of facts that computers will learn and recall better than us. Much of that learning happens outside school.
Opportunities have been cut, together with funding. We need communities brought back together, if they are not to collapse. Funding centres of local learning, restoring libraries and community centres will be essential to local skill development.
What is missing?
Although Sarah Waite spoke (in a suitably purdah-appropriate tone) about the importance of basic skills in the future labour market, we didn’t get to talking about how education might prepare us for a lack of jobs in the future, and what that changed labour market will look like.
What skills will *not* be needed? Who decides? If left to companies’ sponsor-led steer in academies, what effects will we see in society?
Discussions of a future education model and of technology seem to share a common theme: people appear reduced in their capacity to make autonomous choices. But these discussions offer no positive vision.
Technology should empower us, but in many of today’s policies, and in future scenarios especially around the use of personal data and the Digital Economy, it seems to empower the State and diminish citizens’ autonomy.
Technology should enable greater collaboration, but current tech in education policy focuses too little on use on children’s own terms, and too heavily on top-down monitoring: of scoring, screen time, and search terms. Further restrictions through Age Verification are coming, and may restrict access to and reduce participation in online services if not done well.
Infrastructure weakness is letting down skills training: University Technical Colleges (UTCs) are not popular and are failing to fill places. There is a lack of an overarching, area-wide strategic plan for pupils in which UTCs play a part. Local Authorities played an important part in regional planning, a role that needs to be restored to ensure joined-up local thinking.
How do we ensure women are not left behind?
The final question of the evening asked how women will be affected by Brexit and the changing job market. Part of the overall risk, the panel concluded, relates to [the lack of] equal pay. But where are the assessments of the gendered effects in the UK of:
community structural change and intra-family support, and their effect on demand for social care
tech solutions in response to a lack of human interaction and staffing shortages, including robots in the home and telecare
the disproportionate drop-out from work due to unpaid care roles, and the difficulty of getting back in after a break
the roles and types of work most likely to be affected or replaced by machine learning and robots
and how women will, or will not, be empowered socially by technology?
In education we quickly need to respond to the known data showing where women are already being left behind. The attrition rate in teaching in England after two to three years, for example, is poor and getting worse. What will government do to keep teachers teaching? Their value as role models is not captured in pupils’ exam results based entirely on knowledge transfer.
Our GCSEs this year go back to purely exam-based testing and remove applied coursework marking; practitioners say this is likely to mean lower attainment for girls than boys, and likely to leave girls behind at an earlier age.
“There is compelling evidence to suggest that girls in particular may be affected by the changes — as research suggests that boys perform more confidently when assessed by exams alone.”
Jennifer Tuckett spoke about what fairness might look like for female education in the Creative Industries. From school-leavers to returning mothers to older women retraining, appreciating the effects of gender in education is intrinsic to the future jobs market.
We also need broader public understanding of the feedback loop between technology and the process and delivery of teaching itself. As school management becomes increasingly important and remains male-dominated, how will changes in teaching affect women disproportionately? Fact delivery and testing can be done by machine, and support the current policy direction, but can a computer create a love of learning and teach humans how to think?
“There is an opportunity for a holistic synthesis of research into gender, the effect of tech on the workplace, the effect of technology on care roles, risks and opportunities.”
Delivering education that ensures women are not left behind also means not letting the women entering education as teenagers now be led down routes without thinking about what they want and need in future, regardless of work.
Education must adapt to changed employment markets, and to the social and community effects of Brexit. If it does not, barriers will become embedded: geographical, economic, language, familial, skills-based, and social exclusion.
In short
In summary, what is the government’s Brexit vision? We must know what they see five, ten, and twenty-five years ahead, set against an understanding of the landscape as-is, in order to peg other policy to it.
With this foundation, what we know and what we estimate we don’t know yet can be planned for.
Once we know where we are going in policy, we can do a fit-gap analysis to map how to get people there.
Estimate which skills gaps need to be filled and which do not. Where will change be hardest?
Change is not new. But there is potential today for massive, lasting economic and social damage to our young people. Government is hindered by short-term political thinking, but it has a long-term responsibility to ensure children are not mis-educated because policy and the future environment are not aligned.
We deserve public, transparent, informed debate to plan our lives.
We enter the unknown of the education triangle (Brexit, underfunding, divisive structural policy) at our peril for the next ten years and beyond, without appropriate adjustment of pre-Brexit legislation and policy plans for the new world order.
The combined negative effects on employment, at scale and at pace, must be assessed with urgency, not by big tech, which will profit, but with an eye on future fairness and the public economic and social good. Academy sponsors, decision-makers in curriculum choices, and schools with limited funding have no incentive to look to the wider world.
If we’re going to go it alone, we had better be robust as a society, and that can’t be just some of us, and it can’t only be about skills seen as having a tangible output.
All this discussion is framed by the premise that education’s aim is to prepare a future workforce for work, and that it is sustainable.
Policy is increasingly based on work that is measured by economic output. We must not leave out or behind those who do not or cannot work, or whose work is unmeasured yet contributes to the world.
‘The only future worth building includes everyone,’ said the Pope in a recent TedTalk.
What kind of future do you want to see yourself living in? Will we all work or will there be universal basic income? What will happen on housing, an ageing population, air pollution, prisons, free movement, migration, and health? What will keep communities together as their known world in employment, and family life, and support collapse? How will education enable children to discover their talents and passions?
Human beings are more than what we do. The sense of a country of who we are and what we stand for is about more than our employment or what we earn. And we cannot live on slogans alone.
Who we in the UK think we will be after Brexit needs real and substantial answers. What are we going to *do* and *be* in the world?
Without this vision, any mandate voted for on June 9th will be made in the dark and open to future objection writ large. ‘We’ must be inclusive, based on consensus, not simply on a ‘mandate’.
Only with a clear vision of how all these facets fit together, in a model of how we will grow in all senses, will we be able to answer the question: is education preparing us [all] for the jobs of the future?
More than this, we must ask if education is preparing people for the lack of jobs, for changing relationships in our communities, with each other, and with machines.
Change is coming, Brexit or not. But Brexit has exacerbated the potential to miss opportunities, embed barriers, and see negative side-effects from changes already underway in employment, in an accelerated timeframe.
If our education policy today is not gearing up to that change, we must.
Notes and thoughts from Full Fact’s event at Newspeak House in London on 27/3 to discuss fake news, the misinformation ecosystem, and how best to respond. The recording is here. The contributions and questions part of the evening began at 55:55.
What is fake news? Are there solutions?
1. Clickbait: celebrity pull to draw visitors to a site and drive traffic to an advertising model – kill the business model
2. Mischief makers: Deceptive with hostile intent – bots, trolls, with an agenda
3. Incorrectly held views: ‘vaccinations cause autism’ despite the evidence to the contrary. How can facts reach people who only believe what they want to believe?
Why does it matter? The scrutiny of people in power matters – to politicians, charities, think tanks – as well as the public.
It is fundamental to remember that we do, in general, believe the public has a sense of discernment; however, there is also a disconnect between an objective truth and some people’s perception of reality. Can this conflict be resolved? Is it necessary to do so? If yes, when is it necessary, and who decides?
There is a role for independent tracing of unreliable information, its sources and its distribution patterns and identifying who continues to circulate fake news even when asked to desist.
Transparency about these processes is in the public interest.
Overall, there is too little public understanding of how technology and online tools affect behaviours and decision-making.
The Role of Media in Society
How do you define the media?
How can average news consumers distinguish self-made and distributed content from established news sources?
What is the role of media in a democracy?
What is the mainstream media?
Does the media really represent what I want to understand? > Does the media play a role in failure of democracy if news is not representative of all views? > see Brexit, see Trump
What are news values and do we have common press ethics?
New problems in the current press model:
Failure of the traditional media organisations in fact checking: part of the problem is that credible media are under incredible pressure to compete for their share of advertising money.
Journalism is under-resourced. Verification skills are lacking and the tools can be time-consuming; techniques like reverse image search and other verification methods take effort.
Press releases with numbers can be less easily scrutinised, so how do we ensure misinformation does not spread through poor journalism?
What about confirmation bias and reinforcement?
What about friends’ behaviours? Can and should we try to break these links if we are not getting a fair picture? The Facebook representative was keen to push responsibility for the bubble entirely to users’ choices. Is this fair given the opacity of the model?
Have we cracked the bubble of self-reinforcing stories being the only stories that mutual friends see?
Can we crack the echo chamber?
How do we start to change behaviours? Can we? Should we?
The risk is that if people start to feel nothing is trustworthy, we trust nothing. This harms relations between citizens and state, organisations and consumers, professionals and public and between us all. Community is built on relationships. Relationships are built on trust. Trust is fundamental to a functioning society and economy.
Is it game over?
Will Moy assured the audience that there is no need to descend into blind panic and there is still discernment among the public.
Then, it was asked, is perhaps part of the problem that the Internet is incapable in its current construct to keep this problem at bay? Is part of the solution re-architecturing and re-engineering the web?
What about algorithms? Search engines start with word frequency and neutral decisions but are now much more nuanced and complex. We really must see how systems decide what is published. Search engines provide but also restrict our access to facts and ‘no one gets past page 2 of search results’. Lack of algorithmic transparency is an issue, but will not be solved due to commercial sensitivities.
Fake news creation can be lucrative. Management models that rely on user moderation or comments to give balance can be gamed.
Are there appropriate responses to the grey area between trolling and deliberate deception through fake news that is damaging? In what context and background? Are all communities treated equally?
The question came from the audience whether the panel thought regulation would come from the select committee inquiry. The general response was that it was unlikely.
What are the solutions?
The questions I came away thinking about went unanswered, because I am not sure there are solutions as long as the current news model exists and is funded in the current way by current players.
I believe one of the things that permits fake news is the growing imbalance of money between the big global news distributors and independent and public interest news sources.
This loss of balance reduces our ability to decide for ourselves what we believe and what matters to us.
The monetisation of news, through its packaging in between advertising, has surely contaminated the news content itself.
Think of a Facebook promoted post – you can personalise your audience to a set of very narrow and selective characteristics. The bubble that receives that news is already likely to be connected by similar interest pages and friends, and the story becomes self-reinforcing, showing up in friends’ timelines.
A modern online newsroom moves content around the webpage according to what is getting the most views, and lists of trending topics encourage viewers to see what other people are reading; again, these are self-reinforcing.
There is also a lack of transparency of power. Where we see a range of choices from which we may choose to digest a range of news, we often fail to see one conglomerate funder which manages them all.
The discussion didn’t address at all the fundamental shift in “what is news” which has taken place over the last twenty years. In part, I believe the responsibility for the credibility level of fake news in viewers lies with 24/7 news channels. They have shifted the balance of content from factual bulletins, to discussion and opinion. Now while the news channel is seen as a source of ‘news’ much of the time, the content is not factual, but opinion, and often that means the promotion and discussion of the opinions of their paymaster.
Most simply, how should I answer the question that my ten year old asks – how do I know if something on the Internet is true or not?
Can we really say it is up to the public to each take on this role and where do we fit the needs of the vulnerable or children into that?
Is the term fake news the wrong approach, something to move away from? Can we move solutions away from the target-fixation of ‘stop fake news’, which is impossible online, and towards addressing the problems that fake news causes?
Interference in democracy. Interference in purchasing power. Interference in decision making. Interference in our emotions.
These interferences with our autonomy are not something the web itself is responsible for, but the people behind the platforms must be accountable for how their technology works.
In the meantime, what can we do?
“if we ever want the spread of fake news to stop we have to take responsibility for calling out those who share fake news (real fake news, not just things that feel wrong), and start doing a bit of basic fact-checking ourselves.” [IB Times, Eliot Higgins is the founder of Bellingcat]
Not everyone has the time or capacity to do that. As long as today’s imbalance of money and power exists, truly independent organisations like Bellingcat and Full Fact have untold value.
The billed Google and Twitter speakers were absent because they were invited to a meeting with the Home Secretary on 28/3. Speakers were Will Moy, Director of Full Fact; Jenni Sargent, Managing Director of firstdraftnews; and Richard Allan, Facebook EMEA Policy Director. The event was chaired by Bill Thompson.