Update received from Edmodo, VP Marketing & Adoption, June 1:
While everyone is focused on #WannaCry ransomware, a global edTech company appears to have had a data breach that few are yet talking about.
Edmodo is still claiming on its website that it is “The safest and easiest way for teachers to connect and collaborate with students, parents, and each other.” But is it true, and who verifies that safe is safe?
Edmodo data from 78 million users for sale
Matt Burgess wrote in VICE: “Education website Edmodo promises a way for “educators to connect and collaborate with students, parents, and each other”. However, 78 million of its customers have had their user account details stolen. Vice’s Motherboard reports that usernames, email addresses, and hashed passwords were taken from the service and have been put up for sale on the dark web for around $1,000 (£700).
“Data breach notification website LeakBase also has a copy of the data and provided it to Motherboard. According to LeakBase around 40 million of the accounts have email addresses connected to them. The company said it is aware of a “potential security incident” and is investigating.”
The Motherboard article, by Joseph Cox, says the breach happened last month. What has been done since? Why is there no public information or notification about the breach on the company website?
Joseph doesn’t think profile photos are at risk, unless someone can log into an account. He was given usernames, email addresses, and hashed passwords, and as far as he knows, that was all that was stolen.
“The passwords have apparently been hashed with the robust bcrypt algorithm, and a string of random characters known as a salt, meaning hackers will have a much harder time obtaining user’s actual login credentials. Not all of the records include a user email address.”
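To make the quoted point concrete, here is a minimal sketch of bcrypt hashing with a per-record salt, using the Python bcrypt package. It is an illustration only; nothing here reflects Edmodo’s actual implementation.

```python
# A minimal sketch of bcrypt hashing with a per-password salt, using the
# Python "bcrypt" package (illustrative only; not Edmodo's implementation).
import bcrypt

password = b"correct horse battery staple"

# gensalt() produces a random salt; the work factor (rounds) makes each
# guess expensive for an attacker who obtains a dump of the hashes.
salt = bcrypt.gensalt(rounds=12)
hashed = bcrypt.hashpw(password, salt)

# The salt is stored inside the hash string itself, so verification only
# needs the stored hash and the candidate password.
print(bcrypt.checkpw(password, hashed))        # True
print(bcrypt.checkpw(b"wrong guess", hashed))  # False
```

Because the salt differs per record and the work factor slows each guess, an attacker holding the stolen hashes has to attack every password individually rather than using precomputed tables.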
So far I’ve been unable to find out from Edmodo directly. There is no telephone technical support. There is no human who can be reached by dialling the headquarters telephone number.
Where’s the parental update?
No one has yet responded to say whether UK pupils’ and teachers’ data was among that reportedly stolen. (Update, June 1: the company did respond, confirming that UK users were involved.)
While there is no mention of the other data the site holds being part of the breach, details are as yet sketchy, and Edmodo holds children’s data. Where is the company’s assurance about what was and was not stolen?
As it’s a platform log-on, I would want to know when parents will be told exactly what was compromised and how the details were exposed. I would want clarification on whether or not this could be a weakness enabling further breaches of other integrated systems.
Are edTech and IoT toys fit for UK children?
In 2016, more than 727,000 UK children had their information, including images, compromised following a cyber attack on VTech. These toys are sold as educational, even if targeted at an early age.
In spring 2017, CloudPets, maker of Internet of Things teddy bears marketed as “smart toys”, left more than two million voice recordings of children online without any security protections, exposing children’s personal details.
So far, UK ministers have declined our civil society recommendations to act on the security of national pupil data in the public sector, or on the security of Internet-connected toys and things in the private sector; the latter in line with Germany, for example.
It is right that the approach should be considered. The UK government must take these risks seriously in an evidence-based and informed way, and act, not with knee-jerk reactions. But it must act.
Two months after Germany banned the Cayla doll, it was still on sale here.
Parents are often accused of being uninformed, but we must be able to expect that products pass a minimum standard of technical and data security testing as part of pre-sale consumer safety testing.
Parents have a responsibility to educate themselves to a reasonable level of user knowledge. But the opportunities are limited when there’s no transparency. Much of how a child’s personal data is used, and how system data interacts with our online behaviour, in toys, connected things, and even plain websites, remains hidden from most of us.
The Edmodo privacy policy, for example, contained no mention of profiling or behavioural web tracking. Only when this savvy parent spotted it happening did the company, it appears, respond properly and fix it. Given strict COPPA rules that is perhaps unsurprising, though it shouldn’t have happened at all.
How will the uses of these smart toys, and edTech apps be made safe, and is the government going to update regulations to do so?
Are public sector policy, practice and people, fit for managing UK children’s data privacy needs?
While these private edTech companies used directly in schools can expose children to risk, so too does public data collected in schools and handed out to commercial companies by government departments. Our UK government does not model good practice.
Two years on, I’m still asking for fixes and basic improvements to national pupil data handling. For safe data policy, this is far too slow.
These uses of data are not safe, and expose children to greater potential theft, loss and sale of their personal data. It must change.
Whether the government hands out children’s data to commercial companies at national level and doesn’t tell schools, or staff in schools do it directly through in-class app registrations, it is often done without consent, and without any privacy impact assessment or due diligence up front. Some send data to the US or Australia. Schools still tell parents these are ‘required’, without any choice. But have they ensured that an equal and adequate level of data protection is offered to the personal data they extract from the school information management systems (SIMS)?
School staff and teachers manage, collect and administer personal data daily, including signing children up as users of web accounts with technology providers, very often telling parents after the event, and with no choice. How can they do this and not put others at risk, if they are untrained in the basics of good data handling practices?
In UK schools, just as in the health system, the basics are still not being fixed, nor good practices offered to staff. Teachers in the UK get no data privacy or data protection training in their initial teacher training. That’s according to what I’ve been told so far by teacher trainers, CPD leaders, union members and teachers themselves.
Would you train fire fighters without ever letting them have hose practice?
Infrastructure is known to be exposed and under-invested, but it’s not all about the tech. Security investment must also be in people.
The systemic failures revealed this week by WannaCry are not limited to the NHS. This, from George Danezis, could with a few tweaks be copy-pasted into education. So the question is not if, but when, the same happens in education, unless it is fixed.
“…from poor security standards in health informatics industries; poor procurement processes in health organizations; lack of liability on any of the software vendors (incl. Microsoft) for providing insecure software or devices; cost-cutting from the government on NHS cyber security with no constructive alternatives to mitigate risks; and finally the UK/US cyber-offense doctrine that inevitably leads to proliferation of cyber-weapons and their use on civilian critical infrastructures.” [Original post]
Time and again, thinking and discussion about these topics is siloed. At the Turing Institute, the Royal Society, the ADRN and the EPSRC, in government departments, and in education practitioner and public circles, we are all having similar discussions about data and ethics, but with little ownership and no goals for future outcomes. If government doesn’t get it, or have time for it, or policy lacks ethics by design, is it in the public interest for private companies, Google et al., to offer a fait accompli?
There is lots of talk about Machine Learning (ML), Artificial Intelligence (AI) and ethics. But what is being done to ensure that real values (respect for rights, human dignity, and autonomy) are built into practice in public service delivery?
In most recent data policy it is entirely absent. Section 33 of the Digital Economy Act risks enabling, through the removal of inter- and intra-departmental data protections, an unprecedented expansion of public data transfers, with “untrammelled powers”: powers without the codes of practice promised over a year ago. That has fallout for the trustworthiness of the legislative process, and for data practices across public services.
Predictive analytics is growing, but it is poorly understood both by the public and in the public sector.
There is already dependence on computers in aspects of public sector work. Their interaction with people in sensitive situations demands better knowledge of how systems operate and how they can be wrong; debt recovery and social care, to take two known examples.
Risk-averse, staff appear to choose not to question the outcome of ‘algorithmic decision-making’, or do not have the ability to do so. There is reportedly no analytical training for practitioners to understand the basis or bias of conclusions. The risk is that instead of making us more informed, decision-making by machine makes us humans less clever.
What does it do to professionals if they therefore feel less empowered? When is that a good thing, if it overrides discriminatory human decisions? How can we tell the difference and balance these risks if we don’t understand them or feel able to challenge them?
In education, what is it doing to children whose attainment is profiled, predicted, and acted on to target more or less focus from school staff who have no ML training, and without the informed consent of pupils or parents?
If authorities use data in ways the public do not expect, such as to identify homes of multiple occupancy without informed consent, they will fail to deliver future uses for good. The ‘public interest’, ‘user need’ and ethics can come into conflict, according to your point of view. The public, data protection law and ethics all object to harms from uses of data. This type of application has the potential to be mind-blowingly invasive and to reveal all sorts of other findings.
Widely informed thinking must be made into meaningful public policy for the greatest public good
Our politicians are caught up in the General Election and buried in Brexit.
Meanwhile, the commercial companies taking first rights to AI to capitalise on existing commercial advantage could potentially strip public assets, use up our personal data and public trust, and leave the public with little public good. We are already used by global data players, and by machine-learning companies, without our knowledge or consent. That knowledge can be used to profit business models that pay little tax into the public purse.
There are valid macroeconomic arguments about whether private spending and investment are preferable to a state’s ability to do the same. But these companies make more than enough to do it all. Does not paying just amounts of tax signal a failure of commitment to the wider community, and is it a red flag for a company’s commitment to the public good?
What that public good should look like depends on who is invited into the room, and not to tick boxes, but to think and to build.
The Royal Society’s Report on AI and Machine Learning published on April 25, showed a working group of 14 participants, including two Google DeepMind representatives, one from Amazon, private equity investors, and academics from cognitive science and genetics backgrounds.
If we are going to form objective policies, the inputs that form their basis must be informed, but must also be well balanced, and be seen to be balanced; not as an add-on, but in the same room.
As Natasha Lomas in TechCrunch noted, “Public opinion is understandably a big preoccupation for the report authors — unsurprisingly so, given that a technology that potentially erodes people’s privacy and impacts their jobs risks being drastically unpopular.”
“The report also calls on researchers to consider the wider impact of their work and to receive training in recognising the ethical implications.”
What are those ethical implications? Who decides which matter most? How do we eliminate recognised discriminatory bias? What should data be used for and AI be working on at all? Who is it going to benefit? What questions are we not asking? Why are young people left out of this debate?
Who decides what the public should or should not know?
AI and ML depend on data. Data is often talked about as a panacea for the problems of better working together. But data alone does not make people better informed, in the same way that people fail if they don’t feel it is their job to pick up the fax. A fundamental building block of our future public and private prosperity is understanding data and how we, and the AI, interact. What is the data telling us, how do we interpret it, and how do we know it is accurate?
How and where will we start to educate young people about data and ML, if not about their own data and its use by government and commercial companies?
The whole of Chapter 5 of the report is a very good starting point for policy makers who have not yet engaged in the area. Privacy, while summed up too briefly in the conclusions, is addressed throughout.
Blind spots remain, however.
An over-willingness to accommodate existing big private players, as their expertise leads design and development, and a desire to ‘re-write regulation’.
Slowness to react with needed regulation in the public sector (caught up in Brexit), while commercial drivers and technology change forge ahead.
‘How do we develop technology that benefits everyone’ must not only think of the UK, but of the global South, especially given the bias in how AI is being taught, and broad socio-economic barriers to application.
Predictive analytics and professional application: an unwillingness to question the computer result. In children’s social care this is already contributing to a damaging upturn in the family courts (s31).
Data and technology knowledge and ethics training must be embedded across the public sector, not only for postgraduate students in machine learning.
Young people are left out of discussions which, after all, are about their future. [They might have some of the best ideas; we miss them at our peril.]
There is no time to waste
Children and young people have the most to lose while their education, skills, jobs market, economy, culture, care, and society go through a series of gradual but seismic shifts in purpose, culture, and acceptance before finding new norms post-Brexit. They will also gain the most if the foundations are right. One of these must be getting age verification right in GDPR, not allowing it to enable a massive data grab that erodes child and parent privacy.
Although the RS Report considers young people in the context of a future workforce who need skills training, they are otherwise left out of this report.
“The next curriculum reform needs to consider the educational needs of young people through the lens of the implications of machine learning and associated technologies for the future of work.”
Yes, it does, but it must give young people, and the implications of ML for their future, broader consideration than the classroom or workplace.
We are not yet talking about the effects of teaching technology to learn, and its effects on public services and interactions with the public; questions that Sam Smith asked in Shadow of the smart machine: Will machine learning end?
At the end of this Information Age we are at a point where machine learning, AI and biotechnology are potentially life-enhancing or could have catastrophic effects, if indeed “AI will cause people more pain than happiness”, as described by Alibaba’s founder Jack Ma.
The conflict between commercial profit and public good, what commercial companies say they will do and actually do, and fears and assurances over predicted outcomes is personified in the debate between Demis Hassabis, co-founder of DeepMind Technologies, (a London-based machine learning AI startup), and Elon Musk, discussing the perils of artificial intelligence.
Vanity Fair reported that, “Elon Musk began warning about the possibility of A.I. running amok three years ago. It probably hadn’t eased his mind when one of Hassabis’s partners in DeepMind, Shane Legg, stated flatly, “I think human extinction will probably occur, and technology will likely play a part in this.””
Musk was of the opinion that A.I. was probably humanity’s “biggest existential threat.”
We are not yet joining up multi-disciplinary and cross-sector discussions of threats and opportunities
Jobs, the shift in skill sets education needs to deliver, how we think, interact, value each other, and accept or reject ownership and power models; and later, threats from the technology itself. Nor are we yet talking, conversely, about the opportunities that these seismic shifts offer in real terms, or how and why to accept, reject or regulate them.
Where private companies are taking over personal data given in trust to public services, it is reckless for the future of public interest research to assume there is no public objection. How can we object, if not asked? How can children make an informed choice? How will public interest be assured to be put ahead of private profit? If it is intended on balance to be all about altruism from these global giants, then they must be open and accountable.
Private companies are shaping how and where we find machine learning and AI gathering data about our behaviours in our homes and public spaces.
SPACE10, an innovation hub for IKEA, is currently running a survey on how the public perceives AI and “wants their AI to look, be, and act”, with an eye on building AI into its products, flat-pack for us to bring into our houses.
As the surveillance technology built into the Internet-connected Things in our homes becomes more integral to daily life, authorities are now using it to gather evidence in investigations: from mobile phones, laptops, social media, smart speakers, and games. The IoT so far seems less about the benefits of collaboration, and all about the behavioural data it collects and uses to target us and sell us more things. Our behaviours tell much more than how we act. They show how we think inside the private space of our minds.
Do you want Google to know how you think, and to have control over that? The companies of the world that have access to massive amounts of data are now using that data to teach AI how to ‘think’. What is AI learning? And how much should the State see or know about how you think, or try to predict it?
Who cares, wins?
It is not overstated to say that society, and the future public good of public services, depends on getting these co-dependencies right. As I wrote at the time of care.data, the economic value of data, personal rights and the public interest are not opposed to one another, but have synergies and co-dependency. One player getting it wrong can create harm for all. Government must start to care about this, beyond the side effect of saving political embarrassment.
Without joining up all aspects, we cannot limit harms and make the most of benefits. There is nuance and there are unknowns. There is opaque decision-making and secrecy, packaged in the wording of commercial sensitivity, and behind it people who can be brilliant but who are, at the end of the day, also human, with all our strengths and weaknesses.
And we can get this right, if data practices get better, with joined up efforts.
Our future society, like our present one, is based on webs of trust, on our social networks on- and offline, that enable our business, our education, our culture, and our interactions. Children must trust they will not be used by systems. We must build trustworthy systems that enable future digital integrity.
The immediate harm that comes from blind trust in AI companies is not their AI, but the hidden powers that commercial companies have to nudge public and policy maker behaviours and acceptance, towards private gain. Their ability and opportunity to influence regulation and future direction outweighs most others. But lack of transparency about their profit motives is concerning. Carefully staged public engagement is not real engagement but a fig leaf to show ‘the public say yes’.
The unwillingness by Google DeepMind, when asked at their public engagement event, to discuss their past use of NHS patient data, or the profit model plan or their terms of NHS deals with London hospitals, should be a warning that these questions need answers and accountability urgently.
Companies that have already extracted and benefited from personal data in the public sector, have already made private profit. They and their machines have learned for their future business product development.
A transparent accountable future for all players, private and public, using public data is a necessary requirement for both the public good and private profit. It is not acceptable for departments to hide their practices, just as it is unacceptable if firms refuse algorithmic transparency.
“Rebooting antitrust for the information age will not be easy. It will entail new risks: more data sharing, for instance, could threaten privacy. But if governments don’t want a data economy dominated by a few giants, they will need to act soon.” [The Economist, May 6]
If the State creates a single data source of truth, or giant private tech thinks it can side-step regulation, and gets it wrong, their practices screw up public trust. It harms public interest research, and with it our future public good.
But will they care?
If we care, then across public and private sectors we must cherish shared values and better collaboration. Embed ethical human values into development, design and policy. Ensure transparency over where, how, to whom and why my personal data has gone.
We must ensure that as the future becomes “smarter”, we educate ourselves and our children to stay intelligent about how we use data and AI.
We must start today, knowing how we are used by both machines, and man.
Is Education preparing us for the jobs of the future?
The panel talked about changing social and political realities. We considered the effects on employment. We began discussing how those changes should feed into education policy and practice today. It is a discussion that should be had by the public. So far, almost a year after the Referendum, the UK government has yet to say what post-Brexit Britain might look like. Without a vision, any mandate for the unknown, if voted for on June 9th, will be meaningless.
What was talked about and what should be a public debate:
What jobs will be needed in the future?
Post Brexit, what skills will we need in the UK?
How can the education system adapt and improve to help future generations develop skills in this ever changing landscape?
How do we ensure women [and anyone else] are not left behind?
Brexit is the biggest change management project I may never see.
As the State continues making and remaking laws, reforming education, and starts exiting the EU, all in parallel, technology and commercial companies won’t wait to see what the post-Brexit Britain will look like. In our state’s absence of vision, companies are shaping policy and ‘re-writing’ their own version of regulations. What implications could this have for long term public good?
What will be needed in the UK future?
A couple of sentences from Alan Penn have stuck with me all week. Loosely quoted: we’re seeing cultural identity shift across the country, due to changes in the types of employment available. Traditional industries once ran in a family, with a strong sense of heritage. New jobs don’t offer that. It leaves a gap we cannot fill with “I’m a call centre worker”. And this change is unevenly felt.
There is no tangible public plan in the Digital Strategy for dealing with that change in the employment market over the coming 10 to 20 years, and what it means for education. It matters when many believe, as do these authors in Scientific American, that “around half of today’s jobs will be threatened by algorithms. 40% of today’s top 500 companies will have vanished in a decade.”
So what needs thought?
Analysis of what that regional jobs market might look like should be a public part of the Brexit debate and these elections →
We need to see those goals, to ensure policy can be planned for education and benchmark its progress towards achieving its aims
Brexit and technology will disproportionately affect different segments of the jobs market and therefore the population by age, by region, by socio-economic factors →
Education policy must therefore address aspects of skills looking to the future towards employment in that new environment, so that we make the most of opportunities, and mitigate the harms.
Brexit and technology will disproportionately affect communities → What will be done to prevent social collapse in regions hardest hit by change?
Where are we starting from today?
Before we can understand the impact of change, we need to understand what the present looks like. I cannot find a map of what the English education system looks like. No one I ask seems to have one or have a firm grasp across the sector, of how and where all the parts of England’s education system fit together, or their oversight and accountability. Everyone has an idea, but no one can join the dots. If you have, please let me know.
Nothing is as constant in education as change: in laws, in policy, and in their effects in practice. So I shall start there.
1. Legislation
In retrospect it was a fatal flaw, missed in post-Referendum battles over who wrote what on the side of a bus, that no one did an assessment of education [and indeed other] ‘legislation in progress’. Recommendations should have been made on scrapping inappropriate government bills in their entirety or in part. New laws are now being enacted, rushed through in wash-up, that are geared to our old status quo, and we risk basing policy only on what we know from the past, because on that, we have data.
In the timeframe that Brexit will become tangible, we will feel the effects of the greatest shake up of Higher Education in 25 years. Parts of the Higher Education and Research Act, and Technical and Further Education Act are unsuited to the new order post-Brexit.
What it will do: The new HE law encourages competition between institutions, and the TFE Act centred in large part on how to manage insolvency.
What it should do: Policy needs to promote open, collaborative networks if scholarly communities are to thrive within a now reduced research and academic circle.
Recent legislation has meant not only restructuring, but a repurposing of what education [authorities] is expected to offer.
A new Statutory Instrument, The School and Early Years Finance (England) Regulations 2017, makes music, arts and playgrounds items ‘that may be removed from maintained schools’ budget shares’.
How will this withdrawal of provision affect skills starting from the Early Years throughout young people’s education?
2. Policy
Education policy, if it continues along the grammar school path, will divide communities into the ‘passed’ and the ‘unselected’. A side effect of selective schooling, a feature or a bug depending on your point of view, is socio-economic engineering. It builds class walls in the classroom, while others, like Fabian Women, say we should be breaking through glass ceilings. Current policy in a wider sense is creating an environment that is hostile to human integration. It creates division across the entire education system for children aged 2–19.
The curriculum is narrowing, according to staff I’ve spoken to recently, as a result of measurement focus on Progress 8, and due to funding constraints.
What effect will this have on analysis of knowledge, discernment, the ability to assess when computers have made a mistake or supplied misinformation, and the application of wisdom: skills that today still distinguish human from machine learning?
What narrowing the curriculum does: Students have fewer opportunities to discover their skill set, limiting opportunities for developing social skills and cultural development, and their development as rounded, happy, human beings.
What we could do: Promote a long-term love of learning, in and outside school and in communities. Reinvest in the arts, music and play, which support mental and physical health and create a culture in which people like to live as well as work. Funding for libraries and community centres must be re-prioritised, ensuring inclusion and provision outside school for all abilities.
Austerity builds barriers to access to opportunity and skills. Children who cannot afford to pay are excluded from extra-curricular classes. We already divide our children, through private and state education, into those who have better facilities and funding to enjoy and explore a fully rounded education, and those whose funding will not stretch much beyond the bare curriculum. For SEN children, that has already been stripped back further.
Existing barriers are likely to become entrenched in twenty years. What does it do to society, if we are divided in our communities by money, or gender, or race, and feel disempowered as individuals? Are we less responsible for our actions if there’s nothing we can do about it? If others have more money, more power than us, others have more control over our lives, and “no matter what we do, we won’t pass the 11 plus”?
Without joined-up scrutiny of these policy effects across the board, we risk embedding these barriers into future planning. Today’s data are used to train “how the system should work”. If current data are what applicants in 5 years will base future expectations on, will their decisions be objective and will in-built bias be transparent?
3. Sociological effects of legislation.
It’s not only institutions that will lose autonomy in the Higher Education and Research Act.
At present, the risk to the autonomy of science and research is theoretical — but the implications for academic freedom are troubling. [Nature 538, 5 (06 October 2016)]
The Secretary of State for Education now also has new Powers of Information about individual applicants and students. Combined with the Digital Economy Act, the law can ride roughshod over students’ autonomy and consent choices. Today they can opt out of UCAS automatically sharing their personal data with the Student Loans Company, for example. Thanks to these new powers, combined with the Digital Economy Act, that’s gone.
The Act further includes the intention to make institutions release more data about course intake and results under the banner of ‘transparency’. Part of the aim is indisputably positive, to expose discrimination and inequality of all kinds. It also aims to make the £ cost-benefit return “clearer” to applicants — by showing what exams you need to get in, what you come out with, and then by joining all that personal data to the longitudinal school record, tax and welfare data, you see what the return is on your student loan. The government can also then see what your education ‘cost or benefit’ the Treasury. It is all of course much more nuanced than that, but that’s the very simplified gist.
This ‘destinations data’ is going to be a dataset we hear ever more about and has the potential to influence education policy from age 2.
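For illustration only, here is a hypothetical sketch of the kind of linkage described above: joining attainment records to later earnings via a shared pupil identifier. The column names, values and the pandas approach are all invented for this example; this is not the Department for Education’s actual methodology.

```python
# A hypothetical sketch of 'destinations'-style data linkage: join course
# records to later earnings on a shared pupil identifier, then summarise.
# All names and numbers are invented for illustration.
import pandas as pd

admissions = pd.DataFrame({
    "pupil_id": [1, 2, 3, 4],
    "course": ["History", "History", "Physics", "Physics"],
    "entry_grades": ["AAB", "ABB", "A*AA", "AAB"],
})
earnings = pd.DataFrame({
    "pupil_id": [1, 2, 3, 4],
    "earnings_5yrs_after_graduation": [24000, 26000, 31000, 33000],
})

# The linkage step: one row per pupil, combining course and earnings data.
linked = admissions.merge(earnings, on="pupil_id")

# A 'destinations'-style summary: median later earnings by course.
print(linked.groupby("course")["earnings_5yrs_after_graduation"].median())
```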
Aside from the issue of personal data disclosiveness when published by institutions — we already know of individuals who could spot themselves in a current published dataset — I worry that this direction using data for ‘advice’ is unhelpful. What if we’re looking at the wrong data upon which to base future decisions? The past doesn’t take account of Brexit or enable applicants to do so.
Researchers [and applicants, the year before they apply or start a course] will be looking at what *was*: predicted and achieved qualifying grades, the make-up of the class, course results, first job earnings. What was, for other people, is at least 5 years old by the time it’s looked at. Five years is a long time out of date.
4. Change
Teachers and schools reached saturation point on handling change some time ago. Reform over the last 5 years has been drastic, in structures, in curriculum, and ongoing in funding. There is no ongoing teacher training, and the lack of CPD take-up is exacerbated by underfunding.
Teachers are fed up with change. They want stability. But contrary to the current “strong and stable” message, the reality is that ahead we will get anything but, and we must instead manage change if we are to thrive. Politically, we will see backlash when ‘stable’ proves undeliverable.
But teaching has not seen ‘stable’ for some time. Teachers are asking for fewer children and more cash in the classroom. Unions talk of a focus on learning, not testing, to drive school standards. If the planned restructuring of funding happens, how will it affect staff retention?
We know schools are already reducing staff. How will this affect employment, adult and children’s skill development, their ambition, and society and economy?
Where could legislation and policy look ahead?
What are the big Brexit targets and barriers and when do we expect them?
How is the fallout from underfunding and the reduction of teaching staff expected to affect skills provision?
State education policy is increasingly hands-off. What is the incentive for local schools or MATs to look much beyond the short term?
How do local decisions ensure education is preparing their community, while also considering society, health and (elderly) social care, post-Brexit readiness and women’s economic empowerment?
How does our ageing population shift in the same time frame?
How can the education system adapt?
We need to talk more about the other changes happening in the system in parallel to Brexit, and join the dots with the potential positive and harmful effects of technology.
Gender plays a role here too, as does mitigating discrimination of all kinds and confirmation bias, and even, in the tech itself, whether AI, for example, is going to be better than us at decision-making if we teach it to be biased.
Dr Lisa Maria Mueller talked about the effects and influence of age, setting and language factors on what skills we will need, and employment. While there are certain skills sets that computers are and will be better at than people, she argued society also needs to continue to cultivate human skills in cultural sensitivities, empathy, and understanding. We all nodded. But how?
To develop all these human skills is going to take investment. Investment in the humans that teach us. Bennie Kara, Assistant Headteacher in London, spoke about school cuts and how they will affect children’s futures.
The future of England’s education must be geared to a world in which knowledge and facts are ubiquitous, and more readily available online than at any other time. And access to learning must be inclusive. That means including SEN and low-income families, the unskilled, everyone. As we become more internationally remote, we must put safeguards in place if we are to support thriving communities.
Policy and legislation must also preserve and respect human dignity in a changing work environment, and review not only what work is on offer, but *how*; the kinds of contracts and jobs available.
Where might practice need to adapt now?
Re-consider curriculum content and its focus on facts. Does success risk being measured on out-of-date knowledge and a measure of recall? Are these skills in growing or dwindling need?
Knowledge focus must place value on analysis, discernment, and application of facts that computers will learn and recall better than us. Much of that learning happens outside school.
Opportunities have been cut, together with funding. We need communities brought back together, if they are not to collapse. Funding centres of local learning, restoring libraries and community centres will be essential to local skill development.
What is missing?
Although Sarah Waite spoke (in a suitably purdah-appropriate tone) about the importance of basic skills in the future labour market, we didn’t get to talking about education preparing us for the lack of jobs in the future, and what that changed labour market will look like.
What skills will *not* be needed? Who decides? If left to companies’ sponsor-led steer in academies, what effects will we see in society?
Discussions of a future education model and technology seem to share a common theme: people seem reduced in their capacity to make autonomous choices. But they share no positive vision.
Technology should empower us, but it seems to empower the State and diminish citizens’ autonomy in many of today’s policies, and in future scenarios, especially around the use of personal data and the Digital Economy.
Technology should enable greater collaboration, but current tech in education policy is focused too little on use on children’s own terms, and too heavily on top-down monitoring: of scoring, screen time, search terms. Further restrictions through Age Verification are coming, and may limit access to and reduce participation in online services if not done well.
Infrastructure weakness is letting down skills training: University Technical Colleges (UTCs) are not popular and are failing to fill places. There is a lack of an overarching, area-wide strategic plan for pupils in which UTCs play a part. Local authorities played an important part in regional planning, which needs to be restored to ensure joined-up local thinking.
How do we ensure women are not left behind?
The final question of the evening asked how women will be affected by Brexit and the changing job market. Part of the overall risk, the panel concluded, relates to [the lack of] equal pay. But where are the assessments of the gendered effects in the UK of:
community structural change and intra-family support and effect on demand for social care
tech solutions in response to lack of human interaction and staffing shortages including robots in the home and telecare
the disproportionate drop out of work, due to unpaid care roles, and difficulty getting back in after a break.
the roles and types of work likely to be most affected or replaced by machine learning and robots
and how will women be empowered or not socially by technology?
In education, we quickly need to respond to the known data on where women are already being left behind now. The attrition rate in teaching in England after two to three years, for example, is poor, and getting worse. What will government do to keep teachers teaching? Their value as role models is not captured in pupils’ exam results based entirely on knowledge transfer.
Our GCSEs this year go back to purely exam-based testing and remove applied coursework marking, and are likely to see lower attainment for girls than boys, say practitioners; likely to leave girls behind at an earlier age.
“There is compelling evidence to suggest that girls in particular may be affected by the changes — as research suggests that boys perform more confidently when assessed by exams alone.”
Jennifer Tuckett spoke about what fairness might look like for female education in the Creative Industries. From school-leaver to returning mother, and retraining older women, appreciating the effects of gender in education is intrinsic to the future jobs market.
We also need broader public understanding of the feedback loop of technology’s impacts on the process and delivery of teaching itself; and as school management becomes increasingly important, and is male-dominated, how will changes in teaching affect women disproportionately? Fact delivery and testing can be done by machine, and support the current policy direction, but can a computer create a love of learning and teach humans how to think?
“There is an opportunity for a holistic synthesis of research into gender, the effect of tech on the workplace, the effect of technology on care roles, risks and opportunities.”
Delivering education to ensure women are not left behind includes avoiding young women going into education now, as teenagers, being led down routes without thinking about what they want and need in future, regardless of work.
Education must adapt to changed employment markets, and to the social and community effects of Brexit. If it does not, barriers will become embedded: geographical, economic, language, familial, skills-based, and social exclusion.
In short
In summary, what is the government’s Brexit vision? We must know what it sees five, 10, and 25 years ahead, set against an understanding of the landscape as it is, in order to peg other policy to it.
With this foundation, what we know and what we estimate we don’t know yet can be planned for.
Once we know where we are going in policy, we can do a fit-gap to map how to get people there.
Estimate which skills gaps need filling and which do not. Where will change be hardest?
Change is not new. But there is current potential for massive long term economic and social lasting damage to our young people today. Government is hindered by short term political thinking, but it has a long-term responsibility to ensure children are not mis-educated because policy and the future environment are not aligned.
We deserve public, transparent, informed debate to plan our lives.
We enter at our peril the unknown of the education triangle of Brexit, underfunding, and divisive structural policy, for the next ten years and beyond, without appropriate adjustment of pre-Brexit legislation and policy plans for the new world order.
The combined negative effects on employment, at scale and at pace, must be assessed with urgency; not by big Tech, who will profit, but with an eye on future fairness, and on public economic and social good. Academy sponsors, decision makers in curriculum choices, and schools with limited funding have no incentive to look to the wider world.
If we’re going to go it alone, we had better be robust as a society, and that can’t be just some of us, and can’t only be about skills seen as having a tangible output.
All this discussion is framed by the premise that education’s aim is to prepare a future workforce for work, and that it is sustainable.
Policy is increasingly based on work that is measured by economic output. We must not leave out or behind those who do not, or cannot, work, or whose work is unmeasured yet contributes to the world.
‘The only future worth building includes everyone,’ said the Pope in a recent TedTalk.
What kind of future do you want to see yourself living in? Will we all work or will there be universal basic income? What will happen on housing, an ageing population, air pollution, prisons, free movement, migration, and health? What will keep communities together as their known world in employment, and family life, and support collapse? How will education enable children to discover their talents and passions?
Human beings are more than what we do. The sense of a country of who we are and what we stand for is about more than our employment or what we earn. And we cannot live on slogans alone.
Who we in the UK think we will be after Brexit needs real and substantial answers. What are we going to *do* and *be* in the world?
Without this vision, any mandate voted for on June 9th will be made in the dark and open to future objection writ large. ‘We’ must be inclusive, based on a consensus, not simply a ‘mandate’.
Only with clear vision for all these facets fitting together in a model of how we will grow in all senses, will we be able to answer the question, is education preparing us [all] for the jobs of the future?
More than this, we must ask if education is preparing people for the lack of jobs, for changing relationships in our communities, with each other, and with machines.
Change is coming, Brexit or not. But Brexit has exacerbated the potential to miss opportunities, embed barriers, and see negative side-effects from changes already underway in employment, in an accelerated timeframe.
If our education policy today is not gearing up to that change, we must.
“If it were up to me I would increase pay and conditions and levels of responsibility and respect significantly, because it is an investment that would pay itself back many times over in the decades to come.”
Don’t use children as ‘measurement probes’ to test schools
What effect does using school exam results to reform the school system have on children? And what effect does it have on society?
The Ofqual report concluded that half of pupils in English Literature, as an example, are not awarded the “correct” grade on a particular exam paper, due to marking inconsistencies and the design of the tests.
Given the complexity and sensitivity of the data, Ofqual concluded, it is essential that the metrics stand up to scrutiny and that there is a very clear understanding behind the meaning and application of any quality of marking. They wrote that, “there are dangers that information from metrics (particularly when related to grade boundaries) could be used out of context.”
Context and accuracy are fundamental to the value of and trust in these tests. And at the moment, trust is not high in the system behind it. There must also be trust in policy behind the system.
This summer, two sets of UK school tests will come under scrutiny: GCSEs and SATs. The goalposts are moving for children and schools across the country. And it’s bad for children and bad for Britain.
Grades A-G will be swapped for numbers 1-9
15-16 year olds sitting GCSEs will see their exams shift to a numerical system, scored from the highest grade 9 down to grade 1, with the top three grades replacing the current A and A*. The alphabetical grading system will be fully phased out by 2019.
The plans intend that roughly the same proportion of students as currently achieve a grade C will be awarded the new grade 4, and as Schools Week reported: “There will be two GCSE pass rates in school performance tables.”
One will measure grade 5s or above, and this will be called the ‘strong’ pass rate. And the other will measure grade 4s or above, and this will be the ‘standard’ pass rate.
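As a worked example of those two headline measures, here is a minimal sketch computing a ‘strong’ and a ‘standard’ pass rate from a list of the new numerical grades. The grades are invented sample data, and this is not the DfE’s published performance-table methodology.

```python
# A minimal sketch of the two headline pass rates described above,
# computed from the new numerical GCSE grades (invented sample data).
from typing import Dict, List

def pass_rates(grades: List[int]) -> Dict[str, float]:
    total = len(grades)
    strong = sum(1 for g in grades if g >= 5)    # 'strong' pass: grade 5 or above
    standard = sum(1 for g in grades if g >= 4)  # 'standard' pass: grade 4 or above
    return {
        "strong_pass_rate": strong / total,
        "standard_pass_rate": standard / total,
    }

print(pass_rates([9, 7, 5, 4, 4, 3, 2]))
# {'strong_pass_rate': 0.428..., 'standard_pass_rate': 0.714...}
```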
Laura McInerney summed up, “in some senses, it’s not a bad idea as it will mean it is easier to see if the measures are comparable. We can check if the ‘standard’ rate is better or worse over the next few years. (This is particularly good for the DfE who have been told off by the government watchdog for fiddling about with data so much that no one can tell if anything has worked anymore).”
There’s plenty of confusion among parents about how the numerical grading system will work. The confusion you can gauge in playground conversations is also reflected nationally, in a more measurable way.
Market research in a range of audiences – including businesses, head teachers, universities, colleges, parents and pupils – found that just 31 per cent of secondary school pupils and 30 per cent of parents were clear on the new numerical grading system.
So that’s a change in the GCSE grading structure. But why? If more differentiators are needed, why not add one or two more letters and shift grade boundaries? The policy need for these changes is unclear.
Machine marking is training on ten-year-olds
I wonder if the shift to numerical marking is due in any part to a desire to move GCSEs to machine marking in future?
This year, ten and eleven year olds, children in their last year of primary school, will have their SATs tests computer marked.
That’s everything in maths and English. Not multiple-choice papers or one-word answers, but full written responses. If their f, b or g doesn’t look like the correct letter in the correct place in the sentence, then it gains no marks.
Parents are concerned about children whose handwriting is awful but whose knowledge is not. How well can they hope to be assessed? If exams are increasingly machine-marked out of sight, with many sent to India, where is our oversight of the marking process and its accuracy?
The concerns I’ve heard among local parents and staff seem reflected in national discussions and at the inspectorate, Ofsted. TES has reported Ofsted’s most senior officials as saying that the inspectorate is just as reluctant to use this year’s writing assessments as it was in 2016. Teachers and parents locally are united in feeling it is not accurate, not fair, and not right.
How will we know what is being accurately measured, and how accurate the metrics are, when content changes at the same time? How will we know if children didn’t make the mark, or if the marks were simply not awarded?
The accountability of the process is less than transparent to pupils and parents. We have little opportunity for the scrutiny of these metrics that Ofqual recommends, or of the data behind the system on our kids.
Causation, correlation and why we should care
The real risk is that no one will be able to tell if there is an error, where it stems from, or whether there is a reason, if pass rates turn out markedly different from what was expected.
After the wide range of changes across pupil attainment, exam content, school progress scores, and their interaction and dependencies, can they all fit together and be comparable with the past at all?
If the SATs are making lots of mistakes simply through being bad at reading ten-year-olds’ handwriting, how will we know?
Or if GCSE scores are lower, will we be able to see if it is because they have genuinely differentiated the results in a wider spread, and stretched out the fail, pass and top passes more strictly than before?
What is likely is that this year’s children who were expecting As and A*s at GCSE, but who fail to be one of the two children nationally expected to get the new grade 9, will be disappointed to feel they are not, after all, as great as they thought they were.
And next year, if you can’t be the one or two to get the top mark, will the best simply stop stretching themselves and rest a bit easier, because, whatever, you won’t get those straight top grades anyway?
Even if children would not change their behaviour were they to know, the target-range scoring sent to schools by third-party data processors discourages teachers from stretching those at the top.
Politicians look for positive progress, but policies are changing that will increase the number of schools deemed to have failed. Why?
Our children’s results are being used to reform the school system.
Government policy on this forced academisation was rejected by popular revolt. It appears that the government is determined that schools *will* become academies with the same fervour that they *will* re-introduce grammar schools. Both are unevidenced and unwanted. But there is a workaround. Create evidence. Make the successful scores harder to achieve, and more will be seen to fail.
A total of 282 secondary schools in England were deemed to be failing by the government this January, as they “have not met a new set of national standards”.
It is expected that even more will attain ‘less’ this summer. Tim Leunig, Chief Analyst & Chief Scientific Adviser at the Department for Education, made a personal guess at two reaching the top mark.
2 is my guess – not a formal DfE prediction. With a big enough sample, I think someone will get lucky… https://t.co/e4RqNy51TY
The context of this GCSE ‘failure’ is the change in how schools are measured. Children’s progress across 8 subjects, or “Progress 8” (P8), is being used as an accountability measure of overall school quality.
But it’s really just: “a school’s average Attainment 8 score adjusted for pupils’ Key Stage 2 attainment.” [Dave Thomson, Education Datalab]
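Based on that definition, here is a simplified sketch of the Progress 8 idea: compare each pupil’s Attainment 8 score with the national average for pupils with the same Key Stage 2 prior attainment, then average the differences across the school. The bands and numbers are invented, and the official DfE calculation is more detailed than this.

```python
# A simplified sketch of Progress 8: each pupil's Attainment 8 score is
# compared with the average for pupils nationally with the same KS2 prior
# attainment, and the school score is the mean of those differences.
# Values and groupings are invented; the official calculation has more detail.
from statistics import mean

# National average Attainment 8 by KS2 prior-attainment band (invented values).
national_avg_by_ks2_band = {"low": 35.0, "middle": 48.0, "high": 62.0}

pupils = [
    {"ks2_band": "middle", "attainment8": 52.0},
    {"ks2_band": "high",   "attainment8": 58.0},
    {"ks2_band": "low",    "attainment8": 40.0},
]

def progress8(pupils: list) -> float:
    deltas = [p["attainment8"] - national_avg_by_ks2_band[p["ks2_band"]]
              for p in pupils]
    return mean(deltas)

print(progress8(pupils))  # positive = above expected progress, negative = below
```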
Work done by FFT Education Datalab showed that contextualising P8 scores can lead to large changes for some schools. (Read more here and here). You cannot meaningfully compare schools with different types of intake, but it appears that the government is determined to do so. Starting ever younger if new plans go ahead.
Data is being reshaped to tell stories to fit to policy.
Shaping children’s future
What this reshaping doesn’t factor in at all is the labelling of a generation or more with personal failure, from age ten and up.
All this tinkering with the data isn’t just about data.
It’s tinkering badly with our kids’ sense of self, their sense of achievement and aspiration, and with that, the country’s future.
Education reform has become the aim, and it has replaced the aims of education.
Post-Brexit Britain doesn’t need policy that delivers ideology. We don’t need “to use children as ‘measurement probes’ to test schools.”
“With the Family Link app from Google, you can stay in the loop as your kid explores on their Android* device. Family Link lets you create a Google Account for your kid that’s like your account, while also helping you set certain digital ground rules that work for your family — like managing the apps your kid can use, keeping an eye on screen time, and setting a bedtime on your kid’s device.”
John Carr shared his blog post about Google Family Link today, which was the first I had read about the new US account in beta. In his post, with an eye on GDPR, he asks: what is the right thing to do?
What is the Family Link app?
Family Link requires a US-based Google account to sign up, so outside the US we can’t read the full details. However, from what is published online, it appears to offer the following three key features:
“Approve or block the apps your kid wants to download from the Google Play Store.
Keep an eye on screen time. See how much time your kid spends on their favorite apps with weekly or monthly activity reports, and set daily screen time limits for their device. “
and
“Set device bedtime: Remotely lock your kid’s device when it’s time to play, study, or sleep.”
From the privacy and disclosure information, it reads as though there is not a lot of difference between a regular (over-13s) Google account and this one for under-13s. To collect data from under-13s, it must be compliant with COPPA legislation.
If you google “what is COPPA” the first result says, “The Children’s Online Privacy Protection Act (COPPA) is a law created to protect the privacy of children under 13.”
But does this Google Family Link do that? What safeguards and controls are in place for use of this app and children’s privacy?
What data does it capture?
“In order to create a Google Account for your child, you must review the Disclosure (including the Privacy Notice) and the Google Privacy Policy, and give consent by authorizing a $0.30 charge on your credit card.”
Google captures the parent’s verified real-life credit card data.
Google captures child’s name, date of birth and email.
Google captures voice.
Google captures location.
Google may associate your child’s phone number with their account.
And lots more:
Google automatically collects and stores certain information about the services a child uses and how a child uses them, including when they save a picture in Google Photos, enter a query in Google Search, create a document in Google Drive, talk to the Google Assistant, or watch a video in YouTube Kids.
What does it offer over regular “13+ Google”?
In terms of general safeguarding, it appears that SafeSearch is not on by default, but must be set and enforced by a parent.
Parents should “review and adjust your child’s Google Play settings based on what you think is right for them.”
Google rightly points out however that, “filters like SafeSearch are not perfect, so explicit, graphic, or other content you may not want your child to see makes it through sometimes.”
Ron Amadeo at Ars Technica wrote a review of the Family Link app back in February, and came to similar conclusions about its added safeguarding value:
“Other than not showing “personalized” ads to kids, data collection and storage seems to work just like in a regular Google account. On the “Disclosure for Parents” page, Google notes that “your child’s Google Account will be like your own” and “Most of these products and services have not been designed or tailored for children.” Google won’t do any special content blocking on a kid’s device, so they can still get into plenty of trouble even with a monitored Google account.”
Your child will be able to share information, including photos, videos, audio, and location, publicly and with others, when signed in with their Google Account. And Google wants to see those photos.
There are some things that parents cannot block at all.
Installs of app updates can’t be controlled, which leaves a questionable grey area. Many apps are built on the classic bait and switch: start with a free version, then the upgrade contains paid features. This is therefore something to watch for.
“Regardless of the approval settings you choose for your child’s purchases and downloads, you won’t be asked to provide approval in some instances, such as if your child: re-downloads an app or other content; installs an update to an app (even an update that adds content or asks for additional data or permissions); or downloads shared content from your Google Play Family Library. “
The child “will have the ability to change their activity controls, delete their past activity in “My Activity,” and grant app permissions (including things like device location, microphone, or contacts) to third parties”.
What’s in it for children?
You could argue that this gives children “their own accounts” and autonomy. But why do they need one at all? If I give my child a device on which they can download an app, then I approve it first.
If I am not aware of my under 13 year old child’s Internet time physically, then I’m probably not a parent who’s going to care to monitor it much by remote app either. Is there enough insecurity around ‘what children under 13 really do online’, versus what I see or they tell me as a parent, that warrants 24/7 built-in surveillance software?
I can use safe settings without this app. I can use a device time limiting app without creating a Google account for my child.
If parents want to give children an email address, yes, this allows them to have a device-linked Gmail account whose content you, as a parent, cannot access. But wait a minute, what’s this? Google can?
Google can read their emails and provide them “personalised product features”. More detail is probably needed, but this seems clear:
“Our automated systems analyze your child’s content (including emails) to provide your child personally relevant product features, such as customized search results and spam and malware detection.”
And what happens when the under-13s turn 13? It’s questionable whether it is right for Google et al. to then be able to draw on a pool of ready-made customers’ data in waiting. Free from COPPA ad regulation. Free from COPPA privacy regulation.
Google knows when the child reaches 13 (the set-up requires a child’s date of birth, their first and last name, and email address, to set up the account). And they will inform the child directly when they become eligible to sign up to a regular account free of parental oversight.
What a birthday gift. But is it packaged for the child or Google?
What’s in it for Google?
The parental disclosure begins,
“At Google, your trust is a priority for us.”
If it truly is, I’d suggest they revise their privacy policy entirely.
Google’s disclosure policy also makes parents read a lot before they fully understand the permissions this app gives to Google.
I do not believe Family Link gives parents adequate control of their children’s privacy at all nor does it protect children from predatory practices.
While “Google will not serve personalized ads to your child“, your child “will still see ads while using Google’s services.”
Google also tailors the Family Link apps that the child sees (and begs you to buy), based on their data:
“(including combining personal information from one service with information, including personal information, from other Google services) to offer them tailored content, such as more relevant app recommendations or search results.”
Contextual advertising using “persistent identifiers” is permitted under COPPA, and is surely a fundamental flaw. It’s certainly one I wouldn’t want to see duplicated under GDPR. Serving up ads that are relevant to the content the child is using, doesn’t protect them from predatory ads at all.
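To make the concern concrete, here is a minimal, hypothetical sketch in Python (not Google’s or any ad network’s actual code; the function and field names are my own) of how an ad server can pick ads purely from page context while still logging a persistent identifier with every request – which is exactly how “contextual” advertising can quietly become longitudinal profiling.

```python
# Hypothetical sketch: "contextual" ad selection that still accumulates a
# behavioural log, because every request carries a persistent identifier.
from collections import defaultdict
from datetime import datetime, timezone

AD_INVENTORY = {
    "maths games": "Ad: premium maths tutoring app",
    "minecraft videos": "Ad: in-game coin packs",
}

# Grows silently in the background, keyed by a device/advertising ID that never changes.
profile_log = defaultdict(list)

def serve_contextual_ad(page_keywords, persistent_device_id):
    """Pick an ad from the page context, but log the request against the device ID."""
    ad = AD_INVENTORY.get(page_keywords, "Ad: generic family app bundle")
    profile_log[persistent_device_id].append({
        "when": datetime.now(timezone.utc).isoformat(),
        "context": page_keywords,
        "ad_served": ad,
    })
    return ad

print(serve_contextual_ad("minecraft videos", persistent_device_id="device-1234"))
print(len(profile_log["device-1234"]))  # 1 today, and growing with every page view
```

The ad shown is “only contextual”, but the log becomes a record of one child’s interests over time, which is the flaw described above.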
Google captures geolocators and knows where a child is and builds up their behavioural and location patterns. Google, like other online companies, captures and uses what I’ve labelled ‘your synthesised self’; the mix of online and offline identity and behavioural data about a user. In this case, the who and where and what they are doing, are the synthesised selves of under 13 year old children.
These data are made more valuable by the connection to an adult with spending power.
Google gains permission via the parent’s acceptance of the privacy policy, to pass personal data around to third parties and affiliates. An affiliate is an entity that belongs to the Google group of companies. Today, that’s a lot of companies.
Google’s ad network consists of Google services, like Search, YouTube and Gmail, as well as 2+ million non-Google websites and apps that partner with Google to show ads.
I also wonder if it will undo some of the previous pro-privacy features on any linked child’s YouTube account if Google links any logged in accounts across the Family Link and YouTube platforms.
Is this pseudo-safe use a good thing?
In practical terms, I’d suggest this app is likely to lull parents into a false sense of security. Privacy safeguarding is not the default set up.
It’s questionable that Google should adopt some sort of parenting role through an app. Parental remote controls via an app aren’t an appropriate way to regulate whether my under-13-year-old is using their device rather than sleeping.
It’s also got to raise questions about children’s autonomy at say, 12. Should I as a parent know exactly every website and app that my child visits? What does that do for parental-child trust and relations?
As for my own children I see no benefit compared with letting them have supervised access as I do already. That is without compromising my debit card details, or under a false sense of safeguarding. Their online time is based on age appropriate education and trust, and yes I have to manage their viewing time.
That said, if there are people who think parents cannot do that, is the app a step forward? I’m not convinced. It’s definitely of benefit to Google. But for families it feels more like a sop to adults who feel a duty towards safeguarding children, but aren’t sure how to do it.
Is this the best that Google can do by children?
In summary it seems to me that the Family Link app is a free gift from Google. (Well, free after the thirty cents to prove you’re a card-carrying adult.)
It gives parents three key tools: app approval (accept, pay, or block), screen-time surveillance, and a remote switch-off of the child’s access.
In return, Google gets access to a valuable data set – a parent-child relationship with credit data attached – and can increase its potential targeted app sales. Yet Google can’t guarantee additional safeguarding, privacy, or benefits for the child while using it.
I think for families and child rights, it’s a false friend. None of these tools per se require a Google account. There are alternatives.
Children’s use of the Internet should not mean they are used and their personal data passed around or traded in hidden back room bidding by the Internet companies, with no hope of control.
There are other technical solutions to age verification and privacy too.
I’d ask, what else has Google considered and discarded?
Is this the best that a cutting edge technology giant can muster?
This isn’t designed to respect children’s rights as intended under COPPA, nor is it ready for GDPR, and it’s a shame they’re not trying.
If I were designing Family Link for children, it would collect no real identifiers. No voice. No locators. It would not permit others access to voice or images, or require them to be linked. It would keep children’s privacy intact, and enable them when older to decide what they disclose. It would not target personalised apps/products at children at all.
GDPR requires active, informed parental consent for children’s online services. Consent must be revocable, only the minimum necessary personal data may be collected, and data must be portable. Privacy policies must be clear to children. In terms of GDPR readiness, this is nowhere near ‘it’.
Family Link needs to re-do its homework. And this isn’t a case of ‘please revise’.
Google is a multi-billion dollar company. If they want parental trust, and want to be GDPR and COPPA compliant, they should do the right thing.
When it comes to child rights, companies must do or do not. There is no try.
Notes and thoughts from Full Fact’s event at Newspeak House in London on 27/3 to discuss fake news, the misinformation ecosystem, and how best to respond. The recording is here. The contributions and questions part of the evening begins at 55:55 in the recording.
What is fake news? Are there solutions?
1. Clickbait: celebrity pull to draw online site visitors and traffic to an advertising model – kill the business model
2. Mischief makers: Deceptive with hostile intent – bots, trolls, with an agenda
3. Incorrectly held views: ‘vaccinations cause autism’ despite the evidence to the contrary. How can facts reach people who only believe what they want to believe?
Why does it matter? The scrutiny of people in power matters – to politicians, charities, think tanks – as well as the public.
It is fundamental to remember that we do, in general, believe the public has a sense of discernment; however, there is also a disconnect between objective truth and some people’s perception of reality. Can this conflict be resolved? Is it necessary to do so? If yes, when is it necessary, and who decides?
There is a role for independent tracing of unreliable information, its sources and its distribution patterns and identifying who continues to circulate fake news even when asked to desist.
Transparency about these processes is in the public interest.
Overall, there is too little public understanding of how technology and online tools affect behaviours and decision-making.
The Role of Media in Society
How do you define the media?
How can average news consumers distinguish between self-made and distributed content compared with established news sources?
What is the role of media in a democracy?
What is the mainstream media?
Does the media really represent what I want to understand? > Does the media play a role in failure of democracy if news is not representative of all views? > see Brexit, see Trump
What are news values and do we have common press ethics?
New problems in the current press model:
Failure of the traditional media organisations in fact checking; part of the problem is that the credible media is under incredible pressure to compete to gain advertising money share.
Journalism is under-resourced. Verification skills are lacking and verification tools can be time consuming; techniques like reverse image search take effort.
Press releases with numbers can be less easily scrutinised, so how do we ensure misinformation does not spread through poor journalism?
What about confirmation bias and reinforcement?
What about friends’ behaviours? Can and should we try to break these links if we are not getting a fair picture? The Facebook representative was keen to push responsibility for the bubble entirely to users’ choices. Is this fair given the opacity of the model?
Have we cracked the bubble of self-reinforcing stories being the only stories that mutual friends see?
Can we crack the echo chamber?
How do we start to change behaviours? Can we? Should we?
The risk is that if people start to feel nothing is trustworthy, we trust nothing. This harms relations between citizens and state, organisations and consumers, professionals and public and between us all. Community is built on relationships. Relationships are built on trust. Trust is fundamental to a functioning society and economy.
Is it game over?
Will Moy assured the audience that there is no need to descend into blind panic and there is still discernment among the public.
Then, it was asked, is perhaps part of the problem that the Internet in its current construct is incapable of keeping this problem at bay? Is part of the solution re-architecting and re-engineering the web?
What about algorithms? Search engines start with word frequency and neutral decisions but are now much more nuanced and complex. We really must see how systems decide what is published. Search engines provide but also restrict our access to facts and ‘no one gets past page 2 of search results’. Lack of algorithmic transparency is an issue, but will not be solved due to commercial sensitivities.
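As a rough illustration of that starting point – a deliberately simplified sketch, not how any real search engine ranks results today – the earliest “neutral” approach amounts to little more than counting query terms on each page; everything layered on top since (personalisation, engagement signals, paid placement) is where the opacity the panel discussed comes in.

```python
# A deliberately naive word-frequency ranker: the "neutral" starting point the
# discussion refers to, before personalisation and engagement signals are added.
def rank_by_term_frequency(query, pages):
    """Return (url, score) pairs, highest score first, scored by raw term counts."""
    terms = query.lower().split()
    scores = {}
    for url, text in pages.items():
        words = text.lower().split()
        scores[url] = sum(words.count(term) for term in terms)
    return sorted(scores.items(), key=lambda item: item[1], reverse=True)

pages = {
    "site-a.example": "local council budget report published in full with figures",
    "site-b.example": "budget budget budget shock claims shared widely without figures",
}
# The page that simply repeats the term most wins, regardless of accuracy.
print(rank_by_term_frequency("council budget", pages))
```

Even in this toy version, repetition beats accuracy, which hints at why ranking decisions need to be open to scrutiny.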
Fake news creation can be lucrative. Management models that rely on user moderation or comments to give balance can be gamed.
Are there appropriate responses to the grey area between trolling and deliberate deception through fake news that is damaging? In what context and background? Are all communities treated equally?
The question came from the audience whether the panel thought regulation would come from the select committee inquiry. The general response was that it was unlikely.
What are the solutions?
The questions I came away thinking about went unanswered, because I am not sure there are solutions as long as the current news model exists and is funded in the current way by current players.
I believe one of the things that permits fake news is the growing imbalance of money between the big global news distributors and independent and public interest news sources.
This loss of balance reduces our ability to decide for ourselves what we believe and what matters to us.
The monetisation of news through its packaging in between advertising has surely contaminated the news content itself.
Think of a Facebook promoted post – you can personalise your audience to a set of very narrow and selective characteristics. The bubble that receives that news is already likely to be connected by similar interest pages and friends and the story becomes self reinforcing, showing up in friends’ timelines.
A modern online newsroom moves content around the webpage according to what is getting the most views, and lists of trending topics encourage viewers to see what other people are reading; again, this is self-reinforcing.
There is also a lack of transparency of power. Where we see a range of outlets from which to choose and digest news, we often fail to see the one conglomerate funder which manages them all.
The discussion didn’t address at all the fundamental shift in “what is news” which has taken place over the last twenty years. In part, I believe the responsibility for how credible fake news appears to viewers lies with 24/7 news channels. They have shifted the balance of content from factual bulletins to discussion and opinion. So while the news channel is seen as a source of ‘news’ much of the time, the content is not factual but opinion, and often that means the promotion and discussion of the opinions of their paymaster.
Most simply, how should I answer the question that my ten year old asks – how do I know if something on the Internet is true or not?
Can we really say it is up to the public to each take on this role and where do we fit the needs of the vulnerable or children into that?
Is the term fake news the wrong approach and something to move away from? Can we move solutions away from target-fixation ‘stop fake news’ which is impossible online, but towards what the problems are that fake news cause?
Interference in democracy. Interference in purchasing power. Interference in decision making. Interference in our emotions.
These interferences with our autonomy are not something that the web is responsible for, but the people behind the platforms must be accountable for how their technology works.
In the mean time, what can we do?
“if we ever want the spread of fake news to stop we have to take responsibility for calling out those who share fake news (real fake news, not just things that feel wrong), and start doing a bit of basic fact-checking ourselves.” [IB Times, Eliot Higgins is the founder of Bellingcat]
Not everyone has the time or capacity to do that. As long as today’s imbalance of money and power exists, truly independent organisations like Bellingcat and FullFact have an untold value.
The billed Google and Twitter speakers were absent because they were invited to a meeting with the Home Secretary on 28/3. Speakers were Will Moy, Director of FullFact, Jenni Sargent Managing Director of firstdraftnews, Richard Allan, Facebook EMEA Policy Director and the event was chaired by Bill Thompson.
In preparation for the General Data Protection Regulation (GDPR) there must be an active UK policy decision in the coming months on children and the Internet – the provision of ‘Information Society Services’. From May 25, 2018 the age of consent for online services aimed at children will be 16 by default, unless UK law is made to lower it.
Age verification for online information services in the GDPR, will mean capturing parent-child relationships. This could mean a parent’s email or credit card unless there are other choices made. What will that mean for access to services for children and to privacy? It is likely to offer companies an opportunity for a data grab, and mean privacy loss for the public, as more data about family relationships will be created and collected than the content provider would get otherwise.
Our interactions create a blended identity of online and offline attributes which, as I suggested in a previous post, creates synthesised versions of our selves and raises questions about data privacy and security.
The goal may be to protect the physical child. The outcome will simultaneously expose children and parents, through increased personal data collection, to risks we would not otherwise face. Increasing the data collected increases the associated risks of loss, theft, and harm to identity integrity. How will legislation balance these risks against rights to participation?
The UK government has various work in progress before then that could address these questions.
As Sonia Livingstone wrote in the post on the LSE media blog about what to expect from the GDPR and its online challenges for children:
“Now the UK, along with other Member States, has until May 2018 to get its house in order”.
What will that order look like?
The Digital Strategy and Ed Tech
The Digital Strategy commits to changes in National Pupil Data management. That is, changes in the handling and secondary uses of data collected from pupils in the school census, like using it for national research and planning.
Access to NPD via the ONS VML would mean safe data use, in safe settings, by safe (trained and accredited) users.
Sensitive data — it remains to be seen how DfE intends to interpret ‘sensitive’ and whether that is the DPA1998 term or lay term meaning ‘identifying’ as it should — will no longer be seen by users for secondary uses outside safe settings.
However, a grey area on privacy and security remains in the “Data Exchange” which will enable EdTech products to “talk to each other”.
The aim of changes in data access is to ensure that children’s data integrity and identity are secure. Let’s hope the intention that “at all times, the need to preserve appropriate privacy and security will remain paramount and will be non-negotiable” applies across all closed pupil data, and not only to that which may be made available via the VML.
This strategy is still far from clear or set in place.
The Digital Strategy and consumer data rights
The Digital Strategy commits, under the heading of “Unlocking the power of data in the UK economy and improving public confidence in its use”, to the implementation of the General Data Protection Regulation by May 2018. The Strategy frames this as a business issue, labelling data as “a global commodity”, and as such its handling is framed solely as a requirement to ensure “that our businesses can continue to compete and communicate effectively around the world” and that adoption “will ensure a shared and higher standard of protection for consumers and their data.”
As far as children go, the GDPR is far more about the protection of children as people. It focuses on returning control over children’s own identity, and on being able to revoke control by others, rather than on consumer rights.
That said, there are data rights issues which are also consumer issues and product safety failures posing real risk of harm.
Neither the Digital Economy Bill nor the Digital Strategy addresses these rights and security issues with any meaningful effect, particularly those posed by the Internet of Things.
In fact, the chapter on the Internet of Things and Smart Infrastructure [9/19] singularly misses out anything on security and safety:
“We want the UK to remain an international leader in R&D and adoption of IoT. We are funding research and innovation through the three year, £30 million IoT UK Programme.”
If it’s not scary enough for the public to think that their sex secrets and devices are hackable, perhaps it will kill public trust in connected devices more when they find strangers talking to their children through a baby monitor or toy. [BEUC campaign report on #Toyfail]
“The internet-connected toys ‘My Friend Cayla’ and ‘i-Que’ fail miserably when it comes to safeguarding basic consumer rights, security, and privacy. Both toys are sold widely in the EU.”
The digital skills and training parts of the strategy don’t touch on any form of change management plan for existing working sectors in which we expect to see machine learning and AI change the job market. This is something the digital and industrial strategies must be addressing hand in glove.
The tactics and training providers listed sound super, but there does not appear to be an aspirational strategy hidden between the lines.
The Digital Economy Bill and citizens’ data rights
While the rest of Europe in this legislation has recognised that a future thinking digital world without boundaries, needs future thinking on data protection and empowered citizens with better control of identity, the UK government appears intent on taking ours away.
To take only one example for children, the Digital Economy Bill, in Cabinet Office-led meetings, was explicit about its use for identifying and tracking individuals labelled under “Troubled Families” and interventions with them. It is baffling, and in conflict with both the spirit and letter of GDPR, that while consent is required to work directly with people, that consent is being ignored in order to access their information. Students and applicants will see their personal data sent to the Student Loans Company without their consent or knowledge. This overrides the current consent model in place at UCAS.
It is baffling that the government is relentlessly pursuing the Digital Economy Bill data copying clauses, which remove confidentiality by default and will release our identities in birth, marriage and death data for third party use without consent through Chapter 2, the opening of the Civil Registry, without any safeguards in the bill.
Government has not only excluded important aspects of Parliamentary scrutiny in the bill, it is trying to introduce “almost untrammeled powers” (paragraph 21), that will “very significantly broaden the scope for the sharing of information” and “specified persons” which applies “whether the service provider concerned is in the public sector or is a charity or a commercial organisation” and non-specific purposes for which the information may be disclosed or used. [Reference: Scrutiny committee comments]
Future changes need future joined up thinking
While it is important to learn from the past, I worry that the effort some social scientists put into looking backwards is not matched by enthusiasm to look ahead and make active recommendations for a better future.
Society appears to have its eyes wide shut to the risks of coercive control and nudge as research among academics and government departments moves in the direction of predictive data analysis.
Uses of administrative big data and publicly available social media data, for example in research and statistics, need further new regulation in practice and policy, but instead the Digital Economy Bill looks only at how more data can be got out of Department silos.
A certain intransigence about data sharing with researchers from government departments is understandable. What’s the incentive for DWP to release data showing its policy may kill people?
Westminster may fear it has more to lose from data releases, and doesn’t seek out the political capital to be had from good news.
The ethics of data science are applied patchily at best in government, and inconsistently in academic expectations.
Some researchers have identified this, but there seems little will to act:
“It will no longer be possible to assume that secondary data use is ethically unproblematic.”
Research and legislation alike seem hell bent on the low hanging fruit but miss out the really hard things. What meaningful benefit will it bring by spending millions of pounds on exploiting these personal data and opening our identities to risk just to find out whether X course means people are employed in Y tax bracket 5 years later, versus course Z where everyone ends up self employed artists? What ethics will be applied to the outcomes of those questions asked and why?
And while government is busy joining up children’s education data throughout their lifetimes from age 2 across school, FE, HE, into their HMRC and DWP interactions, there is no public plan in the Digital Strategy for the employment market of the coming 10 to 20 years, when many believe, as do these authors in Scientific American, that “around half of today’s jobs will be threatened by algorithms. 40% of today’s top 500 companies will have vanished in a decade.”
What benefit will it be to know what was, if the plans around workforce and digital skills list ad hoc tactics, but no strategy?
We must safeguard jobs and societal needs, but just teaching people to code is not a solution to a fundamental gap in what our purpose, and the place of people, will be as a world-leading tech nation after Brexit. We are going to have fewer talented people from across the world staying on after completing academic studies, because they’re not coming at all.
There may be investment in A.I. but where is the investment in good data practices around automation and machine learning in the Digital Economy Bill?
To do this Digital Strategy well, we need joined up thinking.
Children should be able to use online services without being used and abused by them.
This article arrived on my Twitter timeline via a number of people. Doteveryone CEO Rachel Coldicutt summed up various strands of thought I started to hear hints of last month at #CPDP2017 in Brussels:
“As designers and engineers, we’ve contributed to a post-thought world. In 2017, it’s time to start making people think again.
“We need to find new ways of putting friction and thoughtfulness back into the products we make.” [Glanceable truthiness, 30.1.2017]
Let’s keep the human in discussions about technology, and people first in our products
All too often in technology and even privacy discussions, people have become ‘consumers’ and ‘customers’ instead of people.
The Digital Strategy may seek to unlock “the power of data in the UK economy” but policy and legislation must put equal if not more emphasis on “improving public confidence in its use” if that long term opportunity is to be achieved.
And in technology discussions about AI and algorithms we hear very little about people at all. Discussions I hear seem siloed instead into three camps: the academics, the designers and developers, the politicians and policy makers. And then comes the lowest circle, ‘the public’ and ‘society’.
It is therefore unsurprising that human rights have fallen down the ranking of importance in some areas of technology development.
In this post, I think out loud about what improving online safety for children in The Green Paper on Children’s Internet Safety means ahead of the General Data Protection Regulation in 2018. Children should be able to use online services without being used and abused by them. If this regulation and other UK Government policy and strategy are to be meaningful for children, I think we need to completely rethink the State approach to what data privacy means in the Internet of Things.
[listen on soundcloud]
Children in the Internet of Things
In 1979 Star Trek: The Motion Picture created a striking image of A.I. as Commander Decker merged with V’Ger and the artificial copy of Lieutenant Ilia, blending human and computer intelligence and creating an integrated, synthesised form of life.
Ten years later, Sir Tim Berners-Lee wrote his proposal and created the world wide web, designing the way for people to share and access knowledge with each other through networks of computers.
In the 90s my parents described using the Internet as spending time ‘on the computer’, and going online meant from a fixed phone point.
Today the wireless computers in our homes, pockets and school bags have added built-in functionality to enable us to do other things with them at the same time: make toast, play a game, make a phone call. And we live in the Internet of Things.
Although we talk about it as if it were an environment of inanimate appliances, it would be more accurate to think of the interconnected web of information that these things capture, create and share about our interactions 24/7, as vibrant snapshots of our lives, labelled with retrievable tags, and stored within the Internet.
Data about every moment of how and when we use an appliance is captured at a rapid rate, or measured by smart meters, and shared within a network of computers: computers that not only capture data but create, analyse and exchange new data about the people using them and how they interact with the appliance.
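As a concrete illustration of the kind of record involved – a hypothetical sketch assuming nothing about any particular manufacturer’s actual format, with field names of my own invention – a single “moment of use” might be captured as something like this before being sent to the maker’s servers:

```python
# Hypothetical example of one appliance usage event, of the kind captured
# continuously and shared onwards; the field names here are illustrative only.
import json
from datetime import datetime, timezone

usage_event = {
    "device_id": "kettle-serial-0001",           # ties every event to one household
    "timestamp": datetime.now(timezone.utc).isoformat(),
    "event": "boil_started",
    "water_level_ml": 850,
    "firmware": "2.3.1",
    "wifi_network_hash": "ab12cd34",             # even 'anonymous' fields can single out a home
}

# Serialised and sent to the manufacturer's cloud, where it can be analysed,
# linked with other events, and passed on.
print(json.dumps(usage_event, indent=2))
```

Each such event is trivial on its own; it is the stream of them, linked over time and across devices, that builds the picture described above.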
In this environment, children’s lives in the Internet of Things no longer involve a conscious choice to go online. Using the Internet is no longer about going online, but being online. The web knows us. In using the web, we become part of the web.
To the computers that gather their data, our children have simply become extensions of the things they use, about which data is gathered and sold by the companies who make and sell the things. Things whose makers can even choose who uses them or not, and how. In the Internet of Things, children have become things of the Internet.
A child’s use of a smart hairbrush becomes part of the company’s knowledge base of how the hairbrush is used. A child’s voice is captured and becomes part of the training database used to develop the doll or robot they play with.
Our biometrics, measurements of the unique physical parts of our identities, provide a further example of the offline self being physically incorporated into services, most recently banking. Over 1 million UK children’s biometrics are estimated to be used in school canteens and library services through, often compulsory, fingerprinting.
Our interactions create a blended identity of online and offline attributes.
The web has created synthesised versions of our selves.
I say synthesised not synthetic, because our online self is blended with our real self and ‘synthetic’ gives the impression of being less real. If you take my own children’s everyday life as an example, there is no ‘real’ life that is without a digital self. The two are inseparable. And we might have multiple versions.
Our synthesised self is not only about our interactions with appliances and what we do, but who we know and how we think based on how we take decisions.
Data is created and captured not only about how we live, but where we live. These online data can be further linked with data about our behaviours offline, generated from trillions of sensors and physical network interactions with our portable devices. Our synthesised self is tracked from real life geolocations. In cities surrounded by sensors under pavements, in buildings, cameras, mapping and tracking everywhere we go, our behaviours are converted into data and stored inside an overarching network of cloud computers, so that our online lives take on a life of their own.
Data about us, whether uniquely identifiable on its own or not, is created and collected actively and passively. Online site visits record IP address and use linked platform log-ins that can even extract friends’ lists without consent or affirmative action from those friends.
Using a tool like Privacy Badger from the EFF gives you some insight into how many sites create new data about online behaviour once that synthesised self logs in, and then track your synthesised self across the Internet: how you move from page to page, with what referring and exit pages and URLs, what adverts you click on or ignore, platform types, number of clicks, cookies, invisible on-page gifs and web beacons. Data that computers see, interpret and act on better than we do.
Those synthesised identities are tracked online, just as we move about a shopping mall offline.
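To make those mechanics visible, here is a minimal, hypothetical sketch (not any real site’s or tracker’s code; the host and field names are mine) of the sort of fields a single page view can hand to a third party via an invisible “web beacon” image request – the kind of request tools like Privacy Badger exist to spot and block.

```python
# Hypothetical sketch of what one invisible tracking beacon might report per page view.
# Field names are illustrative; real trackers vary, but the categories are similar.
from urllib.parse import urlencode

def build_beacon_url(tracker_host, event):
    """Encode a page-view event as the query string of a 1x1 'web beacon' image request."""
    return f"https://{tracker_host}/pixel.gif?{urlencode(event)}"

page_view = {
    "cookie_id": "9f8e7d6c",              # persistent identifier set on a first visit
    "page": "/homework-help/maths",
    "referrer": "https://search.example/?q=fractions",
    "platform": "Android 7.0; tablet",
    "clicks": 3,
    "session_seconds": 412,
}

print(build_beacon_url("tracker.example", page_view))
```

Every page that embeds such a beacon adds another timestamped row against the same cookie_id, which is how the synthesised self is followed from site to site.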
Sir Tim Berners-Lee said this week that there is a need to put “a fair level of data control back in the hands of people.” It is not merely a need; it is vital to our future flourishing, even our very survival. Data control is not about protecting a list of information or facts about ourselves and our identity for its own sake; it is about choosing who can exert influence and control over our life, our choices, and the future of democracy.
We get the service, the web gets our identity and our behaviours. And in what is in effect a hidden slave trade, they get access to use our synthesised selves in secret, and forever.
This grasp of what the Internet is, what the web is, is key to getting a rounded view of children’s online safety. Namely, we need to get away from the sole focus of online safeguarding as about children’s use of the web, and also look at how the web uses children.
Online services use children to:
mine, exchange, repackage, and trade profile data, offline behavioural data (location, likes), and invisible Internet-use behavioural data (cookies, website analytics)
extend marketing influence in human decision-making earlier in life, even before children carry payment cards of their own,
enjoy the insights of parent-child relationships connected by an email account, sometimes a credit card, used as age verification or in online payments.
What are the risks?
Exploitation of identity and behavioural tracking not only puts our synthesised child at risk of exploitation, it puts our real-life child’s future adult identity and data integrity at risk. If we cannot know who holds the keys to our digital identity, how can we trust that systems and services will be fair to us, will not discriminate or defraud, or will not make errors we cannot understand in order to correct them?
Leaks, losses and hacks abound, and manufacturers are slow to respond. Software that monitors children can also be used in coercive control. Organisations whose data are insecure can be held to ransom. Children’s products should do what we expect them to and nothing more; there should be “no surprises” in how data are used.
Companies tailor and target their marketing activity to those identity profiles. Our data is sold on in secret without consent to data brokers we never see, who in turn sell us on to others who monitor, track and target our synthesised selves every time we show up at their sites, in a never-ending cycle.
And from exploiting the knowledge of our synthesised self, decisions are made by companies, that target their audience, select which search results or adverts to show us, or hide, on which network sites, how often, to actively nudge our behaviours quite invisibly.
Nudge misuse is one of the greatest threats to our autonomy, and with it democratic control of the society we live in. Who decides on the “choice architecture” that may shape another’s decisions and actions, and on what ethical basis? So asked these authors, who now seem to want to be the decision makers.
Loss of identity is near impossible to reclaim. Our synthesised selves are sold into unending data slavery and we seem powerless to stop it. Our autonomy and with it our self worth, seem diminished.
How can we protect children better online?
Safeguarding must include ending data slavery of our synthesised self. I think of five things needed by policy shapers to tackle it.
Understanding what ‘online’ and the Internet mean and how the web works – i.e. what data does a visit to a web page collect about the user and what happens to that data?
Threat models and risk must go beyond the usual real-life protection issues. The threats posed by undermining citizens’ autonomy, the loss of public trust and of control over our identity, and the misuse of nudge are underestimated, and some are intrinsic to the current web business model, unseen by site users or government policy.
On user regulation (age verification / filtering), we must confront the fact that as a stand-alone step it will not create a better online experience for the user, when it will not prevent the misuse of our synthesised selves and may increase risks. Regulation of misuse must shift the point of responsibility.
Meaningful data privacy training, and its role in children’s safeguarding, must be mandatory for anyone in contact with children.
Siloed thinking must go. Forward thinking must join the dots across Departments into cohesive inclusive digital strategy and that doesn’t just mean ‘let’s join all of the data, all of the time’
Respect our synthesised selves. Data slavery includes government misuse and must end if we respect children’s rights.
In the words of James T. Kirk, “the human adventure is just beginning.”
When our synthesised self is an inseparable blend of offline and online identity, every child is a synthesised child. And they are people. It is vital that government realises its obligation to protect rights to privacy, provision and participation under the Convention on the Rights of the Child, and addresses our children’s real online life.
Governments, policy makers, and commercial companies must not use children’s offline safety as an excuse in a binary trade off to infringe on those digital rights or ignore risk and harm to the synthesised self in law, policy, and practice.
If future society is to thrive we must do all that is technologically possible to safeguard the best of what makes us human in this blend; our free will.
Part 2 follows with thoughts specific to the upcoming regulations, the Digital Economy Bill and Digital Strategy.
“What do an umbrella, a shark, a houseplant, the brake pads in a mining truck and a smoke detector all have in common? They can all be connected online, and in this example, in this WEF film, they are.
“By 2024 more than 50% of home Internet traffic will be used by appliances and devices, rather than just for communication and entertainment…The IoT raises huge questions on privacy and security, that have to be addressed by government, corporations and consumers.”
That backtracks on what he said in Parliament on January 25th, 2014 on opt out of anonymous data transfers, despite the right to object in the NHS constitution [1].
So what’s the solution? If the new opt out methods aren’t working, then back to the old ones and making Section 10 requests? But it seems the Information Centre isn’t keen on making that work either.
All the data the HSCIC holds is sensitive and, as such, its release risks significant harm or distress to patients [2], so it shouldn’t be difficult to tell them to cease and desist when it comes to data about you.
But how is NHS Digital responding to people who make the effort to write directly?
If anyone asks that their hospital data should not be used in any format and passed to third parties, that’s surely for them to decide.
Let’s take the case study of a woman who spoke to me during the whole care.data debacle who had been let down by the records system after rape. Her NHS records subsequently about her mental health care were inaccurate, and had led to her being denied the benefit of private health insurance at a new job.
Would she have to detail why selling her medical records would cause her distress? What level of detail is fair and who decides? The whole point is, you want to keep info confidential.
Should you have to state what you fear? “I have future distress about what you might do to me”? Once you lose control of data, it’s gone. Based on past secrecy in planning and on ideas for the future, like mashing up health data with retail loyalty cards as suggested at Strata in November 2013 [from 16:00] [2], no wonder people are sceptical.
Given the long list of commercial companies, charities, think tanks and others to which passing out our sensitive data puts us at risk, and given the Information Centre’s past record, HSCIC might be grateful they have only opt out requests to deal with, and not millions of medical ethics court summonses. So far.
HSCIC / NHS Digital has extracted our identifiable records and has given them away, including for commercial product use, and continues to give them away without informing us. We’ve accepted Ministers’ statements and that a solution would be found. Two years on, patience wears thin.
“Without that external trust, we risk losing our public mandate and then cannot offer the vital insights that quality healthcare requires.”
In 2014 the public was told there should be no more surprises. This latest response is not only a surprise but enormously disrespectful.
When you’re trying to rebuild trust, assuming that we accept that ‘is’ the aim, you can’t say one thing, and do another. Perhaps the Department for Health doesn’t like the public answer to what the public wants from opt out, but that doesn’t make the DH view right.
Perhaps NHS Digital doesn’t want to deal with lots of individual opt out requests, that doesn’t make their refusal right.
Kingsley Manning recognised in July 2014, that the Information Centre “had made big mistakes over the last 10 years.” And there was “a once-in-a-generation chance to get it right.”
I didn’t think I’d have to move into the next one before they fix it.
The recent round of 2016 public feedback was the same as care.data 1.0. Respect nuanced opt outs and you will have all the identifiable public interest research data you want. Solutions must be better for other uses, opt out requests must be respected without distressing patients further in the process, and anonymous must mean anonymous.
“A patient can object to their confidential personal information from being disclosed out of the GP Practice and/or from being shared onwards by the HSCIC for non-direct care purposes (secondary purposes).”
The Higher Education and Research Bill sucks in personal data to the centre, as well as power. It creates an authoritarian panopticon of the people within the higher education and further education systems. Section 1, parts 72-74 creates risks but offers no safeguards.
Applicants and students’ personal data is being shifted into a top-down management model, at the same time as the horizontal safeguards for its distribution are to be scrapped.
Through deregulation and the building of a centralised framework, these bills will weaken the purposes for which personal data are collected, and weaken existing requirements on consent to which the data may be used at national level. Without amendments, every student who enters this system will find their personal data used at the discretion of any future Secretary of State for Education without safeguards or oversight, and forever. Goodbye privacy.
But in addition and separately, the Bill will permit data to be used at the discretion of the Secretary of State, which waters down and removes nuances of consent for what data may or may not be used today when applicants sign up to UCAS.
Applicants today are told in the privacy policy they can consent separately to sharing their data with the Student Loans company for example. This Bill will remove that right when it permits all Applicant data to be used by the State.
This removal of today’s consent process denies all students their rights to decide who may use their personal data beyond the purposes for which they permit its sharing.
And it explicitly overrides the express wishes registered by the 28,000 applicants, 66% of respondents to a 2015 UCAS survey, who said as an example, that they should be asked before any data was provided to third parties for student loan applications (or even that their data should never be provided for this).
Not only can the future purposes be changed without limitation; when combined with other legislation, namely the Digital Economy Bill that is in the Lords right now, this shift will pass personal data, together with DWP data and in connection with HMRC data, expressly to the Student Loans Company.
In just this one example, the Higher Education and Research Bill is being used as a man in the middle. But it will enable all data for broad purposes, and if those expand in future, we’ll never know.
This change, far from making more data available to public interest research, shifts the balance of power between state and citizen and undermines the very fabric of its source of knowledge; the creation and collection of personal data.
Further, a number of amendments have been proposed in the Lords to clause 9 (the transparency duty) which raise more detailed privacy issues for all prospective students, concerns which UCAS shares.
Why this lack of privacy by design is damaging
This shift takes away our control, and gives it to the State at the very time when ‘take back control’ is in vogue. These bills are building a foundation for a data Brexit.
And without future limitation, what might be imposed is unknown.
This shortsightedness will ultimately cause damage to data integrity and the damage won’t come in education data from the Higher Education Bill alone. The Higher Education and Research Bill is just one of three bills sweeping through Parliament right now which build a cumulative anti-privacy storm together, in what is labelled overtly as data sharing legislation or is hidden in tucked away clauses.
Unlike the Higher Education and Research Bill, the Technical and Further Education Bill may not fundamentally change how the State gathers information on further education, but it has the potential to change how that information is used.
The change is a generalisation of purposes. Currently, subsection 1 of section 54 refers to “purposes of the exercise of any of the functions of the Secretary of State under Part 4 of the Apprenticeships, Skills, Children and Learning Act 2009”.
Therefore, the government argues, “it would not hold good in circumstances where certain further education functions were transferred from the Secretary of State to some combined authorities in England, which is due to happen in 2018.”
This is why clause 38 will amend that wording to “purposes connected with further education”.
Whatever the details of the reason, the purposes are broader.
Again, combined with the Digital Economy Bill’s open ended purposes, it means the Secretary of State could agree to pass these data on to every other government department, a range of public bodies, and some private organisations.
These loose purposes, without future restrictions, definitions of the third parties the data could be given to or why, or a clear need to consult the public or parliament on future scope changes, are a repeat of similar legislative changes which have resulted in poor data practices using school pupil data in England, ages 2-19, since 2000.
Policy makers should consider whether the intent of these three bills is to give out identifiable, individual level, confidential data of young people under 18 for commercial use without their consent, or to give journalists and charities access. Should it mean unfettered access by government departments and agencies such as police and Home Office Removals Casework teams, without any transparent register of access, any oversight, or any accountability?
These are today’s uses by third-parties of school children’s individual, identifiable and sensitive data from the National Pupil Database.
Uses of data not as statistics, but named individuals for interventions in individual lives.
Hoping that the data transfers to the Home Office won’t result in the deportation of thousands we would not predict today, may be naive.
Under the new open wording, the Secretary of State for Education might even decide to sell the nation’s entire Technical and Further Education student data to Trump University for the purposes of their ‘research’ to target marketing at UK students or institutions that may be potential US post-grad applicants. The Secretary of State will have the data simply because she “may require [it] for purposes connected with further education.”
And to think US buyers or others would not be interested is too late.
In 2015 Stanford University made a request of the National Pupil Database for both academic staff and students’ data. It was rejected. We know this only from the third party release register. Without any duty to publish requests, approved users or purposes of data release, where is the oversight for use of these other datasets?
If these are not the intended purposes of these three bills, and if there should be any limitation on purposes of use and future scope change, then safeguards and oversight need to be built into the face of the bills to ensure data privacy is protected and to avoid repeating the same mistakes.
Hoping that the decision is always going to be ‘they wouldn’t approve a request like that’ is not enough to protect millions of students’ privacy.
The three bills are a perfect privacy storm
As other Europeans seek to strengthen the fundamental rights of their citizens to take back control of their personal data under the GDPR coming into force in May 2018, the UK government is pre-emptively undermining ours in these three bills.
Young people, and data dependent institutions, are asking for solutions to show what personal data is held where, used by whom, and for what purposes. That buys into the benefit message and builds trust: showing that what you said you’d do with my data is what you did with my data. [1] [2]
Reality is that in post-truth politics it seems anything goes, on both sides of the Pond. So how will we trust what our data is used for?
2015-16 advice from the cross-party Science and Technology Committee suggested the state of data privacy is unsatisfactory, “to be left unaddressed by Government and without a clear public-policy position set out”. We hear the need for data privacy debated around the use of consumer data, social media, and age verification. It is necessary to secure the public trust needed for long term public benefit, and for the economic value derived from data to be achieved.
But the British government seems intent on shortsighted legislation which does entirely the opposite for its own use: in the Higher Education Bill, the Technical and Further Education Bill and in the Digital Economy Bill.
These bills share what Baroness Chakrabarti said of the Higher Education Bill in its Lords second reading on the 6th December, “quite an achievement for a policy to combine both unnecessary authoritarianism with dangerous degrees of deregulation.”
Unchecked these Bills create the conditions needed for catastrophic failure of public trust. They shift ever more personal data away from personal control, into the centralised control of the Secretary of State for unclear purposes and use by undefined third parties. They jeopardise the collection and integrity of public administrative data.
To future-proof the immediate integrity of student personal data collection and use, the DfE’s reputation, and public and professional trust in DfE political leadership, action must be taken on safeguards and oversight, and should consider:
Transparency register: a public record of access, purposes, and benefits to be achieved from use
Subject Access Requests: Providing the public ways to access copies of their own data
Consent procedures for collection should be strengthened, and should not say one thing and do another
The ability to withdraw consent from secondary purposes should be built in by design, looking to GDPR from 2018
The legislative purpose of intended current use by the Secretary of State, and its boundaries, should be clarified
Future purpose and scope change limitations should require consultation – data collected today must not be used quite differently tomorrow without scrutiny and the ability to opt out (i.e. population-wide registries of religion, ethnicity, disability)
Review or sunset clause
If the legislation in these three bills pass without amendment, the potential damage to privacy will be lasting.
Schools Minister Nick Gibb responded on July 25th 2016:
“These new data items will provide valuable statistical information on the characteristics of these groups of children […] “The data will be collected solely for internal Departmental use for the analytical, statistical and research purposes described above. There are currently no plans to share the data with other government Departments”
[2] December 15, publication of the MOU between the Home Office Casework Removals Team and the DfE reveals that the previous agreement “did state that DfE would provide nationality information to the Home Office”, but that this was changed “following discussions” between the two departments. http://schoolsweek.co.uk/dfe-had-agreement-to-share-pupil-nationality-data-with-home-office/
The agreement was changed on 7th October 2016 to not pass nationality data over. It makes no mention of not using the data within the DfE for the same purposes.