‘People, ideas, machines — in that order.’ This quote in the latest blog by Dominic Cummings is spot on, but the blind spots, or the deliberate scoping, that the blog reveals are just as interesting.
If you want to “figure out what characters around Putin might do”, move over Miranda. If your soul is for sale, then this might be the job for you. This isn’t to cast Cummings as Miranda, but an excuse to get in the parallels to Meryl Streep’s portrayal of Priestly.
“It will be exhausting but interesting and if you cut it you will be involved in things at the age of 21 that most people never see.”
Comments like these make people who are not of that mould feel of less worth. Commitment comes in many forms. People with kids and caring responsibilities may be some of your most loyal staff. You may not want them as your new PA, but you almost certainly do not want to lose them across the board.
Some words of follow-up to existing staff, the thousands of public servants we have today, would be wise after his latest post.
1. The blog is aimed at a certain kind of man. Speak to women too.
The framing of this call for staff is problematic, less for its suggested work ethic than for the structural inequalities it appears to purposely perpetuate, despite the poke at public school bluffers. Do you want the best people around you, able to play well with others, or not?
I am disappointed that the ask for “the sort of people we need to find” is framed, intentionally or not, to appeal to a certain kind of man, even if he says it should be diverse and include people “like that girl hired by Bigend as a brand ‘diviner’.”
If Cummings is intentional about hiring the best people, then he needs to do better by women. We already have a PM whom many women consider toxic to work around, and who won’t as a result.
Some of the most brilliant, cognitively diverse young people I know who fit these categories well, across the political spectrum, are themselves diverse by nature and expect their surroundings to be. They, unlike our generation, do not “babble about ‘gender identity diversity blah blah’.” Woke is not an adjective that needs explaining, but a way of life. Put such people off by appearing to devalue their norms, and you’ll miss out on some potentially brilliant applicants from a pool that will already be self-selecting, excluding many who simply won’t work for you, or Boris, or Brexit blah blah. People prepared to burn out as you want them to aren’t going to be at their best for long. And it takes a long time to recover.
‘That girl’ was the main character, and her name was Cayce Pollard. Women know why you should say her name. Fewer women will have worked at CERN, perhaps for related reasons, compared with “the ideal candidate” described in this call.
“If you want an example of the sort of people we need to find in Britain, look at this,” he writes of C.C. Myers, with a link to ‘On the Cover: The World’s Fastest Man.’
Charlie Munger, Warren Buffett, Alexander Grothendieck, Bret Victor, von Neumann, Cialdini. Groves, Mueller, Jain, Pearl, Kay, Gibson, Grove, Makridakis, Yudkowsky, Graham and Thiel.
The *men illustrated* list goes on and on.
What does it matter how many lovers you have if none of them gives you the universe?
Not something I care to discuss over dinner either.
But women of all ages do care that our PM appears to be a cad. It matters therefore that your people be seen to work to a better standard. You want people loyal to your cause, and the public to approve, even if they don’t approve of your leader. Leadership goes far beyond electoral numbers and a mandate.
A different kind of the same kind of thing isn’t real change. This call for something new is far less radical than it is being portrayed.
2. Change. Don’t forget to manage it by design.
In fact, the speculation that this is all change, hiring new people for new stuff [some of which he has genuinely interesting ideas on elsewhere, like “decentralisation and distributed control to minimise the inevitable failures of even the best people”], doesn’t really feature here; rather, it is something of a precursor. He is starting less with building the new than with ‘draining the swamp’ of bureaucracy, the Washington style of 1980s Reagan, including ‘let’s put in some more of our kind of people’.
His personal brand of longer-term change may not be what some of his cheerleaders think it will be, but if the outcome is the same and seen to be ‘showing these Swamp creatures the zero mercy they deserve‘, [sic] does intent matter? It does, and he needs to describe his future plans better, if he wants to have a civil service that works well.
The biggest content gap (leaving actual policy content aside) is any appreciation of the current state of the organisation, and of the need for change management.
Training gets a mention, but the success of any new process depends on communicating the change effectively, and delivering training about it to everyone, not only those from whom you expect the highest performance. People not projects, remember?
Change management and capability transfer delivered by costly consultants is not needed, but making it understandable, not elitist, is. It needs:
a genuine understanding of the as-is (I get you and your org; change *with* you, not forced upon you),
communication of the future model being moved towards (why you want to change and what good looks like), and
a roadmap of how you expect the organisation to get there (how and when), which need not be constricted by artificial comms grids.
On top of the organisational model, *every* member of staff must know where their own path fits in and, if their role is under threat, whether training will be offered to adapt or whether they will be made redundant. Prolonged uncertainty around this is also toxic. You might not care if you lose people along the way. You might consider these the most expendable people. But if people are fearful and unhappy in your organisation, or about their own future, it will hold them back from delivering at their best, and hold the organisation back as a result. And your best people will leave, as much as those who are not.
“How to build great teams and so on” is not a bolt-on extra here, it is fundamental. You can’t forget the kitchens. But changing the infrastructure alone cannot deliver the real change you want to see.
3. Communications. Neither propaganda and persuasion nor PR.
There is not such a vast difference between communications as a campaign tool and as a tool for control: persuasion and propaganda. But a possible blind spot in the promotion of Cialdini-six style comms is that the behavioural scientists who excel at these will not use the kind of communication tools that either the civil service or the country needs for the serious communication of change beyond the immediate short term.
As an aside, for anyone having kittens about using an unofficial email to get around FOI requests and think it a conspiracy to hide internal communications, it really doesn’t work that way. Don’t panic, we know where our towel is.
4. The Devil craves DARPA. Build it with safe infrastructures.
Cummings’ long-established fetishising of technology and fascination with Moscow will be familiar to those close to him, or to readers of his blog. They are also currently fashionable, again. The solution is therefore no surprise, and has been prepped in various blogs for ages. The language is familiar. But single-mindedness over this length of time can make for short-sightedness.
“The limiting factor for the Pentagon in deploying advanced technology to conflict in a useful time period was not new technical ideas — overcoming its own bureaucracy was harder than overcoming enemy action.”
Almost a year after that project collapsed, its most interesting feature was surely not the role of bureaucracy in tech failure. Maven was a failure not of tech, nor of bureaucracy, but a failure to align the company’s values with the decency of its workforce. Whether the recalibration of its compass as a company is even possible remains to be seen.
If firing staff who hold you to account against a mantra of ‘don’t be evil’ is championed, this drive for big tech values to underpin your staff’s thinking and action will be less about supporting technology moonshots than a shift to the Dark Side of capitalist surveillance.
The incessant narrative focus on man and the machine (machine learning, the machinery of government, quantitative models and the frontiers of the science of prediction) is an obsession with power. The downplaying of the human in that world is displayed in so many ways, most obviously in the press and political narrative of a need to devalue human rights. And yet to succeed, tech and innovation need an equal and equivalent counterweight in accountability under human rights and the law, so that when systems fail people, they do not cause catastrophic harm at scale.
We must stop state systems failing children, if they are not to create a failed society.
A UK DARPA-esque, devolved hothousing of technology will fail if you don’t shore up public trust, both in the state and in the commercial sector. An electoral mandate won’t last, nor reach beyond its scope for long. You need a social licence for legitimacy of tech that uses public data, and that is missing today. It is bone-headed and idiotic that we can’t get this right as a country. We know how to; if government keeps avoiding doing it safely, it will come at a cost.
You might of course not care. But commercial companies will when they go under. The electorate will. Your masters might, if their legacy suffers and debate about the national good and the UK as a Life Sciences centre all comes to naught.
There was little in this blog of the reality of what these hires should deliver beyond more tech and systems change. But the point is to make systems that work for people, not to see more systems at work.
5. The ‘circle of competence’ needs values, not only to value skills.
It is important, and consistent behaviour, that Cummings says he recognises his own weaknesses, that some decisions are beyond his ‘circle of competence’, and that he should in effect become redundant, having brought in “the sort of expertise supporting the PM and ministers that is needed.” Founder’s syndrome is common to organisations and politics is not exempt. But neither is the Peter principle a phenomenon particular to the civil service.
“One of the problems with the civil service is the way in which people are shuffled such that they either do not acquire expertise or they are moved out of areas they really know to do something else.”
But so what? What’s worse is that politics has not only the Peter principle but also the Dilbert principle when it comes to senior leadership. You can’t put people in positions expected to command respect when they tell others to shut up and go away, or fire without due process. If you want organisations to function together at scale, especially beyond the current problems with silos, they need people on the ground who can work together towards a common goal, who respect those above them, and who feel it is all worthwhile. Their politics don’t matter. But integrity, respect and trust do, even if they don’t matter to you personally.
I agree wholeheartedly that circles of competence matter [as I see the need to build some in education on data and edTech]. Without the appropriate infrastructure change, radical change of policy is nearly impossible. But skill is not the only competency that counts when it comes to people.
If the change you want is misaligned with people’s values, people won’t support it, no matter who you get to see it through. Something on the integrity that underpins this endeavour, will matter to the applicants too. Most people do care how managers treat their own.
The blog was pretty clear that Cummings won’t value staff unless their work ethic, skills and acceptance are his alone to judge sufficient or not, to be “binned within weeks if you don’t fit.”
This government already knows it has treated parts of the public like that for too long. Policy has knowingly left some people behind on society’s scrap heap, often those scored by automated systems as inadequate. Families in work moved onto Universal Credit feed their children from food banks for #5WeeksTooLong. The rape clause. Troubled families. Children with special educational needs battling for EHC plan recognition without which schools won’t take them, and the DfE knowingly underfunding suitable Alternative Provision in education by a colossal amount, several hundred per cent per place, by design.
The ‘circle of competence’ needs to recognise what happens as a result of policy, not only to place value on the skills in its delivery or see outcomes on people as inevitable or based on merit. Charlie Munger may have said, “At the end of the day – if you live long enough – most people get what they deserve.”
An awful lot of people deserve a better standard of living and human dignity than the UK affords them today. And we can’t afford not to fix it. A question for new hires: How will you contribute to doing this?
6. Remember that our civil servants are, after all, public servants.
The real test of competence, and whether the civil service delivers for the people whom they serve, is inextricably bound with government policy. If its values, if its ethics are misguided, building a new path with or without new people, will be impossible.
The best civil servants I have worked with have one thing in common. They have a genuine desire to make the world better. [We can disagree on what that looks like and for whom, on fraud detection, on immigration, on education, on exploitation of data mining and human rights, or the implications of the law. Their policy may bring harm, but their motivation is not malicious.] Your goal may be a ‘better’ civil service. They may be more focussed on better outcomes for people, not systems. Lose sight of that, and you put the service underpinning government at risk; not bringing change for good, but destroying the very point of it. Keep the point of a better service focussed on improvement for the public.
Civil servants civilly serve, in the words of Stefan Czerniawski. These plans will need challenge to be the best they can be. As pubstrat asked, so should we all ask Cummings to outline his thoughts on:
“What makes the decisions which civil servants implement legitimate?
Where are the boundaries of that legitimacy and how can they be detected?
What should civil servants do if those boundaries are reached and crossed?”
Self-destruction for its own sake is not a compelling narrative for change, whether you say you want to control that narrative or not.
Two hands are a lot, but many more already work in the civil service. If Cummings only works against them, he’ll succeed not in building change, but resistance.
“The consent model is broken” was among its key conclusions.
Similarly, this summer, the Swedish DPA found, in accordance with GDPR, that consent was not a valid legal basis for a school pilot using facial recognition to keep track of students’ attendance given the clear imbalance between the data subject and the controller.
This power imbalance is at the heart of the failure of consent as a lawful basis under Art. 6, for data processing from schools.
Schools, children and their families across England and Wales currently have no mechanisms to understand which companies and third parties will process their personal data in the course of a child’s compulsory education.
Children have rights to privacy and to data protection that are currently disregarded.
Fair processing is a joke.
Unclear boundaries between the processing in-school and by third parties are the norm.
Companies and third parties reach far beyond the boundaries of the processor role, necessity and proportionality when they determine the nature of the processing: extensive data analytics, product enhancements and development going beyond what is necessary for the existing relationship, or product trials.
Data retention rules are as unrespected as the boundaries of lawful processing, and ‘we make the data pseudonymous / anonymous and then archive / process / keep forever’ is common.
Rights are as yet almost completely unheard of for schools to explain, offer and respect, except for Subject Access. Portability, for example, a requirement where consent is the basis, simply does not exist.
“Children do not lose their human rights by virtue of passing through the school gates. Thus, for example, education must be provided in a way that respects the inherent dignity of the child and enables the child to express his or her views freely in accordance with article 12, para (1), and to participate in school life.”
Those rights currently compete unfairly with commercial interests. And that power imbalance in education is as enormous as the data mining in the sector. The then CEO of Knewton, Jose Ferreira, said in 2012,
“the human race is about to enter a totally data mined existence…education happens to be today, the world’s most data mineable industry– by far.”
At the moment, these competing interests and the enormous power imbalance between companies and schools, and schools and families, means children’s rights are last on the list and oft ignored.
In addition, there are serious implications for the State, schools and families due to the routine dependence on key systems at scale:
Infrastructure dependence, i.e. Google Education
Hidden risks [tangible and intangible] of freeware
Data distribution at scale and dependence on third-party intermediaries
and, not least, the implications for families’ mental health and stress thanks to the shift of the burden of school back-office admin from schools to the family.
It’s not a contract between children and companies either
Contract, GDPR Article 6(b), does not work either as a basis of processing between the company processing the data and the data subject, because again it is the school that determines the need for and nature of the processing in education, and it does not work for children.
Controllers must, inter alia, take into account the impact on data subjects’ rights when identifying the appropriate lawful basis in order to respect the principle of fairness.
They also concluded, on the capacity of children to enter into contracts (footnote 10, page 6), that
“A contractual term that has not been individually negotiated is unfair under the Unfair Contract Terms Directive “if, contrary to the requirement of good faith, it causes a significant imbalance in the parties’ rights and obligations arising under the contract, to the detriment of the consumer”.
Like the transparency obligation in the GDPR, the Unfair Contract Terms Directive mandates the use of plain, intelligible language.
Processing of personal data that is based on what is deemed to be an unfair term under the Unfair Contract Terms Directive, will generally not be consistent with the requirement under Article 5(1)(a) GDPR that processing is lawful and fair.”
In relation to the processing of special categories of personal data, in the guidelines on consent, WP29 has also observed that Article 9(2) does not recognize ‘necessary for the performance of a contract’ as an exception to the general prohibition to process special categories of data.
They also found:
it is completely inappropriate to use consent when processing children’s data: children aged 13 and older are, under the current legal framework, considered old enough to consent to their data being used, even though many adults struggle to understand what they are consenting to.
Can we fix it?
Consent models fail school children. Contracts can’t be between children and companies. So what do we do instead?
Schools’ statutory tasks rely on having a legal basis under data protection law, the public task lawful basis Article 6(e) under GDPR, which implies accompanying lawful obligations and responsibilities of schools towards children. They cannot rely on (f) legitimate interests. This 6(e) does not extend directly to third parties.
Third parties should operate on the basis of contract with the school, as processors, but nothing more. That means third parties do not become data controllers. Schools stay the data controller.
Where that would differ from current practice is that most processors today stray beyond necessary tasks and become de facto controllers: sometimes because of everyday processing and having too much of a determining role in the definition of purposes, or not allowing changes to terms and conditions; because of using data to develop their own or new products, or for extensive data analytics; because of the location of processing and data transfers; and very often because of excessive retention.
Although the freedom of the mish-mash of procurement models across UK schools (individual schools, learning grids, MATs, Local Authorities, and no one-size-fits-all model) may often be a good thing, the lack of consistency today means your child’s privacy and data protection are a postcode lottery. Instead we need:
a radical rethink of the use of consent models, and of home-school agreements used to obtain manufactured ‘I agree’ consent,
a radical articulation and regulation of what good looks like for interactions between children and companies facilitated by schools, and
a radical redesign of a contract model which enables only that processing which is within the limits of a processor’s remit and therefore does not need to rely on consent.
It would mean radical changes in retention as well. Processors can process only for as long as the legal basis extends from the school. That should generally be only the time for which a child is in school and using that product in the course of their education. And certainly data must not stay with an indefinite number of companies and their partners once the child has left that class or year, or has left school and stopped using the tool. Schools will need to be able to bring part of the data they outsource to third parties for learning back into the educational record, *if* they need it as evidence or as part of the learning record.
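As a thought experiment only, the retention rule described above can be captured in a few lines of logic. This is a minimal, hypothetical sketch; the function and parameter names are invented for illustration and do not describe any real system:

```python
# Hypothetical sketch of the retention rule described above: a processor may
# only keep a pupil's data while the school's legal basis extends to it.

def retention_action(pupil_has_left: bool, part_of_educational_record: bool) -> str:
    """Decide what a processor should do with a pupil's data."""
    if not pupil_has_left:
        return "retain: the child is still in school and using the product"
    if part_of_educational_record:
        return "transfer to school: needed as evidence or part of the learning record"
    return "delete: the legal basis from the school no longer extends to this data"

print(retention_action(pupil_has_left=True, part_of_educational_record=False))
# delete: the legal basis from the school no longer extends to this data
```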
Where schools close (or the legal entity shuts down and no one thinks of the school records [yes, it happens], or changes name and reopens within the same walls, as under academisation), there must be a designated controller communicated before the change occurs.
The school fence is then something that protects the purposes of the child’s data for education, for life, and is the go-to for questions. The child has a visible and manageable digital footprint. Industry can be confident that it does indeed have a lawful basis for processing.
Schools need to be within a circle of competence
This would need an independent infrastructure that we do not have today, but would need to draw on:
Due diligence,
communication to families and children of agreed processors on an annual basis,
an opt-out mechanism that works,
alternative lesson content on offer, at a similar level, for those who do opt out,
and end-of-school-life data usage reports.
The due diligence in procurement, the data protection impact assessment, and accountability need to be done up front, removed from the responsibility of the classroom teacher, who is in an impossible position having had no basic training in privacy law or data protection rights, and the documents need to be published, in consultation with governors and parents, before processing begins.
However, it would need to have a baseline of good standards that simply does not exist today.
That would also offer a public safeguard for processing at scale, where a company is not notifying the DPA due to small numbers of children at each school, but where overall group processing of special category (sensitive) data could be for millions of children.
Where some procurement structures might exist today, in left over learning grids, their independence is compromised by corporate partnerships and excessive freedoms.
While pre-approval of apps and platforms can fail where the onus is on the controller to accept a product at a point in time, the power shift would occur where products would not be permitted to continue processing without notifying the school of significant changes in agreed activities, ownership, storage of data abroad and so on.
We shift the power balance back to schools, where they can trust a procurement approval route, and children and families can trust schools to only be working with suppliers that are not overstepping the boundaries of lawful processing.
What might school standards look like?
The first principles of necessity, proportionality and data minimisation would need to be demonstrable, just as they have been required under data protection law for many years, and as is more explicit under GDPR’s accountability principle. The scope of the school’s authority must be limited to data processing for defined educational purposes under law, and only these purposes can be carried over to the processor. It would need legislation and a Code of Practice, and ongoing independent oversight. Violations could mean losing permission to be a provider in the UK school system. Data processing failures would be referred to the ICO.
Purposes: A duty that the purposes of processing be necessary for strictly defined educational purposes.
Service Improvement: Processing personal information collected from children to improve the product would be very narrow and constrained to the existing product and relationship with data subjects, i.e. security, not secondary product development.
Deletion: Families and children must still be able to request deletion of personal information collected by vendors which do not form part of the permanent educational record. And a ‘clean slate’ approach for anything beyond the necessary educational record, which would in any event, be school controlled.
Fairness: Whilst at school, the school has responsibility for communicating to the child and family how their personal data are processed.
Post-school accountability, as the data resides with the school: On leaving school the default for most companies should be deletion of all personal data provided by the data subject or the school, and inferred from processing. For remaining data, the school should become the data controller and the data should be transferred to the school. For any remaining company processing, the company must be accountable as controller on demand to both the school and the individual, and at minimum communicate data usage on an annual basis to the school.
Ongoing relationships: Loss of communication channels should be assumed to be a withdrawal of the relationship, and data transferred to the school, if not deleted.
Data reuse: Reuse and repurposing for marketing explicitly forbidden. Vendors must be prohibited from using information for secondary [onward or indirect] reuse, for example in product or external marketing to pupils or parents.
Families must still be able to object to processing, on an ad hoc basis, but at no detriment to the child, and an alternative method of achieving the same aims must be offered.
Data usage reports would become the norm to close the loop on an annual basis. “Here’s what we said we’d do at the start of the year. Here’s where your data actually went, and why.” (A sketch of what such a report might contain follows this list.)
In addition, minimum acceptable ethical standards could be framed around, for example, accessibility and restrictions on in-product advertising.
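To illustrate the data usage reports item above, here is a minimal, purely hypothetical sketch of the kind of information such an annual report might contain. Every name and value below is invented for the example; it is not a proposal for a specific format:

```python
# Purely illustrative sketch of an annual data usage report a processor
# might return to a school and its families; all names and values are invented.
example_data_usage_report = {
    "school_year": "2019/20",
    "processor": "ExampleEdTech Ltd",             # hypothetical supplier
    "purposes_declared": ["homework tracking"],   # what we said we'd do
    "purposes_actually_used": ["homework tracking"],
    "data_categories_held": ["pupil name", "class", "homework scores"],
    "third_party_transfers": [],                  # where the data actually went
    "storage_location": "UK",
    "retention": "deleted at end of school year",
    "changes_since_last_report": "none",
}

for field, value in example_data_usage_report.items():
    print(f"{field}: {value}")
```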
There must be no alternative back route to just enough processing
What we should not do, is introduce workarounds by the back door.
Schools are not to carry on as they do today, manufacturing ‘consent’ which is in fact unlawful. It is why Google, despite objections when I set this out some time ago, is processing unlawfully. It relies on consent that simply cannot and does not exist.
In parallel, the US Federal Trade Commission has a consultation open until December 9th on the Implementation of the Children’s Online Privacy Protection Rule, the COPPA consultation.
‘There has been a significant expansion of education technology used in classrooms’, the FTC mused, before asking whether the Commission should consider a specific exception to parental consent for the use of education technology in schools.
In a backwards approach to agency and the development of a rights respecting digital environment for the child, the consultation in effect suggests that we mould our rights mechanisms to fit the needs of business.
That must change. The ecosystem needs a massive shift to acknowledge that if it is to be GDPR compliant, which is a rights respecting regulation, then practice must become rights respecting.
That means meeting children’s and families’ reasonable expectations. If I send my daughter to school, and we are required to use a product that processes our personal data, it must be strictly for the *necessary* purposes of the task that the school asks of the company, and that the child and family expect, and not a jot more.
Borrowing on Ben Green’s smart enough city concept, or Rachel Coldicutt’s just enough Internet, UK school edTech suppliers should be doing just enough processing.
How it is done in the U.S., governed by FERPA law, is imperfect and still results in too many privacy invasions, but it offers a regional model of expertise for schools to rely on, and strong contractual agreements on what is permitted.
That, we could build on. It could be just enough, to get it right.
Five years on, other people’s use of the language of data ethics puts social science at risk. Event after event, we are witnessing the gradual dissolution of the value and meaning of ‘ethics’, into little more than a buzzword.
Companies and organisations are using the language of ‘ethical’ behaviour blended with ‘corporate responsibility’ modelled after their own values, as a way to present competitive advantage.
Ethics is becoming shorthand for, ‘we’re the good guys’. It is being subverted by personal data users’ self-interest. Not to address concerns over the effects of data processing on individuals or communities, but to justify doing it anyway.
An ethics race
There’s certainly a race on for who gets to define what data ethics will mean. We have at least three new UK institutes competing for a voice in the space. Digital Catapult has formed an AI ethics committee. Data charities abound. Even Google has developed an ethical AI strategy of its own, in the wake of their Project Maven.
Lessons learned in public data policy should be clear by now. There should be no surprises how administrative data about us are used by others. We should expect fairness. Yet these basics still seem hard for some to accept.
The NHS Royal Free Hospital in 2015 was rightly criticised, because it tried “to commercialise personal confidentiality without personal consent,” as reported in Wired recently.
“The shortcomings we found were avoidable,” wrote Elizabeth Denham in 2017 when the ICO found six ways the Google DeepMind — Royal Free deal did not comply with the Data Protection Act. The price of innovation, she said, didn’t need to be the erosion of fundamental privacy rights underpinned by the law.
If the Centre for Data Ethics and Innovation is put on a statutory footing where does that leave the ICO, when their views differ?
It’s why the idea of DeepMind funding work in Ethics and Society seems incongruous to me. I wait to be proven wrong. In their own words, “technologists must take responsibility for the ethical and social impact of their work”. Breaking the law, however, is conspicuous by its absence, and the Centre must not be used by companies to generate pseudo-lawful or ethical acceptability.
Do we need new digital ethics?
Admittedly, not all laws are good laws. But if recognising and acting under the authority of the rule-of-law is now an optional extra, it will undermine the ICO, sink public trust, and destroy any hope of achieving the research ambitions of UK social science.
I am not convinced there is any such thing as digital ethics. The gap in our ability to get things right in this complex area is too often claimed only after people get caught doing something wrong. Technologists abdicate accountability saying “we’re just developers,” and sociologists say, “we’re not tech people.”
These shrugs of the shoulders by third-parties, should not be rewarded with more data access, or new contracts. Get it wrong, get out of our data.
This lack of acceptance of responsibility creates a sense of helplessness. We can’t make it work, so let’s make the technology do more. But even the most transparent algorithms will never be accountable. People can be accountable, and it must be possible to hold leaders to account for the outcomes of their decisions.
But it shouldn’t be surprising no one wants to be held to account. The consequences of some of these data uses are catastrophic.
Accountability is the number one problem to be solved right now. It includes openness about data errors, uses, outcomes, and policy. Are commercial companies with public sector contracts checking that data are accurate, and corrected with the people whom the data are about, before applying them in predictive tools?
Unethical practice
As Tim Harford in the FT once asked about Big Data uses in general: “Who cares about causation or sampling bias, though, when there is money to be made?”
Problem area number two, whether researchers are working towards a profit model or chasing grant funding, is this:
How can data users make unbiased decisions about whether they should use the data? The same bodies that decide on data access also oversee its governance. Conflict of self-interest is built in by default, and the allure of new data territory is tempting.
But perhaps the key UK public data ethics problem is that policy is currently too often about the system goal, not about improving the experience of the people using the systems; not using technology as a tool, as if people mattered. Harmful policy can generate harmful data.
Secondary uses of data are intrinsically dependent on the ethics of the data’s operational purpose at collection. Damage-by-design is evident right now across a range of UK commercial and administrative systems. Metrics of policy success and associated data may be just wrong.
Home Office reviews wrongly identified students as potentially fraudulent, yet its TOEIC visa cancellations have irrevocably damaged lives without redress.
Some of the most ethical research aims try to reveal these problems. But we need to also recognise not all research would be welcomed by the people the research is about, and few researchers want to talk about it. Among hundreds of already-approved university research ethics board applications I’ve read, some were desperately lacking. An organisation is no more ethical than the people who make decisions in its name. People disagree on what is morally right. People can game data input and outcomes and fail reproducibility. Markets and monopolies of power bias aims. Trying to support the next cohort of PhDs and impact for the REF, shapes priorities and values.
It is still rare to find informed discussion among the brightest and best of our leading data institutions, about the extensive everyday real world secondary data use across public authorities, including where that use may be unlawful and unethical, like buying from data brokers. Research users are pushing those boundaries for more and more without public debate. Who says what’s too far?
The only way is ethics? Where next?
The latest academic-commercial mash-ups on why we need new data ethics, in a new regulatory landscape where the established is seen as past it, are a dangerous catch-all ‘get out of jail free’ card.
Ethical barriers are out of step with some of today’s data politics. The law is being sidestepped, and regulation diminished by a lack of enforcement against gratuitous data grabs from the Internet of Things, while social media data are seen as a free-for-all. Data access barriers are unwanted. What is left to prevent harm?
I’m certain that we first need to take a step back if we are to move forward. Ethical values are founded on human rights that existed before data protection law: fundamental human decency, rights to privacy and to freedom from interference, common law confidentiality, tort, and professional codes of conduct on conflict of interest and confidentiality.
Data protection law emphasises data use. But too often its first principles of necessity and proportionality are ignored. Ethical practice would ask more often, should we collect the data at all?
Let’s not pretend secondary use of data is unproblematic, while uses are decided in secret. Calls for a new infrastructure actually seek workarounds of regulation. And human rights are dismissed.
Building a social license between data subjects and data users is unavoidable if use of data about people hopes to be ethical.
The lasting solutions are underpinned by law, and ethics. Accountability for risk and harm. Put the person first in all things.
We need more than hopes and dreams and talk of ethics.
We need realism if we are to get a future UK data strategy that enables human flourishing, with public support.
Notes of desperation or exasperation are increasingly evident in discourse on data policy, and start to sound little better than ‘we want more data at all costs’. If so, the true costs would be lasting.
Perhaps then it is unsurprising that there are calls for a new infrastructure to make it happen, in the form of Data Trusts. Some thoughts on that follow too.
Part 1. Ethically problematic
Ethics is dissolving into little more than a buzzword. Can we find solutions underpinned by law, and ethics, and put the person first?
Peter Riddell, the Commissioner for Public Appointments, has completed his investigation into the recent appointments to the Board of the Office for Students and published his report.
From the “Number 10 Googlers”, to NUS affiliation (an interest in student union representation) being seen as undesirable, to “undermining the policy goals” and what the SpAds supported, the whole report is worth a read.
Perception of the process
The concern that the Commissioner raises over the harm done to the public’s perception of the public appointments process means more needs to be done to fix these problems, both before and after appointments.
This process reinforces what people think already. Jobs for the [white Oxford] boys, and yes-men. And so what, why should I get involved anyway, and what can we hope to change?
Possibilities for improvement
What should the Department for Education (DfE) now offer and what should be required after the appointments process, for the OfS and other bodies, boards and groups et al?
Every board at the Department for Education, its name, aim, and members — internal and external — should be published.
Every board at the Department for Education should be required to publish its Terms of Appointment, and Terms of Reference.
Every board at the Department for Education should be required to publish agendas before meetings and meaningful meeting minutes promptly.
Why? Because there are all sorts of boards around and their transparency is frankly non-existent. I know, because I sit on one. Foolishly, I did not make it a requirement to publish minutes before I agreed to join. But in a year it has only met twice, so you’ve not missed much. Who else sits where, on what policy, and why?
On another that I used to sit on, I got increasingly frustrated that the minutes did not reflect the substance of the discussion. This does the public a disservice twice over. The purpose of the boards looks insipid, and the evidence for what challenge they are intended to offer, their very reason for being, is washed away. Show the public what’s hard, that there’s debate, that risks are analysed and balanced, and then decisions taken. Be open to scrutiny.
The public has a right to know
When scrutiny really matters, it is wrong — just as the Commissioner report reads — for any Department or body to try to hide the truth.
The purpose of transparency must be to hold to account and ensure checks-and-balances are upheld in a democratic system.
The DfE withdrew from a legal hearing scheduled at the First Tier Information Rights Tribunal last year a couple of weeks beforehand, and finally accepted an ICO decision notice in my favour. I had gone through a year of the Freedom-of-Information appeal process to get hold of the meeting minutes of the Department for Education Star Chamber Scrutiny Board, from November 2015.
It was the meeting in which I had been told members approved the collection of nationality and country of birth in the school census.
“The Star Chamber Scrutiny Board”. Not out of Harry Potter and the Ministry of Magic but appointed by the DfE.
It’s a board that mentions actively seeking members of certain teaching unions but omits others. It publishes no meeting minutes. Its terms of reference are 38 words long, and it was not told the whole truth before one of the most important and widely criticised decisions it ever made, a decision affecting the lives of millions of children across England and causing harm and division in the classroom.
Its annual report doesn’t mention the controversy at all.
After sixteen months, the DfE finally admitted it had kept the Star Chamber Scrutiny Board in the dark on at least one of the purposes of expanding the school census. And on its pre-existing active, related data policy passing pupil data over to the Home Office.
The minutes revealed the Board did not know anything about the data sharing agreement already in place between the DfE and the Home Office, or that “(once collected) nationality data” [para 15.2.6] was intended to be shared with the Border Force Casework Removals Team.
Truth that the DfE was forced to reveal, and which only came out two years after the meeting, and a full year after the change in law.
If truth, transparency, and diversity of political opinion on boards are allowed to die, so does democracy
I spoke to Board members in 2016. They were shocked to find out what the MOU purposes were for the new data, and that regular data transfers had already begun without their knowledge, when they were asked to sign off the nationality data collection.
The fact that they raised no concerns was then given in written evidence to the House of Lords Secondary Legislation Scrutiny Committee as proof that it had been properly reviewed.
How trustworthy is anything that the Star Chamber now “approves”, or our law-making process to expand school data? How trustworthy is the Statutory Instrument scrutiny process?
“there was no need for DfE to discuss with SCSB the sharing of data with Home Office as: a.) none of the data being considered by the SCSB as part of the proposal supporting this SI has been, or will be, shared with any third-party (including other government departments);
[omits it “was planned to be”]
and b.) even if the data was to be shared externally, those decisions are outside the SCSB terms of reference.”
Outside terms of reference that are 38 words long, and that are supposed to scrutinise, but not too closely, or reject on the basis of what, exactly?
Not only is the public not being told the full truth about how these boards are created, and what their purpose is, it seems board members are not always told the full truth they deserve either.
Who is invited to the meeting, and who is left out? What reports are generated with what recommendations? What facts or opinion cannot be listened to, scrutinised and countered, that could be so damaging as to not even allow people to bring the truth to the table?
If publishing the meeting minutes would be so controversial and damaging to making public policy, then who the heck are these unelected people making such significant decisions, and how? Are they qualified, are they independent, and are they accountable?
If, alternatively, what should be ‘independent’ boards, or panels, or meetings set up to offer scrutiny and challenge, are in fact being manipulated to manoeuvre policy and the ready-made political opinions of the day, it is a disaster for public engagement and democracy.
It should end with this ex-OfS hiring process at the DfE, today.
The appointments process and the ongoing work by boards must have full transparency, if they are ever to be seen as trustworthy.
What it means to be human is going to be different. That was the last word of a panel of four excellent speakers, and the sparkling wit and charm of chair Timandra Harkness, at tonight’s Turing Institute event, hosted at the British Library, on the future of data.
The first speaker, Bernie Hogan, of the Oxford Internet Institute, spoke of Facebook’s emotion experiment, and the challenges of commercial companies’ ownership and concentrations of knowledge, as well as their decisions controlling what content you get to see.
He also explained simply what an API is in human terms: like a plug in a socket, except that instead of electricity you get a flow of data, and the data controller can control which data come out of the socket.
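To make that plug-and-socket analogy concrete, here is a minimal, purely hypothetical sketch; the record, field names and allowed-fields policy are all invented for illustration and do not describe any real API:

```python
# Hypothetical illustration of the plug-and-socket analogy: the data
# controller decides which fields may flow out of the "socket".

FULL_RECORD = {
    "user_id": "u123",
    "posts_liked": 42,
    "location_history": ["London", "Oxford"],
    "private_messages": ["..."],
}

# The controller's policy: only these fields are exposed through the API.
ALLOWED_FIELDS = {"user_id", "posts_liked"}

def api_response(record: dict, allowed: set) -> dict:
    """Return only the data the controller permits to leave the socket."""
    return {key: value for key, value in record.items() if key in allowed}

print(api_response(FULL_RECORD, ALLOWED_FIELDS))
# {'user_id': 'u123', 'posts_liked': 42}
```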
And he brilliantly brought in a thought experiment: what would it mean to be able to go back in time to the Nuremberg trials, and to regulate not only medical ethics but also the data ethics of indirect and computational use of information? How would it affect today’s thinking on AI and machine learning, and where we are now?
“Available does not mean accessible, transparent does not mean accountable”
Charles from the Bureau of Investigative Journalism, who had also worked for Trinity Mirror using data analytics, introduced some of the issues that large datasets pose for the public.
People rarely have the means to do any analytics well.
Even if open data are available, they are not necessarily accessible, due to the volume of data to access, the constraints of common software (such as Excel), and time constraints.
Without the facts they cannot go see a [parliamentary] representative or community group to try and solve the problem.
Local journalists often have targets for the number of stories they need to write, and target number of Internet views/hits to meet.
Putting data out there is only transparency, not accountability, if we cannot turn information into knowledge that can benefit the public.
“Trust, is like personal privacy. Once lost, it is very hard to restore.”
Jonathan Bamford, Head of Parliamentary and Government Affairs at the ICO, took us back to why we need to control data at all. Democracy. Fairness. The balance of people’s rights, like privacy, and Freedom-of-Information, and the power of data holders. The awareness that power of authorities and companies will affect the lives of ordinary citizens. And he said that even early on there was a feeling there was a need to regulate who knows what about us.
The third generation of Data Protection law he said, is now more important than ever to manage the whole new era of technology and use of data that did not exist when previous laws were made.
But, he said, the principles stand true today. Don’t be unfair. Use data for the purposes people expect. Security of data matters. As do rights to see the data people hold about us. Make sure data are relevant, accurate, necessary and kept for a sensible amount of time.
And even if we think that technology is changing, he argued, the principles will stand, and organisations need to consider these principles before they do things, considering privacy as a fundamental human right by default, and data protection by design.
After all, we should remember the Information Commissioner herself recently said,
“privacy does not have to be the price we pay for innovation. The two can sit side by side. They must sit side by side.
It’s not always an easy partnership and, like most relationships, a lot of energy and effort is needed to make it work. But that’s what the law requires and it’s what the public expects.”
“We must not forget, evil people want to do bad things. AI needs to be audited.”
Joanna J. Bryson was brilliant in her multifaceted talk, summing up how data will affect our lives. She explained how implicit biases work, how we reason and make decisions, and how some of the ways we think show up in Internet searches. She showed, in practical ways, how machine learning is shaping our future in ways we cannot see. And she said that firms’ assertions that they are doing these things fairly and openly, and that regulation no longer fits new tech, are “just hoo-hah”.
She talked about the exciting possibilities and good uses of data, but also that “we must not forget, evil people want to do bad things. AI needs to be audited.” She summed up that we will use data to predict ourselves. And she said:
“What it means to be human is going to be different.”
That is perhaps the crux of this debate. How do data and machine learning and its mining of massive datasets, and uses for ‘prediction’, affect us as individual human beings, and our humanity?
The last audience question addressed inequality. Solutions like transparency, subject access, accountability, and understanding biases and how we are used, will never be accessible to all. It needs a far greater digital understanding across all levels of society. How can society both benefit from and be involved in the future of data in public life? The conclusion was that we need more faith in public institutions working for people at scale.
But what happens when those institutions let people down, at scale?
The debate was less about the Future of Data in Public Life, and much more about how big data affects our personal lives. Most of the discussion was around how we understand the use of our personal information by companies and institutions, and how will we ensure democracy, fairness and equality in future.
One audience member’s question went unanswered: how do we protect ourselves from the harms we cannot see, or protect the most vulnerable who are least able to protect themselves?
“How can we future proof data protection legislation and make sure it keeps up with innovation?”
That audience question is timely given the new Data Protection Bill. But what legislation means in practice, I am learning rapidly, can be very different from what is written down in law.
One additional tool in data privacy and rights legislation is up for discussion, right now, in the UK. If it matters to you, take action.
NGOs could be enabled to make complaints on behalf of the public under article 80 of the General Data Protection Regulation (GDPR). However, the government has excluded that right from the draft UK Data Protection Bill launched last week.
“Paragraph 53 omits from Article 80 (representation of data subjects, where provided for by Member State law) paragraph 1 and paragraph 2” [Data Protection Bill Explanatory Notes, paragraph 681, p84/112]. Article 80(2) gives member states the option to provide for NGOs to take action independently on behalf of many people who may have been affected.
If you want that right, a right others will be getting in other countries in the EU, then take action. Call your MP or write to them. Ask for Article 80, the right to representation, in UK law. We need to ensure that our human rights continue to be enacted and enforceable to the maximum, if “what it means to be human is going to be different.”
For the Future of Data has never been more personal.
What would it mean for you to trust an Internet connected product or service and why would you not?
What has damaged consumer trust in products and services and why do sellers care?
What do we want to see different from today, and what is necessary to bring about that change?
These three pairs of questions implicitly underpinned the intense day of #iotmark discussion at the London Zoo last Friday.
The following questions went unasked, though they could have been voiced before we started; they were probably assumed to be self-evident:
Why do you want one at all [define the problem]?
What needs to change and why [define the future model]?
How do you deliver that and for whom [set out the solution]?
If a group does not agree on the need and drivers for change, there will be no consensus on what that should look like, what the gap is to achieve it, and even less on making it happen.
So who do you want the trustmark to be for, why will anyone want it, and what will need to change to deliver the aims? No one wants a trustmark per se. Perhaps you want what values or promises it embodies to demonstrate what you stand for, promote good practice, and generate consumer trust. To generate trust, you must be seen to be trustworthy. Will the principles deliver on those goals?
The Open IoT Certification Mark Principles, as a rough draft, were the outcome of the day, and are available online.
Here are my reflections, including what was missing on privacy, and the potential for it to be considered in future.
I’ve structured this first, assuming readers attended the event, at ca 1,000 words. Lists and bullet points. The background comes after that, for anyone interested to read a longer piece.
Many thanks upfront, to fellow participants, to the organisers Alexandra D-S and Usman Haque and the colleague who hosted at the London Zoo. And Usman’s Mum. I hope there will be more constructive work to follow, and that there is space for civil society to play a supporting role and critical friend.
The mark didn’t aim to fix the IoT in a day, but deliver something better for product and service users, by those IoT companies and providers who want to sign up. Here is what I took away.
I learned three things
A sense of privacy is not homogenous, even within people who like and care about privacy in theoretical and applied ways. (I very much look forward to reading suggestions promised by fellow participants, even if enforced personal openness and ‘watching the watchers’ may mean ‘privacy is theft‘.)
Awareness of current data protection regulations needs to be improved in the field. For example, Subject Access Requests already apply to all data controllers, public and private. Few have read the GDPR, or the e-Privacy directive, despite their importance for security measures in personal devices, relevant to the IoT.
I truly love working on this stuff, with people who care.
And it reaffirmed things I already knew
Change is hard, no matter in what field.
People working together towards a common goal is brilliant.
Group collaboration can create some brilliantly sharp ideas. Group compromise can blunt them.
Some men are particularly bad at talking over each other, never mind over the women in the conversation. Women notice more. (Note to self: When discussion is passionate, it’s hard to hold back in my own enthusiasm and not do the same myself. To fix.)
The IoT context, and the risks within it, are not homogenous, but bring new risks and adversaries. The risks for manufacturers, for consumers and for the rest of the public are different, and cannot be easily solved with a one-size-fits-all solution. But we can try.
Concerns I came away with
If the citizen / customer / individual is to benefit from the IoT trustmark, they must be put first, ahead of companies’ wants.
If the IoT group controls the design, the assessment of adherence and the definition of success, how objective will it be?
The group was not sufficiently diverse and, as a result, reflected too little on the risks and impact of the lack of diversity in design and effect, and the implications of dataveillance.
Critical minority thoughts, although welcomed, were stripped out of the crowdsourced first draft principles in compromise.
More future thinking should be built in, to be robust over time.
What was missing
There was too little discussion of privacy in perhaps the most important context of the IoT: interconnectivity and new adversaries. It’s not only about *your* thing, but the things it speaks to and interacts with, those of friends, passersby, the cityscape, and other individual and state actors interested in offence and defence. While we started to discuss it, we did not have the opportunity to go into sufficient depth to get that thinking into applied solutions in the principles.
One of the greatest risks that users face is the ubiquitous collection and storage of data about users that reveal detailed, inter-connected patterns of behaviour and our identity and not seeing how that is used by companies behind the scenes.
What we also missed discussing is not what we see as necessary today, but what we can foresee as necessary in the short-term future: brainstorming and crowdsourcing horizon scanning for market needs and changing stakeholder wants.
Future thinking
Here are the areas of future thinking that smart work on the IoT mark could consider.
We are moving towards ever greater requirements to declare identity to use a product or service, to register and log in to use anything at all. How will that change trust in IoT devices?
Single-identity sign-on is becoming ever more imposed, and any attempt to present who I am in multiple ways, by choice and dependent on context, is therefore restricted. [Not all users want to use the same social media credentials for online shopping, for their child’s school app, and for their weekend entertainment.]
Is this imposition what the public wants or what companies sell us as what customers want in the name of convenience? What I believe the public would really want is the choice to do neither.
There is increasingly no private space or time, at places of work.
Limits on private space are encroaching, covertly, into all public city spaces. How will ‘hand-offs’ affect privacy in the IoT?
There is too little understanding of the social effects of this connectedness and knowledge created, embedded in design.
What effects may there be on the perception of the IoT as a whole, if predictive data analysis and complex machine learning and AI hidden in black boxes becomes more commonplace and not every company wants to be or can be open-by-design?
The ubiquitous collection and storage of data about users that reveal detailed, inter-connected patterns of behaviour and identity need greater commitments to disclosure. Where the hand-offs are to other devices, and to whatever else is in the surrounding ecosystem, who has responsibility for communicating that interaction through privacy notices, or for defining legitimate interests, where the joined-up data may be much more revealing than stand-alone data in each silo?
Define with greater clarity the privacy threat models for different groups of stakeholders and address the principles for each.
What would better look like?
The draft privacy principles are a start, but they’re not yet as aspirational as I had hoped. Of course the principles will only be adopted where possible and practical, and by those who choose to. But where is the differentiator from what everyone is required to do anyway, and better than the bare minimum? How will you sell this to consumers as new? How would you like your child to be treated?
The wording in these five bullet points is the first crowdsourced starting point; a rough sketch of how it could be made machine-readable follows the list.
The supplier of this product or service MUST be General Data Protection Regulation (GDPR) compliant.
This product SHALL NOT disclose data to third parties without my knowledge.
I SHOULD get full access to all the data collected about me.
I MAY operate this device without connecting to the internet.
My data SHALL NOT be used for profiling, marketing or advertising without transparent disclosure.
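To make that draft concrete for myself, here is a minimal sketch of how those five points could be expressed as a machine-readable self-declaration that a supplier might publish alongside an #IoTmark claim. Nothing like this was agreed at the event: the declaration format, field names and checking function below are my own hypothetical illustration; only the five principles themselves come from the crowdsourced draft.

```python
# Hypothetical sketch: the five crowdsourced draft principles expressed as a
# machine-readable self-declaration. Field names are invented for illustration.

DRAFT_PRINCIPLES = {
    "gdpr_compliant": "MUST",                            # supplier MUST be GDPR compliant
    "no_undisclosed_third_party_sharing": "SHALL NOT",   # no disclosure to third parties without my knowledge
    "full_subject_access": "SHOULD",                     # I SHOULD get full access to all data collected about me
    "offline_operation": "MAY",                          # I MAY operate the device without connecting to the internet
    "no_profiling_without_disclosure": "SHALL NOT",      # no profiling/marketing/advertising without transparent disclosure
}

def unaddressed_principles(supplier_claims: dict) -> list:
    """Return the draft principles a supplier's self-declaration fails to address."""
    return [p for p in DRAFT_PRINCIPLES if not supplier_claims.get(p, False)]

# Example: a supplier claiming everything except offline operation.
claims = {k: True for k in DRAFT_PRINCIPLES}
claims["offline_operation"] = False
print(unaddressed_principles(claims))   # -> ['offline_operation']
```

Even a toy check like this makes visible the gap between “GDPR compliant anyway” and anything genuinely aspirational.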
Yes, other points that came under security address some of the crossover between privacy and surveillance risks, but there is as yet little of substance that is aspirational enough to make the IoT mark a real differentiator in terms of privacy. An opportunity remains.
It was that, and how young people perceive privacy, that I hoped to bring to the table. Because if manufacturers are serious about future success, they cannot ignore today’s children and how they feel. How you treat them today will shape future purchasers and their purchasing, and there is evidence you are getting it wrong.
The timing is good in that it now also offers the opportunity to promote consistent understanding, and to embed the language of the GDPR and ePrivacy regulations in compatible language in policy and practice in the #IoTmark principles.
User rights I would like to see considered
These are some of the points I would think privacy by design would mean. This would better articulate GDPR Article 25 to consumers.
Data sovereignty is a good concept and I believe it should be considered for inclusion in the explanatory blurb before any agreed privacy principles.
Goods should be ‘dumb* by default’ until the smart functionality is switched on. [*As our group chair/scribe called it.] I would describe this as, “off is the default setting out-of-the-box” (a rough sketch of what that could look like in shipped settings follows this list).
Privacy by design. Deniability by default. That is, not only after opt-out: a company should not access the personal or identifying purchase data of anyone who opts out of data collection about their product/service use during the set-up process.
The right to opt out of data collection at a later date while continuing to use services.
A right to object to the sale or transfer of behavioural data, including to third-party ad networks, and an absolute opt-in requirement on any transfer of company ownership.
A requirement that advertising should be targeted to content, [user bought fridge A] not through jigsaw data held on users by the company [how user uses fridge A, B, C and related behaviour].
An absolute rejection of using children’s personal data to target advertising and marketing at children.
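As a thought experiment only, here is what “off is the default setting out-of-the-box” might look like in a device’s shipped configuration. The settings object and its field names are my own illustration, not a proposed specification; the point is simply that every privacy-relevant feature starts disabled and only an explicit owner action turns it on.

```python
from dataclasses import dataclass

# Hypothetical sketch of 'dumb by default' shipped settings. Field names are illustrative.

@dataclass
class ShippedSettings:
    smart_features_enabled: bool = False   # 'dumb by default' until switched on
    internet_connection: bool = False      # device works offline until the owner opts in
    usage_data_collection: bool = False    # no collection of product/service use data by default
    behavioural_data_sale: bool = False    # no sale/transfer of behavioural data without opt-in
    targeted_ads_from_usage: bool = False  # ads, if any, tied to content, not joined-up usage data

    def opt_in(self, setting: str) -> None:
        """The only way a privacy-relevant setting turns on is an explicit owner action."""
        if not hasattr(self, setting):
            raise ValueError(f"unknown setting: {setting}")
        setattr(self, setting, True)

fridge = ShippedSettings()              # everything off out of the box
fridge.opt_in("smart_features_enabled") # the owner switches the smart functionality on
print(fridge)
```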
Background: Starting points before privacy
After a brief recap on 5 years ago, we heard two talks.
The first was a presentation from Bosch. They used the insights from the IoT open definition of five years ago in their IoT thinking and embedded it in their brand book; the presenter suggested that in five years’ time, every fridge Bosch sells will be ‘smart’. The second was a fascinating presentation of both EU thinking and the intellectual nudge to think beyond the practical, to what kind of society we want to see using the IoT in future – hints of hardcore ethics and philosophy that made my brain fizz – from Gerald Santucci, soon to retire from the European Commission.
The principles of open sourcing, manufacturing, and sustainable life cycle were debated in the afternoon with intense arguments and clearly knowledgeable participants, including those who were quiet. But while the group had assigned security, and started work on it weeks before, there was no one pre-assigned to privacy. For me, that said something. If they are serious about those who earn the trustmark being better for customers than their competition, then there needs to be greater emphasis on thinking like their customers, and by their customers, and what use the mark will be to customers, not companies. Plan early public engagement and testing into the design of this IoT mark, and make that testing open and diverse.
To that end, I believe it needed to be articulated more strongly, that sustainable public trust is the primary goal of the principles.
Trust that my device will not become unusable or worthless through updates or lack of them.
Trust that my device is manufactured safely and ethically and with thought given to end of life and the environment.
Trust that my source components are of high standards.
Trust in what data and how that data is gathered and used by the manufacturers.
Fundamental to ‘smart’ devices is their connection to the Internet, and so the last point, for me, is key to successful public perception and to the mark actually making a difference, beyond its PR value to companies. The value-add must be measured from the consumer’s point of view.
All the openness about design functions and practice improvements, without attempting to change privacy-infringing practices, may be wasted effort. Why? Because the perceived value of the mark will be proportionate to the risks it is seen to mitigate.
Why?
Because I assume that you know where your source components come from today. I was shocked to find out that not all do, and that knowing ‘one degree removed’ would count as an improvement. Holy cow, I thought. What about regulatory requirements for product safety recalls? These differ of course for different product areas, but I was still surprised. Having worked in global Fast Moving Consumer Goods (FMCG) and the food industry, semiconductors and optoelectronics, and medical devices, it was self-evident to me that sourcing is rigorous. So that new requirement to know one degree removed was a suggested minimum. But it might shock consumers to know there is not usually more by default.
Customers also believe they have a reasonable expectation of not being screwed by a product update, or left with something that does not work because of its computing-based components. The public can take vocal, reputation-damaging action when they are let down.
While these are visible, the full extent of the overreach of company market and product surveillance into our whole lives, not just our living rooms, is yet to become understood by the general population. What will happen when it is?
The Internet of Things is exacerbating the power imbalance between consumers and companies, between government and citizens. As Wendy Grossman wrote recently, in one sense this may make privacy advocates’ jobs easier. It was always hard to explain why “privacy” mattered. Power, people understand.
That public discussion is long overdue. If open principles on IoT devices mean that the signed-up companies differentiate themselves by becoming market leaders in transparency, it will be a great thing. Companies need to offer full disclosure of data use in any privacy notices in clear, plain language under GDPR anyway, but to go beyond that, and offer customers fair presentation of both risks and customer benefits, will not only be a point-of-sales benefit, but potentially improve digital literacy in customers too.
The morning discussion touched quite often on pay-for-privacy models. While product makers may see this as offering a good thing, I strove to bring discussion back to first principles.
Privacy is a human right. There can be no ethical model of discrimination based on any non-consensual invasion of privacy. Privacy is not something I should pay to have. You should not design products that reduce my rights. GDPR requires privacy-by-design and data protection by default. Now is that chance for IoT manufacturers to lead that shift towards higher standards.
We also need new ethical thinking on acceptable fair use. It won’t change overnight, and perfect may be the enemy of better. But it’s not a battle that companies should think consumers have lost. Human rights and information security should not be on the battlefield at all in the war to win customer loyalty. Now is the time to do better, to be better, and to demand better for us and, in particular, for our children.
Privacy will be a genuine market differentiator
If manufacturers do not want to change their approach to exploiting customer data, they are unlikely to be seen to have changed.
Today, the feelings that people in the US and Europe report in surveys are loss of empowerment, helplessness, and being used. That will shift to shock, resentment and, as any change curve would predict, anger.
“The poll of just over two thousand British adults carried out by Ipsos MORI found that the media, internet services such as social media and search engines and telecommunication companies were the least trusted to use personal data appropriately.” [2014, Data trust deficit with lessons for policymakers, Royal Statistical Society]
In the British student population, one 2015 survey of university applicants in England found that, of the 37,000 who responded, the vast majority of UCAS applicants agree that sharing personal data can benefit them and support public-benefit research into university admissions, but they want to stay firmly in control. 90% of respondents said they wanted to be asked for their consent before their personal data are provided outside of the admissions service.
In 2010, a multi-method research project with young people aged 14-18, by the Royal Academy of Engineering, found that, “despite their openness to social networking, the Facebook generation have real concerns about the privacy of their medical records.” [2010, Privacy and Prejudice, RAE, Wellcome]
When people use privacy settings on Facebook set to maximum, they believe they get privacy, and understand little of what that means behind the scenes.
Are there tools designed by others, like Projects by If licenses, and ways this can be done, that you’re not even considering yet?
What if you don’t do it?
“But do you feel like you have privacy today?” I was asked the question in the afternoon. How do people feel today, and does it matter? Companies exploiting consumer data and getting caught doing things the public don’t expect with their data, has repeatedly damaged consumer trust. Data breaches and lack of information security have damaged consumer trust. Both cause reputational harm. Damage to reputation can harm customer loyalty. Damage to customer loyalty costs sales, profit and upsets the Board.
Where overreach into our living rooms has raised awareness of invasive data collection, we are yet to be able to see and understand the invasion of privacy into our thinking and nudge behaviour, into our perception of the world on social media, the effects on decision making that data analytics is enabling as data shows companies ‘how we think’, granting companies access to human minds in the abstract, even before Facebook is there in the flesh.
Governments want to see how we think too. Is thought crime really that far away, given database labels of ‘domestic extremists’ for activists and anti-fracking campaigners, or the growing weight of policy makers’ attention to PredPol, predictive analytics, the [formerly] Cabinet Office Nudge Unit, Google DeepMind et al?
Had the internet remained decentralised, the debate might be different.
I am starting to think of the IoT not as the Internet of Things, but as the Internet of Tracking. If some have their way, it will be the Internet of Thinking.
Considering our centralised Internet of Things model, our personal data from human interactions have become the network infrastructure, and the data flows are controlled by others. Our brains are the new data servers.
In the Internet of Tracking, people become the end nodes, not things.
And this is where future users will be so important. Do you understand and plan for the factors that will drive push-back and a crash of consumer confidence in your products, and do you take them seriously?
Companies have a choice to act as Empires would – multinationals, joining up even on low levels, disempowering individuals and sucking knowledge and power at the centre. Or they can act as Nation states ensuring citizens keep their sovereignty and control over a selected sense of self.
Look at Brexit. Look at the GE2017. Tell me, what do you see is the direction of travel? Companies can fight it, but will not defeat how people feel. No matter how much they hope ‘nudge’ and predictive analytics might give them this power, the people can take back control.
What might this desire to take back control mean for future consumer models? The afternoon discussion, whilst intense, reached fairly simplistic concluding statements on privacy. We could have done with at least another hour.
Some in the group were frustrated “we seem to be going backwards” in current approaches to privacy and with GDPR.
But if the current legislation is reactive because companies have misbehaved, how will that be rectified for future? The challenge in the IoT both in terms of security and privacy, AND in terms of public perception and reputation management, is that you are dependent on the behaviours of the network, and those around you. Good and bad. And bad practices by one, can endanger others, in all senses.
If you believe that is going back to reclaim a growing sense of citizens’ rights, rather than accepting companies have the outsourced power to control the rights of others, that may be true.
A first question asked whether any element on privacy was needed at all, if the text were simply to state that the supplier of this product or service must be General Data Protection Regulation (GDPR) compliant. The GDPR was years in the making, after all. Does privacy matter more in the IoT, and in what ways? The room tended, understandably, to talk about it from the company perspective: “we can’t”, “won’t”, “that would stop us from XYZ.” Privacy would, however, be better addressed from the personal point of view.
What do people want?
From the company point of view, the language is different and holds clues. Openness, control, user choice and pay-for-privacy are not the same thing as the basic human right to be let alone. The afternoon discussion reminded me of the 2014 WAPO article, discussing Mark Zuckerberg’s theory of privacy and a Palo Alto meeting at Facebook:
“Not one person ever uttered the word “privacy” in their responses to us. Instead, they talked about “user control” or “user options” or promoted the “openness of the platform.” It was as if a memo had been circulated that morning instructing them never to use the word “privacy.””
In the afternoon working group on privacy, there was robust discussion of whether we even had consensus on what privacy means. Words like autonomy, control, and choice came up a lot. But it was only a beginning, and there is opportunity for better. An academic voice raised the concept of sovereignty, with which I agreed. But how and where to fit it into wording that is at once both minimal and applied, under a scribe who appeared frustrated and wanted a completely different approach from what he heard across the group, meant it was left out.
This group do care about privacy. But I wasn’t convinced that the room cared in the way that the public as a whole does, rather than only as consumers and customers do. Yet IoT products will affect potentially everyone, even those who do not buy your stuff. Everyone in that room agreed on one thing: the status quo is not good enough. What we did not agree on was why, and what minimum change is needed to make enough of a difference to matter.
I share the deep concerns of many child-rights academics who see the harm that the restrictions of Article 8 of the GDPR, and the efforts to avoid them, will impose. It is likely to be damaging for children’s right to access information, discriminatory according to parents’ prejudices or socio-economic status, and to encourage ‘cheating’ – requiring secrecy rather than privacy, in attempts to hide from or work around the stringent system.
In ‘The Class’, the research showed, “teachers and young people have a lot invested in keeping their spheres of interest and identity separate, under their autonomous control, and away from the scrutiny of each other.” [2016, Livingstone and Sefton-Green, p235]
Employers require staff to use devices with single sign-on, including web and activity tracking and monitoring software. Employee personal data and employment data are blended. Who owns that data, what rights will employees have to refuse what they see as excessive, and is it manageable given the power imbalance between employer and employee?
What is this doing in the classroom and boardroom for stress, anxiety, performance and system and social avoidance strategies?
A desire for convenience creates shortcuts, and these are often met using systems that require sign-on through the platform giants: Google, Facebook, Twitter, et al. But we are kept in the dark about how, by using these platforms, we give them and those companies access to see how our online and offline activity is all joined up.
Any illusion of privacy we maintain, we discussed, is not choice or control if it is based on ignorance, and backlash against companies’ lack of effort to ensure disclosure and understanding is growing.
“The lack of accountability isn’t just troubling from a philosophical perspective. It’s dangerous in a political climate where people are pushing back at the very idea of globalization. There’s no industry more globalized than tech, and no industry more vulnerable to a potential backlash.”
If your connected *thing* requires registration, why does it? How about a commitment to not forcing one of these registration methods or indeed any at all? Social Media Research by Pew Research in 2016 found that 56% of smartphone owners ages 18 to 29 use auto-delete apps, more than four times the share among those 30-49 (13%) and six times the share among those 50 or older (9%).
Does that tell us anything about the demographics of data retention preferences?
In 2012, they suggested social media has changed the public discussion about managing “privacy” online. When asked, people say that privacy is important to them; when observed, people’s actions seem to suggest otherwise.
Does that tell us anything about how well companies communicate to consumers how their data is used and what rights they have?
There are also data with strong indications that women act to protect their privacy more; but when it comes to basic privacy settings, users of all ages are equally likely to choose a private, semi-private or public setting for their profile, with no significant variations across age groups in the US sample.
Now think about why that matters for the IoT. I wonder who makes the bulk of purchasing decisions about household white goods, for example, and whether Bosch has factored that into its smart-fridges-only decision?
Do you *need* to know who the user is? Can the smart user choose to stay anonymous at all?
The day’s morning challenge was to attend more than one interesting discussion happening at the same time. As invariably happens, the session notes and quotes are always out of context and can’t possibly capture everything, no matter how amazing the volunteer (with thanks!). But here are some of the discussion points from the session on the body and health devices, the home, and privacy. It also included a discussion on racial discrimination, algorithmic bias, and the reasons why care.data failed patients and failed as a programme. We had lengthy discussion on ethics and privacy: smart meters, objections to models of price discrimination, and why pay-for-privacy harms the poor by design.
Smart meter data can track the use of unique appliances inside a person’s home and intimate patterns of behaviour. Information about our consumption of power, what and when every day, reveals personal details about everyday lives, our interactions with others, and personal habits.
Why should company convenience come above the consumer’s? Why should government powers, trump personal rights?
Smart meter data are among the knowledge that government is exploiting, without consent, to discover a whole range of issues, including ensuring that “Troubled Families are identified”. Knowing how dodgy some of the school behaviour data that help define who is “troubled” might be, there is a real question here: is this sound data science? How are errors identified? What about privacy? It’s not your policy, but if it is your product, what are your responsibilities?
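To illustrate why consumption data are so revealing, here is a toy sketch with made-up numbers. It is not a real disaggregation algorithm, and the appliance ‘signatures’ are assumed round figures I have invented, but even this crude step-change check shows how easily a meter trace can suggest which appliance was used, and when.

```python
# Toy illustration (synthetic data, not a real disaggregation algorithm): step changes in
# whole-home power readings can suggest which appliance switched on or off, and when,
# which is part of what makes smart meter data so revealing of daily life.

READINGS_W = [120, 125, 2125, 2130, 2128, 130, 128, 3128, 3125, 125]  # whole-home power, watts
APPLIANCE_SIGNATURES_W = {"kettle": 2000, "electric shower": 3000}    # assumed typical loads

def detect_switch_events(readings, signatures, tolerance=150):
    """Flag readings where the step change roughly matches a known appliance load."""
    events = []
    for t in range(1, len(readings)):
        step = readings[t] - readings[t - 1]
        for name, load in signatures.items():
            if abs(step - load) <= tolerance:
                events.append((t, name, "on"))
            elif abs(step + load) <= tolerance:
                events.append((t, name, "off"))
    return events

print(detect_switch_events(READINGS_W, APPLIANCE_SIGNATURES_W))
# e.g. [(2, 'kettle', 'on'), (5, 'kettle', 'off'), (7, 'electric shower', 'on'), ...]
```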
If companies do not respect children’s rights, you’d better shape up to be GDPR compliant
Children and young people are more vulnerable to nudge, and while developing their sense of self they may be forming and questioning their identity; these influences need oversight, or should be avoided.
In terms of the GDPR, providers are going to pay particular attention to Article 8 on ‘information society services’ and parental consent, to profiling, and to the rights to restriction of processing, to erasure (Article 17 and recital 65) and to data portability (Article 20). However, they may need to simply reassess their exploitation of children and young people’s personal data and behavioural data. Article 57 requires special attention to be paid by regulators to activities specifically targeted at children, as the ‘vulnerable natural persons’ of recital 75.
Human rights regulations and conventions overlap in similar principles that demand respect for a child, and the right to be let alone:
(a) The development of the child ‘s personality, talents and mental and physical abilities to their fullest potential;
(b) The development of respect for human rights and fundamental freedoms, and for the principles enshrined in the Charter of the United Nations.
A weakness of the GDPR is that it allows derogation on age and will create inequality and inconsistency for children as a result. By comparison, Article 1 of the Convention on the Rights of the Child (CRC) defines who is to be considered a “child” for the purposes of the CRC, and states that: “For the purposes of the present Convention, a child means every human being below the age of eighteen years unless, under the law applicable to the child, majority is attained earlier.”
Article two of the CRC says that States Parties shall respect and ensure the rights set forth in the present Convention to each child within their jurisdiction without discrimination of any kind.
CRC Article 16 says that no child shall be subjected to arbitrary or unlawful interference with his or her privacy, family, home or correspondence, nor to unlawful attacks on his or her honour and reputation.
Article 8 CRC requires respect for the right of the child to preserve his or her identity […] without unlawful interference.
Article 12 CRC demands States Parties shall assure to the child who is capable of forming his or her own views the right to express those views freely in all matters affecting the child, the views of the child being given due weight in accordance with the age and maturity of the child.
That stands in potential conflict with GDPR Article 8. There is much in the GDPR, on derogations by country and for children, still to be set.
What next for our data in the wild
Hosting the event at the zoo offered added animals, and at lunchtime we got out on a tour, kindly hosted by a fellow participant. We learned how smart technology is embedded in some of the animal enclosures, with work on temperature sensors with the penguins, for example. I love tigers, so it was a bonus that we got to see such beautiful and powerful animals up close, if a little sad for their circumstances and, as a general basic principle, at seeing big animals caged as opposed to in the wild.
Freedom is a common desire in all animals. Physical, mental, and freedom from control by others.
I think any manufacturer that underestimates this element of human instinct is ignoring the ‘hidden dragon’ that some think is a myth. Privacy is not dead. It is not extinct, nor even, unlike the beautiful tigers, endangered. Privacy in the IoT, at its most basic, is the right to control our purchasing power. The ultimate people power waiting to be sprung. Truly a crouching tiger. People object to being used, and if companies continue to do so without full disclosure, they do so at their peril. Companies seem all-powerful in the battle for privacy, but they are not. Even insurers and data brokers must be fair and lawful, and it is for regulators to ensure that practices meet the law.
When consumers realise that our data, our purchasing power, has the potential to control, not be controlled, that balance will shift.
“Paper tigers” are superficially powerful but are prone to overextension that leads to sudden collapse. If that happens to the superficially powerful companies that choose unethical and bad practice, as a result of better data privacy and data ethics, then bring it on.
I hope that the IoT mark can champion best practices and make a difference to benefit everyone.
While the companies involved in its design may be interested in consumers, I believe it could be better for everyone, done well. The great thing about the efforts into an #IoTmark is that it is a collective effort to improve the whole ecosystem.
I hope more companies will recognise their privacy and ethical responsibilities in the world to all people, including those interested in just being, those who want to be let alone, and not just those buying.
“If a cat is called a tiger it can easily be dismissed as a paper tiger; the question remains however why one was so scared of the cat in the first place.”
Further reading: Networks of Control – A Report on Corporate Surveillance, Digital Tracking, Big Data & Privacy by Wolfie Christl and Sarah Spiekermann
Is Education preparing us for the jobs of the future?
The panel talked about changing social and political realities. We considered the effects on employment. We began discussing how those changes should feed into education policy and practice today. It is a discussion that should be had by the public. So far, almost a year after the Referendum, the UK government has yet to say what post-Brexit Britain might look like. Without a vision, any mandate for the unknown, if voted for on June 9th, will be meaningless.
What was talked about and what should be a public debate:
What jobs will be needed in the future?
Post Brexit, what skills will we need in the UK?
How can the education system adapt and improve to help future generations develop skills in this ever changing landscape?
How do we ensure women [and anyone else] are not left behind?
Brexit is the biggest change management project I may never see.
As the State continues making and remaking laws, reforming education, and starts exiting the EU, all in parallel, technology and commercial companies won’t wait to see what the post-Brexit Britain will look like. In our state’s absence of vision, companies are shaping policy and ‘re-writing’ their own version of regulations. What implications could this have for long term public good?
What will be needed in the UK future?
A couple of sentences from Alan Penn have stuck with me all week. Loosely quoted, we’re seeing cultural identity shift across the country, due to the change of our available employment types. Traditional industries once ran in a family, with a strong sense of heritage. New jobs don’t offer that. It leaves a gap we cannot fill with “I’m a call centre worker”. And this change is unevenly felt.
There is no tangible public plan in the Digital Strategy for dealing with that change in the employment market of the coming 10 to 20 years, and what it means tied into education. It matters when many believe, as do these authors in Scientific American, that “around half of today’s jobs will be threatened by algorithms. 40% of today’s top 500 companies will have vanished in a decade.”
So what needs thought?
Analysis of what that regional jobs market might look like, should be a public part of the Brexit debate and these elections →
We need to see those goals, to ensure policy can be planned for education and benchmark its progress towards achieving its aims
Brexit and technology will disproportionately affect different segments of the jobs market and therefore the population by age, by region, by socio-economic factors →
Education policy must therefore address aspects of skills looking to the future towards employment in that new environment, so that we make the most of opportunities, and mitigate the harms.
Brexit and technology will disproportionately affect communities → What will be done to prevent social collapse in regions hardest hit by change?
Where are we starting from today?
Before we can understand the impact of change, we need to understand what the present looks like. I cannot find a map of what the English education system looks like. No one I ask seems to have one or have a firm grasp across the sector, of how and where all the parts of England’s education system fit together, or their oversight and accountability. Everyone has an idea, but no one can join the dots. If you have, please let me know.
Nothing is constant in education like change; in laws, policy and its effects in practice, so I shall start there.
1. Legislation
In retrospect it was a fatal flaw, missed in post-Referendum battles of who wrote what on the side of a bus, that no one did an assessment of education [and indeed other] ‘legislation in progress’. There should have been recommendations made on scrapping inappropriate government bills in entirety or in parts. New laws are now being enacted, rushed through in wash up, that are geared to our old status quo, and we risk basing policy only on what we know from the past, because on that, we have data.
In the timeframe that Brexit will become tangible, we will feel the effects of the greatest shake up of Higher Education in 25 years. Parts of the Higher Education and Research Act, and Technical and Further Education Act are unsuited to the new order post-Brexit.
What it will do: The new HE law encourages competition between institutions, and the TFE Act centred in large part on how to manage insolvency.
What it should do: Policy needs to promote open, collaborative networks if within a now reduced research and academic circle, scholarly communities are to thrive.
Legislation has recently meant not only restructuring, but repurposing of what education [authorities] are expected to offer.
A new Statutory Instrument — The School and Early Years Finance (England) Regulations 2017 — makes music, arts and playgrounds items ‘that may be removed from maintained schools’ budget shares’.
How will this withdrawal of provision affect skills starting from the Early Years throughout young people’s education?
2. Policy
Education policy, if it continues along the grammar school path, will divide communities into the ‘passed’ and the ‘unselected’. A side effect of selective schooling — a feature or a bug, dependent on your point of view — is socio-economic engineering. It builds class walls in the classroom, while others, like the Fabian Women, say we should be breaking through glass ceilings. Current policy in a wider sense is creating an environment that is hostile to human integration. It creates division across the entire education system for children aged 2–19.
The curriculum is narrowing, according to staff I’ve spoken to recently, as a result of measurement focus on Progress 8, and due to funding constraints.
What effect will this have on analysis of knowledge, discernment, how to assess when computers have made a mistake or supplied misinformation, and how to apply wisdom? Skills that today still distinguish human from machine learning.
What narrowing the curriculum does: Students have fewer opportunities to discover their skill set, limiting opportunities for developing social skills and cultural development, and their development as rounded, happy, human beings.
What we could do: Promote long term love of learning in-and-outside school and in communities. Reinvest in the arts, music and play, which support mental and physical health and create a culture in which people like to live as well as work. Library and community centres funding must be re-prioritised, ensuring inclusion and provision outside school for all abilities.
Austerity builds barriers of access to opportunity and skills. Children who cannot afford to, are excluded from extra curricular classes. We already divide our children through private and state education, into those who have better facilities and funding to enjoy and explore a fully rounded education, and those whose funding will not stretch much beyond the bare curriculum. For SEN children, that has already been stripped back further.
Existing barriers are likely to become entrenched in twenty years. What does it do to society, if we are divided in our communities by money, or gender, or race, and feel disempowered as individuals? Are we less responsible for our actions if there’s nothing we can do about it? If others have more money, more power than us, others have more control over our lives, and “no matter what we do, we won’t pass the 11 plus”?
Without joined-up scrutiny of these policy effects across the board, we risk embedding these barriers into future planning. Today’s data are used to train “how the system should work”. If current data are what applicants in 5 years will base future expectations on, will their decisions be objective and will in-built bias be transparent?
3. Sociological effects of legislation.
It’s not only institutions that will lose autonomy in the Higher Education and Research Act.
At present, the risk to the autonomy of science and research is theoretical — but the implications for academic freedom are troubling. [Nature 538, 5 (06 October 2016)]
The Secretary of State for Education now also has new powers of information about individual applicants and students. Combined with the Digital Economy Act, the law can ride roughshod over students’ autonomy and consent choices. Today they can opt out of UCAS automatically sharing their personal data with the Student Loans Company, for example. Thanks to these new powers, that’s gone.
The Act further includes the intention to make institutions release more data about course intake and results under the banner of ‘transparency’. Part of the aim is indisputably positive, to expose discrimination and inequality of all kinds. It also aims to make the £ cost-benefit return “clearer” to applicants — by showing what exams you need to get in, what you come out with, and then by joining all that personal data to the longitudinal school record, tax and welfare data, you see what the return is on your student loan. The government can also then see what your education ‘cost or benefit’ the Treasury. It is all of course much more nuanced than that, but that’s the very simplified gist.
This ‘destinations data’ is going to be a dataset we hear ever more about and has the potential to influence education policy from age 2.
Aside from the issue of personal data disclosiveness when published by institutions — we already know of individuals who could spot themselves in a current published dataset — I worry that this direction using data for ‘advice’ is unhelpful. What if we’re looking at the wrong data upon which to base future decisions? The past doesn’t take account of Brexit or enable applicants to do so.
Researchers [and applicants, the year before they apply or start a course] will be looking at what *was* — predicted and achieved qualifying grades, make-up of the class, course results, first-job earnings — and what was, for other people, is at least five years old by the time it’s looked at. Five years is a long time out of date.
4. Change
Teachers and schools have long since reached saturation point for handling change over the last five years. Reform has been drastic in structures and curriculum, and is ongoing in funding. There is no ongoing teacher training, and the lack of CPD take-up is exacerbated by underfunding.
Teachers are fed up with change. They want stability. But contrary to the current “strong and stable” message, reality is that ahead we will get anything but, and must instead manage change if we are to thrive. Politically, we will see backlash when ‘stable’ is undeliverable.
But Teaching has not seen ‘stable’ for some time. Teachers are asking for fewer children, and more cash in the classroom. Unions talk of a focus on learning, not testing, to drive school standards. If the planned restructuring of funding happens, how will it affect staff retention?
We know schools are already reducing staff. How will this affect employment, adult and children’s skill development, their ambition, and society and economy?
Where could legislation and policy look ahead?
What are the big Brexit targets and barriers and when do we expect them?
How is the fall out from underfunding and reduction of teaching staff expected to affect skills provision?
State education policy is increasingly hands-off. What is the incentive for local schools or MATs to look much beyond the short term?
How do local decisions ensure education is preparing their community, but also considering society, health and (elderly) social care, Post-Brexit readiness and women’s economic empowerment?
How does our ageing population shift in the same time frame?
How can the education system adapt?
We need to talk more about other changes in the system in parallel to Brexit; join the dots, plus the potential positive and harmful effects of technology.
Gender here too plays a role, as does mitigating discrimination of all kinds, confirmation bias, and even in the tech itself, whether AI for example, is going to be better than us at decision-making, if we teach AI to be biased.
Dr Lisa Maria Mueller talked about the effects and influence of age, setting and language factors on what skills we will need, and employment. While there are certain skills sets that computers are and will be better at than people, she argued society also needs to continue to cultivate human skills in cultural sensitivities, empathy, and understanding. We all nodded. But how?
To develop all these human skills is going to take investment. Investment in the humans that teach us. Bennie Kara, Assistant Headteacher in London, spoke about school cuts and how they will affect children’s futures.
The future of England’s education must be geared to a world in which knowledge and facts are ubiquitous, and more readily available online than at any other time. And access to learning must be inclusive. That means including SEN and low-income families, the unskilled, everyone. As we become more internationally remote, we must put safeguards in place if we are to support thriving communities.
Policy and legislation must also preserve and respect human dignity in a changing work environment, and review not only what work is on offer, but *how*; the kinds of contracts and jobs available.
Where might practice need to adapt now?
Re-consider curriculum content with its focus on facts. Will success risk being measured based on out of date knowledge, and a measure of recall? Are these skills in growing or dwindling need?
Knowledge focus must place value on analysis, discernment, and application of facts that computers will learn and recall better than us. Much of that learning happens outside school.
Opportunities have been cut, together with funding. We need communities brought back together, if they are not to collapse. Funding centres of local learning, restoring libraries and community centres will be essential to local skill development.
What is missing?
Although Sarah Waite spoke (in a suitably Purdah-appropriate tone) about the importance of basic skills in the future labour market, we didn’t get to talk about education preparing us for the lack of jobs in the future, and what that changed labour market will look like.
What skills will *not* be needed? Who decides? If left to companies’ sponsor led steer in academies, what effects will we see in society?
Discussions of a future education model and technology seem to share a common theme: people seem reduced in their capacity to make autonomous choices. But they share no positive vision.
Technology should empower us, but it seems to empower the State and diminish citizens’ autonomy in many of today’s policies, and in future scenarios especially around the use of personal data and Digital Economy.
Technology should enable greater collaboration, but current tech in education policy is focused too little on use on children’s own terms, and too heavily on top-down monitoring: of scoring, screen time, search terms. Further restrictions through Age Verification are coming, and may restrict access to, and reduce participation in, online services if not done well.
Infrastructure weakness is letting down skills training: University Technical Colleges (UTCs) are not popular and are failing to fill places. There is a lack of an overarching, area-wide strategic plan for pupils in which UTCs play a part. Local Authorities played an important part in regional planning, a role which needs to be restored to ensure joined-up local thinking.
How do we ensure women are not left behind?
The final question of the evening asked how women will be affected by Brexit and the changing job market. Part of the overall risk, the panel concluded, relates to [the lack of] equal pay. But where are the assessments of the gendered effects in the UK of:
community structural change and intra-family support and effect on demand for social care
tech solutions in response to lack of human interaction and staffing shortages including robots in the home and telecare
the disproportionate drop out of work, due to unpaid care roles, and difficulty getting back in after a break.
the roles and types of work likely to be most affected or replaced by machine learning and robots
and how will women be empowered or not socially by technology?
In education we quickly need to respond to the known data on where women are already being left behind now. The attrition rate, for example, in teaching in England after two to three years is poor, and getting worse. What will government do to keep teachers teaching? Their value as role models is not captured in pupils’ exam results based entirely on knowledge transfer.
Our GCSEs this year go back to purely exam-based testing and remove applied coursework marking; practitioners say this is likely to mean lower attainment for girls than boys, and likely to leave girls behind at an earlier age.
“There is compelling evidence to suggest that girls in particular may be affected by the changes — as research suggests that boys perform more confidently when assessed by exams alone.”
Jennifer Tuckett spoke about what fairness might look like for female education in the Creative Industries. From school-leaver to returning mother, and retraining older women, appreciating the effects of gender in education is intrinsic to the future jobs market.
We also need broader public understanding of the loop of the impacts of technology, on the process and delivery of teaching itself, and as school management becomes increasingly important and is male dominated, how will changes in teaching affect women disproportionately? Fact delivery and testing can be done by machine, and supports current policy direction, but can a computer create a love of learning and teach humans how to think?
“There is a opportunity for a holistic synthesis of research into gender, the effect of tech on the workplace, the effect of technology on care roles, risks and opportunities.”
Delivering education to ensure women are not left behind includes not letting those going into education as teenagers now be led down routes without thinking about what they want and need in future, regardless of work.
Education must adapt to changed employment markets, and the social and community effects of Brexit. If it does not, barriers will become embedded. Geographical, economic, language, familial, skills, and social exclusion.
In short
In summary, what is the government’s Brexit vision? We must know what they see five, 10, and 25 years ahead, set against an understanding of the landscape as-is, in order to peg other policy to it.
With this foundation, what we know and what we estimate we don’t know yet can be planned for.
Once we know where we are going in policy, we can do a fit-gap to map how to get people there.
Estimate which skills gaps need to be filled and which do not. Where will change be hardest?
Change is not new. But there is the potential now for massive, lasting long-term economic and social damage to our young people today. Government is hindered by short-term political thinking, but it has a long-term responsibility to ensure children are not mis-educated because policy and the future environment are not aligned.
We deserve public, transparent, informed debate to plan our lives.
We enter the unknown of the education triangle at our peril: Brexit, underfunding and divisive structural policy, for the next ten years and beyond, without appropriate adjustment of pre-Brexit legislation and policy plans to the new world order.
The combined negative effects on employment at scale and at pace must be assessed with urgency, not by big Tech who will profit, but with an eye on future fairness, and public economic and social good. Academy sponsors, decision makers in curriculum choices, schools with limited funding, have no incentives to look to the wider world.
If we’re going to go it alone, we’d better be robust as a society, and that can’t be just some of us, and can’t only be about skills seen as having a tangible output.
All this discussion is framed by the premise that education’s aim is to prepare a future workforce for work, and that it is sustainable.
Policy is increasingly based on work that is measured by economic output. We must not leave out or behind those who do not, or cannot, or whose work is unmeasured yet contributes to the world.
‘The only future worth building includes everyone,’ said the Pope in a recent TedTalk.
What kind of future do you want to see yourself living in? Will we all work or will there be universal basic income? What will happen on housing, an ageing population, air pollution, prisons, free movement, migration, and health? What will keep communities together as their known world in employment, and family life, and support collapse? How will education enable children to discover their talents and passions?
Human beings are more than what we do. The sense of a country of who we are and what we stand for is about more than our employment or what we earn. And we cannot live on slogans alone.
Who we in the UK think we will be after Brexit needs real and substantial answers. What are we going to *do* and *be* in the world?
Without this vision, any mandate as voted for on June 9th, will be made in the dark and open to future objection writ large. ‘We’ must be inclusive based on a consensus, not simply a ‘mandate’.
Only with clear vision for all these facets fitting together in a model of how we will grow in all senses, will we be able to answer the question, is education preparing us [all] for the jobs of the future?
More than this, we must ask if education is preparing people for the lack of jobs, for changing relationships in our communities, with each other, and with machines.
Change is coming, Brexit or not. But Brexit has exacerbated the potential to miss opportunities, embed barriers, and see negative side-effects from changes already underway in employment, in an accelerated timeframe.
If our education policy today is not gearing up to that change, we must.
“With the Family Link app from Google, you can stay in the loop as your kid explores on their Android* device. Family Link lets you create a Google Account for your kid that’s like your account, while also helping you set certain digital ground rules that work for your family — like managing the apps your kid can use, keeping an eye on screen time, and setting a bedtime on your kid’s device.”
John Carr shared his blog post about Google Family Link today, which was the first I had read about the new US account in beta. In his post, with an eye on the GDPR, he asks: what is the right thing to do?
What is the Family Link app?
Family Link requires a US-based Google account to sign up, so outside the US we can’t read the full details. However, from what is published online, it appears to offer the following three key features:
“Approve or block the apps your kid wants to download from the Google Play Store.
Keep an eye on screen time. See how much time your kid spends on their favorite apps with weekly or monthly activity reports, and set daily screen time limits for their device. “
and
“Set device bedtime: Remotely lock your kid’s device when it’s time to play, study, or sleep.”
From the privacy and disclosure information, it reads as though there is not a lot of difference between a regular (over-13s) Google account and this one for under-13s. To collect data from under-13s it must be compliant with COPPA legislation.
If you google “what is COPPA” the first result says, “The Children’s Online Privacy Protection Act (COPPA) is a law created to protect the privacy of children under 13.”
But does this Google Family Link do that? What safeguards and controls are in place for use of this app and children’s privacy?
What data does it capture?
“In order to create a Google Account for your child, you must review the Disclosure (including the Privacy Notice) and the Google Privacy Policy, and give consent by authorizing a $0.30 charge on your credit card.”
Google captures the parent’s verified real-life credit card data.
Google captures child’s name, date of birth and email.
Google captures voice.
Google captures location.
Google may associate your child’s phone number with their account.
And lots more:
Google automatically collects and stores certain information about the services a child uses and how a child uses them, including when they save a picture in Google Photos, enter a query in Google Search, create a document in Google Drive, talk to the Google Assistant, or watch a video in YouTube Kids.
What does it offer over regular “13+ Google”?
In terms of general safeguarding, it doesn’t appear that SafeSearch is on by default but must be set and enforced by a parent.
Parents should “review and adjust your child’s Google Play settings based on what you think is right for them.”
Google rightly points out however that, “filters like SafeSearch are not perfect, so explicit, graphic, or other content you may not want your child to see makes it through sometimes.”
Ron Amadeo at Arstechnica wrote a review of the Family Link app back in February, and came to similar conclusions about added safeguarding value:
“Other than not showing “personalized” ads to kids, data collection and storage seems to work just like in a regular Google account. On the “Disclosure for Parents” page, Google notes that “your child’s Google Account will be like your own” and “Most of these products and services have not been designed or tailored for children.” Google won’t do any special content blocking on a kid’s device, so they can still get into plenty of trouble even with a monitored Google account.”
Your child will be able to share information, including photos, videos, audio, and location, publicly and with others, when signed in with their Google Account. And Google wants to see those photos.
There are some things that parents cannot block at all.
Installs of app updates can’t be controlled, which leaves a questionable grey area. Many apps are built on the classic bait-and-switch: start with a free version, then the upgrade contains paid features. This is therefore something to watch for.
“Regardless of the approval settings you choose for your child’s purchases and downloads, you won’t be asked to provide approval in some instances, such as if your child: re-downloads an app or other content; installs an update to an app (even an update that adds content or asks for additional data or permissions); or downloads shared content from your Google Play Family Library. “
The child “will have the ability to change their activity controls, delete their past activity in “My Activity,” and grant app permissions (including things like device location, microphone, or contacts) to third parties”.
What’s in it for children?
You could argue that this gives children “their own accounts” and autonomy. But why do they need one at all? If I give my child a device on which they can download an app, then I approve it first.
If I am not aware of my under 13 year old child’s Internet time physically, then I’m probably not a parent who’s going to care to monitor it much by remote app either. Is there enough insecurity around ‘what children under 13 really do online’, versus what I see or they tell me as a parent, that warrants 24/7 built-in surveillance software?
I can use safe settings without this app. I can use a device time limiting app without creating a Google account for my child.
If parents want to give children an email address, yes, this allows them to have a device-linked Gmail account whose content you, as a parent, cannot access. But wait a minute, what’s this? Google can?
Google can read their mails and provide them “personalised product features”. More detail is probably needed but this seems clear:
“Our automated systems analyze your child’s content (including emails) to provide your child personally relevant product features, such as customized search results and spam and malware detection.”
And what happens when the under-13s turn 13? It’s questionable whether it is right for Google et al. to then be able to draw on a pool of ready-made customers’ data-in-waiting, free from COPPA ad regulation, and free from COPPA privacy regulation.
Google knows when the child reaches 13 (the set-up requires a child’s date of birth, their first and last name, and email address, to set up the account). And they will inform the child directly when they become eligible to sign up to a regular account free of parental oversight.
What a birthday gift. But is it packaged for the child or Google?
What’s in it for Google?
The parental disclosure begins,
“At Google, your trust is a priority for us.”
If it truly is, I’d suggest they revise their privacy policy entirely.
Google’s disclosure policy also makes parents read a lot before they fully understand the permissions this app gives to Google.
I do not believe Family Link gives parents adequate control of their children’s privacy at all nor does it protect children from predatory practices.
While “Google will not serve personalized ads to your child“, your child “will still see ads while using Google’s services.”
Google also tailors the apps that the child sees in Family Link (and begs you to buy), based on their data:
“(including combining personal information from one service with information, including personal information, from other Google services) to offer them tailored content, such as more relevant app recommendations or search results.”
Contextual advertising using “persistent identifiers” is permitted under COPPA, and is surely a fundamental flaw. It’s certainly one I wouldn’t want to see duplicated under GDPR. Serving up ads that are relevant to the content the child is using doesn’t protect them from predatory ads at all.
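As a rough illustration of that flaw, here is a hypothetical sketch (not Google’s or any ad network’s actual code) of the difference between contextual and behavioural ad selection. Even the “contextual” path can log each exposure against a persistent identifier, so a record of what a child was shown, and when, still accumulates.

```python
# A simplified, hypothetical sketch of contextual vs behavioural ad selection.
# Contextual selection keys only off what is on screen; behavioural selection
# keys off a stored interest profile. Note that even the contextual path can
# record exposure against a persistent identifier.
from collections import defaultdict

AD_INVENTORY = {
    "games": "Ad for an in-app purchase bundle",
    "maths": "Ad for a tutoring subscription",
}

exposure_log = defaultdict(list)   # persistent_id -> ads shown over time

def contextual_ad(persistent_id: str, content_topic: str) -> str:
    """Pick an ad from the page topic alone, but still record who saw it."""
    ad = AD_INVENTORY.get(content_topic, "House ad")
    exposure_log[persistent_id].append(ad)   # tracking survives the 'contextual' label
    return ad

def behavioural_ad(interest_profile: dict) -> str:
    """Pick an ad from an interest profile built up about the user."""
    top_interest = max(interest_profile, key=interest_profile.get)
    return AD_INVENTORY.get(top_interest, "House ad")

# The child sees an ad matched to the game they are in, while their persistent
# identifier quietly accumulates a history of everything they were shown.
print(contextual_ad("device-1234", "games"))
print(exposure_log["device-1234"])
print(behavioural_ad({"games": 12, "maths": 3}))
```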
Google captures geolocators and knows where a child is and builds up their behavioural and location patterns. Google, like other online companies, captures and uses what I’ve labelled ‘your synthesised self’; the mix of online and offline identity and behavioural data about a user. In this case, the who and where and what they are doing, are the synthesised selves of under 13 year old children.
These data are made more valuable by the connection to an adult with spending power.
Google gains permission via the parent’s acceptance of the privacy policy, to pass personal data around to third parties and affiliates. An affiliate is an entity that belongs to the Google group of companies. Today, that’s a lot of companies.
Google’s ad network consists of Google services, like Search, YouTube and Gmail, as well as 2+ million non-Google websites and apps that partner with Google to show ads.
I also wonder whether it will undo some of the previous pro-privacy features on a linked child’s YouTube account, if Google links logged-in accounts across the Family Link and YouTube platforms.
Is this pseudo-safe use a good thing?
In practical terms, I’d suggest this app is likely to lull parents into a false sense of security. Privacy safeguarding is not the default set up.
It’s questionable whether Google should adopt some sort of parenting role through an app. Parental remote control via an app isn’t an appropriate way to regulate whether my under-13-year-old is using their device rather than sleeping.
It’s also got to raise questions about children’s autonomy at, say, 12. Should I as a parent know exactly every website and app that my child visits? What does that do for parent-child trust and relations?
As for my own children, I see no benefit compared with letting them have supervised access as I do already, without compromising my debit card details or relying on a false sense of safeguarding. Their online time is based on age-appropriate education and trust, and yes, I have to manage their viewing time.
That said, if there are people who think parents cannot do that, is the app a step forward? I’m not convinced. It’s definitely of benefit to Google. But for families it feels more like a sop to adults who feel a duty towards safeguarding children, but aren’t sure how to do it.
Is this the best that Google can do by children?
In summary it seems to me that the Family Link app is a free gift from Google. (Well, free after the thirty cents to prove you’re a card-carrying adult.)
It gives parents three key tools: app approval (accept, pay, or block), screen-time surveillance, and a remote switch-off of the child’s access.
In return, Google gets access to a valuable data set – a parent-child relationship with credit data attached – and can increase its potential targeted app sales. Yet Google can’t guarantee additional safeguarding, privacy, or benefits for the child while using it.
I think for families and child rights, it’s a false friend. None of these tools per se require a Google account. There are alternatives.
Children’s use of the Internet should not mean they are used and their personal data passed around or traded in hidden back room bidding by the Internet companies, with no hope of control.
There are other technical solutions to age verification and privacy too.
I’d ask, what else has Google considered and discarded?
Is this the best that a cutting edge technology giant can muster?
This isn’t designed to respect children’s rights as intended under COPPA, or to be ready for GDPR, and it’s a shame they’re not trying.
If I were designing Family Link for children, it would collect no real identifiers. No voice. No locators. It would not permit others access to voice or images, or require them to be linked. It would keep children’s privacy intact, and enable them, when older, to decide what they disclose. It would not target personalised apps/products at children at all.
GDPR requires active, informed parental consent for children’s online services. Consent must be revocable, the personal data collected must be the minimum necessary, and it must be portable. Privacy policies must be clear to children. This, in terms of GDPR readiness, is nowhere near ‘it’.
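As a rough sketch of what those properties could look like in practice, here is a minimal, hypothetical consent record, written to match the two paragraphs above rather than any real Google or regulatory schema: it is purpose-bound, revocable at any time, holds no more than a pseudonym, and can be exported in a portable form.

```python
# A minimal, hypothetical sketch of a parental consent record with the
# properties listed above: purpose-bound, revocable, data-minimal, portable.
# Field names are illustrative only, not any real schema.
from dataclasses import dataclass, field
from datetime import datetime
from typing import Optional, Set
import json

@dataclass
class ParentalConsent:
    child_alias: str                                  # a pseudonym, not name or DOB
    purposes: Set[str] = field(default_factory=set)   # only what was explicitly agreed
    granted_at: Optional[datetime] = None
    revoked_at: Optional[datetime] = None

    def grant(self, purposes: Set[str]) -> None:
        """Record active, informed consent for a specific set of purposes."""
        self.purposes = set(purposes)
        self.granted_at = datetime.utcnow()
        self.revoked_at = None

    def revoke(self) -> None:
        """Withdraw consent; processing must stop for every purpose."""
        self.revoked_at = datetime.utcnow()
        self.purposes.clear()

    def allows(self, purpose: str) -> bool:
        return self.revoked_at is None and purpose in self.purposes

    def export(self) -> str:
        """A portable, human-readable copy of exactly what was agreed."""
        return json.dumps({
            "child_alias": self.child_alias,
            "purposes": sorted(self.purposes),
            "granted_at": self.granted_at.isoformat() if self.granted_at else None,
            "revoked_at": self.revoked_at.isoformat() if self.revoked_at else None,
        })


consent = ParentalConsent("child-01")
consent.grant({"email", "screen_time_limits"})
print(consent.allows("personalised_ads"))   # False: never consented to this purpose
consent.revoke()
print(consent.allows("email"))              # False: consent has been withdrawn
```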
Family Link needs to redo its homework. And this isn’t a case of ‘please revise’.
Google is a multi-billion dollar company. If they want parental trust, and want to be GDPR and COPPA compliant, they should do the right thing.
When it comes to child rights, companies must do or do not. There is no try.
The care.data programme 2014-15 listening exercise and action plan has become impossible to find online. That’s OK, you might think, the programme has been scrapped. Not quite.
But the same questions are being asked again around consent and the use of your medical data, from primary and secondary care. What a very long questionnaire asks, in effect, is: do you want to keep your medical history private? You can answer only Q15 if you want.
Ambiguity again surrounds what constitutes “de-identified” patient information.
What is clear is that public voice seems to have been deleted or lost from the care.data programme along with the feedback and brand.
People spoke up in 2014, and acted. The opt out that 1 in 45 people chose between January and March 2014 was put into effect by the HSCIC in April 2016. Now it seems that might be revoked.
Upcoming events cost time and money and will almost certainly go over the same ground that hours and hours were spent on in 2014. However, if they do achieve a meaningful response rate, then I hope the results will not be lost, and will be combined with those already captured under the ‘care.data listening events’ responses. Will they have any impact on what consent model there may be in future?
So what we gonna do? I don’t know, whatcha wanna do? Let’s do something.
Let’s have clear future scope and control. There is still no plan to give the public rights to control or delete data if we change our minds about who can have it or for what purposes. And that is very uncertain. After all, they might decide to privatise or outsource the whole thing, as was planned for the CSUs.
We have the possibility to see health data used wisely, safely, and with public trust. But we seem stuck with the same notes again. And the public seem to be the last to be invited to participate, and their views, once gathered, seem to be disregarded. I hope to be proved wrong.
Might, perhaps, the consultation deliver the nuanced consent model discussed at public listening exercises that many asked for?
Will the care.data listening events feedback summary be found, and will its 2014 conclusions and the enacted opt out be ignored? Will the new listening event view make more difference than in 2014?
Is public engagement, engagement, if nobody hears what was said?
This blog post is also available as an audio file on SoundCloud.
What constitutes the public interest must be set in a universally fair and transparent ethics framework if the benefits of research are to be realised, whether in social science, health, education or beyond. That framework will provide a strategy for getting the prerequisite success factors right, ensuring research in the public interest is not only fit for the future, but thrives. There has been a climate change in consent. We need to stop talking about the barriers that prevent datasharing and start talking about the boundaries within which we can share.
What is the purpose for which I provide my personal data?
‘We use math to get you dates’, says OkCupid’s tagline.
That’s the purpose of the site. It’s the reason people log in, create a profile, enter their personal data and post it online for others who are looking for dates to see. The purpose is to get a date.
When over 68K OkCupid users registered for the site to find dates, they didn’t sign up to have their identifiable data used and published in ‘a very large dataset’ and onwardly re-used by anyone with unregistered access. The users’ data were extracted “without the express prior consent of the user […].”
Whether the registration consent purposes are compatible with the purposes to which the researcher put the data should be a simple enough question. Are the research purposes what the person signed up to, or would they be surprised to find out their data were used like this?
These are questions the “OkCupid data snatcher”, now a self-confessed ‘non-academic’ researcher, thought unimportant to consider.
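In pseudocode terms, that ‘simple enough question’ amounts to little more than a subset check. This is a sketch under the assumption that registration consent can be expressed as a set of purposes; the purpose names here are invented for illustration.

```python
# A sketch of the "simple enough question": are the research purposes covered
# by what users actually signed up to? The purpose names are invented.

REGISTRATION_PURPOSES = {
    "matching_users_for_dates",
    "showing_profiles_to_logged_in_members",
}

def purposes_compatible(research_purposes: set, consented_purposes: set) -> bool:
    """True only if every research purpose falls within the consented purposes."""
    return research_purposes <= consented_purposes

requested = {"publishing_identifiable_profiles_as_open_dataset"}
print(purposes_compatible(requested, REGISTRATION_PURPOSES))   # False: go back for consent
```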
But it appears in the last month, he has been in good company.
Google DeepMind and the Royal Free, big players who do know how to handle data and consent well, paid too little attention to the very same question of purposes.
The boundaries of how the users of OkCupid had chosen to reveal information and to whom, have not been respected in this project.
Nor were these boundaries respected by the Royal Free London trust that gave out patient data for use by Google DeepMind with changing explanations, without clear purposes or permission.
The respectful ethical boundaries of consent to purposes, and of personal autonomy, have indisputably broken down, whether at the hands of a commercial organisation, a public body, or a lone ‘researcher’.
Research purposes
The crux of data access decisions is purposes. What question is the research to address – what is the purpose for which the data will be used? The intent by Kirkegaard was to test:
“the relationship of cognitive ability to religious beliefs and political interest/participation…”
In this case the question appears intended rather as a test of the data, not the data opened up to answer the question. While methodological studies matter, given the care and attention [or self-stated lack thereof] given to its extraction, and any attempt to be representative and fair, it would appear this is not the point of this study either.
The data don’t include profiles identified as heterosexual male because, as the paper puts it, ‘the scraper was’. It is also unknown how many users hide their profiles, “so the 99.7% figure [identifying as binary male or female] should be cautiously interpreted.”
“Furthermore, due to the way we sampled the data from the site, it is not even representative of the users on the site, because users who answered more questions are overrepresented.” [sic]
The paper goes on to say photos were not gathered because they would have taken up a lot of storage space and could be done in a future scraping, and
“other data were not collected because we forgot to include them in the scraper.”
The data are knowingly of poor quality, inaccurate and incomplete. The project cannot be repeated, as ‘the scraping tool no longer works’. The ethical or peer review process is unclear, and the research purpose is at best unclear. We can certainly give the researcher the benefit of the doubt and say the intent appears to have been entirely benevolent; it’s just not clear what that intent was. I think it is clearly misplaced and foolish, but not malevolent.
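To make the paper’s own admission about overrepresentation concrete, here is a small, entirely hypothetical simulation, with invented numbers: if the chance of being scraped rises with how many questions a user answered, and that activity correlates with the trait being measured, the scraped average drifts away from the site-wide average.

```python
# A small, hypothetical simulation of the overrepresentation problem the paper
# admits to: if inclusion in the scrape grows with the number of questions a
# user answered, and that activity correlates with the trait being measured,
# the scraped average no longer reflects the site's users. All numbers invented.
import random

random.seed(0)

# Invented population: each user has a count of answered questions and a trait
# score that, for the sake of the example, rises slightly with that activity.
population = []
for _ in range(100_000):
    answered = random.randint(1, 300)
    trait = 50 + 0.05 * answered + random.gauss(0, 5)
    population.append((answered, trait))

true_mean = sum(t for _, t in population) / len(population)

# Scrape-style sample: inclusion probability proportional to questions answered.
weights = [answered for answered, _ in population]
sample = random.choices(population, weights=weights, k=10_000)
sampled_mean = sum(t for _, t in sample) / len(sample)

print(f"Site-wide mean trait:  {true_mean:.2f}")
print(f"Scraped-sample mean:   {sampled_mean:.2f}")   # noticeably higher
```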
The trouble is, it’s not enough to say, “don’t be evil.” These actions have consequences.
When the researcher asserts in his paper that, “the lack of data sharing probably slows down the progress of science immensely because other researchers would use the data if they could,” in part he is right.
Google and the Royal Free have tried more eloquently to say the same thing. It’s not research, it’s direct care; in effect: ignore that people are no longer our patients and that we’re using historical data without re-consent. We know what we’re doing, we’re the good guys.
However the principles are the same, whether it’s a lone project or global giant. And they’re both wildly wrong as well. More people must take this on board. It’s the reason the public interest needs the Dame Fiona Caldicott review published sooner rather than later.
Just because there is a boundary to data sharing in place, does not mean it is a barrier to be ignored or overcome. Like the registration step to the OkCupid site, consent and the right to opt out of medical research in England and Wales is there for a reason.
We’re desperate to build public trust in UK research right now. So to assert that the lack of data sharing probably slows down the progress of science is misplaced, when it is getting ‘sharing’ wrong that caused the lack of trust in the first place, and that harms research.
A climate change in consent
There has been a climate change in public attitude to consent since care.data, clouded by the smoke and mirrors of state surveillance. It cannot be ignored. The EU GDPR supports it. Researchers may not like change, but there needs to be a corresponding adjustment in expectations and practice.
Without change, there will be no change. Public trust is low. As technology advances and if we continue to see commercial companies get this wrong, we will continue to see public trust falter unless broken things get fixed. Change is possible for the better. But it has to come from companies, institutions, and people within them.
Like climate change, you may deny it if you choose to. But some things are inevitable and unavoidably true.
There is strong support for public interest research but that is not to be taken for granted. Public bodies should defend research from being sunk by commercial misappropriation if they want to future-proof public interest research.
The purposes for which people gave consent are the boundaries within which you have permission, and the freedom, to use the data. Purposes and consent are not barriers to be overcome.
If research is to win back public trust, developing a future-proofed, robust ethical framework for data science must be a priority today.
This case study, and by contrast the recent Google DeepMind episode, demonstrate the urgency with which common expectations and oversight of applied ethics in research, the question of who gets to decide what is ‘in the public interest’, and public engagement with data science must be worked out and made a priority, in the UK and beyond.
Boundaries in the best interest of the subject and the user
Society needs research in the public interest. We need good decisions made on what will be funded and what will not be. What will influence public policy and where needs attention for change.
To do this ethically, we all need to agree what is fair use of personal data, when data are closed and when they are open, what are direct and what are secondary uses, and how advances in technology are used when they present both opportunities for benefit and risks of harm to individuals, to society and to research as a whole.
The benefits of research are potentially being compromised for the sake of arrogance, greed, or misjudgement, no matter the intent. Those benefits cannot come at any cost, or disregard public concern, or the price will be trust in all research itself.
In discussing this with social science and medical researchers, I realise not everyone agrees. For some, using de-identified data in trusted third-party settings poses such a low privacy risk that they feel the public should have no say in whether their data are used in research, as long as it’s ‘in the public interest’.
The DeepMind researchers and the Royal Free were confident that, even using identifiable data, this was the “right” thing to do, without consent.
Behind the Cabinet Office datasharing consultation, in the parts that will open up national registries and share identifiable data more widely and with commercial companies, they are convinced it is all the “right” thing to do, without consent.
How can researchers, society and government understand what good data science ethics looks like, when technology permits ever more invasive or covert data mining and the current approach is desperately outdated?
Who decides where those boundaries lie?
“It’s research, Jim, but not as we know it.” This is one aspect of data use that ethical reviewers will need to deal with as we advance the debate on data science in the UK, whether for independents or commercial organisations. Google said their work was not research. Is ‘OkCupid’ research?
If this research and data publication prove anything at all, and can offer lessons to learn from, it is perhaps these three things:
1. Researchers and ethics committees need to adjust to the climate change of public consent.
2. Purposes must be respected in research, particularly when sharing sensitive, identifiable data, and no assumptions should be made that differ from the original purposes for which users gave consent.
3. Data ethics and laws are desperately behind data science technology. Governments, institutions, civil society and wider society need to reach a common vision and leadership on how to manage these challenges. Who defines the boundaries that matter?
How do we move forward towards better use of data?
Our data and technology are taking on a life of their own: in space, which is another frontier, and in time, as data gathered in the past might be used for quite different purposes today.
The public are being left behind in the game-changing decisions made by those who deem they know best about the world we want to live in. We need a say in what shape society wants that to take, particularly for our children as it is their future we are deciding now.
How about an ethical framework for datasharing that supports a transparent public interest, which tries to build a little kinder, less discriminating, more just world, where hope is stronger than fear?
Working with people, with consent, with public support and transparent oversight shouldn’t be too much to ask. Perhaps it is naive, but I believe that with an independent ethical driver behind good decision-making, we could get closer to datasharing like that.
Purposes and consent are not barriers to be overcome. Within them, shaped by a strong ethical framework, good data sharing practices can tackle some of the real challenges that hinder ‘good use of data’: training, understanding data protection law, communications, accountability and intra-organisational trust. More data sharing alone won’t fix these structural weaknesses in current UK datasharing, which are our really tough barriers to good practice.
How our public data will be used in the public interest will not be a destination or have a well-defined happy ending; it is a long-term process which needs to be consensual, with a clear path to setting out together and achieving collaborative solutions.
While we are all different, I believe that society shares for the most part, commonalities in what we accept as good, and fair, and what we believe is important. The family sitting next to me have just counted out their money and bought an ice cream to share, and the staff gave them two. The little girl is beaming. It seems that even when things are difficult, there is always hope things can be better. And there is always love.