
The power behind today’s AI in public services


Thinking about whether education in England is preparing us for the jobs of the future also means thinking about how technology will influence those jobs.

Time and again, thinking and discussion about these topics are siloed: at the Turing Institute, the Royal Society, the ADRN and EPSRC, in government departments, and within education practitioner and public circles. We are all having similar discussions about data and ethics, but with little ownership and no goals for future outcomes. If government doesn’t get it, or has no time for it, or policy lacks ethics by design, is it in the public interest for private companies, Google et al., to offer a fait accompli?

There is lots of talk about Machine Learning (ML), Artificial Intelligence (AI) and ethics. But what is being done to ensure that real values (respect for rights, human dignity, and autonomy) are built into the practice of public service delivery?

In the most recent data policy it is entirely absent. Section 33 of the Digital Economy Act risks enabling, through the removal of inter- and intra-departmental data protections, an unprecedented expansion of public data transfers, with “untrammelled powers”; powers without the codes of practice promised over a year ago. That has fallout for the trustworthiness of the legislative process, and for data practices across public services.

Predictive analytics is growing, but it is poorly understood both by the public and within the public sector.

There is already dependence on computers in aspects of public sector work, and their use in sensitive interactions with the public demands better knowledge of how these systems operate and how they can be wrong; debt recovery and social care, to take two known examples.

Risk-averse staff appear either to choose not to question the outcome of ‘algorithmic decision-making’, or to lack the ability to do so. There is reportedly no analytics training for practitioners to understand the basis or bias of conclusions. Instead of making us more informed, decision-making by machine has the potential to make us humans less clever.

What does it do to professionals if they therefore feel less empowered? When is that a good thing, if it overrides discriminatory human decisions? How can we tell the difference and balance these risks if we don’t understand, or don’t feel able to challenge, them?

In education, what is it doing to children whose attainment is profiled, predicted, and acted on to target more or less attention from school staff who have no ML training, all without the informed consent of pupils or parents?

If authorities use data in ways the public does not expect, such as identifying homes of multiple occupancy without informed consent, they will fail to deliver future uses for good. The ‘public interest’, ‘user need’ and ethics can come into conflict, depending on your point of view. The public, data protection law, and ethics all object to harms from the use of data. This type of application has the potential to be mind-blowingly invasive and to reveal all sorts of other findings.

Widely informed thinking must be made into meaningful public policy for the greatest public good

Our politicians are caught up in the General Election and buried in Brexit.

Meanwhile, the commercial companies taking first rights to AI, to capitalise on existing commercial advantage, could strip public assets, use up our personal data and public trust, and leave the public with little public good. We are already used by global data players and by machine-learning companies, without our knowledge or consent. That knowledge can be used to profit business models that pay little tax into the public purse.

There are valid macroeconomic arguments about whether private spending and investment are preferable to a state’s ability to do the same. But these companies make more than enough to do both. Does not paying a just amount of tax signal a failure of commitment to the wider community? Is it a red flag for a company’s commitment to the public good?

What that public good should look like depends on who is invited into the room: not to tick boxes, but to think and to build.

The Royal Society’s Report on AI and Machine Learning published on April 25, showed a working group of 14 participants, including two Google DeepMind representatives, one from Amazon, private equity investors, and academics from cognitive science and genetics backgrounds.

Our #machinelearning working group chair, professor Peter Donnelly FRS, on today’s major #RSMachinelearning report https://t.co/PBYjzlESmB

— The Royal Society (@royalsociety) April 25, 2017

If we are going to form objective policies, the inputs that form their basis must be informed, but must also be well balanced, and be seen to be balanced; not as an add-on, but in the same room.

As Natasha Lomas in TechCrunch noted, “Public opinion is understandably a big preoccupation for the report authors — unsurprisingly so, given that a technology that potentially erodes people’s privacy and impacts their jobs risks being drastically unpopular.”

“The report also calls on researchers to consider the wider impact of their work and to receive training in recognising the ethical implications.”

What are those ethical implications? Who decides which matter most? How do we eliminate recognised discriminatory bias? What should data be used for and AI be working on at all? Who is it going to benefit? What questions are we not asking? Why are young people left out of this debate?

Who decides what the public should or should not know?

AI and ML depend on data. Data is often talked about as a panacea for the problems of working better together. But data alone does not make people better informed, just as systems fail when no one feels it is their job to pick up the fax. A fundamental building block of our future public and private prosperity is understanding data, and how we and the AI interact. What is the data telling us, how do we interpret it, and how do we know it is accurate?

How and where will we start to educate young people about data and ML, if not about their own data and its use by government and commercial companies?

The whole of Chapter 5 of the report is a very good starting point for policy makers who have not yet engaged with the area. Privacy, while summed up too briefly in the conclusions, is addressed throughout.

Blind spots remain, however.

  • Over-willingness to accommodate existing big private players, as their expertise leads design and development, and a desire to ‘re-write regulation’.
  • Slowness to react to needed regulation in the public sector (caught up in Brexit), while commercial drivers and technology change forge ahead.
  • ‘How do we develop technology that benefits everyone’ must think not only of the UK but of the global South, especially given the bias in how AI is being taught, and the broad socio-economic barriers to its application.
  • Predictive analytics in professional application leads to an unwillingness to question the computer’s result. In children’s social care this is already having a damaging effect in the family courts (s31).
  • Data and technology knowledge and ethics training must be embedded across the public sector, not only for postgraduate students in machine learning.
  • Harms being done to young people today, and the potential for intense future exploitation, are being ignored by policy makers and some academics. Safeguarding is often only about blocking content in case of liability to the provider, stopping children seeing content, or preventing physical exploitation. It ignores the exploitation, by online platforms, app providers and games creators, of a child’s synthesised online life and use. Laws and government departments’ own practices can be deeply flawed.
  • Young people are left out of discussions which, after all, are about their future. [They might have some of the best ideas; we miss them at our peril.]

There is no time to waste

Children and young people have the most to lose while their education, skills, jobs market, economy, culture, care, and society go through a series of gradual but seismic shifts in purpose, culture, and acceptance before finding new norms post-Brexit. They also have the most to gain if the foundations are right. One of those foundations must be getting age verification right in the GDPR, not allowing it to enable a massive data grab of child-parent privacy.

Although the RS Report considers young people in the context of a future workforce in need of skills training, they are otherwise left out.

“The next curriculum reform needs to consider the educational needs of young people through the lens of the implications of machine learning and associated technologies for the future of work.”

Yes, it does. But it must give young people, and the implications of ML for their future, broader consideration than the classroom or workplace alone.

Facebook has targeted vulnerable young people, it is alleged, to facilitate predatory advertising practices. Some argue that emotive computing or MOOCs belong in the classroom. Who decides?

We are not yet talking about the effects of teaching technology to learn, and its effect on public services and interactions with the public. Questions that Sam Smith asked in Shadow of the smart machine: Will machine learning end?

At the end of this Information Age we are at a point where machine learning, AI and biotechnology are potentially life-enhancing, or could have catastrophic effects; if indeed “AI will cause people more pain than happiness”, as described by Alibaba’s founder Jack Ma.

The conflict between commercial profit and public good, between what commercial companies say they will do and what they actually do, and between fears and assurances over predicted outcomes, is personified in the debate between Demis Hassabis, co-founder of DeepMind Technologies (a London-based machine learning AI startup), and Elon Musk, discussing the perils of artificial intelligence.

Vanity Fair reported that Elon Musk began warning about the possibility of A.I. running amok three years ago: “It probably hadn’t eased his mind when one of Hassabis’s partners in DeepMind, Shane Legg, stated flatly, ‘I think human extinction will probably occur, and technology will likely play a part in this.’”

Musk was of the opinion that A.I. was probably humanity’s “biggest existential threat.”

We are not yet joining up multi disciplinary and cross sector discussions of threats and opportunities

Jobs; the shift in skill sets education must provide; how we think, interact, value each other, and accept or reject ownership and power models; and later, threats from the technology itself. Nor are we yet talking, conversely, about the opportunities these seismic shifts offer in real terms, or about how and why to accept, reject, or regulate them.

Where private companies are taking over personal data given in trust to public services, it is reckless for the future of public interest research to assume there is no public objection. How can we object, if we are not asked? How can children make an informed choice? How will the public interest be assured of being put ahead of private profit? If, on balance, this is intended to be all about altruism from these global giants, then they must be open and accountable.

Private companies are shaping how and where we find machine learning and AI gathering data about our behaviours in our homes and public spaces.

SPACE10, an innovation hub for IKEA, is currently running a survey on how the public perceives AI and “wants their AI to look, be, and act”, with an eye on building AI into its products: flat-pack AI for us to bring into our houses.

As the surveillance technology built into the Internet-connected Things in our homes becomes more integral to daily life, authorities are now using it to gather evidence in investigations: from mobile phones, laptops, social media, smart speakers, and games. The IoT so far seems less about the benefits of collaboration, and all about the behavioural data it collects and uses to target us to sell us more things. Our behaviours tell much more than how we act. They show how we think inside the private space of our minds.

Do you want Google to know how you think, and to have control over that? The companies of the world that have access to massive amounts of data are now using it to teach AI how to ‘think’. What is AI learning? And how much should the State see or know about how you think, or try to predict it?

Who cares, wins?

It is not overstated to say that society, and the future public good of public services, depend on getting these co-dependencies right. As I wrote at the time of care.data, the economic value of data, personal rights and the public interest are not opposed to one another, but have synergies and co-dependency. One player getting it wrong can create harm for all. Government must start to care about this, beyond the side effect of saving political embarrassment.

Without joining up all these aspects, we cannot limit the harms and make the most of the benefits. There is nuance, and there are unknowns. There is opaque decision-making and secrecy, packaged in the wording of commercial sensitivity, and behind it, people who can be brilliant but who are, at the end of the day, also human, with all our strengths and weaknesses.

And we can get this right, if data practices get better, with joined up efforts.

Our future society, like our present one, is based on webs of trust: on our social networks, on- and offline, that enable business, education, culture, and our interactions. Children must be able to trust that they will not be used by systems. We must build trustworthy systems that enable future digital integrity.

The immediate harm from blind trust in AI companies comes not from their AI, but from the hidden power that commercial companies have to nudge public and policy-maker behaviours and acceptance towards private gain. Their ability and opportunity to influence regulation and future direction outweighs most others’. The lack of transparency about their profit motives is concerning. Carefully staged public engagement is not real engagement, but a fig leaf to show ‘the public say yes’.

The unwillingness of Google DeepMind, when asked at their public engagement event, to discuss their past use of NHS patient data, their profit model, or the terms of their NHS deals with London hospitals, should be a warning that these questions need answers and accountability urgently.

As TechCrunch suggested after the event, this is all “pretty standard playbook for tech firms seeking to workaround business barriers created by regulation.” Calls for more data might mean an ever greater power shift.

Companies that have extracted and benefited from personal data in the public sector have already made private profit. They and their machines have learned for their future business product development.

A transparent accountable future for all players, private and public, using public data is a necessary requirement for both the public good and private profit. It is not acceptable for departments to hide their practices, just as it is unacceptable if firms refuse algorithmic transparency.

“Rebooting antitrust for the information age will not be easy. It will entail new risks: more data sharing, for instance, could threaten privacy. But if governments don’t want a data economy dominated by a few giants, they will need to act soon.” [The Economist, May 6]

If the State creates a single data source of truth, or giant private tech firms think they can side-step regulation, and they get it wrong, their practices screw up public trust. That harms public interest research, and with it our future public good.

But will they care?

If we care, then across the public and private sectors we must cherish shared values and better collaboration. Embed ethical human values into development, design and policy. Ensure transparency of where, how, to whom and why my personal data has gone.

We must ensure that as the future becomes “smarter”, we educate ourselves and our children to stay intelligent about how we use data and AI.

We must start today, knowing how we are used by both machines, and man.


First published on Medium for a change.

Is education preparing us for the jobs of the future?

The Fabian Women, Glass Ceiling not Glass Slipper event, asked last week:

Is Education preparing us for the jobs of the future?

The panel talked about changing social and political realities. We considered the effects on employment. We began discussing how those changes should feed into education policy and practice today. It is a discussion the public should be having. So far, almost a year after the Referendum, the UK government has yet to say what post-Brexit Britain might look like. Without a vision, any mandate for the unknown, if voted for on June 8th, will be meaningless.

What was talked about and what should be a public debate:

  • What jobs will be needed in the future?
  • Post Brexit, what skills will we need in the UK?
  • How can the education system adapt and improve to help future generations develop skills in this ever changing landscape?
  • How do we ensure women [and anyone else] are not left behind?

Brexit is the biggest change management project I may never see.

As the State continues making and remaking laws, reforming education, and begins exiting the EU, all in parallel, technology and commercial companies won’t wait to see what post-Brexit Britain will look like. In the absence of a state vision, companies are shaping policy and ‘re-writing’ their own version of regulations. What implications could this have for the long-term public good?

What will be needed in the UK future?

A couple of sentences from Alan Penn have stuck with me all week. Loosely quoted: we are seeing cultural identity shift across the country, due to changes in the types of employment available. Traditional industries once ran in a family, with a strong sense of heritage. New jobs don’t offer that. It leaves a gap we cannot fill with “I’m a call centre worker”. And this change is unevenly felt.

There is no tangible public plan in the Digital Strategy for dealing with that change in the employment market over the coming 10 to 20 years, and what it means for education. It matters when many believe, as do these authors in Scientific American: “around half of today’s jobs will be threatened by algorithms. 40% of today’s top 500 companies will have vanished in a decade.”

So what needs thought?

  • Analysis of what the regional jobs market might look like should be a public part of the Brexit debate and these elections →
    We need to see those goals, to ensure education policy can be planned and to benchmark its progress towards achieving its aims
  • Brexit and technology will disproportionately affect different segments of the jobs market, and therefore the population, by age, region, and socio-economic factors →
    Education policy must therefore address the skills needed for employment in that new environment, so that we make the most of opportunities and mitigate the harms
  • Brexit and technology will disproportionately affect communities → What will be done to prevent social collapse in the regions hardest hit by change?

Where are we starting from today?

Before we can understand the impact of change, we need to understand what the present looks like. I cannot find a map of the English education system. No one I ask seems to have one, or to have a firm grasp, across the sector, of how and where all its parts fit together, or of their oversight and accountability. Everyone has an idea, but no one can join the dots. If you have one, please let me know.

Nothing is constant in education like change; in laws, policy and its effects in practice, so I shall start there.

1. Legislation

In retrospect it was a fatal flaw, missed in the post-Referendum battles over who wrote what on the side of a bus, that no one assessed education [and indeed other] ‘legislation in progress’. Recommendations should have been made on scrapping inappropriate government bills, in their entirety or in part. New laws are now being enacted, rushed through in wash-up, that are geared to our old status quo, and we risk basing policy only on what we know from the past, because on that, we have data.

In the timeframe that Brexit will become tangible, we will feel the effects of the greatest shake up of Higher Education in 25 years. Parts of the Higher Education and Research Act, and Technical and Further Education Act are unsuited to the new order post-Brexit.

What it will do: The new HE law encourages competition between institutions, while the TFE Act centres in large part on how to manage insolvency.

What it should do: Policy needs to promote open, collaborative networks if scholarly communities are to thrive within a now reduced research and academic circle.

If nothing changes, we will see harm to these teaching institutions and the people in them. The stance on counting foreign students in total migration numbers, to take one example, is singularly pointless.

Even the Royal Society report on Machine Learning noted the UK approach to immigration as a potential harm to prosperity.

Local authorities today cannot legally build schools under their own authority, even where they are needed; new schools must be free schools. The free school model has seen high turnover and closures: a rather unstable model.

Recent legislation has meant not only restructuring, but a repurposing of what education [authorities] are expected to offer.

A new Statutory Instrument, the School and Early Years Finance (England) Regulations 2017, makes music, arts and playgrounds items ‘that may be removed from maintained schools’ budget shares’.

How will this withdrawal of provision affect skills starting from the Early Years throughout young people’s education?

2. Policy

Education policy, if it continues along the grammar school path, will divide communities into the ‘passed’ and the ‘unselected’. A side effect of selective schooling (a feature or a bug, depending on your point of view) is socio-economic engineering. It builds class walls in the classroom, while others, like the Fabian Women, say we should be breaking through glass ceilings. Current policy in a wider sense is creating an environment hostile to human integration, and creating division across the entire education system for children aged 2 to 19.

The curriculum is narrowing, according to staff I’ve spoken to recently, as a result of measurement focus on Progress 8, and due to funding constraints.

What effect will this have on analysis of knowledge, discernment, how to assess when computers have made a mistake or supplied misinformation, and how to apply wisdom? Skills that today still distinguish human from machine learning.

What narrowing the curriculum does: Students have fewer opportunities to discover their skill sets, limiting the development of their social skills, their cultural growth, and their development as rounded, happy human beings.

What we could do: Promote a long-term love of learning, in and outside school and in communities. Reinvest in the arts, music and play, which support mental and physical health and create a culture in which people like to live as well as work. Library and community centre funding must be re-prioritised, ensuring inclusion and provision outside school for all abilities.

Austerity builds barriers to opportunity and skills. Children who cannot afford to pay are excluded from extracurricular classes. We already divide our children, through private and state education, into those with better facilities and funding to enjoy and explore a fully rounded education, and those whose funding will not stretch much beyond the bare curriculum. For SEN children, even that has been stripped back further.

All the accepted current evidence says selective schooling limits social mobility and limits choice. Talk of an evidence-based profession is hard to square with a passion for grammars: an against-the-evidence policy.

Existing barriers are likely to become entrenched in twenty years. What does it do to society, if we are divided in our communities by money, or gender, or race, and feel disempowered as individuals? Are we less responsible for our actions if there’s nothing we can do about it? If others have more money, more power than us, others have more control over our lives, and “no matter what we do, we won’t pass the 11 plus”?

Without joined-up scrutiny of these policy effects across the board, we risk embedding these barriers into future planning. Today’s data are used to train ‘how the system should work’. If current data are what applicants in five years’ time will base their expectations on, will their decisions be objective, and will in-built bias be transparent?

3. Sociological effects of legislation.

It’s not only institutions that will lose autonomy in the Higher Education and Research Act.

At present, the risk to the autonomy of science and research is theoretical — but the implications for academic freedom are troubling. [Nature 538, 5 (06 October 2016)]

The Secretary of State for Education now also has new powers of information about individual applicants and students. Combined with the Digital Economy Act, the law can ride roughshod over students’ autonomy and consent choices. Today students can opt out of UCAS automatically sharing their personal data with the Student Loans Company, for example. Thanks to these new powers, that’s gone.

The Act further includes the intention to make institutions release more data about course intake and results under the banner of ‘transparency’. Part of the aim is indisputably positive: to expose discrimination and inequality of all kinds. It also aims to make the £ cost-benefit return “clearer” to applicants, by showing what exams you need to get in and what you come out with, and then, by joining all that personal data to the longitudinal school record and to tax and welfare data, what the return is on your student loan. The government can then also see what your education ‘cost or benefit’ the Treasury. It is all of course much more nuanced than that, but that’s the very simplified gist.

This ‘destinations data’ is going to be a dataset we hear ever more about and has the potential to influence education policy from age 2.

Aside from the issue of personal data disclosiveness when published by institutions — we already know of individuals who could spot themselves in a current published dataset — I worry that this direction using data for ‘advice’ is unhelpful. What if we’re looking at the wrong data upon which to base future decisions? The past doesn’t take account of Brexit or enable applicants to do so.

Researchers [and applicants, the year before they apply or start a course] will be looking at what *was*: predicted and achieved qualifying grades, the make-up of the class, course results, first-job earnings. What was, for other people, is at least five years old by the time it is looked at. Five years is a long time out of date.

4. Change

Teachers and schools long since reached saturation point in their capacity to handle change. Reform over the last five years has been drastic, in structures and curriculum, and is ongoing in funding. There is no ongoing teacher training, and the lack of CPD take-up is exacerbated by underfunding.

Teachers are fed up with change. They want stability. But contrary to the current “strong and stable” message, the reality is that we will get anything but, and must instead manage change if we are to thrive. Politically, we will see a backlash when ‘stable’ proves undeliverable.

But teaching has not seen ‘stable’ for some time. Teachers are asking for fewer children and more cash in the classroom. Unions talk of a focus on learning, not testing, to drive school standards. If the planned restructuring of funding happens, how will it affect staff retention?

We know schools are already reducing staff. How will this affect employment, adult and children’s skill development, their ambition, and society and economy?

Where could legislation and policy look ahead?

  • What are the big Brexit targets and barriers and when do we expect them?
  • How is the fallout from underfunding and the reduction of teaching staff expected to affect skills provision?
  • State education policy is increasingly hands-off. What is the incentive for local schools or MATs to look much beyond the short term?
  • How do local decisions ensure education is preparing their community, but also considering society, health and (elderly) social care, Post-Brexit readiness and women’s economic empowerment?
  • How does our ageing population shift in the same time frame?

How can the education system adapt?

We need to talk more about the other changes in the system happening in parallel to Brexit; to join the dots; and to weigh the potential positive and harmful effects of technology.

Gender plays a role here too, as does mitigating discrimination of all kinds and confirmation bias, even within the tech itself: whether AI, for example, is going to be better than us at decision-making if we teach it to be biased.

Dr Lisa Maria Mueller talked about the effects and influence of age, setting and language factors on what skills we will need, and employment. While there are certain skills sets that computers are and will be better at than people, she argued society also needs to continue to cultivate human skills in cultural sensitivities, empathy, and understanding. We all nodded. But how?

To develop all these human skills is going to take investment. Investment in the humans that teach us. Bennie Kara, Assistant Headteacher in London, spoke about school cuts and how they will affect children’s futures.

The future of England’s education must be geared to a world in which knowledge and facts are ubiquitous, and more readily available online than at any other time. And access to learning must be inclusive: that means including SEN and low-income families, the unskilled, everyone. As we become more internationally remote, we must put safeguards in place if we are to support thriving communities.

Policy and legislation must also preserve and respect human dignity in a changing work environment, and review not only what work is on offer, but *how*; the kinds of contracts and jobs available.

Where might practice need to adapt now?

  • Re-consider curriculum content with its focus on facts. Will success risk being measured based on out of date knowledge, and a measure of recall? Are these skills in growing or dwindling need?
  • Knowledge focus must place value on analysis, discernment, and application of facts that computers will learn and recall better than us. Much of that learning happens outside school.
  • Opportunities have been cut, together with funding. We need communities brought back together, if they are not to collapse. Funding centres of local learning, restoring libraries and community centres will be essential to local skill development.

What is missing?

Although Sarah Waite spoke (in a suitably Purdah-appropriate tone) about the importance of basic skills in the future labour market, we didn’t get to talk about education preparing us for the lack of jobs of the future, and what that changed labour market will look like.

What skills will *not* be needed? Who decides? If left to companies’ sponsor-led steer in academies, what effects will we see in society?

Discussions of a future education model and technology seem to share a common theme: people seem reduced in their capacity to make autonomous choices. But they share no positive vision.

  • Technology should empower us, but it seems to empower the State and diminish citizens’ autonomy in many of today’s policies, and in future scenarios especially around the use of personal data and Digital Economy.
  • Technology should enable greater collaboration, but current tech in education policy is focused too little on use on children’s own terms, and too heavily on top-down monitoring: of scoring, screen time, search terms. Further restrictions through age verification are coming, and may reduce access to and participation in online services if not done well.
  • Infrastructure weakness is letting down skills training: University Technical Colleges (UTCs) are unpopular and failing to fill places. There is a lack of an overarching, area-wide strategic plan for pupils in which UTCs play a part. Local authorities played an important role in regional planning, a role that needs to be restored to ensure joined-up local thinking.

How do we ensure women are not left behind?

The final question of the evening asked how women will be affected by Brexit and the changing job market. Part of the overall risk, the panel concluded, relates to [lack of] equal pay. But where are the assessments of the gendered effects in the UK of:

  • community structural change, intra-family support, and the effect on demand for social care
  • tech solutions in response to lack of human interaction and staffing shortages, including robots in the home and telecare
  • the disproportionate drop out of work, due to unpaid care roles, and the difficulty of getting back in after a break
  • the roles and types of work likely to be most affected or replaced by machine learning and robots
  • how women will, or will not, be empowered socially by technology

Education needs to respond quickly to the known data on where women are already being left behind now. The attrition rate in teaching in England after two to three years, for example, is poor, and getting worse. What will government do to keep teachers teaching? Their value as role models is not captured in pupils’ exam results based entirely on knowledge transfer.

Our GCSEs this year go back to purely exam-based testing and remove applied coursework marking. Practitioners say this is likely to mean lower attainment for girls than boys, and likely to leave girls behind at an earlier age.

“There is compelling evidence to suggest that girls in particular may be affected by the changes — as research suggests that boys perform more confidently when assessed by exams alone.”

Jennifer Tuckett spoke about what fairness might look like for female education in the Creative Industries. From school-leaver to returning mother, and retraining older women, appreciating the effects of gender in education is intrinsic to the future jobs market.

We also need broader public understanding of the feedback loop between technology and the process and delivery of teaching itself. As school management becomes increasingly important, and is male dominated, how will changes in teaching affect women disproportionately? Fact delivery and testing can be done by machine, and that supports the current policy direction, but can a computer create a love of learning and teach humans how to think?

“There is an opportunity for a holistic synthesis of research into gender, the effect of tech on the workplace, the effect of technology on care roles, risks and opportunities.”

Delivering education to ensure women are not left behind includes making sure that girls entering education as teenagers now are not led down routes without thinking about what they will want and need in future, regardless of work.

Education must adapt to changed employment markets, and the social and community effects of Brexit. If it does not, barriers will become embedded: geographical, economic, language, familial, skills, and social exclusion.

In short

In summary, what is the government’s Brexit vision? We must know what they see five, ten, and twenty-five years ahead, set against an understanding of the landscape as-is, in order to peg other policy to it.

With this foundation, what we know and what we estimate we don’t know yet can be planned for.

Once we know where we are going in policy, we can do a fit-gap to map how to get people there.

Estimate which skills gaps need to be filled and which do not. Where will change be hardest?

Change is not new. But there is current potential for massive, lasting, long-term economic and social damage to our young people today. Government is hindered by short-term political thinking, but it has a long-term responsibility to ensure children are not mis-educated because policy and the future environment are not aligned.

We deserve public, transparent, informed debate to plan our lives.

We enter the unknown of the education triangle (Brexit, underfunding, divisive structural policy) at our peril for the next ten years and beyond, without appropriate adjustment to pre-Brexit legislation and policy plans for the new world order.

The combined negative effects on employment at scale and at pace must be assessed with urgency, not by big Tech who will profit, but with an eye on future fairness, and public economic and social good. Academy sponsors, decision makers in curriculum choices, schools with limited funding, have no incentives to look to the wider world.

If we’re going to go it alone, we had better be robust as a society, and that can’t be just some of us, and can’t only be about skills seen as having a tangible output.

All this discussion is framed by the premise that education’s aim is to prepare a future workforce for work, and that it is sustainable.

Policy is increasingly based on work that is measured by economic output. We must not leave out or behind those who do not, or cannot, or whose work is unmeasured yet contributes to the world.

‘The only future worth building includes everyone,’ said the Pope in a recent TedTalk.

What kind of future do you want to see yourself living in? Will we all work or will there be universal basic income? What will happen on housing, an ageing population, air pollution, prisons, free movement, migration, and health? What will keep communities together as their known world in employment, and family life, and support collapse? How will education enable children to discover their talents and passions?

Human beings are more than what we do. The sense of a country of who we are and what we stand for is about more than our employment or what we earn. And we cannot live on slogans alone.

Who we in the UK think we will be after Brexit needs real and substantial answers. What are we going to *do* and *be* in the world?

Without this vision, any mandate voted for on June 8th will be made in the dark and open to future objection writ large. ‘We’ must be inclusive, based on a consensus, not simply a ‘mandate’.

Only with clear vision for all these facets fitting together in a model of how we will grow in all senses, will we be able to answer the question, is education preparing us [all] for the jobs of the future?

More than this, we must ask if education is preparing people for the lack of jobs, for changing relationships in our communities, with each other, and with machines.

Change is coming, Brexit or not. But Brexit has exacerbated the potential to miss opportunities, embed barriers, and see negative side-effects from changes already underway in employment, in an accelerated timeframe.

If our education policy today is not gearing up to that change, we must.

Failing a generation is not what post-Brexit Britain needs

Basically Britain needs Prof. Brian Cox shaping education policy:

“If it were up to me I would increase pay and conditions and levels of responsibility and respect significantly, because it is an investment that would pay itself back many times over in the decades to come.”

Don’t use children as ‘measurement probes’ to test schools

What effect does using school exam results to reform the school system have on children? And what effect does it have on society?

Last autumn Ofqual published a report and their study on consistency of exam marking and metrics.

The report concluded that half of pupils in English Literature, as an example, are not awarded the “correct” grade on a particular exam paper due to marking inconsistencies and the design of the tests.
Given the complexity and sensitivity of the data, Ofqual concluded, it is essential that the metrics stand up to scrutiny and that there is a very clear understanding of the meaning and application of any quality-of-marking measure. They wrote that “there are dangers that information from metrics (particularly when related to grade boundaries) could be used out of context.”
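Ofqual’s finding is easier to grasp with a toy model: if ordinary marker disagreement spans a few marks either side of the definitive mark, a script sitting near a grade boundary changes grade far more often than one sitting mid-band. A minimal simulation sketch (all numbers invented; this is not Ofqual’s methodology):

```python
import random

random.seed(42)

def simulate_grade_flips(true_mark, marker_sd, boundaries, trials=10_000):
    """Estimate how often marker noise moves a script across a grade boundary.

    true_mark  : the 'definitive' mark a senior examiner would award
    marker_sd  : standard deviation of ordinary marker disagreement
    boundaries : ascending list of grade-boundary marks (invented here)
    """
    def grade(mark):
        return sum(mark >= b for b in boundaries)  # 0 = lowest band

    true_grade = grade(true_mark)
    flips = 0
    for _ in range(trials):
        awarded = true_mark + random.gauss(0, marker_sd)
        if grade(awarded) != true_grade:
            flips += 1
    return flips / trials

# A script one mark above a boundary, with +/-3 marks of typical disagreement,
# changes grade far more often than a script sitting mid-band.
near = simulate_grade_flips(41, 3, [20, 30, 40, 50])
mid = simulate_grade_flips(35, 3, [20, 30, 40, 50])
print(near, mid)
```

The point is structural, not about any particular numbers: however accurate individual markers are on average, the closer scripts cluster around boundaries, the more grades the same noise will flip.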

Context and accuracy are fundamental to the value of and trust in these tests. And at the moment, trust is not high in the system behind it. There must also be trust in policy behind the system.

This summer, two sets of UK school tests will come under scrutiny: GCSEs and SATs. The goalposts are moving for children and schools across the country. And it’s bad for children and bad for Britain.

Grades A-G will be swapped for numbers 1-9

15-16 year olds sitting GCSEs will see their exams shift to a numerical system, scored from the highest Grade 9 down to Grade 1, with the three top grades replacing the current A and A*. The alphabetical grading system will be fully phased out by 2019.

The plans intended that roughly the same proportion of students as have achieved a Grade C will be awarded a new Grade 4 and as Schools Week reported: “There will be two GCSE pass rates in school performance tables.”

One will measure grade 5s or above, and this will be called the ‘strong’ pass rate. And the other will measure grade 4s or above, and this will be the ‘standard’ pass rate.
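The two headline measures can be sketched in a few lines. The grade equivalences are Ofqual’s broad intent rather than exact boundary mappings, and the cohort below is invented for illustration:

```python
# Unofficial sketch of the new structure: exact boundary equivalences are set
# by Ofqual each year, so these thresholds are illustrative only.
STANDARD_PASS = 4  # roughly equivalent to the old grade C
STRONG_PASS = 5

def pass_rates(grades):
    """Return (standard, strong) pass rates for a list of numeric grades 9-1."""
    n = len(grades)
    standard = sum(g >= STANDARD_PASS for g in grades) / n
    strong = sum(g >= STRONG_PASS for g in grades) / n
    return standard, strong

cohort = [9, 7, 5, 5, 4, 4, 3, 2]  # invented set of pupil grades
standard, strong = pass_rates(cohort)
print(standard, strong)  # 0.75 standard, 0.5 strong
```

The same cohort produces two different headline pass rates, which is exactly why the ‘standard’/‘strong’ distinction matters for performance tables.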

Laura McInerney summed up, “in some senses, it’s not a bad idea as it will mean it is easier to see if the measures are comparable. We can check if the ‘standard’ rate is better or worse over the next few years. (This is particularly good for the DfE who have been told off by the government watchdog for fiddling about with data so much that no one can tell if anything has worked anymore).”

There’s plenty of confusion among parents about how the numerical grading system will work. The confusion you can gauge in playground conversations is also reflected nationally in a more measurable way.

Market research in a range of audiences – including businesses, head teachers, universities, colleges, parents and pupils – found that just 31 per cent of secondary school pupils and 30 per cent of parents were clear on the new numerical grading system.

So that’s a change in the GCSE grading structure. But why? If more differentiators are needed, why not add one or two more letters and shift grade boundaries? A policy need for these changes is unclear.

Machine marking is training on ten year olds

I wonder if any of the shift to numerical marking, is due in any part to a desire to move GCSEs in future to machine marking?

This year, ten and eleven year olds, children in their last year of primary school, will have their SATs tests computer marked.

That’s everything in maths and English. Not multiple choice papers or one-word answers, but full written responses. If their f, b or g doesn’t look like the correct letter in the correct place in the sentence, then it gains no marks.

Parents are concerned about children whose handwriting is awful, but their knowledge is not. How well can they hope to be assessed? If exams are increasingly machine marked out of sight, many sent to India, where is our oversight of the marking process and accuracy?

The concerns I’ve heard simply among local parents and staff seem reflected in national discussions and by the assessor, Ofsted. TES has reported Ofsted’s most senior officials as saying that the inspectorate is just as reluctant to use this year’s writing assessments as it was in 2016. Teachers and parents locally are united in feeling it is not accurate, not fair, and not right.

The content is also to be tougher.

How will we know what is being accurately measured and the accuracy of the metrics with content changes at the same time? How will we know if children didn’t make the mark, or if the marks were simply not awarded?

The accountability of the process is less than transparent to pupils and parents. We have little opportunity for Ofqual’s recommended scrutiny of these metrics, or the data behind the system on our kids.

Causation, correlation and why we should care

The real risk is that no one will be able to tell if there is an error, where it stems from, or whether there is a reason why pass rates differ markedly from what was expected.

After the wide range of changes across pupil attainment, exam content, school progress scores, and their interaction and dependencies, can they all fit together and be comparable with the past at all?

If the SATs are making lots of mistakes simply through being bad at reading ten-year-olds’ handwriting, how will we know?

Or if GCSE scores are lower, will we be able to see if it is because they have genuinely differentiated the results in a wider spread, and stretched out the fail, pass and top passes more strictly than before?

What is likely is that this year’s children who were expecting As and A*s at GCSE, but fail to be one of the two pupils nationally to get the new grade 9, will be disappointed to feel they are not, after all, as great as they thought they were.

And next year, if you can’t be among the one or two to get the top mark, will the best simply stop stretching themselves and rest a bit easier, because, whatever, you won’t get those straight top grades anyway?

Even if children would not change behaviours were they to know, the target range scoring sent by third party data processors to schools, discourages teachers from stretching those at the top.

Politicians look for positive progress, but policies are changing that will increase the number of schools deemed to have failed. Why?

Our children’s results are being used to reform the school system.

Coasting and failing schools can be compelled to become academies.

Government policy on this forced academisation was rejected by popular revolt. It appears that the government is determined that schools *will* become academies with the same fervour that they *will* re-introduce grammar schools. Both are unevidenced and unwanted. But there is a workaround.  Create evidence. Make the successful scores harder to achieve, and more will be seen to fail.

A total of 282 secondary schools in England were deemed to be failing by the government this January, as they “have not met a new set of national standards”.

It is expected that even more will attain ‘less’ this summer. Tim Leunig, Chief Analyst and Chief Scientific Adviser at the Department for Education, made a personal guess that two would reach the top mark.

The context of this GCSE ‘failure’ is the change in how schools are measured. Children’s progress across 8 subjects, or “P8”, is being used as an accountability measure of overall school quality.

But it’s really just: “a school’s average Attainment 8 score adjusted for pupils’ Key Stage 2 attainment.” [Dave Thomson, Education Datalab]
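That adjustment can be sketched roughly as follows. The real calculation uses fine-grained national prior-attainment averages published each year; the bands and expected scores here are invented for illustration:

```python
# Simplified sketch of the Progress 8 idea: each pupil's Attainment 8 score is
# compared with a national average for pupils with the same KS2 prior
# attainment, and the school score is the mean of those differences.
# These bands and expected values are made up for illustration.
EXPECTED_A8 = {"low": 30.0, "middle": 48.0, "high": 65.0}  # hypothetical

def progress8(pupils):
    """pupils: list of (ks2_band, attainment8) tuples -> school P8 score."""
    diffs = [a8 - EXPECTED_A8[band] for band, a8 in pupils]
    # The published methodology scales per-pupil scores down by a factor of 10.
    return sum(diffs) / len(diffs) / 10

school = [("middle", 52.0), ("high", 60.0), ("low", 33.0)]
print(round(progress8(school), 3))
```

Even in this toy form, the sensitivity is visible: a school’s score depends entirely on which national averages its intake is compared against, which is why contextualisation changes results so much.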

Work done by FFT Education Datalab showed that contextualising P8 scores can lead to large changes for some schools.  (Read more here and here). You cannot meaningfully compare schools with different types of intake, but it appears that the government is determined to do so. Starting ever younger if new plans go ahead.

Data is being reshaped to tell stories to fit to policy.

Shaping children’s future

What this reshaping doesn’t factor in at all, is the labelling of a generation or more, with personal failure, from age ten and up.

All this tinkering with the data, isn’t just data.

It’s tinkering badly with our kids’ sense of self, their sense of achievement and aspiration, and with that, the country’s future.

Education reform has become the aim, and it has replaced the aims of education.

Post-Brexit Britain doesn’t need policy that delivers ideology. We don’t need “to use children as ‘measurement probes’ to test schools”.

Just as we shouldn’t use children’s educational path to test their net worth or cost to the economy. Or predict it in future.

Children’s education and human value cannot be measured in data.

Notes on Not the fake news

Notes and thoughts from Full Fact’s event at Newspeak House in London on 27/3, discussing fake news, the misinformation ecosystem, and how best to respond. The recording is here. The contributions and questions part of the evening begins at 55:55.


What is fake news? Are there solutions?

1. Clickbait: celebrity pull used to draw visitors and traffic to sites funded by advertising – kill the business model
2. Mischief makers: Deceptive with hostile intent – bots, trolls, with an agenda
3. Incorrectly held views: ‘vaccinations cause autism’ despite the evidence to the contrary. How can facts reach people who only believe what they want to believe?

Why does it matter? The scrutiny of people in power matters – to politicians, charities, think tanks – as well as the public.

It is fundamental to remember that we do, in general, believe the public has a sense of discernment; however, there is also a disconnect between objective truth and some people’s perception of reality. Can this conflict be resolved? Is it necessary to do so? If yes, when is it necessary, and who decides?

There is a role for independent tracing of unreliable information, its sources and its distribution patterns and identifying who continues to circulate fake news even when asked to desist.

Transparency about these processes is in the public interest.

Overall, there is too little public understanding of how technology and online tools affect behaviours and decision-making.

The Role of Media in Society

How do you define the media?
How can average news consumers distinguish between self-made and distributed content compared with established news sources?
What is the role of media in a democracy?
What is the mainstream media?
Does the media really represent what I want to understand? > Does the media play a role in failure of democracy if news is not representative of all views? > see Brexit, see Trump
What are news values and do we have common press ethics?

New problems in the current press model:

Failure of the traditional media organisations in fact checking; part of the problem is that the credible media is under incredible pressure to compete to gain advertising money share.

Journalism is under resourced. Verification skills are lacking and tools can be time consuming. Techniques like reverse image search, and verification take effort.

Press releases with numbers can be less easily scrutinised so how do we ensure there is not misinformation through poor journalism?

What about confirmation bias and reinforcement?

What about friends’ behaviours? Can and should we try to break these links if we are not getting a fair picture? The Facebook representative was keen to push responsibility for the bubble entirely to users’ choices. Is this fair given the opacity of the model?
Have we cracked the bubble of self-reinforcing stories being the only stories that mutual friends see?
Can we crack the echo chamber?
How do we start to change behaviours? Can we? Should we?

The risk is that if people start to feel nothing is trustworthy, we trust nothing. This harms relations between citizens and state, organisations and consumers, professionals and public and between us all. Community is built on relationships. Relationships are built on trust. Trust is fundamental to a functioning society and economy.

Is it game over?

Will Moy assured the audience that there is no need to descend into blind panic and there is still discernment among the public.

Then it was asked: is perhaps part of the problem that the Internet, in its current construct, is incapable of keeping this problem at bay? Is part of the solution re-architecting and re-engineering the web?

What about algorithms? Search engines start with word frequency and neutral decisions but are now much more nuanced and complex. We really must see how systems decide what is published. Search engines provide but also restrict our access to facts and ‘no one gets past page 2 of search results’. Lack of algorithmic transparency is an issue, but will not be solved due to commercial sensitivities.
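The ‘word frequency’ starting point mentioned above can be sketched in a few lines; real engines layer hundreds of further signals (links, freshness, personalisation) on top of anything this crude. The query and documents below are invented:

```python
# Minimal word-frequency ranking: score each document by how often the query
# terms appear, then sort. This is roughly where search began, and shows how
# much editorial power even a 'neutral' frequency count quietly exercises.
from collections import Counter

def rank(query, documents):
    terms = query.lower().split()
    def score(doc):
        counts = Counter(doc.lower().split())
        return sum(counts[t] for t in terms)
    return sorted(documents, key=score, reverse=True)

docs = [
    "the news today covered the weather",
    "fake news spreads on social media",
    "celebrity clickbait draws visitors to advertising",
]
print(rank("fake news", docs)[0])  # "fake news spreads on social media"
```

Once the ranking function becomes proprietary and far more complex than this, the ‘no one gets past page 2’ problem turns a scoring choice into a gatekeeping choice, which is the transparency concern raised here.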

Fake news creation can be lucrative. Management models that rely on user moderation or comments to give balance can be gamed.

Are there appropriate responses to the grey area between trolling and deliberate deception through fake news that is damaging? In what context and background? Are all communities treated equally?

The question came from the audience whether the panel thought regulation would come from the select committee inquiry. The general response was that it was unlikely.

What are the solutions?

The questions I came away thinking about went unanswered, because I am not sure there are solutions as long as the current news model exists and is funded in the current way by current players.

I believe one of the things that permits fake news is the growing imbalance of money between the big global news distributors and independent and public interest news sources.

This loss of balance, reduces our ability to decide for ourselves what we believe and what matters to us.

The monetisation of news through its packaging in between advertising has surely contaminated the news content itself.

Think of a Facebook promoted post – you can personalise your audience to a set of very narrow and selective characteristics. The bubble that receives that news is already likely to be connected by similar interest pages and friends and the story becomes self reinforcing, showing up in  friends’ timelines.

A modern online newsroom moves content on the webpage around according to what is getting the most views and trending topics in a list encourage the viewers to see what other people are reading, and again, are self reinforcing.

There is also a lack of transparency of power. Where we see a range of choices from which we may choose to digest a range of news, we often fail to see one conglomerate funder which manages them all.

The discussion didn’t address at all the fundamental shift in “what is news” which has taken place over the last twenty years. In part, I believe the responsibility for the credibility level of fake news in viewers lies with 24/7 news channels. They have shifted the balance of content from factual bulletins, to discussion and opinion. Now while the news channel is seen as a source of ‘news’ much of the time, the content is not factual, but opinion, and often that means the promotion and discussion of the opinions of their paymaster.

Most simply, how should I answer the question that my ten year old asks – how do I know if something on the Internet is true or not?

Can we really say it is up to the public to each take on this role and where do we fit the needs of the vulnerable or children into that?

Is the term fake news the wrong approach and something to move away from? Can we move solutions away from target-fixation ‘stop fake news’ which is impossible online, but towards what the problems are that fake news cause?

Interference in democracy. Interference in purchasing power. Interference in decision making. Interference in our emotions.

These interferences with our autonomy are not something the web itself is responsible for, but the people behind the platforms must be accountable for how their technology works.

In the meantime, what can we do?

“if we ever want the spread of fake news to stop we have to take responsibility for calling out those who share fake news (real fake news, not just things that feel wrong), and start doing a bit of basic fact-checking ourselves.” [IB Times, Eliot Higgins is the founder of Bellingcat]

Not everyone has the time or capacity to do that. As long as today’s imbalance of money and power exists, truly independent organisations like Bellingcat and Full Fact have untold value.


The billed Google and Twitter speakers were absent because they were invited to a meeting with the Home Secretary on 28/3. Speakers were Will Moy, Director of Full Fact; Jenni Sargent, Managing Director of ; and Richard Allan, Facebook EMEA Policy Director. The event was chaired by Bill Thompson.

Information society services: Children in the GDPR, Digital Economy Bill & Digital Strategy

In preparation for the General Data Protection Regulation (GDPR), there must be an active UK policy decision in the coming months about children and the Internet – the provision of ‘Information Society Services’. From May 25, 2018, the age of consent for online content aimed at children will be 16 by default, unless UK law is made to lower it.

Age verification for online information services in the GDPR, will mean capturing parent-child relationships. This could mean a parent’s email or credit card unless there are other choices made. What will that mean for access to services for children and to privacy? It is likely to offer companies an opportunity for a data grab, and mean privacy loss for the public, as more data about family relationships will be created and collected than the content provider would get otherwise.

Our interactions create a blended identity of online and offline attributes which, as I suggested in a previous post, creates synthesised versions of our selves. This raises questions of data privacy and security.

The goal may be to protect the physical child. The outcome will simultaneously expose children and parents to risks we would not otherwise face, through increased personal data collection. Increasing the data collected increases the associated risks of loss, theft, and harm to identity integrity. How will legislation balance these risks against rights to participation?

The UK government has various work in progress before then that could address these questions.

But will they?

As Sonia Livingstone wrote in the post on the LSE media blog about what to expect from the GDPR and its online challenges for children:

“Now the UK, along with other Member States, has until May 2018 to get its house in order”.

What will that order look like?

The Digital Strategy and Ed Tech

The Digital Strategy commits to changes in National Pupil Data  management. That is, changes in the handling and secondary uses of data collected from pupils in the school census, like using it for national research and planning.

It also means giving data to commercial companies and the press. Companies such as private tutor pupil matching services, and data intermediaries. Journalists at the Times and the Telegraph.

Access to NPD via the ONS VML would mean safe data use, in safe settings, by safe (trained and accredited) users.

Sensitive data — it remains to be seen how the DfE intends to interpret ‘sensitive’, and whether that is the DPA 1998 term or the lay term meaning ‘identifying’, as it should be — will no longer be seen by users for secondary uses outside safe settings.

However, a grey area on privacy and security remains in the “Data Exchange” which will enable EdTech products to “talk to each other”.

The aim of changes in data access is to ensure that children’s data integrity and identity are secure.  Let’s hope the intention that “at all times, the need to preserve appropriate privacy and security will remain paramount and will be non-negotiable” applies across all closed pupil data, and not only to that which may be made available via the VML.

This strategy is still far from clear or set in place.

The Digital Strategy and consumer data rights

The Digital Strategy commits, under the heading “Unlocking the power of data in the UK economy and improving public confidence in its use”, to the implementation of the General Data Protection Regulation by May 2018. The Strategy frames this as a business issue, labelling data “a global commodity”, and as such its handling is framed solely as a requirement needed to ensure “that our businesses can continue to compete and communicate effectively around the world” and that adoption “will ensure a shared and higher standard of protection for consumers and their data.”

As far as children go, the GDPR is far more about the protection of children as people. It focuses on returning to children control over their own identity, and the ability to revoke control by others, rather than on consumer rights.

That said, there are data rights issues which are also consumer issues and  product safety failures posing real risk of harm.

Neither the Digital Economy Bill nor the Digital Strategy addresses these rights and security issues with any meaningful effect, particularly those posed by the Internet of Things.

In fact, the chapter Internet of Things and Smart Infrastructure [9/19] singularly misses out anything on security and safety:

“We want the UK to remain an international leader in R&D and adoption of IoT. We are funding research and innovation through the three year, £30 million IoT UK Programme.”

There was much more thoughtful detail in the 2014 Blackett Review on the IoT to which I was signposted today after yesterday’s post.

If it’s not scary enough for the public to think that their sex secrets and devices are hackable, perhaps it will kill public trust in connected devices more when they find strangers talking to their children through a baby monitor or toy. [BEUC campaign report on #Toyfail]

“The internet-connected toys ‘My Friend Cayla’ and ‘i-Que’ fail miserably when it comes to safeguarding basic consumer rights, security, and privacy. Both toys are sold widely in the EU.”

Digital skills and training in the strategy don’t touch on any form of change-management plan for existing working sectors in which we expect machine learning and AI to change the job market. This is something the digital and industrial strategies must address hand in glove.

The tactics and training providers listed sound super, but there does not appear to be an aspirational strategy hidden between the lines.

The Digital Economy Bill and citizens’ data rights

While the rest of Europe has recognised in this legislation that a future-thinking digital world without boundaries needs future thinking on data protection, and citizens empowered with better control of identity, the UK government appears intent on taking ours away.

To take only one example concerning children: in Cabinet Office-led meetings on the Digital Economy Bill, officials were explicit about its use for identifying and tracking individuals labelled under “Troubled Families”, and interventions with them. Why consent, which is required to work directly with people, is being ignored when accessing their information is baffling, and in conflict with both the spirit and letter of the GDPR. Students and applicants will see their personal data sent to the Student Loans Company without their consent or knowledge. This overrides the current consent model in place at UCAS.

It is baffling that the government is relentlessly pursuing the Digital Economy Bill’s data copying clauses, which remove confidentiality by default and, through Chapter 2’s opening of the Civil Registry, will release our identities in birth, marriage and death data for third-party use without consent, with no safeguards in the bill.

Government has not only excluded important aspects of Parliamentary scrutiny in the bill, it is trying to introduce “almost untrammeled powers” (paragraph 21), that will “very significantly broaden the scope for the sharing of information” and “specified persons”  which applies “whether the service provider concerned is in the public sector or is a charity or a commercial organisation” and non-specific purposes for which the information may be disclosed or used. [Reference: Scrutiny committee comments]

Future changes need future joined up thinking

While it is important to learn from the past, I worry that the effort some social scientists put into looking backwards is not matched by enthusiasm to look ahead and make active recommendations for a better future.

Society appears to have its eyes wide shut to the risks of coercive control and nudge as research among academics and government departments moves in the direction of predictive data analysis.

Uses of administrative big data and publicly available social media data, for example in research and statistics, need further regulation in practice and policy, but instead the Digital Economy Bill looks only at how more data can be got out of departmental silos.

A certain intransigence about data sharing with researchers from government departments is understandable. What’s the incentive for DWP to release data showing its policy may kill people?

Westminster may fear it has more to lose from data releases, and does not seek out the political capital to be had from good news.

The ethics of data science are applied patchily at best in government, and inconsistently in academic expectations.

Some researchers have identified this, but there seems little will to act:

 “It will no longer be possible to assume that secondary data use is ethically unproblematic.”

[Data Horizons: New forms of Data for Social Research, Elliot, M., Purdam, K., Mackey, E., School of Social Sciences, The University Of Manchester, 2013.]

Research and legislation alike seem hell-bent on the low-hanging fruit and miss out the really hard things. What meaningful benefit comes from spending millions of pounds exploiting these personal data and opening our identities to risk, just to find out whether course X means people are employed in tax bracket Y five years later, versus course Z where everyone ends up a self-employed artist? What ethics will be applied to the outcomes of those questions, and why?

And while government is busy joining up children’s education data throughout their lifetimes, from age 2 across school, FE and HE, into their HMRC and DWP interactions, there is no public plan in the Digital Strategy for the employment market of the coming 10 to 20 years, when many believe, as do these authors in Scientific American, that “around half of today’s jobs will be threatened by algorithms. 40% of today’s top 500 companies will have vanished in a decade.”

What benefit is there in knowing only what was, or in plans around workforce and digital skills that list ad hoc tactics but no strategy?

We must safeguard jobs and societal needs, but just teaching people to code is no solution to a fundamental gap in what our purpose will be, and in our place as a world-leading tech nation after Brexit. We will have fewer talented people from across the world staying on after completing academic studies, because they will not be coming at all.

There may be investment in A.I. but where is the investment in good data practices around automation and machine learning in the Digital Economy Bill?

To do this Digital Strategy well, we need joined up thinking.

Improving online safety for children in The Green Paper on Children’s Internet Safety should mean one thing:

Children should be able to use online services without being used and abused by them.

This article arrived on my Twitter timeline via a number of people. Doteveryone CEO Rachel Coldicutt summed up various strands of thought I started to hear hints of last month at #CPDP2017 in Brussels:

“As designers and engineers, we’ve contributed to a post-thought world. In 2017, it’s time to start making people think again.

“We need to find new ways of putting friction and thoughtfulness back into the products we make.” [Glanceable truthiness, 30.1.2017]

Let’s keep the human in discussions about technology, and people first in our products

All too often in technology and even privacy discussions, people have become ‘consumers’ and ‘customers’ instead of people.

The Digital Strategy may seek to unlock “the power of data in the UK economy” but policy and legislation must put equal if not more emphasis on “improving public confidence in its use” if that long term opportunity is to be achieved.

And in technology discussions about AI and algorithms we hear very little about people at all.  Discussions I hear seem siloed instead into three camps: the academics, the designers and developers,  the politicians and policy makers.  And then comes the lowest circle, ‘the public’ and ‘society’.

It is therefore unsurprising that human rights have fallen down the ranking of importance in some areas of technology development.

It’s time to get this house in order.

Information. Society. Services. Children in the Internet of Things.

In this post, I think out loud about what improving online safety for children in The Green Paper on Children’s Internet Safety means ahead of the General Data Protection Regulation in 2018. Children should be able to use online services without being used and abused by them. If this regulation and other UK Government policy and strategy are to be meaningful for children, I think we need to completely rethink the State approach to what data privacy means in the Internet of Things.


Children in the Internet of Things

In 1979 Star Trek: The Motion Picture created a striking image of A.I. as Commander Decker merged with V’Ger and the artificial copy of Lieutenant Ilia, blending human and computer intelligence and creating an integrated, synthesised form of life.

Ten years later, Sir Tim Berners-Lee wrote his proposal and created the world wide web, designing the way for people to share and access knowledge with each other through networks of computers.

In the 90s my parents described using the Internet as spending time ‘on the computer’, and going online meant from a fixed phone point.

Today our wireless computers in our homes, pockets and school bags, have built-in added functionality to enable us to do other things with them at the same time; make toast, play a game, and make a phone call, and we live in the Internet of Things.

Although we talk about it as if it were an environment of inanimate appliances,  it would be more accurate to think of the interconnected web of information that these things capture, create and share about our interactions 24/7, as vibrant snapshots of our lives, labelled with retrievable tags, and stored within the Internet.

Data about every moment of how and when we use an appliance, is captured at a rapid rate, or measured by smart meters, and shared within a network of computers. Computers that not only capture data but create, analyse and exchange new data about the people using them and how they interact with the appliance.

In this environment, children’s lives in the Internet of Things no longer involve a conscious choice to go online. Using the Internet is no longer about going online, but being online. The web knows us. In using the web, we become part of the web.

To the computers that gather their data, our children have simply become extensions of the things they use: things about which data is gathered and sold by the companies that make and sell them, companies that can even choose who uses them and how. In the Internet of Things, children have become things of the Internet.

A child’s use of a smart hairbrush becomes part of the company’s knowledge base of how the hairbrush is used. A child’s voice is captured and becomes part of the training database for the development of the doll or robot they play with.

Our biometrics, measurements of the unique physical parts of our identities, provide a further example: the offline self recently and physically incorporated into services such as banking. Over 1 million UK children’s biometrics are estimated to be used in school canteens and library services through, often compulsory, fingerprinting.

Our interactions create a blended identity of online and offline attributes.

The web has created synthesised versions of our selves.

I say synthesised not synthetic, because our online self is blended with our real self and ‘synthetic’ gives the impression of being less real. If you take my own children’s everyday life as an example,  there is no ‘real’ life that is without a digital self.  The two are inseparable. And we might have multiple versions.

Our synthesised self is not only about our interactions with appliances and what we do, but who we know and how we think based on how we take decisions.

Data is created and captured not only about how we live, but where we live. These online data can be linked with data about our offline behaviours, generated from trillions of sensors and from physical network interactions with our portable devices. Our synthesised self is tracked through real-life geolocations. In cities surrounded by sensors, under pavements, in buildings, in cameras, mapping and tracking everywhere we go, our behaviours are converted into data and stored inside an overarching network of cloud computers, so that our online lives take on a life of their own.

Data about us, whether uniquely identifiable on its own or not, is created and collected both actively and passively. Online site visits record IP addresses and use linked platform log-ins that can even extract friends lists without consent or affirmative action from those friends.

Using a tool like Privacy Badger from the EFF gives you some insight into how many sites create new data about your online behaviour once that synthesised self logs in, then track your synthesised self across the Internet: how you move from page to page, with what referring and exit pages and URLs, which adverts you click on or ignore, platform types, numbers of clicks, cookies, invisible on-page GIFs and web beacons. Data that computers see, interpret and act on better than we can.
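To make the mechanics concrete, here is a minimal, hypothetical sketch (all field names, functions and values are invented for illustration, not any real tracker’s schema) of the kind of record a third-party tracking beacon can assemble from a single page view:

```python
# Illustrative only: the fields a third-party tracking beacon typically
# captures on a single page view. Names and values are hypothetical.

def build_beacon_record(request_headers, url, referrer, cookie_id):
    """Assemble the kind of profile fragment a tracker logs per visit."""
    return {
        "cookie_id": cookie_id,                               # persistent identifier
        "page": url,                                          # what you read
        "referrer": referrer,                                 # where you came from
        "user_agent": request_headers.get("User-Agent", ""),  # device/browser
        "language": request_headers.get("Accept-Language", ""),
    }

record = build_beacon_record(
    {"User-Agent": "Mozilla/5.0", "Accept-Language": "en-GB"},
    url="https://example.org/article",
    referrer="https://search.example/q=privacy",
    cookie_id="abc-123",
)
print(record["cookie_id"], record["referrer"])
```

Linked across thousands of sites by the persistent cookie ID, records like this accumulate into the behavioural profile described above.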

Those synthesised identities are tracked online,  just as we move about a shopping mall offline.

Sir Tim Berners-Lee said this week that there is a need to put “a fair level of data control back in the hands of people.” It is not merely a need; it is vital to our future flourishing, to our very survival. Data control is not about protecting a list of facts about ourselves and our identity for its own sake; it is about choosing who can exert influence and control over our lives, our choices, and the future of democracy.

And while today that who may be companies, it is increasingly A.I. itself that has a degree of control over our lives, as decisions are machine made.

Understanding how the Internet uses people

We get the service, the web gets our identity and our behaviours. And in what is in effect a hidden slave trade, they get access to use our synthesised selves in secret, and forever.

This grasp of what the Internet is, what the web is, is key to getting a rounded view of children’s online safety. Namely, we need to get away from the sole focus of online safeguarding as about children’s use of the web, and also look at how the web uses children.

Online services use children to:

  • mine, exchange, repackage, and trade profile data, offline behavioural data (location, likes), and invisible Internet-use behavioural data (cookies, website analytics)
  • extend marketing influence in human decision-making earlier in life, even before children carry payment cards of their own,
  • enjoy the insights of parent-child relationships connected by an email account, sometimes a credit card, used as age verification or in online payments.

What are the risks?

Exploitation of identity and behavioural tracking not only puts our synthesised child at risk of exploitation; it puts our real-life child’s future adult identity and data integrity at risk. If we cannot know who holds the keys to our digital identity, how can we trust that systems and services will be fair to us, will not discriminate or defraud, or will not make errors that we cannot understand in order to correct?

Leaks, losses and hacks abound, and manufacturers are slow to respond. Software that monitors children can also be used in coercive control. Organisations whose data are insecure can be held to ransom. Children’s products should do what we expect them to do and nothing more; there should be “no surprises” in how data are used.

Companies tailor and target their marketing activity to those identity profiles. Our data is sold on in secret without consent to data brokers we never see, who in turn sell us on to others who monitor, track and target our synthesised selves every time we show up at their sites, in a never-ending cycle.

And from exploiting the knowledge of our synthesised self, decisions are made by companies, that target their audience, select which search results or adverts to show us, or hide, on which network sites, how often, to actively nudge our behaviours quite invisibly.

Nudge misuse is one of the greatest threats to our autonomy, and with it to democratic control of the society we live in. Who decides on the “choice architecture” that may shape another’s decisions and actions, and on what ethical basis? So once asked the authors who now seem to want to be the decision-makers.

Thinking about Sir Tim Berners-Lee’s comments today on the things that threaten the web, including how to address the loss of control over our personal data, we must frame it not as a user-led loss of control, but as autonomy taken by others: by developers, by product sellers, and by the biggest ‘nudge controllers’, the Internet giants themselves.

Loss of identity is near impossible to reclaim. Our synthesised selves are sold into unending data slavery and we seem powerless to stop it. Our autonomy and with it our self worth, seem diminished.

How can we protect children better online?

Safeguarding must include ending the data slavery of our synthesised self. I think of six things policy shapers need in order to tackle it.

  1. Understanding what ‘online’ and the Internet mean and how the web works – i.e. what data does a visit to a web page collect about the user and what happens to that data?
  2. Threat models and risk must go beyond the usual real-life protection issues. The threats posed by undermining citizens’ autonomy, by loss of public trust and of control over our identity, and by misuse of nudge, some of which are intrinsic to the current web business model or to government policy, remain unseen and underestimated.
  3. On user regulation (age verification / filtering) we must confront the idea that, as a stand-alone step, it will not create a better online experience for the user, since it will not prevent the misuse of our synthesised selves and may increase risks; regulation of misuse must shift the point of responsibility.
  4. Meaningful data privacy training must be mandatory for anyone in contact with children, and its role in children’s safeguarding recognised.
  5. Siloed thinking must go. Forward thinking must join the dots across Departments into a cohesive, inclusive digital strategy, and that doesn’t just mean ‘let’s join all of the data, all of the time’.
  6. Respect our synthesised selves. Data slavery includes government misuse and must end if we respect children’s rights.

In the words of James T. Kirk, “the human adventure is just beginning.”

When our synthesised self is an inseparable blend of offline and online identity, every child is a synthesised child. And they are people. It is vital that government realises its obligation to protect rights to privacy, provision and participation under the Convention on the Rights of the Child, and addresses our children’s real online lives.

Governments, policy makers, and commercial companies must not use children’s offline safety as an excuse in a binary trade off to infringe on those digital rights or ignore risk and harm to the synthesised self in law, policy, and practice.

If future society is to thrive we must do all that is technologically possible to safeguard the best of what makes us human in this blend; our free will.


Part 2 follows with thoughts specific to the upcoming regulations, Digital Economy Bill and Digital Strategy

References:

[1] Internet of things WEF film, starting from 19:30

“What do an umbrella, a shark, a houseplant, the brake pads in a mining truck and a smoke detector all have in common?  They can all be connected online, and in this example, in this WEF film, they are.

“By 2024 more than 50% of home Internet traffic will be used by appliances and devices, rather than just for communication and entertainment…The IoT raises huge questions on privacy and security, that have to be addressed by government, corporations and consumers.”

[2] The government has today announced a “major new drive on internet safety”  [The Register, Martin, A. 27.02.2017]

[3] GDPR page 38 footnote (1) indicates the definition of Information Society Services as laid out in the Directive (EU) 2015/1535 of the European Parliament and of the Council of 9 September 2015 laying down a procedure for the provision of information in the field of technical regulations and of rules on Information Society services (OJ L 241, 17.9.2015, p. 1 and Annex 1)

image source: Startrek.com

Mum, are we there yet? Why should AI care.

Mike Loukides drew similarities between the current status of AI and children’s learning in an article I read this week.

The children I know are always curious to know where they are going, how long will it take, and how they will know when they get there. They ask others for guidance often.

Loukides wrote that if you look carefully at how humans learn, you see surprisingly little unsupervised learning.

If unsupervised learning is a prerequisite for general intelligence, but not the substance, what should we be looking for, he asked. It made me wonder is it also true that general intelligence is a prerequisite for unsupervised learning? And if so, what level of learning must AI achieve before it is capable of recursive self-improvement? What is AI being encouraged to look for as it learns, what is it learning as it looks?

What is AI looking for and how will it know when it gets there?

Loukides says he can imagine a toddler learning some rudiments of counting and addition on his or her own, but can’t imagine a child developing any sort of higher mathematics without a teacher.

I suggest a different starting point. I think children develop on their own, given a foundation. And if the foundation is accompanied by a purpose — to understand why they should learn to count, and why they should want to — and if they have the inspiration, incentive and  assets they’ll soon go off on their own, and outstrip your level of knowledge. That may or may not be with a teacher depending on what is available, cost, and how far they get compared with what they want to achieve.

It’s hard to learn something from scratch by yourself if you have no boundaries to set knowledge within and search for more, or to know when to stop when you have found it.

You’ve only to start an online course, get stuck, and try to find the solution through a search engine to know how hard it can be to find the answer if you don’t know what you’re looking for. You can’t type in search terms if you don’t know the right words to describe the problem.

I described this recently to a fellow codebar-goer, more experienced than me, and she pointed out something much better to me. Don’t search for the solution or describe what you’re trying to do, ask the search engine to find others with the same error message.

In effect she said, your search is wrong. Google knows the answer, but can’t tell you what you want to know, if you don’t ask it in the way it expects.

So what will AI expect from people, and will it care if we don’t know how to interrelate? How does AI best serve humankind, and by whose point of view is ‘best’ defined? Will AI serve only those who think most closely in AI-style steps and language? How will it serve those who don’t know how to talk about it, or with it? AI won’t care if we don’t.

If as Loukides says, we humans are good at learning something and then applying that knowledge in a completely different area, it’s worth us thinking about how we are transferring our knowledge today to AI and how it learns from that. Not only what does AI learn in content and context, but what does it learn about learning?

His comparison of a toddler learning from parents, who in effect are ‘tagging’ objects through repetition of words while looking at images in a picture book, made me wonder how we will teach AI the benefit of learning. What incentive will it have to progress?

“the biggest project facing AI isn’t making the learning process faster and more efficient. It’s moving from machines that solve one problem very well (such as playing Go or generating imitation Rembrandts) to machines that are flexible and can solve many unrelated problems well, even problems they’ve never seen before.”

Is the skill to enable “transfer learning” what will matter most?

For AI to become truly useful, we as a global society need to better understand *where* it might best interface with our daily lives, and most importantly *why*. And we should consider *who* is teaching AI, and who is being left out of the crowdsourcing of AI’s teaching.

Who is teaching AI what it needs to know?

The natural user interfaces through which people interact with today’s more common virtual assistants (Amazon’s Alexa, Apple’s Siri, Viv, and Microsoft’s Cortana) are not just providing information to the user; through their use, those systems are learning. I wonder what percentage of today’s population is using these assistants, how representative they are, and what our AI assistants are being taught through their use. Tay was a swift lesson learned for Microsoft.

In helping shape what AI learns, and what range of language it will use to develop its reference words and knowledge, society co-shapes what AI’s purpose will be, and what, for AI providers, the point of selling it is. So will this technology serve everyone?

Are providers counter-balancing what AI is currently learning from crowdsourcing, if the crowd is not representative of society?

So far we can only teach machines to make decisions based on what we already know, and what we can tell it to decide quickly against pre-known references using lots of data. Will your next image captcha, teach AI to separate the sloth from the pain-au-chocolat?
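The captcha example is, in effect, crowdsourced labelling for supervised learning: each user’s answer is a vote, and the most common answer becomes the training label. A minimal sketch, with invented data and a hypothetical helper function:

```python
from collections import Counter

def majority_label(responses):
    """Pick the most common crowd answer as the training label."""
    label, _count = Counter(responses).most_common(1)[0]
    return label

# Hypothetical crowd responses for one captcha tile
responses = ["sloth", "sloth", "pain-au-chocolat", "sloth"]
print(majority_label(responses))  # most users said "sloth"
```

The point the paragraph makes follows directly: if the crowd is unrepresentative or wrong, the majority label is still what the machine learns as truth.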

One of the task items for machine processing is better search. Measurable, goal-driven tasks have boundaries, but who sets them? When does a computer know it has found enough to make a decision? If the balance of material about the Holocaust on the web, for example, were written by Holocaust deniers, will AI know who is right? How will AI know what is trusted, and by whose measure?

What will matter most is surely not going to be how to optimise knowledge transfer from human to AI — that is the baseline knowledge of supervised learning — and it won’t even be for AI to know when to use its skill set in one place and when to apply it elsewhere in a different context; so-called learning transfer, as Mike Loukides says. But rather, will AI reach the point where it cares?

  • Will AI ever care what it should know and where to stop or when it knows enough on any given subject?
  • How will it know or care if what it learns is true?
  • If, in the best interests of advancing technology, or through inaction, we do not limit its boundaries, what oversight is there of its implications?

Online limits will limit what we can reach in Thinking and Learning

If you look carefully at how humans learn online, I think that rather than seeing surprisingly little unsupervised learning, you see a lot of unsupervised questioning. It is often in the questioning done in private that we discover, and through discovery that we learn. Valuable discoveries are often made, whether in science or in maths, and important truths found, where there is a need to challenge the status quo. Imagine if Galileo had given up.

The freedom to think freely and to challenge authority is vital to protect, and it is one reason why I and others are concerned about the compulsory web monitoring starting on September 5th in all schools in England, and its potential chilling effect. Some are concerned about who might have access to these monitoring results, today or in future; if stored, could they be opened to employers or academic institutions?

If you tell children not to use these search terms, and that they cannot be curious about *this* subject without repercussions, it is censorship. I find the idea bad enough for children, but for us as adults it’s scary.

As Frankie Boyle wrote last November, we need to consider what our internet history is:

“The legislation seems to view it as a list of actions, but it’s not. It’s a document that shows what we’re thinking about.”

Children think and act in ways that they may not as an adult. People also think and act differently in private and in public. It’s concerning that our private online activity will become visible to the State in the IP Bill — whether photographs that captured momentary actions in social media platforms without the possibility to erase them, or trails of transitive thinking via our web history — and third-parties may make covert judgements and conclusions about us, correctly or not, behind the scenes without transparency, oversight or recourse.

Children worry about lack of recourse and repercussions. So do I. Things done in passing, can take on a permanence they never had before and were never intended. If expert providers of the tech world such as Apple Inc, Facebook Inc, Google Inc, Microsoft Corp, Twitter Inc and Yahoo Inc are calling for change, why is the government not listening? This is more than very concerning, it will have disastrous implications for trust in the State, data use by others, self-censorship, and fear that it will lead to outright censorship of adults online too.

By narrowing our parameters what will we not discover? Not debate?  Or not invent? Happy are the clockmakers, and kids who create. Any restriction on freedom to access information, to challenge and question will restrict children’s learning or even their wanting to.  It will limit how we can improve our shared knowledge and improve our society as a result. The same is true of adults.

So in teaching AI how to learn, I wonder how the limitations humans put on its scope (otherwise how would it learn what the developers want?), combined with showing it ‘our thinking’ through search terms, and the narrowing of that thinking if users self-censor under surveillance, will shape what AI helps us with in future. Will it be the things that could help the most people, the poorest people, or will it serve people like those who programme the AI, who use search terms and languages it already understands?

Who is accountable for the scope of what we allow AI to do or not? Who is accountable for what AI learns about us, from our behaviour data if it is used without our knowledge?

How far does AI have to go?

The leap for AI will be if and when it can determine what it doesn’t know, and sees a need to fill that gap. To do that, AI will need to discover a purpose for its own learning, indeed for its own being, and be able to do so without limitation from the framework humans shaped for it. How will AI know what it needs to know and why? How will it know that what it knows is right, and which sources to trust? Against what boundaries will AI decide what to engage with in its learning, from whom, and why? Will it care? Why will it care? Will it find meaning in its reason for being? Why am I here?

We assume AI will know better. We need to care, if AI is going to.

How far are we from a machine capable of recursive self-improvement, asks John Naughton in yesterday’s Guardian, referencing work by Yuval Harari suggesting that artificial intelligence and genetic enhancement will usher in a world of inequality and powerful elites. As I was finishing this piece, I read his article and found myself nodding: discussion of the implications of new technology focuses too much on technology and too little on society’s role in shaping it.

AI at the moment has a very broad meaning for the general public. Is it living with life-supporting humanoids? Do we consider assistive search tools AI? There is no settled general answer to “What is A.I., really?” Some wonder if we are “probably one of the last generations of Homo sapiens” as we know it.

If the purpose of AI is to improve human lives, who defines improvement and who will that improvement serve? Is there a consensus on the direction AI should and should not take, and how far it should go? What will the global language be to speak AI?

As AI learning progresses, every time AI turns to ask its creators, “Are we there yet?”,  how will we know what to say?

image: Stephen Barling flickr.com/photos/cripsyduck (CC BY-NC 2.0)

Gotta know it all? Pokémon GO, privacy and behavioural research

I caught my first Pokémon and I liked it. Well, OK, someone else handed me a phone and insisted I have a go. Turns out my curve ball is pretty good. Pokémon GO is enabling all sorts of new discoveries.

Discoveries reportedly including a dead man, robbery, picking up new friends, and scrapes and bruises. While players are out hunting anime in augmented reality, enjoying the novelty, and discovering interesting fun facts about their vicinity, Pokémon GO is gathering a lot of data. It’s influencing human activity in ways that other games can only envy, taking in-game interaction to a whole new level.

And it’s popular.

But what is it learning about us as we do it?

This week questions have been asked about the depth of interaction that the app gets by accessing users’ log in credentials.

What I would like to know is what access goes in the other direction?

Google, heavily invested in AI and machine intelligence research, notes that with “learning systems placed at the core of interactive services in a fast changing and sometimes adversarial environment, combinations of techniques including deep learning and statistical models need to be combined with ideas from control and game theory.”

The app, which is free to download, has raised concerns over suggestions the app could access a user’s entire Google account, including email and passwords. Then it seemed it couldn’t. But Niantic is reported to have made changes to permissions to limit access to basic profile information anyway.

If Niantic gets access to data owned by Google through its use of Google log-in credentials, does Niantic’s investor, Google’s parent Alphabet, get the reverse: user data from the Google log-in interaction with the app? And if so, what does Google learn through the interaction?

Who gets access to what data and why?

Brian Crecente writes that Apple, Google and Niantic are likely making more on Pokémon GO than Nintendo, taking 30 percent of revenue from in-app purchases on their online stores.

The next step is to make money from marketing deals between Niantic and the offline stores used as in-game focal points, gyms and more, according to Bryan Menegus at Gizmodo, who reported that Redditors had discovered decompiled code in the Android and iOS versions of Pokémon GO earlier this week “that indicated a potential sponsorship deal with global burger chain McDonald’s.”

The logical progression of this is that the offline store partners, i.e. McDonald’s and friends, will be making money from the players who get led to their shops, restaurants and cafés, where players will hang out longer than at the Pokéstop, because the human interaction with other humans, the battles between your collected creatures, and the teamwork are at the heart of the game. Since you can’t visit gyms until you are level 5 and have chosen a team, players are building up profiles over time and getting social in real life, generating location data that may build up patterns about them.

This evening the two players that I spoke to were already real-life friends on their way home from work (that now takes at least an hour longer every evening) and they’re finding the real-life location facts quite fun, including that thing they pass on the bus every day, and umm, the Scientology centre. Well, more about that later**.

Every player I spotted looking at the phone with that finger-flick action gave themselves away with shared wry smiles. All thirty-something men. There is possibly something of a legacy effect in this, they said, since the initial Pokémon game released 20 years ago is drawing players who were tweens then.

Since the app is online and open to all, children can play too. What this might mean for them in the offline world is something the NSPCC picked up on here before the UK launch. Its focus of concern is the physical safety of young players, citing the risk of misuse of in-game lures. I am not sure how much of an increased risk this is compared with existing scenarios, or whether children will be increasingly unsupervised or not; it’s not a totally new concept, and players of all ages must be mindful of where they are playing**. People getting together in the small hours of the night has generated stories which for now are mostly fun. (Go Red Team.) Others are worried about hacking. And it raises all sorts of questions if private and public spaces have become Pokéstops.

While the NSPCC includes considerations on the approach to privacy in a recent, more general review of apps, it hasn’t yet mentioned the less obvious considerations of privacy and ethics in Pokémon GO: encouraging anyone, but particularly children, out of their home or protected environments and into commercial settings with the explicit aim of targeting their spending. This is big business.

Privacy in Pokémon GO

I think we are yet to see a really transparent discussion of the broader privacy implications of the game because the combination of multiple privacy policies involved is less than transparent. They are long, they seem complete, but are they meaningful?

We can’t see how they interact.

Google has crowdsourced the collection of real-time traffic data via mobile phones. Geolocation data from Google Maps using GPS, as well as network provider data, seem necessary to display the street data to players. Apparently you can download and use the maps offline, since Pokémon GO uses the Google Maps API. Google goes to “great lengths to make sure that imagery is useful, and reflects the world our users explore.” In building a Google virtual-reality copy of the real world, how data are also collected, and will be used, about all of us who live in it, is a little woolly to the public.

U.S. Senator Al Franken is apparently already asking Niantic these questions. He points out that Pokémon GO has indicated it shares de-identified and aggregate data with other third parties for a multitude of purposes but does not describe the purposes for which Pokémon GO would share or sell those data [c].

It’s widely recognised that anonymisation in many cases fails, so passing only anonymised data may sound reassuring but fail in reality. Stripping out what are considered individual personal identifiers in data protection terms can still leave individuals with unique characteristics, or people profiled as groups.
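To illustrate why, here is a toy sketch (the records and field names are entirely invented) of how a combination of ordinary attributes, with the name already removed, can still single out one person:

```python
# Toy illustration: records with the direct identifier (name) removed.
# The remaining ordinary attributes can still single out an individual
# when their combination is unique in the dataset.
from collections import Counter

records = [
    {"age_band": "30-39", "postcode": "SW1", "team": "red"},
    {"age_band": "30-39", "postcode": "SW1", "team": "red"},
    {"age_band": "30-39", "postcode": "SW1", "team": "blue"},
    {"age_band": "30-39", "postcode": "SW1", "team": "blue"},
    {"age_band": "20-29", "postcode": "N7",  "team": "yellow"},
]

# Count how many people share each combination of attributes
counts = Counter((r["age_band"], r["postcode"], r["team"]) for r in records)

# Any combination seen only once is effectively re-identifiable
# by anyone who already knows those three facts about a person.
unique = [combo for combo, n in counts.items() if n == 1]
print(unique)  # [('20-29', 'N7', 'yellow')]
```

In data protection terms these leftover attributes are often called quasi-identifiers; a dataset only resists this kind of lookup when every combination is shared by several people, which is the idea behind k-anonymity.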

Opt-out, he feels, is inadequate as a consent model for the personal and geolocation data that the app is collecting and passing to others in the U.S.

While the app provider would I’m sure argue that the UK privacy model respects the European opt in requirement, I would be surprised if many have read it. Privacy policies fail.

Poor practices must be challenged if we are to preserve the integrity of controlling the use of our data and knowledge about ourselves. Being aware of who we have ceded control of marketing to us, or influencing how we might be interacting with our environment, is at least a step towards not blindly giving up control of free choice.

The Pokémon GO permissions “for the purpose of performing services on our behalf“, “third party service providers to work with us to administer and provide the Services” and “also use location information to improve and personalize our Services for you (or your authorized child)” are so broad that they could mean almost anything. They can also be changed without any notice period. They are therefore pretty meaningless. But it’s the third parties’ connection, the data collection in passing, that is completely hidden from players.

If we are ever to use privacy policies as meaningful tools to enable consent, then they must be transparent to show how a chain of permissions between companies connect their services.

Otherwise they are no more than get-out-of-jail-free cards for the companies that trade our data behind the scenes, should we ever claim for its misuse. Data collectors must improve transparency.

Behavioural tracking and trust

Covert data collection and interaction is not conducive to user trust, whether through a failure to communicate by design or not.

By combining location data and behavioural data, measuring footfall is described as “the holy grail for retailers and landlords alike”, and it is valuable. “Pavement Opportunity” data may be sent anonymously, but if its analysis and storage provide ways to pitch to people, even without knowing who they are individually, or to groups of people, it is discriminatory and potentially invisibly predatory. The pedestrian, or the player, Jo Public, is a commercial opportunity.

Pokémon GO has potential to connect the opportunity for profit makers with our pockets like never before. But they’re not alone.

Who else is getting our location data that we don’t sign up to share, “in 81 towns and cities across Great Britain”?

Whether it’s footfall outside the shops or packaged as a game that gets us inside them, public interest researchers and commercial companies alike risk losing our trust if we feel used as pieces in a game that we didn’t knowingly sign up to. It’s creepy.

For children the ethical implications are even greater.

There are obligations to meet higher legal and ethical standards when processing children’s data and presenting them marketing. Parental consent requirements fail children for a range of reasons.

So far, the UK has said it will implement the EU GDPR. Clear and affirmative consent is needed. Parental consent will be required for the processing of personal data of children under age 16. EU Member States may lower the age requiring parental consent to 13, so what that will mean for children here in the UK is unknown.

The ethics of product placement and marketing rules to children of all ages go out the window however, when the whole game or programme is one long animated advert. On children’s television and YouTube, content producers have turned brand product placement into programmes: My Little Pony, Barbie, Playmobil and many more.

Alice Webb, Director of BBC Children’s and BBC North, looked at some of these challenges, as the BBC considers how to deliver content for children whilst adapting to technological advances, in this LSE blog, alongside the publication of a new policy brief about families and ‘screen time’ by Alicia Blum-Ross and Sonia Livingstone.

So is this augmented reality any different from other platforms?

Yes because you can’t play the game without accepting the use of the maps and by default some sacrifice of your privacy settings.

Yes, because the ethics and implications of putting kids not simply in front of a screen that pitches products to them, but physically into the place where they can consume those products (if the McDonald’s story is correct and a taster of what will follow), are huge.

Boundaries between platforms and people

Blum-Ross says, “To young people, the boundaries and distinctions that have traditionally been established between genres, platforms and devices mean nothing; ditto the reasoning behind the watershed system with its roots in decisions about suitability of content. “

She’s right. And if those boundaries and distinctions mean nothing to providers, then we must have that honest conversation with urgency. With our contrived consent, walking and running and driving without coercion, we are being packaged up and delivered right to the door of for-profit firms, paying for the game with our privacy. Smart cities are exploiting street sensors to do the same.

Free will is at the very heart of who we are: “The ability to choose between different possible courses of action. It is closely linked to the concepts of responsibility, praise, guilt, sin, and other judgments which apply only to actions that are freely chosen.” Free choice of where we shop, what we buy and who we interact with is open to influence. Influence that is not entirely transparent presents opportunity for hidden manipulation. While the NSPCC might be worried about the risk of rare physical threats, the potential for influencing all children’s behaviour, both positive and negative, reaches everyone.

Some stories of how behaviour is affected are heartbreakingly positive. And I met and chatted with complete strangers who shared the joy of something new and a mutual curiosity about the game. Pokémon GO is clearly a lot of fun. It’s far less clear about much else.

I would like to explicitly understand if Pokémon GO is gift packaging behavioural research by piggybacking on the Google platforms that underpin it, and providing linked data to Google or third parties.

Fishing for frequent Pokémon encourages players to ‘check in’ and keep that behaviour tracking live. 4pm caught a Krabby in the closet at work. 6pm another Krabby. Yup, still at work. 6.32pm Pidgey on the street outside ThatGreenCoffeeShop. Monday to Friday.
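That kind of repeated check-in is exactly what makes a behavioural profile cheap to assemble. A hypothetical sketch (the data and place names are invented; no real app API is involved) of how ‘anonymous’ timestamped locations still yield a routine:

```python
# Hypothetical sketch: inferring a daily routine from timestamped
# check-ins, without ever needing a name. All data are invented.
from collections import Counter

# (hour of day, coarse location) observed for one device over a week
checkins = [
    (9, "office_block"), (16, "office_block"), (18, "office_block"),
    (9, "office_block"), (16, "office_block"),
    (18, "coffee_shop"), (22, "home_street"), (22, "home_street"),
    (9, "office_block"), (22, "home_street"),
]

def usual_place(hour):
    """Most frequent location seen at a given hour, or None."""
    places = Counter(loc for h, loc in checkins if h == hour)
    return places.most_common(1)[0][0] if places else None

# The 'anonymous' trace still says where this person is likely
# to be at 9am and 10pm: a profile, with no name attached.
print(usual_place(9))   # office_block
print(usual_place(22))  # home_street
```

No name is ever needed: the trace alone predicts where the device, and its owner, will be at a given hour, which is precisely the value being traded.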

The Google privacy policies, changed in the last year, require ten clicks to opt out and, in part, the download of an add-on. Google has our contacts, calendar events, web searches and health data, has invested in our genetics, and holds all the ‘things that make you “you”’. They have our history, and are collecting our present. Machine intelligence work on prediction is the future. For now, perhaps that will be pinging you with a ‘buy one get one free’ voucher at 6.20, or LCD adverts shifting as you drive back home.

Pokémon GO doesn’t have to include what data Google collects in its privacy policy. It’s in Google’s privacy policy. And who really read that when it came out months ago, or knows what it means in combination with the new apps and games we connect it with today? Tracking and linking data on geolocation, behavioural patterns, footfall, whose other phones are close by, who we contact, and potentially even our spend from Google Wallet.

Have Google and friends of Niantic gotta know it all?

The illusion that might cheat us: ethical data science vision and practice

This blog post is also available as an audio file on soundcloud.


Anaïs Nin wrote in her 1946 diary of the dangers she saw in the growth of technology: expanding our potential for connectivity through machines, while diminishing our genuine connectedness as people. She could hardly be more contemporary today:

“This is the illusion that might cheat us of being in touch deeply with the one breathing next to us. The dangerous time when mechanical voices, radios, telephone, take the place of human intimacies, and the concept of being in touch with millions brings a greater and greater poverty in intimacy and human vision.”
[Extract from volume IV 1944-1947]

Echoes from over 70 years ago, can be heard in the more recent comments of entrepreneur Elon Musk. Both are concerned with simulation, a lack of connection between the perceived, and reality, and the jeopardy this presents for humanity. But both also have a dream. A dream based on the positive potential society has.

How will we use our potential?

Data is the connection between us as humans and what machines and their masters know about us. The values with which those masters underpin their machine design will determine the effect that the machines, and the knowledge they deliver, have on society.

In seeking ever greater personalisation, a wider dragnet of data is putting together ever more detailed pieces of information about an individual person. At the same time data science is becoming ever more impersonal in how we treat people as individuals. We risk losing sight of how we respect and treat the very people whom the work should benefit.

Nin grasped the risk that wider reach can mean more superficial depth. Facebook might be a model today for how large a circle of friends you might gather, yet how few you trust with confidences, with knowledge about your own personal life, and what a privilege it is when someone chooses to entrust that knowledge to you. Machine data mining increasingly tries to get an understanding of depth, and may also add new layers of meaning through profiling, comparing our characteristics with others’ in risk stratification.
Data science, research using data, is often talked about as if it were something separate from using information from individual people. Yet it is all about exploiting those confidences.

Today, as the reach of what a few people in institutions can gather about most of the public has grown, whether in scientific research or in surveillance of different kinds, we hear experts repeatedly talk of the risk of losing the valuable part: the knowledge, the insights that benefit us as a society if we can act upon them.

We might know more, but do we know any better? To use a well known quote from her contemporary, T S Eliot, ‘Where is the wisdom we have lost in knowledge? Where is the knowledge we have lost in information?’

What can humans achieve? We don’t yet know our own limits. What don’t we yet know?  We have future priorities we aren’t yet aware of.

To be able to explore the best of what Nin saw as ‘human vision’ and Musk sees in technology, the benefits we have from our connectivity, our collaboration and shared learning, need to be driven with an element of humility: accepting values that shape the boundaries of what we should do, while constantly evolving what we could do.

The essence of this applied risk is that technology could harm you, more than it helps you. How do we avoid this and develop instead the best of what human vision makes possible? Can we also exceed our own expectations of today, to advance in moral progress?

Continue reading The illusion that might cheat us: ethical data science vision and practice

OkCupid and Google DeepMind: Happily ever after? Purposes and ethics in datasharing

This blog post is also available as an audio file on soundcloud.


What constitutes the public interest must be set in a universally fair and transparent ethics framework if the benefits of research, whether in social science, health, education and more, are to be realised. That framework will provide a strategy for getting the prerequisite success factors right, ensuring research in the public interest is not only fit for the future, but thrives. There has been a climate change in consent. We need to stop talking about the barriers that prevent datasharing and start talking about the boundaries within which we can share.

What is the purpose for which I provide my personal data?

‘We use math to get you dates’, says OkCupid’s tagline.

That’s the purpose of the site. It’s the reason people log in and create a profile, enter their personal data and post it online for others who are looking for dates to see. The purpose is to get a date.

When over 68,000 OkCupid users registered for the site to find dates, they didn’t sign up to have their identifiable data used and published in ‘a very large dataset’ and onwardly re-used by anyone with unregistered access. The users’ data were extracted “without the express prior consent of the user […].”

Whether the registration consent purposes are compatible with the purposes to which the researcher put the data should be a simple enough question: are the research purposes what the person signed up to, or would they be surprised to find their data were used like this?

Questions the “OkCupid data snatcher”, now self-confessed ‘non-academic’ researcher, thought unimportant to consider.

But it appears in the last month, he has been in good company.

Google DeepMind and the Royal Free, big players who should know how to handle data and consent well, paid too little attention to the very same question of purposes.

The boundaries of how the users of OkCupid had chosen to reveal information and to whom, have not been respected in this project.

Nor were these boundaries respected by the Royal Free London trust that gave out patient data for use by Google DeepMind with changing explanations, without clear purposes or permission.

The legal boundaries in these recent stories appear unclear or to have been ignored. The privacy boundaries deemed irrelevant. Regulatory oversight lacking.

The respectful ethical boundaries of consent to purposes, and of autonomy, have indisputably been disregarded and broken down, whether by a commercial organisation, a public body, or a lone ‘researcher’.

Research purposes

The crux of data access decisions is purposes. What question is the research to address – what is the purpose for which the data will be used? The intent by Kirkegaard was to test:

“the relationship of cognitive ability to religious beliefs and political interest/participation…”

In this case the question appears intended rather as a test of the data than data opened up to answer a question. While methodological studies matter, given the care and attention [or self-stated lack thereof] given to its extraction, and any attempt to be representative and fair, it would appear this is not the point of the study either.

The data doesn’t include profiles identified as heterosexual male, because ‘the scraper was’. It is also unknown how many users hide their profiles, “so the 99.7% figure [identifying as binary male or female] should be cautiously interpreted.”

“Furthermore, due to the way we sampled the data from the site, it is not even representative of the users on the site, because users who answered more questions are overrepresented.” [sic]

The paper goes on to say photos were not gathered because they would have taken up a lot of storage space and could be done in a future scraping, and

“other data were not collected because we forgot to include them in the scraper.”

The data are knowingly of poor quality, inaccurate and incomplete. The project cannot be repeated, as ‘the scraping tool no longer works’. There is no clear ethical or peer review process, and the research purpose is at best unclear. We can give the researcher the benefit of the doubt: it’s not clear what the intent was, and I think it was clearly misplaced and foolish, but not malevolent.

The trouble is, it’s not enough to say, “don’t be evil.” These actions have consequences.

When the researcher asserts in his paper that, “the lack of data sharing probably slows down the progress of science immensely because other researchers would use the data if they could,”  in part he is right.

Google and the Royal Free have tried more eloquently to say the same thing. It’s not research, it’s direct care; in effect, ignore that these people are no longer our patients and we’re using historical data without re-consent. We know what we’re doing, we’re the good guys.

However the principles are the same, whether it’s a lone project or a global giant. And they’re both wildly wrong. More people must take this on board. It’s the reason the public interest needs the Dame Fiona Caldicott review published sooner rather than later.

Just because there is a boundary to data sharing in place, does not mean it is a barrier to be ignored or overcome. Like the registration step to the OkCupid site, consent and the right to opt out of medical research in England and Wales is there for a reason.

We’re desperate to build public trust in UK research right now. So to assert that the lack of data sharing probably slows down the progress of science is misplaced, when it is getting ‘sharing’ wrong that caused the lack of trust in the first place and harms research.

A climate change in consent

There has been a climate change in public attitudes to consent since care.data, clouded by the smoke and mirrors of state surveillance. It cannot be ignored. The EU GDPR supports it. Researchers may not like change, but there needs to be a corresponding adjustment in expectations and practice.

Without change in practice, there will be no change in trust. Public trust is low. As technology advances, if we continue to see commercial companies get this wrong, we will continue to see public trust falter unless broken things get fixed. Change for the better is possible, but it has to come from companies, institutions, and the people within them.

Like climate change, you may deny it if you choose to. But some things are inevitable and unavoidably true.

There is strong support for public interest research but that is not to be taken for granted. Public bodies should defend research from being sunk by commercial misappropriation if they want to future-proof public interest research.

The purposes for which people gave consent are the boundaries within which you have permission to use their data; within those limits, you have freedom to use the data. Purposes and consent are not barriers to be overcome.

If research is to win back public trust, developing a future-proofed, robust ethical framework for data science must be a priority today.

Commercial companies must overcome the low levels of public trust they have generated to date if they ask us to ‘trust us because we’re not evil‘. If you can’t rule out the use of data for other purposes, it’s not helping. If you delay independent oversight, it’s not helping.

This case study, and by contrast the recent Google DeepMind episode, demonstrate the urgency of making a priority of working out common expectations and oversight of applied ethics in research, who gets to decide what is ‘in the public interest’, and data science public engagement, in the UK and beyond.

Boundaries in the best interest of the subject and the user

Society needs research in the public interest. We need good decisions made on what will be funded and what will not be. What will influence public policy and where needs attention for change.

To do this ethically, we all need to agree what is fair use of personal data, when data are closed and when open, what are direct and what are secondary uses, and how advances in technology are used when they present both opportunities for benefit and risks of harm to individuals, to society and to research as a whole.

The potential benefits of research are being compromised for the sake of arrogance, greed, or misjudgement, no matter the intent. Those benefits cannot come at any cost, or disregard public concern, or the price will be trust in all research itself.

In discussing this with social science and medical researchers, I realise not everyone agrees. For some, using de-identified data in trusted third-party settings poses such a low privacy risk that they feel the public should have no say in whether their data are used in research, as long as it’s ‘in the public interest’.

For the DeepMind researchers and the Royal Free, they were confident that, even using identifiable data, this was the “right” thing to do, without consent.

For the Cabinet Office, in the parts of its datasharing consultation that will open up national registries and share identifiable data more widely, including with commercial companies, they are convinced it is all the “right” thing to do, without consent.

How can researchers, society and government understand what is good ethics of data science, as technology permits ever more invasive or covert data mining and the current approach is desperately outdated?

Who decides where those boundaries lie?

“It’s research Jim, but not as we know it.” This is one aspect of data use that ethical reviewers will need to deal with as we advance the debate on data science in the UK, whether for independents or commercial organisations. Google said their work was not research. Is ‘OkCupid’ research?

If this research and data publication proves anything at all, and can offer lessons to learn from, it is perhaps these three things:

Who is accredited as a researcher or ‘prescribed person’ matters, if we are considering new datasharing legislation and, for example, who the UK government is granting access to millions of children’s personal data today. Your idea of a ‘prescribed person’ may not be the same as the rest of the public’s.

Researchers and ethics committees need to adjust to the climate change of public consent. Purposes must be respected in research particularly when sharing sensitive, identifiable data, and there should be no assumptions made that differ from the original purposes when users give consent.

Data ethics and laws are desperately behind data science technology. Governments, institutions, civil society and society at large need to reach a common vision, and leadership on how to manage these challenges. Who defines these boundaries that matter?

How do we move forward towards better use of data?

Our data and technology are taking on a life of their own: in space, which is another frontier, and in time, as data gathered in the past might be used for quite different purposes today.

The public are being left behind in the game-changing decisions made by those who deem they know best about the world we want to live in. We need a say in what shape society wants that to take, particularly for our children as it is their future we are deciding now.

How about an ethical framework for datasharing that supports a transparent public interest, which tries to build a little kinder, less discriminating, more just world, where hope is stronger than fear?

Working with people, with consent, with public support and transparent oversight shouldn’t be too much to ask. Perhaps it is naive, but I believe that with an independent ethical driver behind good decision-making, we could get closer to datasharing like that.

That would bring Better use of data in government.

Purposes and consent are not barriers to be overcome. Within these, shaped by a strong ethical framework, good data sharing practices can tackle some of the real challenges that hinder ‘good use of data’: training, understanding data protection law, communications, accountability and intra-organisational trust. More data sharing alone won’t fix these structural weaknesses in current UK datasharing which are our really tough barriers to good practice.

How our public data will be used in the public interest will not be a destination with a well-defined happy ending; it is a long-term process which needs to be consensual, with a clear path to setting out together and achieving collaborative solutions.

While we are all different, I believe that society shares for the most part, commonalities in what we accept as good, and fair, and what we believe is important. The family sitting next to me have just counted out their money and bought an ice cream to share, and the staff gave them two. The little girl is beaming. It seems that even when things are difficult, there is always hope things can be better. And there is always love.

Even if some might give it a bad name.

********

img credit: flickr/sofi01/ Beauty and The Beast  under creative commons