
The Children of Covid: Where are they now? #CPC22

At Conservative Party Conference (“CPC22”) yesterday, the CSJ Think Tank hosted an event called, The Children of Lockdown: Where are they now?

When the speakers were finished, and other questions had been asked, I had the opportunity to raise the following three points.

They matter to me because I am concerned that bad policy-making for children will come from the misleading narrative based on bad data. The data used in the discussion is bad data for a number of reasons, based on our research over the last 4 years at defenddigitalme, and previously as part of the Counting Children coalition with particular regard to the Schools Bill.

The first is a false fact that has often been bandied about over the last year in the media and in Parliamentary debate, and that the Rt Hon Sir Iain Duncan Smith MP repeated in opening the panel discussion: that 100,000 children have not returned to school, “as a result of all of this”.

Full Fact has sought to correct this misrepresentation by individuals and institutions in the public domain several times, including one year ago today. A Sunday Times article published on 3 October 2021 claimed new figures showed that “between 95,000 and 135,000 children did not return to school in the autumn term”, figures credited to the Commission on Young Lives, a task force headed by the former Children’s Commissioner for England, Anne Longfield. Longfield had then told Full Fact that on 16 September 2021, “the rate of absence was around 1.5 percentage points higher than would normally be expected in the autumn term pre-pandemic.”

Full Fact wrote, “This analysis attempts to highlight an estimated level of ‘unexplained absence’, and comes with a number of caveats—for example it is just one day’s data, and it does not record or estimate persistent absence.”

There was no attempt made in the CPC22 discussion to disaggregate the “expected” absence rate from anything on top of it, and presenting as fact the idea that 100,000 children have not returned to school “as a result of all of this” is misleading.

Suggesting this causation for 100,000 children is wrong for two reasons. The first is that it ignores how many children within that number were out of school before the pandemic, and the reasons for that. The CSJ’s own report published in 2021 said that, “In the autumn term of 2019, i.e pre-Covid 60,244 pupils were labeled as severely absent.”

Whether it is the same children or not who were out of school before and afterwards also matters to apply causation. This named pupil-level absence data is already available for every school child at national level on a termly basis, alongside the other personal details collected termly in the school census, among other collections.

Full Fact went on to say, “The Telegraph reported in April 2021 that more than 20,000 children had “fallen off” school registers when the Autumn 2020 term began. The Association of Directors of Children’s Services projected that, as of October 2020, more than 75,000 children were being educated at home. However, as explained above, this is not the same as being persistently absent.”

The second point I made yesterday was that the definition of persistent absence has changed three times since 2010, so that children are now classified as persistently absent sooner, at 10% of sessions missed, than when the threshold was 20% or more.

(It’s also worth noting that data are inconsistent over time in another way too. The 2019 Guide to Absence Statistics draws attention to the fact that, “Year on year comparisons of local authority data may be affected by schools converting to academies.”)

Third and finally, I pointed out a further problem we have found in counting children correctly. Local Authorities do this in different ways. Some count each actual child once in the year in their data, some count each time a child changes status (i.e. a move from mainstream into Alternative Provision and then to Elective Home Education could see the same child counted three times in total, once in each dataset across the same year), and some count full-time equivalent funded places (i.e. if five children each have one day a week outside mainstream education, they would be counted only as one single full-time child in total in the reported data).
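To make the difference concrete, here is a minimal sketch (in Python, with purely hypothetical figures drawn from the examples above) of how the same children can yield very different totals depending on the counting convention used:

```python
# Hypothetical illustration of the three counting methods described above.

# One child who moves from mainstream, to Alternative Provision (AP), to
# Elective Home Education (EHE) within the same school year:
child_a_statuses = ["mainstream", "alternative_provision", "elective_home_education"]

# Five children who each spend one day a week outside mainstream education:
days_outside_mainstream = [1, 1, 1, 1, 1]

# Method 1: count each actual child once in the year.
actual_children = 1 + len(days_outside_mainstream)     # 6 children

# Method 2: count each change of status, so child A alone appears three times.
status_records_for_child_a = len(child_a_statuses)     # 3 records for 1 child

# Method 3: count full-time equivalent funded places (5 days = 1 FTE), so five
# one-day-a-week children are reported as a single full-time child.
fte_places = sum(days_outside_mainstream) / 5          # 1.0 place for 5 children

print(actual_children, status_records_for_child_a, fte_places)  # 6 3 1.0
```

The particular numbers do not matter; the point is that totals produced under these three conventions cannot meaningfully be added together or compared.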

Put together, this all means not only that the counts are wrong, but the very idea of “ghost children” who simply ‘disappear’ from school without anything known about them anywhere at all, is a fictitious and misleading presentation.

All schools (including academies and independent schools) must notify their local authority when they are about to remove a pupil’s name from the school admission register under any of the fifteen grounds listed in Regulation 8(1) a-n of the Education (Pupil Registration) (England) Regulations 2006. On top of that, children are recorded as Children Missing Education (“CME”) where the Local Authority decides a child is not in receipt of suitable education.

For those children, processing of personal data of children not-in-school by Local Authorities is already required under s.436A of the Education Act 1996, the duty to make arrangements to identify children not receiving education.

Research done as part of the Counting Children coalition with regard to the Schools Bill has found that every Local Authority that has replied to date (with a 67% response rate to FOI on July 5, 2022) upholds its statutory duty to record these children who either leave state education, or who are found to be otherwise missing education. Every Local Authority has a record of these children, by name, together with much more detailed data.** The GB News journalist on the panel said she had taken her children out of school and the Local Authority had not contacted her. But as a home-educating audience member then pointed out, that does not therefore mean the LA did not know about her decision, since it would already have her child’s or children’s details recorded. There is law in place already on what LAs must track. Whether and how the LA is doing its job was beyond this discussion, but the suggestion that more law is needed to make them collect the same data as is already required is superfluous.

This is not only a matter of context and nuance in the numbers and their debate; it substantially alters the understanding of the facts. It matters to get this right, so that bad policy does not get made based on bad data and a misunderstanding of conflated causes.

Despite this, in closing, Iain Duncan Smith asked the attendees to go out from the meeting and evangelise about these issues. If they do so based on his selection of ‘facts’, they will spread misinformation.

At the event, I did not mention two further parts of this context that matter if policy makers and the public are to find solutions to what is no doubt an important series of problems, and that must not be manipulated and presented as if they are entirely a result of the pandemic. And not only the pandemic, but lockdowns specifically.

Historically, the main driver for absence is illness. In 2020/21, this was 2.1% across the full year. This was a reduction on the rates seen before the pandemic (2.5% in 2018/19).

A pupil on-roll is identified as a persistent absentee if they miss 10% or more of their possible sessions (one school day has two sessions, morning and afternoon). 1.1% of pupil enrolments missed 50% or more of their possible sessions in 2020/21. Children with additional educational and health needs or disability have higher rates of absence. During Covid, the absence rate for pupils with an EHC plan was 13.1% across 2020/21.
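As a rough worked example (hypothetical attendance figures, assuming a school year of roughly 190 days), the effect of the changed threshold described earlier looks like this:

```python
# Hypothetical pupil: the current 10% threshold versus the earlier 20% one.
possible_sessions = 190 * 2   # two sessions per school day (morning, afternoon)
missed_sessions = 40          # e.g. 20 full days missed across the year

absence_rate = missed_sessions / possible_sessions          # ~0.105, i.e. 10.5%

persistently_absent_at_10_percent = absence_rate >= 0.10    # True
persistently_absent_at_20_percent = absence_rate >= 0.20    # False

print(round(100 * absence_rate, 1),
      persistently_absent_at_10_percent,
      persistently_absent_at_20_percent)                    # 10.5 True False
```

The same pupil is counted as persistently absent under today’s definition, but would not have been under the earlier 20% one, which is one more reason why comparisons over time need care.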

“Authorised other reasons has risen to 0.9% from 0.3%, reflecting that vulnerable children were prioritised to continue attending school but where parents did not want their child to attend, schools were expected to authorise the absence.” (DfE data, academic year 2020/21)

While there were several references made by the panel to the impact of the pandemic on children’s poor mental health, no one mentioned the 70% cut to youth services’ funding over ten years, which has allowed CAMHS funding and service provision to wither and fail children since well before 2020. The pandemic has exacerbated children’s pre-existing needs, which the government has not only failed to meet since, but has actively reduced provision for.

As someone with Swedish relatives, it was further frustrating to hear Sweden’s pandemic approach presented as comparable with the UK’s, and the suggestion that, in effect, they managed it ‘better’. It seems absurd to me to compare the UK uncritically with a country with the population density of Sweden. But if we *are* going to do comparisons with other countries, they should be done with a fuller understanding of context, all of their data, and the necessary caveats, if comparison is to be meaningful.

I was somewhat surprised that Iain Duncan Smith also failed to acknowledge, even once, that thousands of people in the UK have died and continue to die or have lasting effects as a result of and with COVID-19. According to the King’s Fund report, “Overall, the number of people who have died from Covid-19 to end-July 2022 is 180,000, about 1 in 8 of all deaths in England and Wales during the pandemic.” Furthermore, in England and Wales, “The pandemic has resulted in about 139,000 excess deaths”, and, “Among comparator high-income countries (other than the US), only Spain and Italy had higher rates of excess mortality in the pandemic to mid-2021 than the UK.” I believe that if we’re going to compare ‘lockdown success’ at all, we should look at the wider comparable data before doing so. He might also have chosen to mention alongside this the UK success story of research and discovery, and the NHS vaccination programme.

And there was no mention at all of the further context that, while much was made of the economic harm of the pandemic’s impact on children, “The Children of Lockdown” are also “The Children of Brexit”. It is non-partisan to point out this fact, and, I would suggest, disingenuous to leave it out entirely in any discussion of the reasons for or impact of economic downturn in the UK in the last three years. In fact, the FT recently called it a “deafening silence.”

At defenddigitalme, we raised the problem of this inaccurate “counting” narrative numerous times including with MPs, members of the House of Lords in the Schools Bill debate as part of the Counting Children coalition, and in a letter to The Telegraph in March this year. More detail is here, in a blog from April.


Update May 23, 2023

Today I received the DfE-held figures of the number of children who leave an educational setting for an unknown onward destination. These records sit in a section of the Common Transfer Files holding space, in effect a digital limbo, until the child is ‘claimed’ by a destination setting. It is known as the Lost Pupils Database.

Furthermore, the DfE has published exploratory statistics on EHE and ad hoc statistics on CME too.

October 2022. More background:

The panel was chaired by the Rt Hon Sir Iain Duncan Smith MP and other speakers included Fraser Nelson, Editor of The Spectator Magazine; Kieron Boyle, Chief Executive Officer of Guy’s & St Thomas Foundation; the Rt Hon Robert Halfon MP, Education Select Committee Chair; and Mercy Muroki, Journalist at GB News.

We have previously offered to share our original research data with the Department for Education and to discuss it with them, and we repeated this offer to the panel to help correct the false facts. I hope they will take it up.

** Data collected in the record by Local Authorities when children are deregistered from state education (including to move to private school) may include a wide range of personal details, including as an example in Harrow: Family Name, Forename, Middle name, DOB, Unique Pupil Number (“UPN”), Former UPN, Unique Learner Number, Home Address (multi-field), Chosen surname, Chosen given name, NCY (year group), Gender, Ethnicity, Ethnicity source, Home Language, First Language, EAL (English as an additional language), Religion, Medical flag, Connexions Assent, School name, School start date, School end date, Enrol Status, Ground for Removal, Reason for leaving, Destination school, Exclusion reason, Exclusion start date, Exclusion end date, SEN Stage, SEN Needs, SEN History, Mode of travel, FSM History, Attendance, Student Service Family, Carer details, Carer address details, Carer contract details, Hearing Impairment And Visual Impairment, Education Psychology support, and Looked After status.

When the gold standard no longer exists: data protection and trust

Last week the DCMS announced that consultation on changes to Data Protection laws is coming soon.

  • UK announces intention for new multi-billion pound global data partnerships with the US, Australia and Republic of Korea
  • International privacy expert John Edwards named as preferred new Information Commissioner to oversee shake-up
  • Consultation to be launched shortly to look at ways to increase trade and innovation through data regime.

The Telegraph reported that Mr Dowden argues that, combined, they will enable Britain to set the “gold standard” in data regulation, “but do so in a way that is as light touch as possible”.

It’s an interesting mixture of metaphors. What is a gold standard? What is light touch? These phrases rely on the reader to supply their own meaning, but don’t convey any actual content. Whether or not there will be substantive changes, we will need to wait for the full announcement this month to find out.

Oliver Dowden’s recent briefing to the Telegraph (August 25) was not the first trailer for changes that are yet to be announced. He wrote in the FT in February this year that, “the UK has an opportunity to be at the forefront of global, data-driven growth,” and it looks like he has tried to co-opt the rights framing as his own: …“the beginning of a new era in the UK — one where we start asking ourselves not just whether we have the right to use data, but whether, given its potential for good, we have the right not to.”

There was nothing more on that in this week’s announcement, but the focus was on international trade. The Government says it is prioritising six international agreements with “the US, Australia, Colombia, Singapore, South Korea and Dubai but in the future it also intends to target the world’s fastest growing economies, among them, India, Brazil, Kenya and Indonesia.” (my bold)

Notably absent from the ‘fastest growing’ list is China. What those included in the list have in common is that they are countries not especially renowned for protecting human rights.

Human rights like privacy. The GDPR, and in turn the UK GDPR, recognised that rights matter. In other regimes, data protection is designed not around prioritising the protection of rights but around the harmonisation of data in trade, and that may be where we are headed. If so, it would be out of step with how the digital environment has changed since those older laws were seen as satisfactory (but weren’t), which is the reason why the EU countries moved towards both better harmonisation *and* rights protection.

At the same time, while data protection laws increasingly align towards a highly interoperable and global standard, data sovereignty and protectionism are growing too, where transfers to the US remain unprotected from government surveillance.

Some countries are establishing stricter rules on the cross-border transfer of personal information, in the name of digital sovereignty, security or business growth, such as Hessen’s decision on Microsoft and “bring the data home” moves to German-based data centres.

In the big focus on a post-Brexit data-for-trade fire sale, the DCMS appears to be ignoring these risks of data distribution, despite having a good domestic case study on its doorstep in 2020. The Department for Education has been giving away sensitive pupil data since 2012. Millions of people, including my own children, have no idea where it has gone. The lack of respect for current law makes me wonder how I will trust that our own government, and the others we trade with, will respect our rights and risks in future trade deals.

Dowden complains in the Telegraph about the ICO that, “you don’t know if you have done something wrong until after you’ve done it”. Isn’t that the way that enforcement usually works? Should the 2019-20 ICO audit have turned a blind eye to the Department for Education’s failure to prioritise the rights attached to the named records of over 21 million pupils? Don’t forget that even gambling companies had access to learners’ records, something the Department for Education claimed to be unaware of. To be ignorant of law that applies to you is a choice.

Dowden claims the changes will enable Britain to set the “gold standard” in data regulation. It’s an ironic analogy to use, since the gold standard, while once a measure of global trust between countries, isn’t used by any country today. Our government sold off our physical gold over 20 years ago, after Britain had been at the centre of the global gold market for over 300 years. The gold standard is a meaningless thing of the past that sounds good. A true international gold standard existed for fewer than 50 years (1871 to 1914). Why did we even need it? Because we needed a consistent, trusted measure of monetary value, backed by trust in a commodity. “We have gold because we cannot trust governments,” President Herbert Hoover famously said in 1933 in his statement to Franklin D. Roosevelt. The gold standard was all about trust.

At defenddigitalme we’ve very recently been talking with young people about politicians’ use of language in debating national data policy.  Specifically, data metaphors. They object to being used as the new “oil” to “power 21st century Britain” as Dowden described it.

A sustainable national data strategy must respect human rights to be in step with what young people want. It must not go back to old-fashioned data laws only  shaped by trade and not also by human rights; laws that are not fit for purpose even in the current digital environment. Any national strategy must be forward-thinking. It otherwise wastes time in what should be an urgent debate.

In fact, such a strategy is the wrong end of the telescope from which to look at personal data at all. Government should be focussing on the delivery of quality public services to support people’s interactions with the State, and on managing the administrative data that comes out of digital services as a by-product and externality. Accuracy. Interoperability. Registers. Audit. Rights management infrastructure. Admin data quality is quietly ignored while we package it up hoping no one will notice it’s really. not. good.

Perhaps Dowden is doing nothing innovative at all. If these deals are to be about admin data given away in international trade, he is simply continuing a long tradition of selling off the family silver. The government may have got to the point where there is little left to sell. The question now would be: whose family does it come from?

To use another bad metaphor, Dowden is playing with fire here if the government doesn’t fix the question of future trust. Oil and fire don’t mix well. Increased data transfers—without meaningful safeguards, including minimised data collection to start with—will increase risk, and transfer that risk to you and me.

Risks of a lifetime of identity fraud are not just minor personal externalities in short term trade. They affect nation state security. Digital statecraft. Knowledge of your public services is business intelligence. Loss of trust in data collection creates lasting collective harm to data quality, with additional risk and harm as a result passed on to public health programmes and public interest research.

I’ll wait and see what the details of the plans are when announced. We might find they do little more than package up recommendations on Codes of Practice, Binding Corporate Rules and other guidance that the EDPB has issued in the last 12 months. But whatever it looks like, so far we are yet to see any intention to put in place the necessary infrastructure of rights management that admin data requires. While we need data registers, those we had have been axed, and few new ones under the Digital Economy Act replaced them. Transparency and controls for people to exercise their rights are needed if the government wants our personal data to be part of new deals.

 

img: René Magritte The False Mirror Paris 1929

=========

Join me at the upcoming lunchtime online event, on September 17th from 13:00 to talk about the effect of policy makers’ language in the context of the National Data Strategy: ODI Fridays: Data is not an avocado – why it matters to Gen Z https://theodi.org/event/odi-fridays-data-is-not-an-avocado-why-it-matters-to-gen-z/

Views on a National AI strategy

Today was the APPG AI Evidence Meeting – The National AI Strategy: How should it look? Here are some of my personal views and takeaways.

Have the Regulators the skills and competency to hold organisations to account for what they are doing? asked Roger Taylor, the former Chair of Ofqual, the exams regulator, as he began the panel discussion, chaired by Lord Clement-Jones.

A good question was followed by another.

What are we trying to do with AI? asked Andrew Strait, Associate Director of Research Partnerships at the Ada Lovelace Institute and formerly of DeepMind and Google. The goal of a strategy should not be to have more AI for the sake of having more AI, he said, but an articulation of values and goals. (I’d suggest the government may in fact be in favour of exactly that: more AI for its own sake, wherever its application is seen as a growth market.) Interestingly, he suggested that the Scottish strategy has a more values-based model, built on values such as fairness. [I had, it seems, wrongly assumed that a *national* AI strategy to come would include all of the UK.]

The arguments on fairness are well worn in AI discussion and getting old. And yet these discussions still too often fail to ask whether the tools are accurate or even work at all. Look at the education sector and one company’s product, ClassCharts, which claimed AI as its USP for years, but the ICO found in 2020 that the company didn’t actually use any AI at all. If company claims are not honest, or not accurate, then they’re not fair to anyone, never mind fair across everyone.

Fairness is still too often thought of in terms of explainability of a computer algorithm, not the entire process it operates in. As I wrote back in 2019, “yes we need fairness accountability and transparency. But we need those human qualities to reach across thinking beyond computer code. We need to restore humanity to automated systems and it has to be re-instated across whole processes.”

Strait went on to say that safe and effective AI would be something people can trust. And he asked the important question: who gets to define what a harm is? He rightly identified that the harms identified by a tool’s developer may be very different from those identified by the people affected by it. (No one on the panel attempted to define or limit what AI is, in these discussions.) He suggested that the carbon footprint of AI may counteract the benefit of applying AI in the pursuit of climate-change goals. “The world we want to create with AI” was a very interesting position, and I’d have liked to hear him address what he meant by that, who is “we”, and the assumptions within it.

Lord Clement-Jones asked him about some of the work that Ada Lovelace had done on harms such as facial recognition, and also asked whether some sector technologies are so high risk that they must be regulated. Strait suggested that we lack adequate understanding of what harms are — I’d suggest academia and civil society have done plenty of work on identifying them; it has just too often been ignored until after the harm is done and there are legal challenges. Strait also suggested he thought the Online Harms agenda was ‘a fantastic example’ of both horizontal and vertical regulation. [Hmm, let’s see. Many people would contest that, and we’ll see what the Queen’s Speech brings.]

Maria Axente then went on to talk about children and AI. Her focus was on big platforms, but she also mentioned a range of other application areas. She spoke of the data governance work going on at UNICEF, of the need to drive awareness of the risks of AI for children, and of digital literacy; of the potential for limitations on child development, the exacerbation of the digital divide, and risks in public spaces, but also of hoped-for opportunities. She suggested that the AI strategy may therefore be the place for including children.

This of course was something I would want to discuss at more length, but in summary, the last decade of Westminster policy affecting children, even the Children’s Commissioner’s most recent Big Ask survey, bypasses the question of children’s *rights* completely. If the national AI strategy by contrast were to address rights [the foundation upon which data laws are built], and create the mechanisms in public sector interactions with children that would enable them to be told if and how their data is being used (in AI systems or otherwise), and to exercise the choices that public engagement time and time again says people want, then that would be a *huge* and positive step forward for effective data practice across the public sector and for the use of AI. Otherwise I see a risk that a strategy on AI and children will ignore children as rights holders across the full range of rights in the digital environment, and focus only on the role of AI in child protection, a key DCMS export aim, while ignoring the invasive nature of safety tech tools and their harms.

Next, Dr Jim Weatherall from AstraZeneca tied together leveraging “the UK unique strengths of the NHS” and “data collected there”, wanting a close knitting together of the national AI strategy and the national data strategy, so that the healthcare, life sciences and biomedical sector can become “an international renowned asset.” He would like to see students doing data science modules in their studies, and international access to talent to work for AZ.

Lord Clement-Jones then asked him how to engender public trust in data use. Weatherall said a number of false starts in the past are hindering progress, but that he saw the way forward was data trusts and citizen juries.

His answer ignores the most obvious solution: respect existing law and human rights, using data only in ways that people want and have given their permission for. Then show them that you did that, and nothing more. In short, what medConfidential first proposed in 2014: the creation of data usage reports.
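As a purely illustrative sketch (my own assumption of what such a report might contain, not medConfidential’s actual specification), a single entry in a personal data usage report could be as simple as:

```python
# Hypothetical example of one entry in a personal data usage report:
# who used which of my records, for what purpose, under what lawful basis.
usage_report_entry = {
    "dataset": "example_national_pupil_record",    # hypothetical dataset name
    "recipient": "example_accredited_researcher",  # hypothetical recipient
    "purpose": "approved public interest research",
    "lawful_basis": "public task",
    "date_of_access": "2021-04-01",
    "opt_out_respected": True,
}

for field, value in usage_report_entry.items():
    print(f"{field}: {value}")
```

Showing people exactly this kind of record, and nothing more than they agreed to, is the “show them that you did that” step described above.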

The infrastructure for managing personal data controls in the public sector, as well as among its private partners, must be the basic building block for any national AI strategy. Views from public engagement work, polls, and outreach have not changed significantly since those done in 2013-14, but ask for the same things over and over again: respect for ‘red lines’, and to have control and choice. Won’t government please make it happen?

If the government fails to put in place those foundations, whatever strategy it builds will fall in the same ways they have done to date, as care.data did by assuming it was acceptable to use data in the way that the government wanted, without a social licence, in the name of “innovation”. Those aims were championed by companies such as Dr Foster, which profited from reusing personal data from the public sector in a “hole and corner deal”, as the chairman of the House of Commons committee of public accounts described it in 2006. Such deals put industry and “innovation” ahead of what the public want in terms of ‘red lines’ for acceptable re-uses of their own personal data, and for data re-used in the public interest versus for commercial profit. And, “The Department of Health failed in its duty to be open to parliament and the taxpayer.” That openness and accountability are still missing nearly ten years on, in the scope creep of national datasets and commercial reuse, and in expanding data policies and research programmes.

I disagree with the suggestion made that Data Trusts will somehow be more empowering for everyone than the mechanisms we have today for data management. I believe Data Trusts will further stratify those who are included and those excluded, benefit those who have the capacity to participate, and disadvantage those who cannot choose. They are also a fig leaf of acceptability that does not solve the core challenge. Citizen juries cannot do more than give a straw poll. Every person whose data is used has an entitlement to rights in law, and the views of a jury or Trust cannot speak for everyone or override those rights protected in law.

Tabitha Goldstaub spoke next and outlined some of what the AI Council Roadmap had published. She suggested looking at removing barriers, to best support the AI start-up community.

As I wrote when the Roadmap report was published, there are basics missing in the government’s own practice that could be solved. The Roadmap had an ambition to “Lead the development of data governance options and its uses. The UK should lead in developing appropriate standards to frame the future governance of data,” but it largely ignored the governance infrastructures that already exist. One can only read into that a desire to change and redesign what those standards are.

I believe that there should be no need to change the governance of data; instead, make today’s rights exercisable and deliver the enforcement to make existing governance actionable. Any genuine “barriers” to data use in data protection law are designed as protections for people; the people the public sector, its staff and these arm’s length bodies are supposed to serve.

Blaming AI and algorithms, blaming lack of clarity in the law, blaming “barriers” is often avoidance of one thing. Human accountability. Accountability for ignorance of the law or lack of consistent application. Accountability for bad policy, bad data and bad applications of tools is a human responsibility. Systems you choose to apply to human lives affect people, sometimes forever and in the most harmful ways, so those human decisions must be accountable.

I believe that some simple changes in practice when it comes to public administrative data could bring huge steps forward there:

  1. An audit of existing public admin data held by national and local government, and consistently published registers of the databases and algorithms / AI / ML currently in use (a minimal sketch of what one register entry might contain follows this list).
  2. Identify the lawful basis for each set of data processes, their earliest record dates and content.
  3. Publish the resulting ROPA (record of processing activities) and storage limitations.
  4. Assign accountable owners to databases, tools and the registers.
  5. Sort out how you will communicate with people whose data you process unlawfully, in order to meet the law, or stop processing it.
  6. And above all, publish a timeline for data quality processes, and show that you understand how the degradation of data accuracy and quality affects the rights and responsibilities in law that change over time as a result.
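As a minimal sketch of what a single published register entry covering points 1 to 4 might contain (the field names here are illustrative assumptions, not an existing government schema):

```python
# Hypothetical register entry for one public-sector database or algorithmic tool,
# recording an accountable owner, lawful basis, earliest records and retention.
register_entry = {
    "system_name": "example_admin_database",      # hypothetical system name
    "controller": "Example Local Authority",
    "accountable_owner": "named post-holder responsible for this entry",
    "uses_algorithm_or_ml": False,
    "lawful_basis": "legal obligation",           # identified per processing purpose
    "earliest_record_date": "2012-09-01",
    "storage_limitation": "delete six years after last update",
    "last_data_quality_review": "2021-06-30",
}

print(register_entry["system_name"], "-", register_entry["accountable_owner"])
```

Publishing entries like this consistently, and keeping them accurate, is the unglamorous infrastructure work that points 5 and 6 then depend on.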

Goldstaub went on to say, on ethics and inclusion, that if it’s not diverse, it’s not ethical. Perhaps the next panel and similar events could take a lesson from that, as such APPG panel events are not as diverse as they could or should be themselves. Some of the biggest harms in the use of AI are, after all, borne by those in the communities least represented, and panels like this tend to ignore lived reality.

The Rt Rev Croft then wrapped up the introductory talks on that more human note, and by exploding some myths. He importantly talked about the consequences he expects of the increasing use of AI, its deployment in ‘the future of work’ for example, and its effects on our humanity. He proposed five topics for inclusion in the strategy and suggested it is essential to engage a wide cross-section of society. And most importantly, to ask: what is this doing to us as people?

There were then some of the usual audience questions asked on AI, transparency, garbage-in garbage-out, challenges of high risk assessment, and agreements or opposition to the EU AI regulation.

What frustrates me most in these discussions is that the technology is an assumed given, and the bias that gives to the discussion is itself ignored. A holistic national AI strategy should be asking if and why AI at all. What are the consequences of this focus on AI, and what policy-making oxygen and capacity does it take away from other areas of what government could or should be doing? The questioner who asks how adaptive learning could use AI for better learning in education fails to ask what good learning looks like, and if and how adaptive tools, analogue or digital, fit into that at all.

I would have liked to ask the panelists whether they agree that proposals for public engagement and digital literacy distract from the lack of human accountability for bad policy decisions that use machine-made support. Taking examples from 2020 alone, there were three applications of algorithms and data in the public sector challenged by civil society because of their harms: the Home Office dropping its racist visa algorithm, the court case finding the DWP’s Universal Credit decisions ‘irrational and unlawful’, and the “mutant algorithm” of the summer 2020 exams. Digital literacy does nothing to help people in those situations. What AI has done is to increase the speed and scale of the harms caused by harmful policy, such as the ‘Hostile Environment’, which is harmful by design.

Any Roadmap, AI Council recommendations, and any national strategy, if serious about what good looks like, must answer how those harms would be prevented in the public sector *before* such systems are applied. It’s not about the tech, AI or not, but about misuse of power. If the strategy or a Roadmap or ethics code fails to state how it would prevent such harms, then it isn’t serious about ethics in AI, but is ethics-washing its aims under the guise of saying the right thing.

One unspoken problem right now is the focus on the strategy solely for the delivery of a pre-determined tool (AI). Who cares what the tool is? Public sector data comes from the relationship between people and the provision of public services by government at various levels, and its AI strategy seems to have lost sight of that.

What good would look like in five years would be an end to siloed discussion of AI as if it were a desirable silver bullet promising mythical numbers of ‘economic growth’; instead, AI would be treated like any other tech, and its role in end-to-end processes or service delivery would be discussed proportionately. Panelists would stop suggesting that the GDPR is hard to understand or that people cannot apply it. Almost all of the same principles in UK data laws have applied for over twenty years. And regardless of the GDPR, Convention 108 applies to the UK post-Brexit unchanged, including the associated Council of Europe Guidelines on AI, data protection, privacy and profiling.

Data laws. AI regulation. Profiling. Codes of Practice on children, online safety, or biometrics and emotional or gait recognition. There *are* gaps in data protection law when it comes to biometric data not used for unique identification purposes. But much of this is already rolled into other law and regulation for the purposes of upholding human rights and the rule of law. The challenge in the UK is often not the absence of law, but the lack of enforcement. There are concerns in civil society that the DCMS is seeking to weaken core ICO duties even further. Recent government, council and think tank roadmaps talk of the UK leading on new data governance, but in reality simply want to see established laws rewritten to be less favourable to rights. To be less favourable towards people.

Data laws are *human* rights-based laws. We will never get a workable UK national data strategy or national AI strategy if government continues to ignore the very fabric of what they are to be built on. Policy failures will be repeated over and over until a strategy supports people to exercise their rights and have them respected.

Imagine if the next APPG on AI asked what human rights-respecting practice and policy would look like, and what infrastructure the government would need to fund or build to make it happen. In public-private sector areas (like edTech). Or in the justice system, health, welfare, children’s social care. What could that Roadmap look like, and how could we make it happen, over what timeframe? Strategies like that could win public trust *and* get the sectoral wins the government and industry are looking for. Then we might actually move forwards on getting a functional strategy that would work for delivering public services, and on where both AI and data fit into that.

Shifting power and sovereignty. Please don’t spaff our data laws up the wall.

Duncan Green’s book, How Change Happens, reflects on how power and systems shape change, and its key theme is most timely following the General Election.

Critical junctures shake the status quo and throw all the power structures in the air.

The Sunday Times ran several post-election stories this weekend. Their common thread is about repositioning power; realigning the relationships across Whitehall departments, and with the EU.

It appears that meeting the political want, to be seen by the public to re-establish sovereignty for Britain, is going to come at a price.

The Sunday Times article suggests our privacy and data rights are likely to be high up on the list, in any post-Brexit fire sale:

“if they think we are going to be signing up to stick to their data laws and their procurement rules, that’s not going to happen”.

Whether it was simply a politically calculated statement or not, our data rights are clearly on the table in current wheeling and dealing.

Since there is nothing in EU data protection law that is a barrier to trade doing what is safe, fair and transparent with personal data, it may simply be politically opportunistic to be seen to be doing something that was readily associated with the EU. “Let’s take back control of our cookies”, no less.

But the reality is that, either way, the UK_GDPR is already weaker for UK residents than what is now being labelled here as the EU_GDPR.

If anything, GDPR is already too lenient to organisations and does little, especially for children, to shift the power balance required to build the data infrastructures we need to use data well. The social contract for research and other uses, appropriate to ever-expanding technological capacity, is still absent in UK practice.

But instead of strengthening it, what lies ahead is expected divergence between the UK_GDPR and the EU_GDPR in future, via the powers in the European Union (Withdrawal) Act 2018.

A post-Brexit majority government might pass all the law it likes to remove the ability to exercise our human rights or data rights under UK data protection law. Henry VIII powers adopted in the last year allow space for top-down authoritarian rule-making across many sectors. The UK government stood alone among other countries when it created its own exemption for immigration purposes in the UK Data Protection Act in 2018, which removed from all of us the ability to exercise rights under the GDPR. It might choose to further reduce our freedom of speech, and our access to the courts.

But would the harmful economic side effects be worth it?

If Britain is to become a ‘buzz of tech firms in the regions’, and since much of tech today relies on personal data processing, then a ‘break things and move fast’ approach (yes, that way round) won’t protect SMEs from reputational risk, or from losing public trust. Divergence may in fact break many businesses. Self-imposed UK double standards will cause confusion and chaos, increasing the workload for many.

Weakened UK data laws for citizens will limit and weaken UK business, both in terms of its positioning to trade with others and its ability to manage trusted customer relations. Weakened UK data laws will weaken the position of UK research.

Having an accountable data protection officer can be seen as a challenge. But how much worse might challenges in court be, when you cock up the handling of millions of patients’ pharmaceutical records [1], or school children’s biometric data? To say nothing of the potential implications for national security [2], or for politicians, when lists of millions of people could be open to blackmail or abuse for a generation.

The level playing field that every company can participate in is improved, not harmed, by good data protection law. Small businesses that moan about it might simply never have been good at doing data well. Few changes of substance have been made to Britain’s data protection laws over the last twenty years.

Data laws are neither made-up, bonkers banana-shaped standards,  nor a meaningful symbol of sovereignty.

GDPR is also far from the only law the UK must follow when it comes to data.  Privacy and other rights may be infringed unlawfully, even where data protection law is no barrier to processing. And that’s aside from ethical questions too.

There isn’t so much a reality of “their data laws”, but rather *our* data laws, good for our own protection, for firms, *and* the public good.

Policy makers who might want such changes to weaken rights, may not care, looking out for fast headlines, not slow-to-realise harms.

But if they want a legacy of having built a better infrastructure that positions the UK for tech firms, for UK research, for citizens and for the long game, then they must not spaff our data laws up the wall.


Duncan Green’s book, How Change Happens is available via Open Access.


Updated December 26, 2019 to add links to later news:

[1]   20/12/2019 The Information Commissioner’s Office (ICO) has fined a London-based pharmacy £275,000 for failing to ensure the security of special category data. https://ico.org.uk/action-weve-taken/enforcement/doorstep-dispensaree-ltd-mpn/

[2] 23/12/2019 Pentagon warns military members DNA kits pose ‘personal and operational risks’ https://www.yahoo.com/news/pentagon-warns-military-members-dna-kits-pose-personal-and-operational-risks-173304318.html

Google Family Link for Under 13s: children’s privacy friend or faux?

“With the Family Link app from Google, you can stay in the loop as your kid explores on their Android* device. Family Link lets you create a Google Account for your kid that’s like your account, while also helping you set certain digital ground rules that work for your family — like managing the apps your kid can use, keeping an eye on screen time, and setting a bedtime on your kid’s device.”


John Carr shared his blog post about Google Family Link today, which was the first I had read about the new US account in beta. In his post, with an eye on GDPR, he asks: what is the right thing to do?

What is the Family Link app?

Family Link requires a US-based Google account to sign up, so outside the US we can’t read the full details. However, from what is published online, it appears to offer the following three key features:

“Approve or block the apps your kid wants to download from the Google Play Store.

Keep an eye on screen time. See how much time your kid spends on their favorite apps with weekly or monthly activity reports, and set daily screen time limits for their device. “

and

“Set device bedtime: Remotely lock your kid’s device when it’s time to play, study, or sleep.”

From the privacy and disclosure information, it appears that there is not a lot of difference between a regular (over 13s) Google account and this one for under 13s. To collect data from under 13s it must be compliant with COPPA legislation.

If you google “what is COPPA” the first result says, “The Children’s Online Privacy Protection Act (COPPA) is a law created to protect the privacy of children under 13.”

But does this Google Family Link do that? What safeguards and controls are in place for use of this app and children’s privacy?

What data does it capture?

“In order to create a Google Account for your child, you must review the Disclosure (including the Privacy Notice) and the Google Privacy Policy, and give consent by authorizing a $0.30 charge on your credit card.”

Google captures the parent’s verified real-life credit card data.

Google captures child’s name, date of birth and email.

Google captures voice.

Google captures location.

Google may associate your child’s phone number with their account.

And lots more:

Google automatically collects and stores certain information about the services a child uses and how a child uses them, including when they save a picture in Google Photos, enter a query in Google Search, create a document in Google Drive, talk to the Google Assistant, or watch a video in YouTube Kids.

What does it offer over regular “13+ Google”?

In terms of general safeguarding, it appears that SafeSearch is not on by default, but must be set and enforced by a parent.

Parents should “review and adjust your child’s Google Play settings based on what you think is right for them.”

Google rightly points out however that, “filters like SafeSearch are not perfect, so explicit, graphic, or other content you may not want your child to see makes it through sometimes.”

Ron Amadeo at Ars Technica wrote a review of the Family Link app back in February, and came to similar conclusions about its added safeguarding value:

“Other than not showing “personalized” ads to kids, data collection and storage seems to work just like in a regular Google account. On the “Disclosure for Parents” page, Google notes that “your child’s Google Account will be like your own” and “Most of these products and services have not been designed or tailored for children.” Google won’t do any special content blocking on a kid’s device, so they can still get into plenty of trouble even with a monitored Google account.”

Your child will be able to share information, including photos, videos, audio, and location, publicly and with others, when signed in with their Google Account. And Google wants to see those photos.

There are some things that parents cannot block at all.

Installs of app updates can’t be controlled, which leaves a questionable grey area. Many apps are built on a classic bait and switch – start with a free version, and then the upgrade contains paid features. This is therefore something to watch for.

“Regardless of the approval settings you choose for your child’s purchases and downloads, you won’t be asked to provide approval in some instances, such as if your child: re-downloads an app or other content; installs an update to an app (even an update that adds content or asks for additional data or permissions); or downloads shared content from your Google Play Family Library. “

The child “will have the ability to change their activity controls, delete their past activity in “My Activity,” and grant app permissions (including things like device location, microphone, or contacts) to third parties”.

What’s in it for children?

You could argue that this gives children “their own accounts” and autonomy. But why do they need one at all? If I give my child a device on which they can download an app, then I approve it first.

If I am not aware of my under 13 year old child’s Internet time physically, then I’m probably not a parent who’s going to care to monitor it much by remote app either. Is there enough insecurity around ‘what children under 13 really do online’, versus what I see or they tell me as a parent, that warrants 24/7 built-in surveillance software?

I can use safe settings without this app. I can use a device time limiting app without creating a Google account for my child.

If parents want to give children an email address, yes, this allows them to have a device-linked Gmail account whose content you as a parent cannot access. But wait a minute, what’s this? Google can?

Google can read their emails and provide them with “personalised product features”. More detail is probably needed, but this seems clear:

“Our automated systems analyze your child’s content (including emails) to provide your child personally relevant product features, such as customized search results and spam and malware detection.”

And what happens when the under 13s turn 13? It’s questionable whether it is right for Google et al. to then be able to draw on a pool of ready-made customers’ data in waiting. Free from COPPA ad regulation. Free from COPPA privacy regulation.

Google knows when the child reaches 13 (the set-up requires a child’s date of birth, their first and last name, and email address). And they will inform the child directly when they become eligible to sign up to a regular account, free of parental oversight.

What a birthday gift. But is it packaged for the child or Google?

What’s in it for Google?

The parental disclosure begins,

“At Google, your trust is a priority for us.”

If it truly is, I’d suggest they revise their privacy policy entirely.

Google’s disclosure policy also makes parents read a lot before they fully understand the permissions this app gives to Google.

I do not believe Family Link gives parents adequate control of their children’s privacy at all nor does it protect children from predatory practices.

While “Google will not serve personalized ads to your child“, your child “will still see ads while using Google’s services.”

Google also tailors the Family Link apps that the child sees (and begs you to buy), based on their data:

“(including combining personal information from one service with information, including personal information, from other Google services) to offer them tailored content, such as more relevant app recommendations or search results.”

Contextual advertising using “persistent identifiers” is permitted under COPPA, and is surely a fundamental flaw. It’s certainly one I wouldn’t want to see duplicated under GDPR. Serving up ads that are relevant to the content the child is using, doesn’t protect them from predatory ads at all.

Google captures geolocators and knows where a child is and builds up their behavioural and location patterns. Google, like other online companies, captures and uses what I’ve labelled ‘your synthesised self’; the mix of online and offline identity and behavioural data about a user. In this case, the who and where and what they are doing, are the synthesised selves of under 13 year old children.

These data are made more valuable by the connection to an adult with spending power.

The Google Privacy Policy’s description of how Google services generally use information applies to your child’s Google Account.

Google gains permission via the parent’s acceptance of the privacy policy, to pass personal data around to third parties and affiliates. An affiliate is an entity that belongs to the Google group of companies. Today, that’s a lot of companies.

Google’s ad network consists of Google services, like Search, YouTube and Gmail, as well as 2+ million non-Google websites and apps that partner with Google to show ads.

I also wonder if it will undo some of the previous pro-privacy features on any linked child’s YouTube account if Google links any logged in accounts across the Family Link and YouTube platforms.

Is this pseudo-safe use a good thing?

In practical terms, I’d suggest this app is likely to lull parents into a false sense of security. Privacy safeguarding is not the default set up.

It’s questionable that Google should adopt some sort of parenting role through an app. Parental remote control via an app isn’t an appropriate way to regulate whether my under-13-year-old is using their device rather than sleeping.

It’s also got to raise questions about children’s autonomy at say, 12. Should I as a parent know exactly every website and app that my child visits? What does that do for parental-child trust and relations?

As for my own children I see no benefit compared with letting them have supervised access as I do already.  That is without compromising my debit card details, or under a false sense of safeguarding. Their online time is based on age appropriate education and trust, and yes I have to manage their viewing time.

That said, if there are people who think parents cannot do that, is the app a step forward? I’m not convinced. It’s definitely of benefit to Google. But for families it feels more like a sop to adults who feel a duty towards safeguarding children, but aren’t sure how to do it.

Is this the best that Google can do by children?

In summary it seems to me that the Family Link app is a free gift from Google. (Well, free after the thirty cents to prove you’re a card-carrying adult.)

It gives parents three key tools: App approval (accept, pay, or block), Screen-time surveillance,  and a remote Switch Off of child’s access.

In return, Google gets access to a valuable data set – a parent-child relationship with credit data attached – and can increase its potential targeted app sales. Yet Google can’t guarantee additional safeguarding, privacy, or benefits for the child while using it.

I think for families and child rights, it’s a false friend. None of these tools per se require a Google account. There are alternatives.

Children’s use of the Internet should not mean they are used and their personal data passed around or traded in hidden back room bidding by the Internet companies, with no hope of control.

There are other technical solutions to age verification and privacy too.

I’d ask, what else has Google considered and discarded?

Is this the best that a cutting edge technology giant can muster?

This isn’t designed to respect children’s rights as intended under COPPA or ready for GDPR, and it’s a shame they’re not trying.

If I were designing Family Link for children, it would collect no real identifiers. No voice. No locators. It would not permit others access to voice or images, or require them to be linked. It would keep children’s privacy intact, and enable them, when older, to decide what they disclose. It would not target personalised apps/products at children at all.

GDPR requires active, informed parental consent for children’s online services. Consent must be revocable, the personal data collected must be the minimum necessary, and it must be portable. Privacy policies must be clear to children. This, in terms of GDPR readiness, is nowhere near ‘it’.

Family Link needs to re-do their homework. And this isn’t a case of ‘please revise’.

Google is a multi-billion dollar company. If they want parental trust, and want to be GDPR and COPPA compliant, they should do the right thing.

When it comes to child rights, companies must do or do not. There is no try.


image source: ArsTechnica