It’s a privilege to have a letter published in the FT as I do today, and thanks to the editors for all their work in doing so.
I’m a little sorry that it lost the punchline, which was supposed to bring a touch of AI humour about pirates and their stochastic parrots, and that its rather key point was cut:
“Nothing in current European laws, including Convention 108 for the UK, prevents companies developing AI lawfully.”
So for the record, and since it’s behind a paywall (£), my agreed edited version was:
“The multi-signatory open letter advertisement, paid for by Meta, entitled “Europe needs regulatory certainty on AI” (September 19) was fittingly published on International Talk Like a Pirate Day.
It seems the signatories believe they cannot do business in Europe without “pillaging” more of our data and are calling for new law.
Since many companies lobbied against the General Data Protection Regulation or for the EU AI Act to be weaker, or that the Council of Europe’s AI regulation should not apply to them, perhaps what they really want is approval to turn our data into their products without our permission.
“Nothing in current European laws, including Convention 108 for the UK, prevents companies developing AI lawfully. If companies want more consistent enforcement action, I suggest Data Protection Authorities comply and act urgently to protect us from any pirates out there, and their greedy stochastic parrots.”
Prior to print, they asked to cut a middle paragraph too.
“In the same week, LinkedIn sneakily switched on a ‘use me for AI development’ feature for UK users without telling us (paused the next day); Larry Ellison suggested at Oracle’s Financial Analyst Meeting that more AI should usher in an era of mass citizen surveillance, and our Department for Education has announced it will allow third parties to exploit school children’s assessment data for AI product building, and can’t rule out it will include personal data.”
It is in fact the cumulative effect of the recent flurry of AI activity by various parties, state and commercial, that deserves greater attention, rather than this Meta-led complaint alone. Who is grabbing what data and which infrastructure contracts, creating what state dependencies and strengths, and to what end game? While some present the “AI race” as China or India versus the EU or the US to become AI “superpowers”, is what “Silicon Valley” offers, that their way is the only way, a better offer?
It’s not in fact “Big Tech” I’m concerned about, but the arrogance of so many companies that, in the middle of regulatory scrutiny, would align themselves with one that would rather put out PR omitting the fact that it is under such scrutiny, calling only for the law to be changed, and frankly misleading the public by suggesting it is all for our own good rather than talking about how this serves their own interests.
Who do they think they are to dictate what new laws must look like when they seem simply unwilling to stick to those we have?
The Meta-led ad called for “harmonisation enshrined in regulatory frameworks like the GDPR” and I absolutely agree. The DPAs need to stand tall and stand up to OpenAI and friends (ever dwindling in number so it seems) and reassert the basic, fundamental principles of data protection laws from the GDPR to Convention 108 to protect fundamental human rights. Our laws should do so whether companies like them or not. After all, it is often abuse of data rights by companies, and states, that populations need protection from.
The Netherlands DPA is right to say scraping is almost always unlawful. A legitimate interest cannot simply be plucked from thin air by anyone who is neither an existing data controller nor processor, who has no prior relationship with the data subjects, where those subjects have no reasonable expectation of the re-use of data they posted online for entirely different purposes, and where there is no informed processing and no offer of an opt-out. Instead, the only possible lawful basis for this kind of brand-new controller should be consent. Having to break the law hardly screams ‘innovation’.
Regulators do not exist to pander to wheedling, but to independently uphold the law in a democratic society in order to protect people, not prioritise the creation of products:
Lawfulness, fairness and transparency.
Purpose limitation.
Data minimisation.
Accuracy.
Storage limitation.
Integrity and confidentiality (security);
and
Accountability.
In my view, it is the lack of dissuasive enforcement as part of checks and balances on big power like this, regardless of where it resides, that poses one of the biggest data-related threats to humanity.
Not AI, nor being “left out” of being used to build it for their profit.
Today Keir Starmer talked about us having more control in our lives. He said, “markets don’t give you control – that is almost literally their point.”
This week we’ve seen it embodied in a speech given by Oracle co-founder Larry Ellison at the company’s Financial Analyst Meeting 2024. He said that AI is on the verge of ushering in a new era of mass behavioural surveillance, of police and citizens alike. Oracle, he suggested, would be the technological backbone for such applications, keeping everyone “on their best behaviour” through constant real-time machine-learning-powered monitoring (LE FAQs 1:09:00).
Ellison’s sense of unquestionable entitlement in deciding that *his* should be the company to control how all citizens (and police) behave-by-design, and his omission of any consideration of a democratic mandate for that, should shock us. Not least because he’s wrong in some of his claims. (There is no evidence that a digital dystopia makes this difference to school safety, in particular given the numbers of people involved who are already known to the school.)
How can society trust this direction of travel, in which our behaviour is shaped and corporations impose their choices? How can a government promise society more control over our lives, yet enable a digital environment, one which plays a large part in everyday life, over which we seem to have ever less control?
The new government sounds keen on public infrastructure investment as a route to structural transformation. But the risk is that cost constraints mean they seek the results expected from the same plays in an industrial development strategy of old, only now using new technology and tools. It’s a big mistake. Huge. And nothing less than national democracy is at stake, because individuals cannot meaningfully hold corporations to account. The economic and political context in which an industrial strategy is defined now sits behind paywalls, without parliamentary consensus, oversight or societal legitimacy in a formal democratic environment. The constraints on businesses’ power were once more tangible and localised, and their effects easier to see in one place. Power has moved considerably from government to corporations in the time Labour was out of it. We are now more dependent on being users of multiple private techno-solutions to everyday things, often ones we hate using, from paying for the car park, to laundry, to job hunting. All with indirect effects on national security and on service provision at scale, as well as direct everyday effects for citizens and, increasingly, our disempowerment and lack of agency in our own lives.
Data protection law is supposed to offer people protection from such misuse, but without enforcement, ever more companies copy each other, rinse and repeat.
Convention 108 requires respect for rights and fundamental freedoms, in particular the right to privacy; special categories of data may not be processed automatically unless domestic law provides appropriate safeguards, and data must be obtained fairly. Many emerging [generative] AI companies have disregarded these fundamentals of European DP laws: purpose limitation breached by incompatible uses, no relationship between the subject and the company, lack of accuracy, and no active offer of even a right to object when in fact consent should be the basis. That means unfair and unlawful processing. If we are to accept that anyone at all can take any personal data online and use it for an entirely different purpose, to turn into commercial products, ignoring all this, then frankly the Data Protection Authorities may as well close. Are these commercial interests simply so large that they believe they can get away with steamrollering over democratic voice and human rights, as well as the rule of (data protection) law? Unless there is consistent ‘cease and desist’ type enforcement, it seems to be rapidly becoming the new normal.
If instead regulators were to bring meaningful enforcement that is dissuasive, as the law is supposed to be, what would change? What will shift practice on facial recognition in the world as foreseen by Larry Ellison, and shift public policy towards sourcing responsibly? How is democracy to be saved from technocratic authoritarianism going global? If the majority of people living in democracies are never asked for their views, and have changes imposed on their lives that they do not want, how do we raise a right to object and take control?
While the influence on our political systems of institutions such as Oracle grows, and their financial interests lie in ever more, and ever larger, AI models, the interests of our affected communities are not represented in state decisions at national or global levels.
Ted Chiang asked in the New Yorker in 2023, whether an alternative is possible to the current direction of travel. “Some might say that it’s not the job of A.I. to oppose capitalism. That may be true, but it’s not the job of A.I. to strengthen capitalism, either. Yet that is what it currently does.” The greatest current risk of AI is not what we imagine from I, Robot he suggested, but “A.I.-supercharged corporations destroying the environment and the working class in their pursuit of shareholder value.”
I remember giving a debate talk as a teen thirty years ago, about the risk of rising sea levels in Vanuatu. It is a reality that causes harm: satellite data indicates the sea level there has risen by about 6 mm per year since 1993. Someone told me in an interesting recent Twitter exchange that, when it comes to climate impacts and AI, they are “not a proponent of ‘reducing consumption'”. The reality of climate change in the UK today only hints at the long-term consequences for the world of migration and extreme weather events. Only by restraining consumption might we change those knock-on effects of climate change in anything like the necessary timeframe; otherwise the next thirty years will worsen more quickly.
But instead of hearing meaningful assessment of what we need in public policy, we hear politicians talk about growth as what they want and that bigger can only be better. More AI. More data centres. What about more humanity in this machine-led environment?
While some argue for an alternative, responsible, rights-respecting path as the only sustainable way forwards, like Meeri Haataja, chief executive and founder of Saidot, Helsinki, Finland, in her August letter to the FT, some of the largest companies appear to be suggesting this week, publishing ads in various European press, that because they seem to struggle to follow the law (like Meta), Europe needs a new one. And to paraphrase, they suggest it’s for our own good.
And that’s whether we like it, or not. You might not choose to use any Meta products but might still be used by them, your activity online creating advertising market data that can be sold. Researchers at the University of Oxford analysed a million smartphone apps and found that “the average one contains third‑party code from 10 different companies that facilitates this kind of tracking. Nine out of 10 of them sent data to Google. Four out of 10 of them sent data to Facebook. In the case of Facebook, many of them sent data automatically without the individual having the opportunity to say no to it.” Re-use for training AI is far more explicit, and wrong, and “it is for Meta [or any other company] to ensure and demonstrate ongoing compliance.” We have rights, and the rule of law, and at stake are our democratic processes.
“The balance of the interests of working people” must include respect for their fundamental rights and freedoms in the digital environment as well as supporting the interests of economic growth; the two are mutually achievable, not mutually exclusive.
But in the UK we put the regulator on a leash in 2015, constrained by a duty towards “economic growth”. It’s a constraint that should not apply to industry and market regulators, like the ICO.
While some companies plead to be let off for their bad behaviour, others expect to profit from encouraging the state to increase the monitoring of ours, or ask that the law be written ever in their favour. Regulators need to stand tall and stand up to it, and the government needs to remove their leash.
We need not accept this techno-authoritarianism as a “new normal” and inevitable. If indeed as Starmer concluded, “Britain belongs to you“, then it needs MPs to act like it and defend fundamental rights and freedoms to uphold values like the rule of law, even with companies who believe it does not apply to them. With a growing swell of nationalism and plenty who may not believe Britain belongs to all of us, but that, ‘Tomorrow belongs to Me‘, it is indeed a time when “great forces demand a decisive government prepared to face the future.”
The video referenced above is of Dr. Sasha Luccioni, the research scientist and climate lead at HuggingFace, an open-source community and machine-learning platform for AI developers, and is part 1 of the TED Radio Hour episode, Our tech has a climate problem.
Are we asking the right questions today about AI and education? In 2016, in a post for Nesta, Sam Smith foresaw the algorithmic fiasco that would happen in the summer of 2020, pointing out that exam-marking algorithms, like any other decisions, have unevenly distributed consequences. What prevents that happening daily, but behind closed doors and in closed systems? The answer is: nothing.
Both the adoption of AI in education and education about AI are unevenly distributed. Driven largely by commercial interests, some companies are co-opting teaching unions for access to the sector; others, more cautious, have focused on the challenges of bias, discrimination and plagiarism. As I recently wrote in Schools Week, the influence of corporate donors and their interests in shaping public sector procurement, such as the Tony Blair Institute’s backing by Oracle owner Larry Ellison, therefore demands scrutiny.
Should society allow its public sector systems and laws to be shaped primarily to suit companies? The users of the systems are shaped by how those companies work, so who keeps the balance in check?
The human response to surveillance (and that is what much of AI relies on: massive data-veillance and dashboards) is a result of the chilling effect of being ‘watched‘ by known or unknown persons behind the monitoring. We modify our behaviours to be compliant with their expectations, trying not to stand out from the norm, or to protect ourselves from the resulting effects.
The second reason we modify our behaviours is to be compliant with the machine itself. Thanks to the lack of a responsible human in the interaction mediated by the AI tool, we are forced to change what we do to comply with what the machine can manage. How AI is changing human behaviour is not confined to where we walk, meet, play and are overseen in outdoor or indoor spaces. It is in how we respond to it and, ultimately, how we think.
In the simplest examples, using voice assistants shapes how children speak, and in prompting generative AI applications we can see how we are forced to adapt how we think, to pose the questions best suited to getting the output we want. We are changing how we behave to suit machines. How we change behaviour is therefore determined by the design of the company behind the product.
There is limited public debate yet on the effects of this for education, on how children act, interact and think using machines, and no consensus in the UK education sector on whether it is desirable to introduce these companies, and their steering, with the changes they bring to teaching and learning and, as a result, to the future of society.
And since writing that in 2021, I would go further. The neo-liberal approach to education, with its emphasis on the efficiency of human capital and productivity, on individualism and personalisation, all about producing ‘labour market value’ and measurable outcomes, is commonly at the core of AI teaching and learning platforms.
Many tools dehumanise children into data dashboards, rank-and-spank their behaviours and achievements, punish outliers and praise norms, and expect nothing but strict adherence to rules (sometimes incorrect ones, like mistakes in maths apps). As some companies have expressly said, the purpose of this is to normalise such behaviours, ready to be employees of the future, and the reason their tools are free is to normalise their adoption for life.
Don’t forget, in this project announcement the Minister said, “This is the first of many projects that will transform how we see and use public sector data.” That’s our data, about us. And when it comes to schools, that’s not only the millions of learners who’ve left already but who are school children today. Are we really going to accept turning them into data fodder for AI without a fight? As Michael Rosen summed up so perfectly in 2018, “First they said they needed data about the children to find out what they’re learning… then the children became data.” If this is to become the new normal, where is the mechanism for us to object? And why this, now, in such a hurry?
Purpose limitation should also prevent retrospective reuse of learners’ records and data, but it hasn’t so far on general identifying and sensitive data distribution from the NPD at national level or from edTech in schools. The project details, scant as they are, suggest parents were asked for consent in this particular pilot, but the Faculty AI notice seems legally weak for schools, and when it comes to using pupil data for building into AI products the question is whether consent can ever be valid — since it cannot be withdrawn once given, and the nature of being ‘freely given’ is affected by the power imbalance.
The regulator has been silent so far on the DSIT/DfE announcement, despite a lack of fair processing and failures against Articles 12, 13 and 14 of the GDPR being among the key findings of its 2020 DfE audit. I can use a website to find children’s school photos, scraped without our permission. What about our school records?
Will the government consult before commercialising children’s lives in data to feed AI companies and ‘the economy’ or any of the other “many projects that will transform how we see and use public sector data“? How is it different from the existing ONS, ADR, or SAIL databank access points and processes? Will the government evaluate the impact on child development, behaviour or mental health of increasing surveillance in schools? Will MPs get an opt-in or even -out, of the commercialisation of their own school records?
Reinforced Autoclaved Aerated Concrete (RAAC) used in the school environment is giving our Education Minister a headache. Having been the first to address the problem most publicly, she’s coming under fire as responsible for two failures: the Ministerial failure to act on it in thirteen years of Conservative government since 2010, and the failure of the fabric of educational settings itself.
Decades after buildings’ infrastructure started using RAAC, there is now a parallel digital infrastructure in educational settings. It’s worth thinking about what’s caused the RAAC problem and how it was identified. Could we avoid the same things in the digital environment and in the design, procurement and use of edTech products, and in particular, Artificial Intelligence?
Where has it been used?
In the procurement of school infrastructure, RAAC was integrated into some parts of the everyday school system, especially in large flat roofs built around the 1960s-80s. It is now hard to detect and remedy or remove without significant effort. There was short-term thinking, short-term spending, and no strategy for its full life cycle or end-of-life expectations. It is going to be expensive, slow and difficult to find and fix.
Where is the risk and what was the risk assessment?
The two most well-known recent cases, the 2016 Edinburgh school masonry collapse and the 2018 roof incident, happened in the early morning when no pupils were present, but, according to the 2019 safety alert by SCOSS, “in either case, the consequences could have been more severe, possibly resulting in injuries or fatalities. There is therefore a risk, although its extent is uncertain.”
That risk has been known for a long time, as today’s education minister Gillian Keegan rightly explained in that interview, before airing her frustration. Perhaps it was not seen as a pressing priority because it was not seen as a new problem. In fact, locally it often isn’t seen much at all, as it is either hidden behind front-end facades or built into hard-to-see places, like roofs. But already ‘in the 1990s structural deficiencies became apparent’, as discussed in papers by the Building Research Establishment (BRE) in the 1990s and again in 2002.
What has changed, according to expert reports, is that the problems no longer behave as expected, giving visible warning in advance and time for mitigation, in what had previously been one-off catastrophic incidents. What affected only a few could now affect many, at scale and without warning. The most recent failures show there is no longer a reliable margin to act before parts of the mainstream state education infrastructure pose children a threat to life.
Where is the similarity in the digital environment?
AI is the RAAC of another Minister’s future: it’s often similarly sold today as cost-saving, quick and easy to put in place, needing fewer people to install than the available alternatives.
AI is being widely introduced, at speed, into children’s private and family life in England through its procurement and application in the infrastructure of public services: in education, children’s services, policing and welfare. Some companies claim to be able to identify mood or autism, or to profile and influence mental health. Children rarely have any choice or agency to control its often untested effects or outcomes on them, in non-consensual settings.
If you’re working in AI “safety” right now, consider this a parable.
There are plenty of people pointing out risk in the current adoption of AI into UK public sector infrastructure; in schools, in health, in welfare, and in prisons and the justice system;
There are plenty of cases where harm is very real, but first seen by those in power as affecting the marginalised and minority;
There are no consistent published standards or obligations on transparency or accountability to which AI sellers must hold their products before they are procured and affect people;
And there are no easily accessible records of where what type of AI is being procured and built into which public infrastructure, making tracing and remedy even harder in case of product recall.
The Cardiff Data Justice Lab together with Carnegie Trust have published numerous examples of cancelled systems across public services. “Pressure on public finances means that governments are trying to do more with less. Increasingly, policymakers are turning to technology to cut costs. But what if this technology doesn’t work as it should?” they asked.
In places where similar technology has been in place longer, we already see the impact and harm to people. In 2022, the Chicago Sun Times published an article noting that, “Illinois wisely stopped using algorithms in child welfare cases, but at least 26 states and Washington, D.C., have considered using them, and at least 11 have deployed them. A recent investigation found they are often unreliable and perpetuate racial disparities.” And the author wrote, “Government agencies that oversee child welfare should be prohibited from using algorithms.”
Where are the parallels in the problem and its fixes?
It’s also worth considering how AI can be “removed” or stopped from working in a system. Often not through removal at all, but simply by throttling, shutting off that functionality. The problematic parts of the infrastructure remain in situ, but can’t easily be taken out after being designed in. Whole products may also be difficult to remove.
The 2022 Institution of Structural Engineers’ report summarises the challenge of how to fix the current RAAC problems. Think about what each of these would mean for fixing a failure of digital infrastructure:
Positive remedial supports and Emergency propping, to mitigate against known deficiencies or unknown/unproven conditions
Passive, fail-safe supports, to mitigate catastrophic failure if a panel were to fail
Removal of individual panels and replacement with an alternative solution
Entire roof replacement to remove the ongoing liabilities
Periodic monitoring of the panels for their remaining service life
RAAC has not become a risk to life; it already was one, by design. While still recognised as a ‘good construction material for many purposes’, it has been widely used in unsafe ways in the wrong places.
RAAC planks made fifty years ago did not have the level of quality control we would demand today, and yet the material was procured and put in place for decades after it was known to be unsafe for some uses, with risk assessments saying so.
RAAC was given an exemption from the commonly used codes of practice of reinforced concrete design (RC).
RAAC is scattered among non-RAAC infrastructure, making finding and fixing it, or its removal, very much harder than if it had been recorded in a register, making it easily traceable.
Current AI discourse should be asking not only for retrospective accountability, or even life-cycle accountability, but also what accountable AI looks like by design, and how you guarantee it.
How do we prevent the risk of harm to people from the poor quality of systems designed to support them? What will protect people from being affected by unsafe products in those settings in the first place?
Are the incentives in procurement correct, to enable adequate risk assessment to be carried out by those who choose to use it?
Rather than accepting risk and retroactively expecting remedial action across all manner of public services in future—ignoring a growing number of ticking time bombs—what should public policy makers be doing to avoid putting them in place?
How will we know where unsafe products were built in, if they are permitted and later found to be a threat to life?
How is safety or accountability upheld for the lifecycle of the product if companies stop making it, or go out of business?
How does anyone working with systems applied to people, assess their ongoing use and ensure it promotes human flourishing?
In the digital environment we still have a margin to act, to ensure the safety of everyday parts of institutional digital infrastructure in mainstream state education and prevent harm to children, whether that’s from parts of a product’s code, use in the wrong way, or entire products. AI is already used in the infrastructure of schools’ curriculum planning and curriculum content, and in steering children’s self-beliefs and behaviours, and the values of the adult society these pupils will become. Some products have been oversold as AI when they weren’t, overhyped, overused and under-explained, their design hidden away and kept from independent scrutiny, some with real risks and harms. Right now, some companies and policy makers are making familiar errors, ‘safety-washing’ AI harms, ignoring criticism and pushing it off as someone else’s future problem.
In education, they could learn lessons from RAAC.
Background references
BBC Newsnight Timeline: reports from as far back as 1961 about aerated concrete concerns. 01/09/2023
Pre-1980 RAAC roof planks are now past their expected service life. CROSS. (2020) Failure of RAAC planks in schools.
A 2019 safety alert by SCOSS, “Failure of Reinforced Autoclaved Aerated Concrete (RAAC) Planks” following the sudden collapse of a school flat roof in 2018.
The Local Government Association (LGA) and the Department for Education (DfE) then contacted all school building owners and warned of ‘risk of sudden structural failure.’
Today was the APPG AI Evidence Meeting, “The National AI Strategy: How should it look?” Here are some of my personal views and takeaways.
Have the regulators the skills and competency to hold organisations to account for what they are doing? asked Roger Taylor, the former chair of Ofqual, the exams regulator, as he began the panel discussion, chaired by Lord Clement-Jones.
A good question was followed by another.
What are we trying to do with AI? asked Andrew Strait, Associate Director of Research Partnerships at the Ada Lovelace Institute and formerly of DeepMind and Google. The goal of a strategy should not be to have more AI for the sake of having more AI, he said, but an articulation of values and goals. (I’d suggest the government may in fact be in favour of exactly that: more AI for its own sake, where its application is seen as a growth market.) Interestingly, he suggested that the Scottish strategy has a more values-based model, including fairness. [I had, it seems, wrongly assumed that a *national* AI strategy to come would include all of the UK.]
The arguments on fairness are well worn in AI discussion and getting old. And yet they still too often fail to ask whether these tools are accurate, or even work at all. Look at the education sector and one company’s product, ClassCharts, which claimed AI as its USP for years; the ICO found in 2020 that the company didn’t actually use any AI at all. If company claims are not honest, or not accurate, then they’re not fair to anyone, never mind fair across everyone.
Fairness is still too often thought of in terms of explainability of a computer algorithm, not the entire process it operates in. As I wrote back in 2019, “yes we need fairness accountability and transparency. But we need those human qualities to reach across thinking beyond computer code. We need to restore humanity to automated systems and it has to be re-instated across whole processes.”
Strait went on to say that safe and effective AI would be something people can trust. And he asked the important question: who gets to define what a harm is? Rightly identifying that the harm identified by a developer of a tool, may be very different from those people affected by it. (No one on the panel attempted to define or limit what AI is, in these discussions.) He suggested that the carbon footprint from AI may counteract the benefit it would have to apply AI in the pursuit of climate-change goals. “The world we want to create with AI” was a very interesting position and I’d have liked to hear him address what he meant by that, who is “we”, and any assumptions within it.
Lord Clement-Jones asked him about some of the work Ada Lovelace has done on harms such as facial recognition, and also whether some sector technologies are so high-risk that they must be regulated. Strait suggested that we lack adequate understanding of what harms are — I’d suggest academia and civil society have done plenty of work on identifying those; they’ve just been too often ignored until after the harm is done and there are legal challenges. Strait also suggested he thought the Online Harms agenda was ‘a fantastic example’ of both horizontal and vertical regulation. [Hmm, let’s see. Many people would contest that, and we’ll see what the Queen’s Speech brings.]
Maria Axente then went on to talk about children and AI. Her focus was on big platforms, but she also mentioned a range of other application areas. She spoke of the data governance work going on at UNICEF, and of the need to drive awareness of the risks of AI for children, and digital literacy. She raised the potential for limitations on child development, the exacerbation of the digital divide, and risks in public spaces, but also hoped-for opportunities. She suggested that the AI strategy may therefore be the place to include children.
This of course was something I would want to discuss at more length, but in summary the last decade of Westminster policy affecting children, even the Children’s Commissioner’s most recent Big Ask survey, bypasses the question of children’s *rights* completely. If the national AI strategy by contrast would address rights [the foundation upon which data laws are built], and create the mechanisms in public sector interactions with children that would enable them to be told if and how their data is being used (in AI systems or otherwise) and to exercise the choices that public engagement time and time again says people want, then that would be a *huge* and positive step forward to effective data practice across the public sector and for use of AI. Otherwise I see a risk that a strategy on AI and children will ignore children as rights holders across a full range of rights in the digital environment, and focus only on the role of AI in child protection, a key DCMS export aim, ignoring the invasive nature of safety tech tools and their harms.
Next, Dr Jim Weatherall from AstraZeneca tied together leveraging “the UK unique strengths of the NHS” and “data collected there”, wanting a close knitting together of the national AI strategy and the national data strategy, so that the healthcare, life sciences and biomedical sector can become “an international renowned asset.” He’d like to see students doing data science modules in their studies, and international access to talent to work for AZ.
Lord Clement-Jones then asked him how to engender public trust in data use. Weatherall said a number of false starts in the past are hindering progress, but that he saw the way forward was data trusts and citizen juries.
His answer ignores the most obvious solution: respect existing law and human rights, use data only in ways that people want and have given their permission for, then show them that you did that, and nothing more. In short, what medConfidential first proposed in 2014: the creation of data usage reports.
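To make the idea concrete: a data usage report is, at heart, just a per-person ledger of each use of their data, rendered in plain language. The sketch below is purely illustrative — the field names and report format are my own assumptions, not medConfidential’s specification or any real NHS system.

```python
from dataclasses import dataclass
from datetime import date

@dataclass
class DataUse:
    """One record in a person's data usage report: who used what, when, and why."""
    organisation: str
    dataset: str
    purpose: str
    lawful_basis: str
    date_of_use: date

def usage_report(uses: list[DataUse]) -> str:
    """Render a plain-text report a person could actually read."""
    lines = [f"Your data was used {len(uses)} time(s):"]
    for u in uses:
        lines.append(
            f"- {u.date_of_use}: {u.organisation} used '{u.dataset}' "
            f"for '{u.purpose}' (lawful basis: {u.lawful_basis})"
        )
    return "\n".join(lines)

print(usage_report([
    DataUse("NHS Trust X", "GP record", "direct care", "public task", date(2021, 4, 1)),
]))
```

The point is not the code but the principle: every use is recorded, attributable, and shown back to the person it concerns.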
If the government fails to put in place those foundations, whatever strategy it builds will fail in the same ways they have done to date, as care.data did by assuming it was acceptable to use data in the way that the government wanted, without a social licence, in the name of “innovation”. Aims that were championed by companies such as Dr Foster, which profited from reusing personal data from the public sector in a “hole and corner deal”, as described by the chairman of the House of Commons committee of public accounts in 2006. Such deals put industry and “innovation” ahead of what the public want in terms of ‘red lines’ for acceptable re-uses of their own personal data, and for data re-used in the public interest versus for commercial profit. And “The Department of Health failed in its duty to be open to parliament and the taxpayer.” That openness and accountability are still missing nearly ten years on, in the scope creep of national datasets and commercial reuse, and in expanding data policies and research programmes.
I disagree with the suggestion made that Data Trusts will somehow be more empowering to everyone than the mechanisms we have today for data management. I believe Data Trusts will further stratify those who are included and those excluded, benefit those who have the capacity to participate, and disadvantage those who cannot choose. They are also a fig leaf of acceptability that does not solve the core challenge. Citizen juries cannot do more than give a straw poll. Every person whose data is used is entitled to rights in law, and the views of a jury or Trust cannot speak for everyone or override those rights protected in law.
Tabitha Goldstaub spoke next and outlined some of what AI Council Roadmap had published. She suggested looking at removing barriers to best support the AI start-up community.
As I wrote when the roadmap report was published, there are basics missing in government’s own practice that could be solved. It had an ambition to, “Lead the development of data governance options and its uses. The UK should lead in developing appropriate standards to frame the future governance of data,” but the Roadmap largely ignored the governance infrastructures that already exist. One can only read into that a desire to change and redesign what those standards are.
I believe that there should be no need to change the governance of data, but instead to make today’s rights exercisable and to deliver the enforcement that makes existing governance actionable. Any genuine “barriers” to data use in data protection law are designed as protections for people; the people the public sector, its staff and these arms-length bodies are supposed to serve.
Blaming AI and algorithms, blaming lack of clarity in the law, blaming “barriers” is often avoidance of one thing. Human accountability. Accountability for ignorance of the law or lack of consistent application. Accountability for bad policy, bad data and bad applications of tools is a human responsibility. Systems you choose to apply to human lives affect people, sometimes forever and in the most harmful ways, so those human decisions must be accountable.
I believe that some simple changes in practice when it comes to public administrative data could bring huge steps forward there:
An audit of existing public administrative data held by national and local government, and consistently published registers of the databases and algorithms / AI / ML currently in use.
Identify the lawful basis for each set of data processes, their earliest record dates and content.
Assign accountable owners to databases, tools and the registers.
Decide how you will communicate with people whose data you process unlawfully, in order to meet the law, or stop processing it.
And above all, publish a timeline for data quality processes, and show that you understand how the degradation of data accuracy and quality affects the rights and responsibilities in law that change over time as a result.
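The steps above amount to a minimal, machine-readable register entry per database or tool. The sketch below is hypothetical — every field name and the example entry are my own illustrative assumptions, not any government schema — but it shows how the audit, lawful basis, accountable owner and data quality timeline could hang together in one published record.

```python
from dataclasses import dataclass
from datetime import date

@dataclass
class RegisterEntry:
    """One entry in a published register of databases and algorithmic tools in use."""
    name: str
    description: str
    lawful_basis: str           # e.g. a stated UK GDPR Article 6 basis
    earliest_record: date       # earliest record date held
    accountable_owner: str      # a named, accountable role
    uses_ml: bool
    data_quality_reviewed: date # last data quality review

def overdue_for_review(entry: RegisterEntry, today: date, max_age_days: int = 365) -> bool:
    """Flag entries whose data quality review is older than the agreed cycle."""
    return (today - entry.data_quality_reviewed).days > max_age_days

# Illustrative entry only; not a real system.
entry = RegisterEntry(
    name="School attendance analytics",
    description="Flags pupils for attendance follow-up",
    lawful_basis="public task",
    earliest_record=date(2015, 9, 1),
    accountable_owner="Head of Data, LA education department",
    uses_ml=True,
    data_quality_reviewed=date(2020, 1, 15),
)
print(overdue_for_review(entry, today=date(2021, 5, 1)))  # True: review is over a year old
```

A register like this would make the accountability the post argues for auditable: anyone could see what is held, on what basis, by whom, and how stale it is.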
Goldstaub went on to say, on ethics and inclusion, that if it’s not diverse, it’s not ethical. Perhaps the next panel and similar events could take that lesson to heart, as such APPG panel events are not as diverse as they could or should be themselves. Some of the biggest harms from the use of AI fall, after all, on the communities least represented, and panels like this tend to ignore lived reality.
The Rt Rev Croft then wrapped up the introductory talks on that more human note, and by exploding some myths. He importantly talked about the consequences he expects of the increasing use of AI and its deployment in ‘the future of work’ for example, and its effects for our humanity. He proposed 5 topics for inclusion in the strategy and suggested it is essential to engage a wide cross section of society. And most importantly to ask, what is this doing to us as people?
There were then some of the usual audience questions asked on AI, transparency, garbage-in garbage-out, challenges of high risk assessment, and agreements or opposition to the EU AI regulation.
What frustrates me most in these discussions is that the technology is treated as a given, and the bias that assumption gives to the discussion is itself ignored. A holistic national AI strategy should ask whether and why AI at all. What are the consequences of this focus on AI, and what policy-making oxygen and capacity does it take away from other areas of what government could or should be doing? The questioner who asks how adaptive learning could use AI for better learning in education fails to ask what good learning looks like, and whether and how adaptive tools, analogue or digital, fit into that at all.
I would have liked to ask the panellists whether they agree that proposals for public engagement and digital literacy distract from the lack of human accountability for bad policy decisions that use machine-made support. Taking examples from 2020 alone, three applications of algorithms and data in the public sector were challenged by civil society because of their harms: the Home Office dropped its racist visa algorithm, a court found DWP Universal Credit decisions ‘irrational and unlawful’, and there was the “mutant algorithm” of the summer 2020 exams. Digital literacy does nothing to help people in those situations. What AI has done is increase the speed and scale of the harms caused by harmful policy, such as the ‘Hostile Environment’, which is harmful by design.
Any Roadmap, AI Council recommendations, and any national strategy if serious about what good looks like, must answer how would those harms be prevented in the public sector *before* being applied. It’s not about the tech, AI or not, but misuse of power. If the strategy or a Roadmap or ethics code fails to state how it would prevent such harms, then it isn’t serious about ethics in AI, but ethics washing its aims under the guise of saying the right thing.
One unspoken problem right now is the focus on the strategy solely for the delivery of a pre-determined tool (AI). Who cares what the tool is? Public sector data comes from the relationship between people and the provision of public services by government at various levels, and its AI strategy seems to have lost sight of that.
What good would look like in five years is the end of siloed AI discussion, as if AI were a desirable silver bullet promising mythical numbers of ‘economic growth’; instead AI would be treated like any other technology, and its role in end-to-end processes or service delivery discussed proportionately. Panelists would stop suggesting that the GDPR is hard to understand or that people cannot apply it. Almost all of the same principles in UK data laws have applied for over twenty years. And regardless of the GDPR, Convention 108 applies to the UK post-Brexit unchanged, including the associated Council of Europe Guidelines on AI, data protection, privacy and profiling.
Data laws. AI regulation. Profiling. Codes of Practice on children, online safety or biometrics and emotional or gait recognition. There *are* gaps in data protection law when it comes to biometric data not used for unique identification purposes. But much of this is already rolled into other law and regulation for the purposes of upholding human rights and the rule of law. The challenge in the UK is often not having the law, but its lack of enforcement. There are concerns in civil society that the DCMS is seeking to weaken core ICO duties even further. Recent government, council and think tank roadmaps talk of the UK leading on new data governance, but in reality simply want to see established laws rewritten to be less favourable to rights. To be less favourable towards people.
Data laws are *human* rights-based laws. We will never get a workable UK national data strategy or national AI strategy if government continues to ignore the very fabric of what they are to be built on. Policy failures will be repeated over and over until a strategy supports people to exercise their rights and have them respected.
Imagine if the next APPG on AI asked what human rights-respecting practice and policy would look like, and what infrastructure the government would need to fund or build to make it happen. In public-private sector areas (like edTech). Or in the justice system, health, welfare, children’s social care. What could that Roadmap look like, and how can we make it happen, over what timeframe? Strategies that could win public trust *and* get the sectoral wins the government and industry are looking for. Then we might actually move forwards on getting a functional strategy that would work for delivering public services, and for where both AI and data fit into that.
Time and again, thinking and discussion about these topics is siloed. At the Turing Institute, the Royal Society, the ADRN and EPSRC, in government departments, in education practitioner circles, and in public discussion — we are all having similar conversations about data and ethics, but with little ownership and no goals for future outcomes. If government doesn’t get it, or have time for it, or policy lacks ethics by design, is it in the public interest for private companies, Google et al., to offer a fait accompli?
There is lots of talking about Machine Learning (ML), Artificial Intelligence (AI) and ethics. But what is being done to ensure that real values — respect for rights, human dignity, and autonomy — are built into practice in the delivery of public services?
In most recent data policy it is entirely absent. The Digital Economy Act s33 risks enabling, through the removal of inter- and intra-departmental data protections, an unprecedented expansion of public data transfers, with “untrammelled powers”. Powers without the codes of practice promised over a year ago. That has fallout for the trustworthiness of the legislative process, and for data practices across public services.
Predictive analytics is growing but poorly understood in the public and public sector.
There is already dependence on computers in aspects of public sector work, and their use in sensitive situations demands better knowledge of how systems operate and can be wrong. Debt recovery and social care, to take two known examples.
Risk-averse staff appear to choose not to question the outcome of ‘algorithmic decision making’, or do not have the ability to do so. There is reportedly no analysis training for practitioners to understand the basis or bias of conclusions. The potential is that, instead of making us more informed, decision-making by machine makes us humans less clever.
What does it do to professionals, if they feel therefore less empowered? When is that a good thing if it overrides discriminatory human decisions? How can we tell the difference and balance these risks if we don’t understand or feel able to challenge them?
In education, what is it doing to children whose attainment is profiled, predicted, and acted on to target extra or less focus from school staff, who have no ML training and without informed consent of pupils or parents?
If authorities use data in ways the public do not expect, such as to identify homes of multiple occupancy without informed consent, they will forfeit the chance to deliver future uses for good. The ‘public interest’, ‘user need’ and ethics can come into conflict, according to your point of view. The public, data protection law and ethics all object to harms from uses of data. This type of application has the potential to be mind-blowingly invasive and to reveal all sorts of other findings.
Widely informed thinking must be made into meaningful public policy for the greatest public good
Our politicians are caught up in the General Election and buried in Brexit.
Meanwhile, the commercial companies taking AI first rights to capitalise on existing commercial advantage could potentially strip public assets, use up our personal data and public trust, and leave the public with little public good. We are already used by global data players, and by machine-learning companies, without our knowledge or consent. That knowledge can be used to profit business models that pay little tax into the public purse.
There are valid macroeconomic arguments about whether private spend and investment are preferable to a state’s ability to do the same. But these companies make more than enough to do it all. Is not paying just amounts of tax a signal of failed commitment to the wider community, and a red flag for a company’s commitment to the public good?
What that public good should look like, depends on who is invited to participate in the room, and not to tick boxes, but to think and to build.
The Royal Society’s Report on AI and Machine Learning published on April 25, showed a working group of 14 participants, including two Google DeepMind representatives, one from Amazon, private equity investors, and academics from cognitive science and genetics backgrounds.
If we are going to form objective policies, the inputs that form their basis must be informed, but must also be well balanced, and be seen to be balanced. Not as an add-on, but in the same room.
As Natasha Lomas in TechCrunch noted, “Public opinion is understandably a big preoccupation for the report authors — unsurprisingly so, given that a technology that potentially erodes people’s privacy and impacts their jobs risks being drastically unpopular.”
“The report also calls on researchers to consider the wider impact of their work and to receive training in recognising the ethical implications.”
What are those ethical implications? Who decides which matter most? How do we eliminate recognised discriminatory bias? What should data be used for and AI be working on at all? Who is it going to benefit? What questions are we not asking? Why are young people left out of this debate?
Who decides what the public should or should not know?
AI and ML depend on data. Data is often talked about as a panacea for the problems of working better together. But data alone does not make people better informed, any more than systems work when no one feels it is their job to pick up the fax. A fundamental building block of our future public and private prosperity is understanding data and how we, and AI, interact. What is the data telling us, how do we interpret it, and how do we know it is accurate?
How and where will we start to educate young people about data and ML, if not about their own data and its use by government and commercial companies?
The whole of Chapter 5 of the report is a very good starting point for policy makers who have not yet engaged with the area. Privacy, though summed up too briefly in the conclusions, is scattered throughout.
Blind spots remain, however.
Over-willingness to accommodate existing big private players, as their expertise leads design and development and a desire to ‘re-write regulation’.
Slowness to react to needed regulation in the public sector (caught up in Brexit), while commercial drivers and technology change forge ahead.
‘How do we develop technology that benefits everyone’ must consider not only the UK but the global South, especially the bias in how AI is being taught, and broad socio-economic barriers in application.
Predictive analytics plus professional application equals unwillingness to question the computer’s result. In children’s social care this is already having a damaging effect in the family courts (s31).
Data and technology knowledge and ethics training must be embedded across the public sector, not only for postgraduate students in machine learning.
Young people are left out of discussions which, after all, are about their future. [They might have some of the best ideas, we miss at our peril.]
There is no time to waste
Children and young people have the most to lose while their education, skills, jobs market, economy, culture, care, and society go through a series of gradual but seismic shifts in purpose, culture, and acceptance before finding new norms post-Brexit. They will also gain the most if the foundations are right. One of these must be getting age verification right in GDPR, not allowing it to enable a massive data grab of child-parent privacy.
Although the RS Report considers young people in the context of a future workforce who need skills training, they are otherwise left out of this report.
“The next curriculum reform needs to consider the educational needs of young people through the lens of the implications of machine learning and associated technologies for the future of work.”
Yes it does, but it must give young people, and the implications of ML for their future, broader consideration than the classroom or workplace.
We are not yet talking about the effects of teaching technology to learn, and its effect on public services and interactions with the public. Questions that Sam Smith asked in Shadow of the smart machine: Will machine learning end?
At the end of this Information Age we are at a point when machine learning, AI and biotechnology are potentially life-enhancing or could have catastrophic effects, if indeed “AI will cause people more pain than happiness”, as described by Alibaba’s founder Jack Ma.
The conflict between commercial profit and public good, what commercial companies say they will do and actually do, and fears and assurances over predicted outcomes is personified in the debate between Demis Hassabis, co-founder of DeepMind Technologies, (a London-based machine learning AI startup), and Elon Musk, discussing the perils of artificial intelligence.
Vanity Fair reported that, “Elon Musk began warning about the possibility of A.I. running amok three years ago. It probably hadn’t eased his mind when one of Hassabis’s partners in DeepMind, Shane Legg, stated flatly, “I think human extinction will probably occur, and technology will likely play a part in this.””
Musk was of the opinion that A.I. was probably humanity’s “biggest existential threat.”
We are not yet joining up multidisciplinary and cross-sector discussions of threats and opportunities
Jobs; shifts in the skill sets education needs; how we think, interact, value each other, and accept or reject ownership and power models; and, later, threats from the technology itself. Conversely, we are not yet talking about the opportunities these seismic shifts offer in real terms, or how and why to accept, reject or regulate them.
Where private companies are taking over personal data given in trust to public services, it is reckless for the future of public interest research to assume there is no public objection. How can we object, if not asked? How can children make an informed choice? How will public interest be assured to be put ahead of private profit? If it is intended on balance to be all about altruism from these global giants, then they must be open and accountable.
Private companies are shaping how and where we find machine learning and AI gathering data about our behaviours in our homes and public spaces.
SPACE10, an innovation hub for IKEA, is currently running a survey on how the public perceives and “wants their AI to look, be, and act”, with an eye on building AI into the products we bring flat-packed into our houses.
As the surveillance technology built into the Things in our homes attached to the Internet becomes more integral to daily life, authorities are now using it to gather evidence in investigations; from mobile phones, laptops, social media, smart speakers, and games. The IoT so far seems less about the benefits of collaboration, and all about the behavioural data it collects and uses to target us to sell us more things. Our behaviours tell much more than how we act. They show how we think inside the private space of our minds.
Do you want Google to know how you think, and have control over that? The companies of the world that have access to massive amounts of data are now using it to teach AI how to ‘think’. What is AI learning? And how much should the State see or know about how you think, or try to predict it?
Who cares, wins?
It is not overstated to say society and future public good of public services, depends on getting any co-dependencies right. As I wrote in the time of care.data, the economic value of data, personal rights and the public interest are not opposed to one another, but have synergies and co-dependency. One player getting it wrong, can create harm for all. Government must start to care about this, beyond the side effects of saving political embarrassment.
Without joining up all aspects, we cannot limit harms and make the most of benefits. There is nuance and unknowns. There is opaque decision making and secrecy, packaged in the wording of commercial sensitivity and behind it, people who can be brilliant but at the end of the day, are also, human, with all our strengths and weaknesses.
And we can get this right, if data practices get better, with joined up efforts.
Our future society, as our present, is based on webs of trust, on our social networks on- and offline, that enable business, our education, our cultural, and our interactions. Children must trust they will not be used by systems. We must build trustworthy systems that enable future digital integrity.
The immediate harm that comes from blind trust in AI companies is not their AI, but the hidden powers that commercial companies have to nudge public and policy maker behaviours and acceptance, towards private gain. Their ability and opportunity to influence regulation and future direction outweighs most others. But lack of transparency about their profit motives is concerning. Carefully staged public engagement is not real engagement but a fig leaf to show ‘the public say yes’.
The unwillingness by Google DeepMind, when asked at their public engagement event, to discuss their past use of NHS patient data, or the profit model plan or their terms of NHS deals with London hospitals, should be a warning that these questions need answers and accountability urgently.
Companies that have already extracted and benefited from personal data in the public sector, have already made private profit. They and their machines have learned for their future business product development.
A transparent accountable future for all players, private and public, using public data is a necessary requirement for both the public good and private profit. It is not acceptable for departments to hide their practices, just as it is unacceptable if firms refuse algorithmic transparency.
“Rebooting antitrust for the information age will not be easy. It will entail new risks: more data sharing, for instance, could threaten privacy. But if governments don’t want a data economy dominated by a few giants, they will need to act soon.” [The Economist, May 6]
If the State creates a single data source of truth, or giant private tech firms think they can side-step regulation, and they get it wrong, their practices damage public trust. That harms public interest research, and with it our future public good.
But will they care?
If we care, then across public and private sectors we must cherish shared values and better collaboration: embed ethical human values into development, design and policy, and ensure transparency over where, how, to whom and why my personal data has gone.
We must ensure that as the future becomes “smarter”, we educate ourselves and our children to stay intelligent about how we use data and AI.
We must start today, knowing how we are used by both machines, and man.