AI in the public sector today is the RAAC of the future

Reinforced Autoclaved Aerated Concrete (RAAC) used in the school environment is giving our Education Minister a headache. Having been the first to address the problem most publicly, she is coming under fire as responsible for two failures: the Ministerial failure to act on it across thirteen years of Conservative government since 2010, and the failure of the fabric of educational settings itself.

Decades after RAAC first went into buildings’ infrastructure, there is now a parallel digital infrastructure in educational settings. It’s worth thinking about what caused the RAAC problem and how it was identified. Could we avoid the same mistakes in the digital environment, in the design, procurement and use of edTech products, and in particular in Artificial Intelligence?

Where has it been used?

In the procurement of school infrastructure, RAAC was built into parts of the everyday school estate, especially in large flat roofs constructed around the 1960s-80s. It is now hard to detect, and hard to remedy or remove without significant effort. There was short-term thinking, short-term spending, and no strategy for its full life cycle or end-of-life expectations. It is going to be expensive, slow and difficult to find and fix.

Where is the risk and what was the risk assessment?

The two most well-known recent cases, the 2016 Edinburgh school masonry collapse and the 2018 roof incident, both happened in the early morning when no pupils were present, but, according to the 2019 safety alert by SCOSS, “in either case, the consequences could have been more severe, possibly resulting in injuries or fatalities. There is therefore a risk, although its extent is uncertain.”

That risk has been known for a long time, as today’s education minister Gillian Keegan rightly explained in that interview before airing her frustration. Perhaps it was not seen as a pressing priority because it was not seen as a new problem. In fact, locally it often isn’t seen much at all, as it is either hidden behind facades or built into hard-to-see places, like roofs. But already ‘in the 1990s structural deficiencies became apparent’, as discussed in papers by the Building Research Establishment (BRE) in the 1990s and again in 2002.

What has changed, according to expert reports, is that those problems no longer show themselves in advance and give time for mitigation, as they did in what had previously been one-off catastrophic incidents. What once affected a few could now affect many, at scale and without warning. The most recent failures show there is no longer a reliable margin to act before parts of the mainstream state education infrastructure pose a threat to children’s lives.

Where is the similarity in the digital environment?

AI is the RAAC of another Minister’s future: it is often sold today in similar terms, as cost-saving, quick and easy to put in place. You might need fewer people to install it than the available alternatives.

AI is being introduced widely and at speed into children’s private and family life in England, through its procurement and application in the infrastructure of public services: in education, children’s services, policing and welfare. Some companies claim to be able to identify mood or autism, or to profile and influence mental health. In these non-consensual settings, children rarely have any choice or agency over its often untested effects or outcomes on them.

If you’re working in AI “safety” right now, consider this a parable.

  • There are plenty of people pointing out risk in the current adoption of AI into UK public sector infrastructure: in schools, in health, in welfare, and in prisons and the justice system;
  • There are plenty of cases where harm is very real, but is first seen by those in power as affecting only marginalised and minority groups;
  • There are no consistent published standards or obligations of transparency or accountability to which AI sellers must hold their products before they are procured and affect people;
  • And there are no easily accessible records of what type of AI is being procured and built into which public infrastructure, making tracing and remedy even harder in the event of a product recall (a sketch of what one such record could look like follows this list).
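What might one entry in such a register look like? Here is a minimal sketch in Python, purely illustrative: the record fields and the helper function are assumptions made for the sake of the example, not any existing standard or government schema.

    from dataclasses import dataclass, field

    @dataclass
    class AIDeploymentRecord:
        """One entry in a hypothetical public register of procured AI systems."""
        supplier: str                   # who sold the product
        product_name: str               # what was procured
        version: str                    # exact version in use, so any recall can be scoped
        purpose: str                    # e.g. "attendance prediction", "content filtering"
        deployed_in: list[str] = field(default_factory=list)  # schools or services using it
        risk_assessment_ref: str = ""   # link to a published risk assessment, if any
        decommission_plan: str = ""     # how it could be switched off or removed

    def affected_sites(register: list[AIDeploymentRecord], product: str) -> list[str]:
        # With records like these, "where is product X running?" becomes a lookup,
        # not a building-by-building survey of the kind RAAC now requires.
        return [site for rec in register
                if rec.product_name == product
                for site in rec.deployed_in]

If anything like this existed, a product recall or safety alert could be matched to affected settings in minutes, rather than, as with RAAC, through years of surveys.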

The objectives of any company, State, service users, the public and investors may not be aligned. Do investors have a duty to ensure that artificial intelligence is developed in an ethical and responsible way? Prioritising short-term economic gain and convenience ahead of human impact or the long-term public interest has resulted in parts of schools’ infrastructure collapsing. And some AI is already going the same way.

The Cardiff Data Justice Lab, together with the Carnegie Trust, has published numerous examples of cancelled systems across public services. “Pressure on public finances means that governments are trying to do more with less. Increasingly, policymakers are turning to technology to cut costs. But what if this technology doesn’t work as it should?” they asked.

In places where similar technology has been in place for longer, we already see the impact and harm to people. In 2022, the Chicago Sun-Times published an article noting that, “Illinois wisely stopped using algorithms in child welfare cases, but at least 26 states and Washington, D.C., have considered using them, and at least 11 have deployed them. A recent investigation found they are often unreliable and perpetuate racial disparities.” And the author wrote, “Government agencies that oversee child welfare should be prohibited from using algorithms.”

Where are the parallels in the problem and its fixes?

It’s also worth considering how AI can be “removed” or stopped from working in a system. Often it is not removed at all, but simply throttled, with that functionality switched off. The problematic parts of the infrastructure remain in situ, but cannot easily be taken out once they have been designed in. Whole products may also be difficult to remove. A minimal sketch of that pattern follows.
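To make the ‘throttled rather than removed’ point concrete, here is a small, hypothetical sketch of the feature-flag pattern many systems use. The names (AI_SCORING_ENABLED, rank_for_intervention) are invented for the example and imply no particular product; the point is that the model code stays shipped inside the system even when the switch is off.

    # Hypothetical example: a system that ranks pupils for "intervention".
    # The AI scoring code stays in the product; a configuration flag only
    # decides whether it runs. Flipping the flag "removes" the behaviour,
    # not the code, which remains designed in.

    AI_SCORING_ENABLED = False  # the kill switch: off, but the model is still in situ

    def model_risk_score(pupil_record: dict) -> float:
        # Placeholder for a vendor's opaque model; still shipped, still present.
        return 0.5

    def rank_for_intervention(pupil_record: dict) -> float:
        if AI_SCORING_ENABLED:
            return model_risk_score(pupil_record)
        # Fallback when throttled: a simple, inspectable rule instead of the model.
        return 1.0 if pupil_record.get("absences", 0) > 10 else 0.0

Removing the behaviour for good would mean deleting the model and every call to it, then re-testing everything that depended on it: far harder than flipping one flag, which is why ‘switched off’ so often stands in for ‘removed’.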

The 2022 Institution of Structural Engineers’ report summarises the challenge now: how to fix the current RAAC problems. Think about what this would mean doing to fix a failure of digital infrastructure:

  • Positive remedial supports and Emergency propping, to mitigate against known deficiencies or unknown/unproven conditions
  • Passive, fail safe supports, to mitigate catastrophic failure of the panels if a panel was to fail
  • Removal of individual panels and replacement with an alternative solution
  • Entire roof replacement to remove the ongoing liabilities
  • Periodic monitoring of the panels for their remaining service life

RAAC has not become a risk to life; it already was one from the design stage. While still recognised as a ‘good construction material for many purposes’, it has been widely used in unsafe ways and in the wrong places.

RAAC planks made fifty years ago did not have the same level of quality control as we would demand today, and yet the material was procured and put in place for decades after it was known to be unsafe for some uses, with risk assessments saying so.

RAAC was given an exemption from the commonly used codes of practice for reinforced concrete (RC) design.

RAAC is scattered among non-RAAC infrastructure, making finding, fixing or removing it very much harder than if it had been recorded in a register that made it easily traceable.

RAAC developers and sellers may no longer exist, having gone out of business without any accountability.

Current AI discourse should be asking not only for retrospective accountability, or even life-cycle accountability, but also what accountable AI looks like by design and how it can be guaranteed:

  • How do we prevent the risk of harm to people from poor-quality systems designed to support them? What will protect people from being affected by unsafe products in those settings in the first place?
  • Are the incentives in procurement right to enable an adequate risk assessment to be carried out by those who choose to use a product?
  • Rather than accepting risk and retroactively expecting remedial action across all manner of public services in future—ignoring a growing number of ticking time bombs—what should public policy makers be doing to avoid putting them in place?
  • How will we know which infrastructure unsafe products were built into, if they are permitted and later found to be a threat to life?
  • How is safety or accountability upheld for the lifecycle of the product if companies stop making it, or go out of business?
  • How does anyone working with systems applied to people assess their ongoing use and ensure that it promotes human flourishing?

In the digital environment we still have a margin to act: to ensure the safety of everyday parts of institutional digital infrastructure in mainstream state education and prevent harm to children, whether that is from parts of a product’s code, from use in the wrong way, or from entire products. AI is already used in the infrastructure of schools’ curriculum planning and curriculum content, and in steering children’s self-beliefs and behaviours, and the values of the adult society these pupils will become. Some products have been oversold as AI when they weren’t: overhyped, overused and under-explained, their design hidden away and kept from sight or independent scrutiny, some with real risks and harms. Right now, some companies and policy makers are making familiar errors and ‘safety-washing’ AI harms, ignoring criticism and pushing it off as someone else’s future problem.

In education, they could learn lessons from RAAC.


Background references

BBC Newsnight Timeline: reports from as far back as 1961 about aerated concrete concerns. 01/09/2023

BBC Radio 4 The World At One: Was RAAC mis-sold? 04/09/2023

Pre-1980 RAAC roof planks are now past their expected service life. CROSS. (2020) Failure of RAAC planks in schools.

A 2019 safety alert by SCOSS, “Failure of Reinforced Autoclaved Aerated Concrete (RAAC) Planks” following the sudden collapse of a school flat roof in 2018.

The Local Government Association (LGA) and the Department for Education (DfE) then contacted all school building owners and warned of ‘risk of sudden structural failure.’

In February 2022, the Institution of Structural Engineers published a report, Reinforced Autoclaved Aerated Concrete (RAAC) Panels Investigation and Assessment with follow up in April 2023, including a proposed approach to the classification of these risk factors and how these may impact on the proposed remediation and management of RAAC. (p.11)

image credit: DALL·E 2 OpenAI generated using the prompt “a model of Artificial Intelligence made from concrete slabs”.