
AI in the public sector today is the RAAC of the future

Reinforced Autoclaved Aerated Concrete (RAAC) used in the school environment is giving our Education Minister a headache. Having been the first to address the problem so publicly, she is coming under fire as responsible for failure: for Ministerial failure to act on it in thirteen years of Conservative government since 2010, and for the failure of the fabric of educational settings itself.

Decades after RAAC entered buildings' infrastructure, there is now a parallel digital infrastructure in educational settings. It's worth thinking about what caused the RAAC problem and how it was identified. Could we avoid the same mistakes in the digital environment, and in the design, procurement and use of edTech products, in particular Artificial Intelligence?

Where has it been used?

In the procurement of school infrastructure, RAAC was integrated into parts of the everyday school estate, especially in large flat roofs built around the 1960s-80s. It is now hard to detect, and hard to remedy or remove without significant effort. There was short-term thinking, short-term spending, and no strategy for its full life cycle or end-of-life expectations. It is going to be expensive, slow and difficult to find and fix.

Where is the risk and what was the risk assessment?

The two best-known recent cases, the 2016 Edinburgh school masonry collapse and the 2018 roof incident, happened in the early morning when no pupils were present, but, according to the 2019 safety alert by SCOSS, "in either case, the consequences could have been more severe, possibly resulting in injuries or fatalities. There is therefore a risk, although its extent is uncertain."

That risk has been known for a long time, as today's education minister Gillian Keegan rightly explained in that interview before airing her frustration. Perhaps it was not seen as a pressing priority because it was not seen as a new problem. In fact, locally it often isn't seen much at all, since it is either hidden behind front-end facades or built into hard-to-see places, like roofs. But already 'in the 1990s structural deficiencies became apparent', as discussed in papers by the Building Research Establishment (BRE) in the 1990s and again in 2002.

What has changed, according to expert reports, is that problems no longer show themselves in advance and give time for mitigation, as they did in what had previously been one-off catastrophic incidents. What once affected a few could now affect the many, at scale and without warning. The most recent failures show there is no longer a reliable margin in which to act before parts of the mainstream state education infrastructure pose children a threat to life.

Where is the similarity in the digital environment?

AI is the RAAC of another Minister's future. It is often sold today in similar terms: cost-saving, quick and easy to put in place, and needing fewer people to install than the available alternatives.

AI is being introduced widely and at speed into children's private and family life in England through its procurement and application in the infrastructure of public services: in education, children's services, policing and welfare. Some companies claim to be able to identify mood or autism, or to profile and influence mental health. Children rarely have any choice or agency to control its often untested effects or outcomes on them, in settings where they cannot consent.

If you’re working in AI “safety” right now, consider this a parable.

  • There are plenty of people pointing out risk in the current adoption of AI into UK public sector infrastructure; in schools, in health, in welfare, and in prisons and the justice system;
  • There are plenty of cases where harm is very real, but is first seen by those in power as affecting only the marginalised and minorities;
  • There are no consistent published standards or obligations on transparency or accountability to which AI sellers must hold their products before procurement and before they affect people;
  • And there are no easily accessible records of what type of AI is being procured and built into which public infrastructure, making tracing and remedy even harder in the event of a product recall.

The objectives of any company, State, service users, the public and investors may not be aligned. Do investors have a duty to ensure that artificial intelligence is developed in an ethical and responsible way? Prioritising short-term economic gain and convenience ahead of human impact or the long-term public interest has resulted in parts of schools' infrastructure collapsing. And some AI is already going the same way.

The Cardiff Data Justice Lab together with Carnegie Trust have published numerous examples of cancelled systems across public services. “Pressure on public finances means that governments are trying to do more with less. Increasingly, policymakers are turning to technology to cut costs. But what if this technology doesn’t work as it should?” they asked.

In places where similar technology has been in place longer, we already see the impact and harm to people. In 2022, the Chicago Sun Times published an article noting that, “Illinois wisely stopped using algorithms in child welfare cases, but at least 26 states and Washington, D.C., have considered using them, and at least 11 have deployed them. A recent investigation found they are often unreliable and perpetuate racial disparities.” And the author wrote, “Government agencies that oversee child welfare should be prohibited from using algorithms.”

Where are the parallels in the problem and its fixes?

It's also worth considering how AI can be "removed" or stopped from working in a system. Often not through removal at all, but by throttling it or shutting off that functionality. The problematic parts of the infrastructure remain in situ, but cannot easily be taken out once designed in. Whole products may also be difficult to remove.
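As a minimal, purely hypothetical sketch (the flag and function names below are mine, not from any real product), this is why "switching off" is not the same as removal: the designed-in AI code path, its data flows and its dependencies all stay in place behind a flag.

```python
# Illustrative only: a feature flag "removes" the AI functionality from use,
# but the model-backed path remains designed into the system.

USE_AI_RISK_SCORING = False  # the kill switch: functionality off, code still in place


def ai_risk_score(case_data: dict) -> str:
    # Placeholder for a model call; in a real system the model, its training data
    # and its integrations would all still exist even while the flag is off.
    return "ai-scored"


def manual_review(case_data: dict) -> str:
    return "manual-review"


def assess_case(case_data: dict) -> str:
    if USE_AI_RISK_SCORING:
        return ai_risk_score(case_data)  # the designed-in AI path stays in situ
    return manual_review(case_data)      # fallback while the feature is throttled off
```

Flipping the flag back on is trivial; genuinely removing the capability, its data trails and its contractual dependencies is not.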

The 2022 Institution of Structural Engineers' report summarises the challenge now: how to fix the current RAAC problems. Think about what the equivalent would mean for fixing a failure of digital infrastructure:

  • Positive remedial supports and Emergency propping, to mitigate against known deficiencies or unknown/unproven conditions
  • Passive, fail safe supports, to mitigate catastrophic failure of the panels if a panel was to fail
  • Removal of individual panels and replacement with an alternative solution
  • Entire roof replacement to remove the ongoing liabilities
  • Periodic monitoring of the panels for their remaining service life

RAAC has not become a risk to life; it was one from its design. While still recognised as a 'good construction material for many purposes', it has been widely used in unsafe ways, in the wrong places.

RAAC planks made fifty years ago did not have the same level of quality control as we would demand today, and yet they were procured and put in place for decades after the material was known to be unsafe for some uses, with risk assessments saying so.

RAAC was given an exemption from the commonly used codes of practice for reinforced concrete (RC) design.

RAAC is scattered among non-RAAC infrastructure, which makes finding and fixing it, or removing it, very much harder than if it had been recorded in a register that made it easily traceable.

RAAC developers and sellers may no longer exist or have gone out of business without any accountability.

Current AI discourse should be asking not only for retrospective accountability, or even life-cycle accountability, but also what accountable AI looks like by design, and how you guarantee it.

  • How do we prevent the risk of harm to people from poor-quality systems designed to support them? What will protect people from being affected by unsafe products in those settings in the first place?
  • Are the incentives in procurement right to enable adequate risk assessment to be carried out by those who choose to use a product?
  • Rather than accepting risk and retroactively expecting remedial action across all manner of public services in future, ignoring a growing number of ticking time bombs, what should public policy makers be doing to avoid putting them in place at all?
  • How will we know where unsafe products have been built in, if they are permitted and later found to be a threat to life?
  • How is safety or accountability upheld over the lifecycle of a product if companies stop making it, or go out of business?
  • How does anyone working with systems applied to people assess their ongoing use and ensure it promotes human flourishing?

In the digital environment we still have margin to act, to ensure the safety of everyday parts of institutional digital infrastructure in mainstream state education and prevent harm to children, whether that is from parts of a product's code, from use in the wrong way, or from entire products. AI is already used in the infrastructure of schools' curriculum planning and curriculum content, and in steering children's self-beliefs and behaviours, and the values of the adult society these pupils will become. Some products have been oversold as AI when they weren't, overhyped, overused and under-explained, their design hidden away and kept from sight or independent scrutiny, some with real risks and harms. Right now, some companies and policy makers are making familiar errors and 'safety-washing' AI harms, ignoring criticism and pushing it off as someone else's future problem.

In education, they could learn lessons from RAAC.


Background references

BBC Newsnight Timeline: reports from as far back as 1961 about aerated concrete concerns. 01/09/2023

BBC Radio 4 The World At One: Was RAAC mis-sold? 04/09/2023

CROSS (2020), Failure of RAAC planks in schools: pre-1980 RAAC roof planks are now past their expected service life.

A 2019 safety alert by SCOSS, “Failure of Reinforced Autoclaved Aerated Concrete (RAAC) Planks” following the sudden collapse of a school flat roof in 2018.

The Local Government Association (LGA) and the Department for Education (DfE) then contacted all school building owners and warned of ‘risk of sudden structural failure.’

In February 2022, the Institution of Structural Engineers published a report, Reinforced Autoclaved Aerated Concrete (RAAC) Panels Investigation and Assessment with follow up in April 2023, including a proposed approach to the classification of these risk factors and how these may impact on the proposed remediation and management of RAAC. (p.11)

image credit: DALL·E 2 OpenAI generated using the prompt “a model of Artificial Intelligence made from concrete slabs”.

 

Views on a National AI strategy

Today was the APPG AI Evidence Meeting – The National AI Strategy: How should it look? Here are some of my personal views and takeaways.

Do the regulators have the skills and competency to hold organisations to account for what they are doing? asked Roger Taylor, the former Chair of Ofqual, the exams regulator, as he began the panel discussion chaired by Lord Clement-Jones.

A good question was followed by another.

What are we trying to do with AI? asked Andrew Strait, Associate Director of Research Partnerships at the Ada Lovelace Institute and formerly of DeepMind and Google. The goal of a strategy should not be to have more AI for the sake of having more AI, he said, but an articulation of values and goals. (I'd suggest the government may in fact be in favour of exactly that: more AI for its own sake, wherever its application is seen as a growth market.) Interestingly, he suggested that the Scottish strategy has a more values-based model, built around values such as fairness. [I had, it seems, wrongly assumed that the *national* AI strategy to come would include all of the UK.]

The arguments on fairness are well worn in AI discussion and getting old. Yet they still too often fail to ask whether these tools are accurate or even work at all. Look at the education sector and one company's product, ClassCharts, which claimed AI as its USP for years; the ICO found in 2020 that the company didn't actually use any AI at all. If company claims are not honest, or not accurate, then they're not fair to anyone, never mind across everyone.

Fairness is still too often thought of in terms of explainability of a computer algorithm, not the entire process it operates in. As I wrote back in 2019, “yes we need fairness accountability and transparency. But we need those human qualities to reach across thinking beyond computer code. We need to restore humanity to automated systems and it has to be re-instated across whole processes.”

Strait went on to say that safe and effective AI would be something people can trust. And he asked the important question: who gets to define what a harm is? He rightly identified that the harm identified by the developer of a tool may be very different from that experienced by the people affected by it. (No one on the panel attempted to define or limit what AI is, in these discussions.) He suggested that the carbon footprint of AI may counteract the benefit of applying AI in pursuit of climate-change goals. "The world we want to create with AI" was a very interesting position, and I'd have liked to hear him address what he meant by that, who "we" is, and the assumptions within it.

Lord Clement-Jones asked him about some of the work that Ada Lovelace had done on harms such as facial recognition, and also whether some sector technologies are so high risk that they must be regulated. Strait suggested that we lack an adequate understanding of what harms are. I'd suggest academia and civil society have done plenty of work on identifying those; they've just too often been ignored until after the harm is done and there are legal challenges. Strait also suggested he thought the Online Harms agenda was 'a fantastic example' of both horizontal and vertical regulation. [Hmm, let's see. Many people would contest that, and we'll see what the Queen's Speech brings.]

Maria Axente then went on to talk about children and AI. Her focus was on big platforms, but she also mentioned a range of other application areas. She spoke of the data governance work going on at UNICEF. She included the need to drive awareness of the risks of AI for children, and digital literacy; the potential for limitations on child development, the exacerbation of the digital divide, and risks in public spaces, but also hoped-for opportunities. She suggested that the AI strategy may therefore be the place to include children.

This of course was something I would want to discuss at more length, but in summary, the last decade of Westminster policy affecting children, even the Children's Commissioner's most recent Big Ask survey, bypasses the question of children's *rights* completely. If the national AI strategy by contrast were to address rights [the foundation upon which data laws are built], and create the mechanisms in public sector interactions with children that would enable them to be told if and how their data is being used (in AI systems or otherwise) and to exercise the choices that public engagement time and time again says people want, that would be a *huge* and positive step forward for effective data practice across the public sector and for the use of AI. Otherwise I see a risk that a strategy on AI and children will ignore children as rights holders across the full range of rights in the digital environment, and focus only on the role of AI in child protection, a key DCMS export aim, while ignoring the invasive nature of safety tech tools, and their harms.

Next, Dr Jim Weatherall from AstraZeneca tied together leveraging "the UK unique strengths of the NHS" and "data collected there", wanting a close knitting together of the national AI strategy and the national data strategy, so that healthcare, life sciences and the biomedical sector can become "an international renowned asset." He'd like to see students doing data science modules in their studies, and international access to talent to work for AZ.

Lord Clement-Jones then asked him how to engender public trust in data use. Weatherall said a number of false starts in the past are hindering progress, but that he saw the way forward as data trusts and citizen juries.

His answer ignores the most obvious solution: respect existing law and human rights, using data only in ways that people want and have given their permission for. Then show them that you did that, and nothing more. In short, what medConfidential first proposed in 2014: the creation of data usage reports.
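medConfidential's proposal is referenced here only at a high level, so purely as an illustrative sketch (the field names and structure below are my assumptions, not any published specification), a per-person data usage report could be as simple as a dated list of who used a record, for what purpose, and on what lawful basis:

```python
from dataclasses import dataclass, field
from datetime import date
from typing import List


@dataclass
class DataUse:
    """One occasion on which a person's record was used (illustrative fields only)."""
    when: date
    organisation: str        # who accessed or received the data
    purpose: str             # why, in plain language
    lawful_basis: str        # e.g. "public task", "consent"
    opt_out_respected: bool  # whether any stated objection was honoured


@dataclass
class DataUsageReport:
    """A per-person, plain-language report of every use of their data in a period."""
    person_reference: str    # a pseudonymous reference, not the raw identifier
    period_start: date
    period_end: date
    uses: List[DataUse] = field(default_factory=list)

    def summary(self) -> str:
        lines = [f"Your data was used {len(self.uses)} time(s) "
                 f"between {self.period_start} and {self.period_end}:"]
        for use in self.uses:
            lines.append(f"- {use.when}: {use.organisation}, {use.purpose} "
                         f"(basis: {use.lawful_basis})")
        return "\n".join(lines)
```

The point is not the format; it is that a person can see, and check, that nothing more was done with their data than they were told.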

The infrastructure for managing personal data controls in the public sector, and among its private partners, must be the basic building block for any national AI strategy. Views from public engagement work, polls and outreach have not changed significantly since those done in 2013-14; they ask for the same things over and over again: respect for 'red lines', and control and choice. Won't government please make it happen?

If the government fails to put those foundations in place, whatever strategy it builds will fall in the same ways strategies have done to date, as care.data did, by assuming it was acceptable to use data in the way the government wanted, without a social licence, in the name of "innovation". Those aims were championed by companies such as Dr Foster, which profited from reusing personal data from the public sector in a "hole and corner deal", as described by the chairman of the House of Commons committee of public accounts in 2006. Such deals put industry and "innovation" ahead of what the public want in terms of 'red lines' for acceptable re-uses of their own personal data, and of data re-used in the public interest versus for commercial profit. And "The Department of Health failed in its duty to be open to parliament and the taxpayer." That openness and accountability are still missing nearly ten years on, in the scope creep of national datasets and commercial reuse, and in expanding data policies and research programmes.

I disagree with the suggestion made that Data Trusts will somehow be more empowering to everyone than the mechanisms we have today for data management. I believe Data Trusts will further stratify those who are included and those excluded: they will benefit those who have the capacity to participate, and disadvantage those who cannot choose. They are also a fig leaf of acceptability that does not solve the core challenge. Citizen juries cannot do more than give a straw poll. Every person whose data is used is entitled to rights in law, and the views of a jury or Trust cannot speak for everyone or override those rights protected in law.

Tabitha Goldstaub spoke next and outlined some of what AI Council Roadmap had published. She suggested looking at removing barriers to best support the AI start-up community.

As I wrote when the roadmap report was published, there are basics missing in government's own practice that could be solved. It had an ambition to "Lead the development of data governance options and its uses. The UK should lead in developing appropriate standards to frame the future governance of data," but the Roadmap largely ignored the governance infrastructures that already exist. One can only read into that a desire to change and redesign what those standards are.

I believe that there should be no need to change the governance of data, but instead to make today's rights exercisable and to deliver enforcement that makes existing governance actionable. Any genuine "barriers" to data use in data protection law are designed as protections for people: the people whom the public sector, its staff and these arm's length bodies are supposed to serve.

Blaming AI and algorithms, blaming lack of clarity in the law, blaming "barriers" is often avoidance of one thing: human accountability. Accountability for ignorance of the law or its inconsistent application. Accountability for bad policy, bad data and bad applications of tools is a human responsibility. Systems you choose to apply to human lives affect people, sometimes forever and in the most harmful ways, so those human decisions must be accountable.

I believe that some simple changes in practice around public administrative data could bring huge steps forward:

  1. An audit of existing public admin data held by national and local government, and consistent published registers of the databases and algorithms / AI / ML currently in use.
  2. Identify the lawful basis for each set of data processes, their earliest record dates, and their content.
  3. Publish the resulting ROPA and its storage limitations.
  4. Assign accountable owners to databases, tools and the registers.
  5. Sort out how you will communicate with people whose data you process unlawfully, so as to meet the law, or stop processing it.
  6. And above all, publish a timeline for data quality processes, and show that you understand how degradation of data accuracy and quality affects rights and responsibilities in law, which change over time as a result.
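As a purely illustrative sketch of what points 1 to 4 might produce (the field names below are my own assumptions, not any existing government schema), a single published register entry for a database or algorithmic tool could carry at minimum:

```python
from dataclasses import dataclass
from datetime import date
from typing import Optional


@dataclass
class RegisterEntry:
    """One row in a published register of databases and algorithms / AI / ML in use."""
    system_name: str                # the database or tool (point 1)
    controller: str                 # the national or local government body holding it
    accountable_owner: str          # named role accountable for this entry (point 4)
    lawful_basis: str               # lawful basis for the processing (point 2)
    earliest_record_date: date      # how far back the records go (point 2)
    storage_limitation: str         # retention period published with the ROPA (point 3)
    uses_automated_decisions: bool  # whether algorithmic / AI / ML decisions are involved
    last_quality_review: Optional[date] = None  # supports the timeline in point 6


# The register itself is then nothing more exotic than a published, searchable
# list of such entries, kept up to date by its accountable owners.
```

Nothing here is technically hard; the barriers are organisational will and accountability, not tooling.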

Goldstaub went on to say, on ethics and inclusion, that if it's not diverse, it's not ethical. Perhaps the next panel and similar events could take a lesson from that, as such APPG panel events are not as diverse as they could or should be. Some of the biggest harms in the use of AI are, after all, to the communities least represented, and panels like this tend to ignore lived reality.

The Rt Rev Croft then wrapped up the introductory talks on that more human note, and by exploding some myths. He importantly talked about the consequences he expects from the increasing use of AI, its deployment in 'the future of work' for example, and its effects on our humanity. He proposed five topics for inclusion in the strategy and suggested it is essential to engage a wide cross-section of society, and most importantly to ask: what is this doing to us as people?

There were then some of the usual audience questions asked on AI, transparency, garbage-in garbage-out, challenges of high risk assessment, and agreements or opposition to the EU AI regulation.

What frustrates me most in these discussions is that the technology is treated as a given, and the bias that gives to the discussion is itself ignored. A holistic national AI strategy should be asking whether and why AI at all. What are the consequences of this focus on AI, and what policy-making oxygen and capacity does it take away from other areas of what government could or should be doing? The questioner who asks how adaptive learning could use AI for better learning in education fails to ask what good learning looks like, and whether and how adaptive tools, analogue or digital, fit into that at all.

I would have liked to ask panelists whether they agree that proposals for public engagement and digital literacy distract from the lack of human accountability for bad policy decisions that use machine-made support. Taking examples from 2020 alone, three applications of algorithms and data in the public sector were challenged by civil society because of their harms: the Home Office dropping its racist visa algorithm, the DWP court case finding Universal Credit decisions 'irrational and unlawful', and the "mutant algorithm" of the summer 2020 exams. Digital literacy does nothing to help people in those situations. What AI has done is to increase the speed and scale of the harms caused by harmful policy, such as the 'Hostile Environment', which is harmful by design.

Any Roadmap, AI Council recommendations, or national strategy that is serious about what good looks like must answer how those harms would be prevented in the public sector *before* systems are applied. It's not about the tech, AI or not, but about misuse of power. If the strategy, a Roadmap or an ethics code fails to state how it would prevent such harms, then it isn't serious about ethics in AI, but is ethics-washing its aims under the guise of saying the right thing.

One unspoken problem right now is that the strategy is focused solely on the delivery of a pre-determined tool (AI). Who cares what the tool is? Public sector data comes from the relationship between people and the provision of public services by government at various levels, and the government's AI strategy seems to have lost sight of that.

What good would look like in five years is an end to siloed AI discussion that treats it as a desirable silver bullet promising mythical numbers of 'economic growth'; AI would instead be treated like any other tech, and its role in end-to-end processes or service delivery discussed proportionately. Panelists would stop suggesting that the GDPR is hard to understand or that people cannot apply it. Almost all of the same principles in UK data laws have applied for over twenty years. And regardless of the GDPR, Convention 108 applies to the UK post-Brexit unchanged, including the associated Council of Europe Guidelines on AI, data protection, privacy and profiling.

Data laws. AI regulation. Profiling. Codes of Practice on children, online safety, or biometrics and emotion or gait recognition. There *are* gaps in data protection law when it comes to biometric data not used for unique identification purposes. But much of this is already rolled into other law and regulation for the purposes of upholding human rights and the rule of law. The challenge in the UK is often not the absence of law, but the lack of enforcement. There are concerns in civil society that the DCMS is seeking to weaken core ICO duties even further. Recent government, council and think tank roadmaps talk of the UK leading on new data governance, but in reality simply want to see established laws rewritten to be less favourable to rights. To be less favourable towards people.

Data laws are *human* rights-based laws. We will never get a workable UK national data strategy or national AI strategy if government continues to ignore the very fabric of what they are to be built on. Policy failures will be repeated over and over until a strategy supports people to exercise their rights and have them respected.

Imagine if the next APPG on AI asked what human rights-respecting practice and policy would look like, and what infrastructure the government would need to fund or build to make it happen. In public-private sector areas (like edTech). Or in the justice system, health, welfare, children's social care. What could that Roadmap look like, and how could we make it happen, over what timeframe? Strategies that could win public trust *and* get the sectoral wins the government and industry are looking for. Then we might actually move forwards on a functional strategy for delivering public services, and for where both AI and data fit into that.

The power behind today’s AI in public services


Thinking about whether education in England is preparing us for the jobs of the future, means also thinking about how technology will influence it.

Time and again, thinking and discussion about these topics is siloed. At the Turing Institute, the Royal Society, the ADRN and EPSRC, in government departments, and within education practitioner and public circles, we are all having similar discussions about data and ethics, but with little ownership and no goals for future outcomes. If government doesn't get it, or have time for it, or policy lacks ethics by design, is it in the public interest for private companies, Google et al., to offer a fait accompli?

There is lots of talk about Machine Learning (ML), Artificial Intelligence (AI) and ethics. But what is being done to ensure that real values — respect for rights, human dignity, and autonomy — are built into practice in public service delivery?

In most recent data policy it is entirely absent. The Digital Economy Act s33 risks enabling, through the removal of inter- and intra-departmental data protections, an unprecedented expansion of public data transfers, with "untrammelled powers". Powers without the codes of practice promised over a year ago. That has fallout for the trustworthiness of the legislative process, and for data practices across public services.

Predictive analytics is growing, but poorly understood by the public and the public sector.

There is already dependence on computers in aspects of public sector work, and their interactions with people in sensitive situations demand better knowledge of how systems operate and can be wrong; debt recovery and social care, to take two known examples.

Risk-averse staff appear to choose not to question the outcome of 'algorithmic decision making', or do not have the ability to do so. There is reportedly no analysis training for practitioners to understand the basis or bias of its conclusions. The potential is that, instead of making us more informed, decision-making by machine makes us humans less clever.

What does it do to professionals, if they feel therefore less empowered? When is that a good thing if it overrides discriminatory human decisions? How can we tell the difference and balance these risks if we don’t understand or feel able to challenge them?

In education, what is it doing to children whose attainment is profiled, predicted, and acted on to target more or less focus from school staff who have no ML training, and without the informed consent of pupils or parents?

If authorities use data in ways the public do not expect, such as to identify homes of multiple occupancy without informed consent, they will fail to deliver future uses for good. The 'public interest', 'user need' and ethics can come into conflict, according to your point of view. The public, data protection law and ethics all object to harms from the use of data. This type of application has the potential to be mind-blowingly invasive and reveal all sorts of other findings.

Widely informed thinking must be made into meaningful public policy for the greatest public good

Our politicians are caught up in the General Election and buried in Brexit.

Meanwhile, the commercial companies moving first on AI to capitalise on existing commercial advantage could potentially strip public assets, use up our personal data and public trust, and leave the public with little public good. We are already used by global data players, and by machine-learning companies, without our knowledge or consent. That knowledge can be used to profit business models that pay little tax into the public purse.

There are valid macroeconomic arguments about whether private spend and investment are preferable to a state's ability to do the same. But these companies make more than enough to do it all. Does not paying a just amount of tax signal a failure of commitment to the wider community, and is it a red flag for a company's commitment to the public good?

What that public good should look like depends on who is invited into the room, not to tick boxes, but to think and to build.

The Royal Society's report on AI and machine learning, published on April 25, showed a working group of 14 participants, including two Google DeepMind representatives, one from Amazon, private equity investors, and academics from cognitive science and genetics backgrounds.

"Our #machinelearning working group chair, professor Peter Donnelly FRS, on today's major #RSMachinelearning report https://t.co/PBYjzlESmB" — The Royal Society (@royalsociety), April 25, 2017

If we are going to form objective policies, the inputs that form their basis must be informed, but must also be well balanced, and be seen to be balanced. Not as an add-on, but in the same room.

As Natasha Lomas in TechCrunch noted, “Public opinion is understandably a big preoccupation for the report authors — unsurprisingly so, given that a technology that potentially erodes people’s privacy and impacts their jobs risks being drastically unpopular.”

“The report also calls on researchers to consider the wider impact of their work and to receive training in recognising the ethical implications.”

What are those ethical implications? Who decides which matter most? How do we eliminate recognised discriminatory bias? What should data be used for and AI be working on at all? Who is it going to benefit? What questions are we not asking? Why are young people left out of this debate?

Who decides what the public should or should not know?

AI and ML depend on data. Data is often talked about as a panacea for the problems of working better together. But data alone does not make people better informed, in the same way that people fail if they don't feel it is their job to pick up the fax. A fundamental building block of our future public and private prosperity is understanding data and how we, and AI, interact. What is the data telling us, how do we interpret it, and how do we know it is accurate?

How and where will we start to educate young people about data and ML, if not about their own data and its use by government and commercial companies?

The whole of Chapter 5 in the report is a very good starting point for policy makers who have not yet engaged in the area. Privacy, while summed up too briefly in the conclusions, is scattered throughout.

Blind spots remain, however.

  • Over-willingness to accommodate existing big private players, as their expertise leads design and development, and a desire to 're-write regulation'.
  • Slowness to react to needed regulation in the public sector (caught up in Brexit) while commercial drivers and technology change forge ahead.
  • 'How do we develop technology that benefits everyone' must not only think of the UK, but of the global South, especially given the bias in how AI is being taught, and the broad socio-economic barriers in its application.
  • Predictive analytics and professional application = unwillingness to question the computer's result. In children's social care this is already having a damaging effect in the family courts (s31).
  • Data and technology knowledge and ethics training must be embedded across the public sector, not only for postgraduate students in machine learning.
  • Harms being done to young people today, and the potential for intense future exploitation, are being ignored by policy makers and some academics. Safeguarding is often only about blocking in case of liability to the provider, stopping children seeing content, or preventing physical exploitation. It ignores the exploitation, by online platform firms, app providers and games creators, of a child's synthesised online life and use. Laws and government departments' own practices can be deeply flawed.
  • Young people are left out of discussions which, after all, are about their future. [They might have some of the best ideas, which we miss at our peril.]

There is no time to waste

Children and young people have the most to lose while their education, skills, jobs market, economy, culture, care, and society go through a series of gradual but seismic shifts in purpose, culture, and acceptance before finding new norms post-Brexit. They will also gain the most if the foundations are right. One of these must be getting age verification right in the GDPR, not allowing it to enable a massive data grab of child-parent privacy.

Although the RS Report considers young people in the context of a future workforce who need skills training, they are otherwise left out of this report.

“The next curriculum reform needs to consider the educational needs of young people through the lens of the implications of machine learning and associated technologies for the future of work.”

Yes it does, but it must give young people, and the implications of ML for their future, broader consideration than the classroom or workplace.

Facebook has targeted vulnerable young people, it is alleged, to facilitate predatory advertising practices. Some argue that emotive computing or MOOCs belong in the classroom. Who decides?

We are not yet talking about the effects of teaching technology to learn, and its effect on public services and interactions with the public. These are questions that Sam Smith asked in Shadow of the smart machine: Will machine learning end?

At the end of this Information Age we are at a point when machine learning, AI and biotechnology are potentially life-enhancing or could have catastrophic effects, if indeed AI will cause people "more pain than happiness", as described by Alibaba's founder Jack Ma.

The conflict between commercial profit and public good, what commercial companies say they will do and actually do, and fears and assurances over predicted outcomes is personified in the debate between Demis Hassabis, co-founder of DeepMind Technologies, (a London-based machine learning AI startup), and Elon Musk, discussing the perils of artificial intelligence.

Vanity Fair reported that "Elon Musk began warning about the possibility of A.I. running amok three years ago. It probably hadn't eased his mind when one of Hassabis's partners in DeepMind, Shane Legg, stated flatly, 'I think human extinction will probably occur, and technology will likely play a part in this.'"

Musk was of the opinion that A.I. was probably humanity’s “biggest existential threat.”

We are not yet joining up multi-disciplinary and cross-sector discussions of threats and opportunities

Jobs, shifts in the skill sets education needs, how we think, interact, value each other, and accept or reject ownership and power models; and later, threats from the technology itself. Nor are we talking, conversely, about the opportunities these seismic shifts offer in real terms, or how and why to accept, reject or regulate them.

Where private companies are taking over personal data given in trust to public services, it is reckless for the future of public interest research to assume there is no public objection. How can we object, if not asked? How can children make an informed choice? How will public interest be assured to be put ahead of private profit? If it is intended on balance to be all about altruism from these global giants, then they must be open and accountable.

Private companies are shaping how and where we find machine learning and AI gathering data about our behaviours in our homes and public spaces.

SPACE10, an innovation hub for IKEA, is currently running a survey on how the public perceives AI and "wants their AI to look, be, and act", with an eye on building AI into its products, for us to bring flat-pack into our houses.

As the surveillance technology built into the Internet-connected Things in our homes becomes more integral to daily life, authorities are now using it to gather evidence in investigations: from mobile phones, laptops, social media, smart speakers, and games. The IoT so far seems less about the benefits of collaboration, and all about the behavioural data it collects and uses to target us, to sell us more things. Our behaviours tell much more than how we act; they show how we think inside the private space of our minds.

Do you want Google to know how you think, and to have control over that? The companies of the world that have access to massive amounts of data are now using that data to teach AI how to 'think'. What is AI learning? And how much should the State see or know about how you think, or try to predict it?

Who cares, wins?

It is not overstating it to say that society, and the future public good of public services, depend on getting these co-dependencies right. As I wrote at the time of care.data, the economic value of data, personal rights and the public interest are not opposed to one another, but have synergies and co-dependency. One player getting it wrong can create harm for all. Government must start to care about this, beyond the side effect of saving political embarrassment.

Without joining up all aspects, we cannot limit harms and make the most of benefits. There is nuance, and there are unknowns. There is opaque decision making and secrecy, packaged in the wording of commercial sensitivity, and behind it people who can be brilliant but who, at the end of the day, are also human, with all our strengths and weaknesses.

And we can get this right, if data practices get better, with joined up efforts.

Our future society, as our present, is based on webs of trust, on our social networks on- and offline, that enable business, our education, our cultural, and our interactions. Children must trust they will not be used by systems. We must build trustworthy systems that enable future digital integrity.

The immediate harm that comes from blind trust in AI companies is not their AI, but the hidden powers that commercial companies have to nudge public and policy maker behaviours and acceptance, towards private gain. Their ability and opportunity to influence regulation and future direction outweighs most others. But lack of transparency about their profit motives is concerning. Carefully staged public engagement is not real engagement but a fig leaf to show ‘the public say yes’.

The unwillingness of Google DeepMind, when asked at their public engagement event, to discuss their past use of NHS patient data, their profit model plan, or the terms of their NHS deals with London hospitals, should be a warning that these questions need answers and accountability urgently.

As TechCrunch suggested after the event, this is all “pretty standard playbook for tech firms seeking to workaround business barriers created by regulation.” Calls for more data, might mean an ever greater power shift.

Companies that have already extracted and benefited from personal data in the public sector, have already made private profit. They and their machines have learned for their future business product development.

A transparent accountable future for all players, private and public, using public data is a necessary requirement for both the public good and private profit. It is not acceptable for departments to hide their practices, just as it is unacceptable if firms refuse algorithmic transparency.

"Rebooting antitrust for the information age will not be easy. It will entail new risks: more data sharing, for instance, could threaten privacy. But if governments don't want a data economy dominated by a few giants, they will need to act soon." [The Economist, May 6]

If the State creates a single source of data truth, or a private tech giant thinks it can side-step regulation, and they get it wrong, their practices will screw up public trust. That harms public interest research, and with it our future public good.

But will they care?

If we care, then across public and private sectors, we must cherish shared values and better collaboration. Embed ethical human values into development, design and policy. Ensure transparency of where, how, who and why my personal data has gone.

We must ensure that as the future becomes “smarter”, we educate ourselves and our children to stay intelligent about how we use data and AI.

We must start today, knowing how we are used by both machines, and man.


First published on Medium for a change.