The UK Department for Science, Innovation and Technology has been criticised online for publishing a list of links to commercial AI resources, packaged as practical AI skills for work.
There are two major problems with allowing AI “literacy” and policy to be led this way. The first is the framing of AI literacy as something prioritised for employment. It is notable that many of these providers are themselves employers, often the very same companies seeking to increase their profits through cost reduction from greater efficiency, or from having fewer humans in their workforce. That is a position the UK government has accepted as both caused by AI and inevitable.
The second is that the subject, and what society understands about its salience and meaning, is steered by the same hands of Big Tech, playing into the consolidation of power.
To present ‘teaching about AI’ as being about skills for the workforce (and a narrow range of workplaces at that) is misguided not only because it narrows learning to technical skills, but because it misdirects us all to look away from what “AI” is being used for, how, why, and by whom.
The critique therefore matters not just for the quality of the courses, but for the narrowing of AI literacy itself.
AI literacy is, in fact, vital democratic infrastructure.
Problem 1: AI Literacy as Workforce Optimisation
One recommendation of the AI Skills for Life and Work: Rapid Evidence Review, published on January 28th, was to involve professional organisations, such as the British Computer Society (BCS) and the Royal Academy of Engineering (RAE), in defining and policing the standards that training courses should meet. That recommendation seems not to have been taken here: these expert organisations are notably absent from the list of new and founding partners.
Though the announcements claimed these courses were checked against Skills England’s AI foundation skills for work benchmark, also published on January 28th, something seems to have gone badly wrong: basic due diligence did not even confirm that all the links worked. It should also have verified claims that free courses were actually at zero cost to users, before the public was steered towards those providers in media coverage.
If Skills England wants to restore both its own credibility and public trust in the providers, it could publish the criteria by which courses were chosen for the AI Skills Boost programme, its findings from assessing them against the new AI foundation skills for work benchmark, and how that benchmark was designed.
The second challenge is that the Westminster government is focussed only on skills for some work, while ‘the rest’ of life is addressed vaguely at best.
Problem 2: Narrative Capture by Big Tech distorts the big picture
Evidence from organisations that have scrutinised real-world AI in practice in the UK (one recent example is a synthesis by the Data Justice Lab of cancelled systems in the public sector) may not fit the narrow scope of AI skills for some types of work, but it offers valuable lessons for other areas, in particular how AI affects public sector services, which in turn affect so many of us on a daily basis.
On AI policy, the government has repeatedly disagreed with recommendations from peers, with experts, and with what the public is saying. In stark contrast with the approach of other European countries, the UK refuses to legislate on unacceptable risk levels.
The public are already paying the price for this. The prioritisation of a move-fast-and-break-things “route to impact” has so far come at a cost to citizens, and has broken everyday lives in welfare systems. Loss of agency and everyday friction are making life harder, less efficient, and more stressful in many ways: the opposite of what many felt was the promise of technology and the early Internet.
AI is already shaping the justice system, through police surveillance, legal research, and citizens’ advice bots, and is being made the cornerstone of its approach, while the courts’ basic IT tools are totally dysfunctional and those in charge won’t listen and won’t invest in the infrastructure to fix them.
[A notable aside: don’t let this put you off speaking out. There are a few days left to have your say in a consultation on the Wild West of facial recognition used for law enforcement.]
The youth backlash against AI slop has become incessant, and the average older person in the street is fed up at needing a multitude of apps and a smartphone to perform everyday tasks that used to be simpler to get done. (40% of drivers said that paying for parking with cash was their preferred choice, in a 2025 poll of 13,755 drivers for The AA.)
Thousands of workers are run ragged by the algorithmic slave drivers of gig-economy apps, in precarious jobs, and less protected than their European counterparts under weaker workers’ rights post-Brexit, as so tragically dramatised in Ken Loach’s film Sorry We Missed You.
The question is not: do we need literacy to live in a world of AI versus human? It is: how do we live everyday life well under powerful, undemocratic, often unaccountable corporate control that is being accelerated and intensified by tech tools we have no say over?
Any AI literacy approach that fails to address this, fails full stop.
Why we must prioritise AI Literacy as democratic infrastructure
The AI media narrative will, given time, be driven not by what government says about AI, but by how it makes us feel. Increasingly, that is: more vulnerable under uncertainty over income; fearful of losing our jobs; more surveilled; less free; indeed, stripped of power over our everyday lives and urged to “take back control”. We saw where that led in 2016. The government will pay the price for those feelings again if it does not act now to address them.
We now have choices about whose version of AI literacy we follow in the UK. I have the privilege of contributing to work at the Council of Europe, an approach I hope will be adopted by the UK later this year, and one we could lead on, instead of following ‘what tech says’.
It is an alternative, comprehensive framework that addresses all the dimensions of AI literacy, particularly the human dimension: not only training technologically skilled citizens to design or use AI, but more holistically preparing everyone for living with AI, with a focus on the values of democracy, human rights, and the rule of law.
Being AI literate means understanding how technology and companies affect fundamental economic, human, social and political rights and how we can protect ourselves, so that we can act in ways we choose.
Our parliamentary sovereignty and democratic processes depend on the power to control our own national narratives, including the outcomes of elections.
The media’s and the public’s ability to be informed, in an election and beyond, depends on the ability to identify and challenge misinformation, to use independent critical thought, and to question power; and that depends on an informed and critical citizenry empowered with our own social agency.
We cannot centre these things if the government’s direction of travel is steered by the US-led OpenAI, Accenture, Google, IBM and Microsoft. Narrow media messaging is conflicted: it says ‘use AI to further economic growth’ while at the same time excusing those same companies for making job cuts, as if they really can’t help it and it is in fact they who have no choice, thanks to AI. ‘Blame the AI, don’t blame us (but please forget we chose to build, buy and use it).’
Education and the role of AI literacy in the public interest
The public interest depends on the state offering education free from commercial influence and gain: education to understand the implications of AI objectively, not as products that may become obsolete from one day to the next, but through a human-centric, technology-neutral approach that looks for outcomes rather than product skills.
We also need a UK government that is committed to doing what it says it will do on AI, not one that simply tells others how to do it.
Whitehall departments are not adequately transparent about the ways they use AI and algorithms, and use of the (perhaps overly complex) AI register is low, despite it being “a requirement for all government departments”.
As AI systems become increasingly embedded in social, economic, and political systems, we must ensure everyone has the necessary level of awareness and critical understanding to navigate an AI-transformed world in everyday life: not only to use AI effectively, but to ensure that those responsible for AI development and deployment respect and enhance human dignity, rights, and democratic values.
We need to protect people who are excluded in life, or over-policed, without the freedom that being fully human requires: especially those who are marginalised, “the outliers” in society, often excluded by race, language, gender, age, health or disability from the biometric training data from which AI is built.
We need to protect our biometric data, our faces and voices, to be able to show up and speak up when it matters.
As the Pope summed up in his recent World Communications Day message, AI literacy must prioritise understanding “how algorithms shape our perception of reality, how AI biases work, what mechanisms determine the presence of certain content in our feeds, what the economic principles and models of the AI economy are and how they might change.”
The future of freedom in society in the UK, our humanity, our democracy, and our trust depend not on a handful of companies striving for a brave new world, nor on the AI infrastructure they are selling us, well-packaged in hype. Our collective future depends on one digital Minister having the courage to take a new direction.