
Mutant algorithms, roadmaps and reports: getting real with public sector data

The CDEI has published ‘new analysis on the use of data in local government during the COVID-19 crisis’ (the Report), and in its discussion of data it shares some similarities with the Office for AI roadmap (the Roadmap) published in January on machine learning.

A notable feature is that the CDEI work includes a public poll. Nearly a quarter of 2,000 adults polled said that the most important thing for them, in order to trust their council’s use of data, would be “a guarantee that information is anonymised before being shared, so your data can’t be linked back to you.”

Both the Report and the Roadmap shy away from that problematic gap in their conclusions: the gap between public expectations and the reality of data used at scale in public service provision, especially in identifying vulnerability and in risk prediction.

Both seek to provide vision and aims around the future development of data governance in the UK.

The fact is that everyone must take off their rose-tinted spectacles on data governance to accept this gap, and get basics fixed in existing practice to address it. In fact, as academic Michael Veale wrote, often the public sector is looking for the wrong solution entirely: “The focus should be on taking off the ‘tech goggles’ to identify problems, challenges and needs, and to not be afraid to discover that other policy options are superior to a technology investment.”

But as things stand, public sector procurement and use of big data at scale, whether in AI and machine learning or other systems, requires significant changes in approach.

The CDEI poll asked, “If an organisation is using an algorithmic tool to make decisions, what do you think are the most important safeguards that they should put in place?” 68% rated “that humans have a key role in overseeing the decision-making process, for example reviewing automated decisions and making the final decision” in their top three safeguards.

So what is this post about? Why our arms length bodies and various organisations’ work on data strategy are hindering the attainment of the goals they claim to promote, and what needs fixed to get back on track. Accountability.

Framing the future governance of data

On Data Infrastructure and Public Trust, the AI Council Roadmap stated an ambition to, “Lead the development of data governance options and its uses. The UK should lead in developing appropriate standards to frame the future governance of data.”

To suggest not only that we should be a world leader, but to imagine that the capability to do so exists, suggests a disconnect with current reality; none of that reality was mentioned in the Roadmap, though it is drawn out a little more in the CDEI Report from local authority workshops.

When it comes to data policy and Artificial Intelligence (AI) or Machine Learning (ML), which are based on data processing and therefore dependent on its infrastructure, suggesting we should lead on data governance as if it were separate from the existing standards and frameworks set out in law would be disastrous for the UK and the businesses in it. Exports need to meet standards in the receiving countries. You cannot just ‘choose your own adventure’ here.

The CDEI Report says both that participants in their workshops found a lack of legal clarity “in the collection and use of data” and, “Participants finished the Forum by discussing ways of overcoming the barriers to effective and ethical data use.”

Lack of understanding of the law is a lack of competence and capability that I have seen and heard time and time and time again over the last five years, from participants at workshops, events and webinars, some of whom are in charge of deciding what tools are procured and how public policy is implemented using administrative data. The law on data processing is accessible and generally straightforward.

If your work involves “overcoming barriers”, then either there is not the competence to understand what is lawful and to proceed with confidence using data protections appropriately, or you are trying to avoid doing so. Neither is a good place for public authorities to be in, and both bode badly for the safe, fair, transparent and lawful use of our personal data by them.

But a lack of data infrastructure also widens the skills gap and increases the need to know what is lawful, because if your data is held in an “excessive use of excel spreadsheets”, then ‘sharing’ means decisions made through distributing copies of the data itself. Data access can be more easily controlled through role-based access models, which make it clear when someone is working outside their assigned security role, and which create an audit trail of access. You reduce risk by distributing access, not distributing data.
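As a purely illustrative sketch (the roles, permissions and field names below are my own assumptions, not drawn from any real council system), the difference looks roughly like this: every request is checked against an assigned role and logged, rather than another copy of a spreadsheet leaving the building.

```python
from datetime import datetime, timezone

# Hypothetical role model: which actions each assigned security role may perform.
ROLE_PERMISSIONS = {
    "social_worker": {"read_case_record"},
    "team_manager": {"read_case_record", "export_report"},
}

audit_log = []  # in practice, an append-only, tamper-evident store


def request_access(user: str, role: str, action: str, record_id: str) -> bool:
    """Allow an action only if the user's role permits it, and record every
    attempt, permitted or refused, so working outside a role is visible."""
    allowed = action in ROLE_PERMISSIONS.get(role, set())
    audit_log.append({
        "when": datetime.now(timezone.utc).isoformat(),
        "who": user,
        "role": role,
        "action": action,
        "record": record_id,
        "allowed": allowed,
    })
    return allowed


# A social worker trying to export a report is refused, and the attempt is logged.
request_access("j.smith", "social_worker", "export_report", "case-1042")
```

The point is not the code but the design choice it stands for: access is granted, narrowed and audited centrally, while the data itself stays put.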

The CDEI Report quotes as a ‘concern’ that data access granted under emergency powers in the pandemic will be taken away. This is a mistaken view that should be challenged. That access was *always* conditional and time limited. It is not something that will be ‘taken away’ but an exceptional use only granted because it was temporary, for exceptional purposes in exceptional times. Had it not been time limited, you wouldn’t have had access. Emergency powers in law are not ‘taken away’, but can only be granted at all in an emergency. So let’s not get caught up in artificial imaginings of what could change and what ifs, but change what we know is necessary.

We would do well to get away from the hyperbole of being world-leading, and aim for a minimum high standard of competence and capability in all staff who have any data decision-making roles and invest in the basic data infrastructure they need to do a good job.

Appropriate standards to frame the future governance of data

The AI Council Roadmap suggested that, “The UK should lead in developing appropriate standards to frame the future governance of data.”  Let’s stop and really think for a minute, what did the Roadmap writers think they meant by that?

Because we have law that frames ‘appropriate standards’. The UK government just seems unable or unwilling to meet it. And not only in these examples; in fact, I’d challenge all the business owners on the AI Council to prove their own products meet it.

You could start with the Guidelines on Automated individual decision-making and Profiling for the purposes of Regulation 2016/679 (wp251rev.01). Or consider any of the Policy, recommendations, declarations, guidelines and other legal instruments issued by Council of Europe bodies or committees on artificial intelligence. Or, valuable for export standards, ensure respect for the Convention 108 standards, to which we are a signed-up State party among more than 50 countries, and growing. That is all before the simplicity of the UK Data Protection Act 2018 and the GDPR.

You could start with auditing current practice for lawfulness. The CDEI Report says, “The CDEI is now working in partnership with local authorities, including Bristol City Council, to help them maximise the benefits of data and data-driven technologies.” I might suggest that includes a good legal team, as I think the Council needs one.

The UK is already involved in supporting the development of guidelines (as I was, alongside UK representatives of government and the data regulator, the ICO, among hundreds of participants, in drawing out the Convention 108 Guidelines on data processing in education), but to suggest as a nation state that we have the authority to speak on the future governance of data, without acknowledging what we should already be doing and where we get it wrong, is an odd place to start.

The current state of reality in various sectors

Take for example the ICO audit of the Department for Education.

Failures to meet basic principles of data protection law include not knowing what data they hold, inadequate controls on distribution, and failure of fair processing (telling people that you process their data). This is no small stuff. And these are only highlights from the eight-page summary.

The DfE don’t adequately understand what data they hold, and not having a record of processing is a direct breach of the GDPR. Did you know the Department is not able to tell you to which third parties your own or your child’s sensitive, identifying personal data (from over 21 million records) was sent, among thousands of releases?

The approach on data releases has been to find a way to fit the law to suit data requests, rather than to assess whether data distribution should be approved at all. This ICO assessment covered only 400 applications; there have been closer to 2,000 approved since 2012. One refusal was to the US. Another, the MOD.


For too long, the DfE’s ‘internal cultural barriers and attitudes’ have meant it hasn’t cared about your rights and freedoms or about meeting its lawful obligations. That is a national government Department in charge of over fifty such mega-databases, of which the NPD is only one. This is a systemic and structural set of problems, the direct result of Ministerial decisions that changed the law in 2012 to give away our personal data from state education. It was a choice made not to tell the people whom the data were about. This continues to be in breach of the law. And the same is true across many government departments.

Why does it even matter, some still ask? Because there is harm to people today. There is harm in history that must not be possible to repeat. And some of the data held could be used in dangerous ways.

You only need to glance at other applications in government departments and public services to see bad policy, bad data and bad AI or machine learning outcomes. And all of those lead to breakdowns in trust and relations between people and the systems meant to support them, which in turn lead to bad data and bad policy.

Unless government changes its approach, the direction of travel is towards less trust; and for public health, for example, we see the consequences in disastrous responses, from people not attending for vaccination based on mistrust of proven data sharing, to COVID conspiracy theories.

Commercial reuse of public admin data is a huge mistake and the direction of travel is damaging.

“Survey responses collected from more than 3,000 people across the UK and US show that in late 2018, some 95% of people were not willing to share their medical data with commercial industries. This contrasts with a Wellcome study conducted in 2016 which found that half of UK respondents were willing to do so.” (July 2020, Imperial College)

Mutant algorithms

Summer 2020 first saw no human accountability for grades “derailed by a mutant algorithm”, and then the resignation of two Ofqual executives. What aspects of those data governance failures will be addressed this year? Where’s the *fairness*? There is a legal duty to tell people how and what data is used, especially in its automated aspects.

Misplaced data and misplaced policy aims

In June 2020 the DWP argued in a court case that “to change the way the benefit’s online computer calculation system worked in line with the original court ruling would undermine the principle of universal credit”. Not only does it fail its public interest purpose and do harm, but it is lax on its own data governance controls. World-leading is far, far, far away.

Entrenched racism

In August 2020, “The Home Office [has] agreed to stop using a computer algorithm to help decide visa applications after allegations that it contained ‘entrenched racism’.” How did it ever get approved for use?

That entrenched racism is found in policing too. The Gangs Matrix use of data required an Enforcement Notice from the ICO, and how it continues to operate at all, given its recognised discrimination and harm to young lives, is shocking.

Policy makers seem fixated on quick fixes that for the most part exist only in the marketing speak of the sellers of the products, while ignoring real problems in ethics and law, and denying harm.

“Now is a good time to stop.”

The most obvious case for me, where the Office for AI should step in, and where the CDEI Report from workshops with Local Authorities was most glaringly remiss, is where there is evidence of failure of efficacy and proven risk of danger to life through the procurement of technology in public policy. Don’t forget to ask what doesn’t work.

In January 2020, researchers at The Turing Institute, the Rees Centre and the What Works Centre published a report on ethics in Machine Learning in Children’s Social Care (CSC), raising the “dangerous blind spots” and “lurking biases” in the application of machine learning in UK children’s social care: totally unsuitable for life and death situations. Their later evidence showed models that do not work, or that would not reach the threshold they themselves set for defining ‘success’.

Of the thirty-four councils that said, in the December 2020 Local Government survey, that they had acute difficulties in recruiting children’s social workers, 50 per cent said they had both difficulty recruiting generally and difficulty recruiting the required expertise, experience or qualifications. Can staff in such challenging circumstances really have capacity to understand the limitations of developing technology on top of their everyday expertise?

And when it comes to focusing on the data, there are problems too. By focusing on the data held, and using only that to make policy decisions rather than on-the-ground expertise, we end up in situations where only “those who get measured, get helped”.

As Michael Sanders wrote, on CSC, “Now is a good time to stop. With the global coronavirus pandemic, everything has been changed, all our data scrambled to the point of uselessness in any case.”

There is no short cut

If the Office for AI Roadmap is to be taken seriously outside its own bubble, the board needs to be, and be seen to be, independent of government. It must engage with the reality of applied AI in practice in public services, getting basics fixed first. Otherwise all its talk of “doubling down” and suggesting the UK government can build public trust and position the UK as a ‘global leader’ on Data Governance is misleading and a waste of everyone’s time and capacity.

I appreciate that it says, “This Roadmap and its recommendations reflects the views of the Council as well as 100+ additional experts.” All of whom I imagine are more expert than me. If so, which of them is working on fixing the basic underlying problems with data governance within public sector data, how and by when? If they are not, why are they not, and who is?

The CDEI report published today identified that in local authorities, “public consultation can be a ‘nice to have’, as it often involves significant costs where budgets are already limited.” If that is a position the CDEI does not say is flawed, it may as well pack up and go home. On page 27 it reports, “When asked about their understanding of how their local council is currently using personal data and presented with a list of possible uses, 39% of respondents reported that they do not know how their personal data is being used.” The CDEI should be flagging this with a great big red pen as an indicator of unlawful practice.

The CDEI Report also draws on the GDS Ethical Framework, but that will be forever flawed as long as its own users, not the used, are the focus of its leading principle and underpinning aims. It starts with “Define and understand public benefit and user need.” There is very little about ethics in it, and much more about “justifying our project”.

The Report did not appear to have asked the attendees what impact they think their processes have on everyday lives, and social justice.

Without fixes in these approaches, we will never be world-leading, but will lag behind because we haven’t built the safe infrastructure necessitated by our vast public administrative data troves. We must end bad data practice, which includes getting right the basic principles on retention, data minimisation and security (all of which would be helped if we started by reducing those ‘vast public administrative data troves’, much of which ranges from poor to abysmal data quality anyway). Start proper governance and oversight procedures. And put in place all the communication channels, tools, policy and training to make telling people how data are used, and fair processing, happen. It is not a ‘nice to have’, but is required in all data processing laws around the world.

Any genuine “barriers” to data use in data protection law are designed as protections for people: the people whom the public sector, its staff and these arm’s length bodies are supposed to serve.

Blaming algorithms, blaming lack of clarity in the law, blaming “barriers” is avoidance of one thing. Accountability. Accountability for bad policy, bad data and bad applications of tools is a human responsibility. The systems you apply to human lives affect people, sometimes forever and in the most harmful ways.

What would I love to see led from any of these arms length bodies?

  1. An audit of existing public admin data held, by national and local government, and consistent published registers of databases and algorithms / AI / ML currently in use.
  2. Expose where your data system is nothing more than excel spreadsheets and demand better infrastructure.
  3. Identify the lawful basis for each set of data processes, the dates of their earliest records, and their content.
  4. Publish the resulting ROPA (Record of Processing Activities) and the retention schedule (a minimal sketch of what such a register entry might hold follows after this list).
  5. Assign accountable owners to databases, tools and the registers.
  6. Sort out how you will communicate with people whose data you currently process unlawfully, in order to meet the law, or stop processing it.
  7. And above all, publish a timeline for data quality processes, and show that you understand how degradation of data accuracy and quality, and storage limitations, all affect the rights and responsibilities in law, which change over time as a result.
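To make the list above concrete, here is a minimal, hypothetical sketch of what one published register entry might hold; the field names are illustrative assumptions, not a prescribed ROPA format.

```python
from dataclasses import dataclass, field
from datetime import date


@dataclass
class RegisterEntry:
    """One entry in a published register of databases and algorithmic tools."""
    name: str                              # the database or tool (point 1)
    accountable_owner: str                 # a named accountable owner (point 5)
    lawful_basis: str                      # the identified lawful basis (point 3)
    earliest_record: date                  # date of the earliest records held (point 3)
    retention_period_years: int            # the published retention schedule (point 4)
    third_party_recipients: list[str] = field(default_factory=list)


def overdue_for_deletion(entry: RegisterEntry, today: date) -> bool:
    """A crude retention check (point 7): flag data held longer than the schedule allows."""
    return (today.year - entry.earliest_record.year) > entry.retention_period_years
```

Even a register this thin would answer questions that, on the ICO audit evidence above, cannot currently be answered, such as who owns a database and to whom its contents have been sent.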

There is no short cut to doing a good job, only to doing a bad one.

If organisations and bodies are serious about “good data” use in the UK, they must stop passing the buck and spreading the hype. Let’s get on with what needs fixed.

In the words of Gavin Freeguard, then let’s see how it goes.

The devil craves DARPA

‘People, ideas, machines — in that order.’ This quote in the latest blog by Dominic Cummings is spot on, but the blind spots, or the deliberate scoping, that the blog reveals are just as interesting.

If you want to “figure out what characters around Putin might do”, move over Miranda. If your soul is for sale, then this might be the job for you. This isn’t anthropomorphism of Cummings, but an excuse to get in the parallels to Meryl Streep’s portrayal of Priestly.

“It will be exhausting but interesting and if you cut it you will be involved in things at the age of 21 that most people never see.”

Comments like these make people who are not of that mould feel of less worth. Commitment comes in many forms. People with kids and caring responsibilities may be some of your most loyal staff. You may not want them as your new PA, but you will almost certainly not want to lose them across the board.

Some words of follow-up to existing staff, the thousands of public servants we have today, would be wise after his latest post.

1. The blog is aimed at a certain kind of man. Speak to women too.

The framing of this call for staff is problematic, less for its suggested work ethic than for the structural inequalities it appears purposely to perpetuate, despite the poke at public school bluffers. Do you want the best people around you, able to play well with others, or not?

I am disappointed that asking for “the sort of people we need to find” is designed, intentionally or not, to appeal to a certain kind of man. Even if he says it should be diverse and include people “like that girl hired by Bigend as a brand ‘diviner.’”

If Cummings is intentional about hiring the best people, then he needs to do better by women. We already have a PM that many women would consider toxic to work around, and won’t as a result.

Some of the most brilliant, cognitively diverse young people I know who fit these categories well — and across the political spectrum — are themselves diverse by nature and expect their surroundings to be. They (unlike our generation) do not “babble about ‘gender identity diversity blah blah’.” Woke is not an adjective that needs explained, but a way of life. Put such people off by appearing to devalue their norms, and you’ll miss out on some potentially brilliant applicants from the pool, which will already be self-selecting, excluding many who simply won’t work for you, or Boris, or Brexit blah blah. People prepared to burn out as you want them to aren’t going to be at their best for long. And it takes a long time to recover.

‘That girl’ was the main character, and her name was Cayce Pollard.  Women know why you should say her name. Fewer women will have worked at CERN, perhaps for related reasons, compared with “the ideal candidate” described in this call.

“If you want an example of the sort of people we need to find in Britain, look at this,” he writes of C.C. Myers, with a link to ‘On the Cover: The World’s Fastest Man’.

Charlie Munger, Warren Buffett, Alexander Grothendieck, Bret Victor, von Neumann, Cialdini. Groves, Mueller, Jain, Pearl, Kay, Gibson, Grove, Makridakis, Yudkowsky, Graham and Thiel.

The *men illustrated* list goes on and on.

What does it matter how many lovers you have if none of them gives you the universe?

Not something I care to discuss over dinner either.

But women of all ages do care that our PM appears to be a cad. It matters therefore that your people be seen to work to a better standard. You want people loyal to your cause, and the public to approve, even if they don’t approve of your leader. Leadership goes far beyond electoral numbers and a mandate.

Women — including those that tick the skill boxes — need, yet again, to look beyond the numbers, and have to put up with a lot. This advertorial appeals to Peter Parker, when the future needs more of Miles Morales. Fewer people with the privilege and opportunity to work at the Large Hadron Collider, and more of those who stop Kingpin’s misuse and shut it down.

A different kind of the same kind of thing isn’t real change. This call for something new is far less radical than it is being portrayed.

2. Change. Don’t forget to manage it by design.

In fact, the speculation that this is all change, hiring new people for new stuff [some of which elsewhere he has genuinely interesting ideas on, like “decentralisation and distributed control to minimise the inevitable failures of even the best people”], doesn’t really feature here; rather it is something of a precursor. He’s starting less with building the new than with ‘draining the swamp’ of bureaucracy: the Washington style of 1980s Reagan, including ‘let’s put in some more of our kind of people’.

His personal brand of longer-term change may not be what some of his cheerleaders think it will be, but if the outcome is the same and seen to be ‘showing these Swamp creatures the zero mercy they deserve’ [sic], does intent matter? It does, and he needs to describe his future plans better if he wants a civil service that works well.

The biggest content gap (leaving actual policy content aside) is any appreciation of the current state, and of the need for change management.

Training gets a mention; but the success of new processes depends on communicating change effectively, and on delivering training about it to all, not only to those from whom you expect the highest performance. People not projects, remember?

Change management and capability transfer delivered by costly consultants is not needed, but making it understandable, not elitist, is. What is needed is to:

  • genuinely present an understanding of the as-is (I get you and your organisation; this is change *with* you, not change forced upon you),
  • communicate what the future model is going to move towards (this is why you want to change, and what good looks like), and
  • set out a roadmap of how you expect the organisation to get there (how and when), which need not be constricted by artificial comms grids.

Because people and having their trust, are what make change work.

On top of the organisational model, *every* member of staff must know where their own path fits in, and if their role is under threat, whether training will be offered to adapt, or whether they will be made redundant. Uncertainty around this over time is also toxic. You might not care if you lose people along the way. You might consider these the most expendable people. But if people are fearful and unhappy in your organisation, or about their own future, it will hold them back from delivering at their best, and the organisation as a result. And your best people will leave, as much as those who are not.

“How to build great teams and so on” is not a bolt-on extra here; it is fundamental. You can’t forget the kitchens. But changing the infrastructure alone cannot deliver the real change you want to see.

3. Communications. Neither propaganda and persuasion nor PR.

There is not such a vast difference between the business of communications as a campaign tool and as a tool for control: persuasion and propaganda. But a possible blind spot in the promotion of Cialdini-six style comms is that the behavioural scientists who excel at these will not use the kind of communication tools that either the civil service or the country needs for the serious communication of change, beyond the immediate short term.

Five thoughts:

  1. Your comms strategy should simply be “Show the thing. Be clear. Be brief.”
  2. Communicating that failure is acceptable, is only so if it means learning from it.
  3. If policy comms plans depend on work led by people like you,  who like each other and like you, you’ll be told what you want to hear.
  4. Ditto, think tanks that think the same are not as helpful as others.
  5. And you need grit in the oyster for real change.

As an aside, for anyone having kittens about using an unofficial email to get around FOI requests and think it a conspiracy to hide internal communications, it really doesn’t work that way. Don’t panic, we know where our towel is.

4. The Devil craves DARPA. Build it with safe infrastructures.

Cummings’ long-established fetishising of technology and fascination with Moscow will be familiar to those close to him, or to readers of his blog. They are also currently fashionable, again. The solution is therefore no surprise, and has been prepped in various blogs for ages. The language is familiar. But single-mindedness over this length of time can make for short-sightedness.

In the US, DARPA was set up in 1958 after the Soviet Union launched the world’s first satellite, with a remit to “prevent technological surprise” and pump money into “high risk, high reward” projects. (Sunday Times, Dec 28, 2019)

In March, Cummings wrote in praise of Project Maven:

“The limiting factor for the Pentagon in deploying advanced technology to conflict in a useful time period was not new technical ideas — overcoming its own bureaucracy was harder than overcoming enemy action.”

Almost a year after that project collapsed, its most interesting feature was surely not the role of bureaucracy in tech failure. Maven was a failure not of tech, nor of bureaucracy, but of failing to align company values with the decency of its workforce. Whether the recalibration of its compass as a company is even possible remains to be seen.

If firing staff who hold you to account against a mantra of ‘don’t be evil’ is championed, this drive for big tech values underpinning your staff’s thinking and action will be less about supporting technology moonshots than about a shift to the Dark Side of capitalist surveillance.

The incessant narrative focus on man and the machine — machine learning, the machinery of government, quantitative models and the frontiers of the science of prediction — is an obsession with power. The downplaying of the human in that world is displayed in so many ways, but the most obvious is the press and political narrative of a need to devalue human rights. And yet to succeed, tech and innovation need an equal and equivalent counterweight, in accountability under human rights and the law, so that when systems fail people, they do not cause catastrophic harm at scale.

Practically nobody is ever held accountable regardless of the scale of failure, you say? How do you measure your own failure? Or the failure of policy? Transparency over that, and a return to Ministerial accountability, are changes I would like to see. Or how about demanding accountability for algorithms that send children to social care, whose vendor’s CEO has said his only measure of failure is a Local Authority not saving money as a result of using their system?

We must stop state systems failing children, if they are not to create a failed society.

A UK DARPA-esque, devolved hothousing for technology will fail if you don’t shore up public trust, both in the state and commercial sectors. An electoral mandate won’t last, nor reach beyond its scope for long. You need a social licence to have legitimacy for tech that uses public data, and that licence is missing today. It is bone-headed and idiotic that we can’t get this right as a country. Despite knowing how to, if government keeps avoiding doing it safely, it will come at a cost.

The Pentagon certainly cares about the implications for national security when the personal data of millions of people could be open to exploitation, blackmail or abuse.

You might, of course, not care. But commercial companies will when they go under. The electorate will. Your masters might, if their legacy suffers and the debate about the national good and the UK as a Life Sciences centre all comes to naught.

There was little in this blog of the reality of what these hires should deliver beyond more tech and systems change. But the point is to make systems that work for people, not to see more systems at work.

We could have it all, but not if you spaff our data laws up the wall.

“But the ship can’t sink.”

“She is made of iron, sir. I assure you, she can. And she will. It is a mathematical certainty.”

[Attributed to Thomas Andrews, Chief Designer of the RMS Titanic.]

5. The ‘circle of competence’ needs values, not only to value skills.

It’s important and consistent behaviour that Cummings says he recognises his own weaknesses, that some decisions are beyond his ‘circle of competence’, and that he should in effect become redundant, having brought in “the sort of expertise supporting the PM and ministers that is needed.” Founder’s syndrome is common to organisations, and politics is not exempt. But neither is the Peter Principle a phenomenon particular only to the civil service.

“One of the problems with the civil service is the way in which people are shuffled such that they either do not acquire expertise or they are moved out of areas they really know to do something else.”

But so what? What’s worse is that politics has not only the Peter Principle but the Dilbert Principle when it comes to senior leadership. You can’t put people in positions expected to command respect when they tell others to shut up and go away. Or fire without due process. If you want organisations to function together at scale, especially beyond the current problems with silos, they need people on the ground who can work together with a common goal, who respect those above them, and who feel it is all worthwhile. Their politics don’t matter. But integrity, respect and trust do, even if they don’t matter to you personally.

I agree wholeheartedly that circles of competence matter [as I see the need to build some in education on data and edTech]. Without the appropriate infrastructure change, radical change of policy is nearly impossible. But skill is not the only competency that counts when it comes to people.

If the change you want is misaligned with people’s values, people won’t support it, no matter who you get to see it through. Something on the integrity that underpins this endeavour will matter to the applicants too. Most people do care how managers treat their own.

The blog was pretty clear that Cummings won’t value staff unless their work ethic, skills and acceptance belong to him alone to judge sufficient or not, to be “binned within weeks if you don’t fit.”

This government already knows it has treated parts of the public like that for too long. Policy has knowingly left some people behind on society’s scrap heap, often those scored by automated systems as inadequate. In-work families moved onto Universal Credit feed their children from food banks for #5WeeksTooLong. The rape clause. Troubled Families. Children with special educational needs battle for EHC plan recognition, without which schools won’t take them, while the DfE knowingly underfunds suitable Alternative Provision in education by a colossal several hundred per cent per place, by design.

The ‘circle of competence’ needs to recognise what happens as a result of policy, not only to place value on the skills in its delivery or see outcomes on people as inevitable or based on merit. Charlie Munger may have said, “At the end of the day – if you live long enough – most people get what they deserve.”

An awful lot of people deserve a better standard of living and human dignity than the UK affords them today. And we can’t afford not to fix it. A question for new hires: How will you contribute to doing this?

6. Remember that our civil servants are, after all, public servants.

The real test of competence, and whether the civil service delivers for the people whom they serve, is inextricably bound with government policy. If its values, if its ethics are misguided, building a new path with or without new people, will be impossible.

The best civil servants I have worked with have one thing in common: a genuine desire to make the world better. [We can disagree on what that looks like and for whom, on fraud detection, on immigration, on education, on exploitation of data mining and human rights, or the implications of the law. Their policy may bring harm, but their motivation is not malicious.] Your goal may be a ‘better’ civil service. They may be more focussed on better outcomes for people, not systems. Lose sight of that, and you put the service underpinning government at risk: not bringing change for good, but destroying the very point of it. Keep the point of a better service focussed on improvement for the public.

Civil servants civilly serve. As others have asked before, so should we all ask Cummings to outline his thoughts on:

  • “What makes the decisions which civil servants implement legitimate?
  • Where are the boundaries of that legitimacy and how can they be detected?
  • What should civil servants do if those boundaries are reached and crossed?”

Self-destruction for its own sake, is not a compelling narrative for change, whether you say you want to control that narrative, or not.

Two hands are a lot, but many more already work in the civil service. If Cummings only works against them, he’ll succeed not in building change, but resistance.

Women Leading in AI — Challenging the unaccountable and the inevitable

Notes [and my thoughts] from the Women Leading in AI launch event of the Ten Principles of Responsible AI report and recommendations, February 6, 2019.

Speakers included Ivana Bartoletti (GemServ), Jo Stevens MP, Professor Joanna J Bryson, Lord Tim Clement-Jones, Roger Taylor (Centre for Data Ethics and Innovation, Chair), Sue Daley (techUK), and Reema Patel (Nuffield Foundation and Ada Lovelace Institute).

Challenging the unaccountable and the ‘inevitable’ is the title of the conclusion of the Women Leading in AI report Ten Principles of Responsible AI, launched this week, and this makes me hopeful.

“There is nothing inevitable about how we choose to use this disruptive technology. […] And there is no excuse for failing to set clear rules so that it remains accountable, fosters our civic values and allows humanity to be stronger and better.”

Ivana Bartoletti, co-founder of Women Leading in AI, began the event, hosted at the House of Commons by Jo Stevens, MP for Cardiff Central, and spoke brilliantly of why it matters right now.

Everyone’s talking about ethics, she said, but it has limitations. I agree with that. This was by contrast very much a call to action.

It was nearly impossible not to cheer, as she set out without any of the usual bullshit, the reasons why we need to stop “churning out algorithms which discriminate against women and minorities.”

Professor Joanna J Bryson took up multiple issues, including:

  • Why innovation ‘flashes in the pan’ is not sustainable, and not what we’re looking for in things that work for us [society].
  • The power dynamics of data, noting that Facebook, Google et al are global assets and also global problems, and flagging the UK consultation on taxation open now.
  • And that it is critical that we do not have another nation with access to all of our data.

She challenged the audience to think about the fact that inequality is higher now than it has been since World War I. That the rich are getting richer and that imbalance of not only wealth, but of the control individuals have in their own lives, is failing us all.

This big-picture thinking, while zooming in on detailed social, cultural, political and tech issues, fascinated me most that evening. It apparently frustrated the man next to me, who said to me at the end, ‘but they haven’t addressed anything on the technology.’

[I wondered if that summed up neatly, some of why fixing AI cannot be a male dominated debate. Because many of these issues for AI, are not of the technology, but of people and power.] 

Jo Stevens, MP for Cardiff Central, hosted the event and was candid about politicians’ level of knowledge and the need to catch up on some of what matters in the tech sector.

We grapple with the speed of tech, she said. We’re slow at doing things and tech moves quickly. It means that we have to learn quickly.

While discussing how regulation is not something AI tech companies should fear, she suggested that a constructive framework, one that protects society against some of the problems we see, is necessary and just, because self-regulation has failed.

She talked about their inquiry, which began with “fake news” and disinformation, but has grown to include:

  • wider behavioural economics,
  • how it affects democracy,
  • understanding the power of data, and
  • disappointment with social media companies, who understand the power they have and fail to be accountable.

She wants to see something that changes the way big business works, in the way that employment regulation challenged exploitation of the workforce and unsafe practices in the past.

The bias (conscious or unconscious) and power imbalance have some similarity with the effects on marginalised communities — women, BAME, disabilities — and she was looking forward to seeing the proposed solutions, and welcomed the principles.

Lord Clement-Jones, as Chair of the Select Committee on Artificial Intelligence, picked up the values the Committee had highlighted in its March 2018 report, AI in the UK: ready, willing and able?

Right now there are so many different bodies, groups in parliament and others looking at this [AI / Internet / The Digital World] he said, so it was good that the topic is timely, front and centre with a focus on women, diversity and bias.

He highlighted the importance of maintaining public trust. How do you understand bias? How do you know how algorithms are trained, and understand the issues? He fessed up to being a big fan of DotEveryone and their drive for better ‘digital understanding’.

[Though sometimes this point is over-complicated by suggesting individuals must understand how the AI works, the consensus of the evening was common sense, and aligned with the Working Party 29 guidance: that data controllers must ensure they explain clearly and simply to individuals how the profiling or automated decision-making process works, and what its effect is for them.]

The way forward he said includes:

  • Designing ethics into algorithms up front.
  • Data audits need to be diverse in order to embody fairness and diversity in the AI.
  • Questions of the job market and re-skilling.
  • The enforcement of ethical frameworks.

He also asked how far bodies will act, in different debates. Deciding who decides on that is still a debate to be had.

For example, aware of the social credit agenda and scoring in China, we should avoid the same issues. He also agreed with Joanna, that international cooperation is vital, and said it is important that we are not disadvantaged in this global technology. He expected that we [the Government Office for AI] will soon promote a common set of AI ethics, at the G20.

Facial recognition and AI are examples of areas that require regulation for safe use of the tech and to weed out those using it for the wrong purposes, he suggested.

However, on regulation he held back. We need to be careful about too many regulators, he said. We’ve got the ICO, FCA, CMA, OFCOM, you name it, we’ve already got it, and they risk tripping over one another. [Which is what I thought when the CDEI was created, para 31.]

We [the Lords Committee] didn’t suggest yet another regulator for AI, he said; instead the CDEI should grapple with those issues and encourage ethical design in micro-targeting, for example.

Roger Taylor (Chair of the CDEI), after saying it felt as if the WLinAI report was like someone had left their homework on his desk, supported the concept that the WLinAI principles are important, and agreed it was time for practical things, and for what needs done.

Can our existing regulators do their job and cover AI? he asked, suggesting new regulators will not be necessary. Bias, he rightly recognised, already exists in our laws and in bodies with public obligations, and in how AI is already operating:

  • CV sorting [problematic IMO; see Amazon, US teachers].
  • Policing.
  • Creditworthiness.

What evidence is needed, what process is required, what is needed to assure that we know how it is actually operating? Who gets to decide whether this is fair or not? While these are complex decisions, they are ultimately not for technicians, but a decision for society, he said.

[So far so good.]

Then he made some statements which were rather more ambiguous. The standards expected of the police will not be the same as those for marketeers micro targeting adverts at you, for example.

[I wondered how and why.]

Start-up industries pay more to Google and Facebook than they do in taxes, he said.

[I wondered how and why.]

When we think about a knowledge economy, the output of our most valuable companies is increasingly ‘what is our collective truth? Do you have this diagnosis or not? Are you a good credit risk or not? Even who you think you are — your identity will be controlled by machines.’

What can we do as one country [to influence these questions on AI], in what is a global industry? He believes, a huge amount. We are active in the financial sector, the health service, education, and social care — and while we are at the mercy of large corporations, even large corporations obey the law, he said.

[Hmm, I thought, considering the Google DeepMind-Royal Free agreement that didn’t, and venture capitalists not renowned for their ethics, and yet advise on some of the current data / tech / AI boards. I am sceptical of corporate capture in UK policy making.]

The power to use systems to nudge our decisions, he suggested, is one that needs careful thought. The desire to use the tech to help make decisions is inbuilt into what is actually wrong with the technology that enables us to do so. [With this I strongly agree, and there is too little protection from nudge in data protection law.]

The real question here is, “What is OK to be owned in that kind of economy?” he asked.

This was arguably the neatest and most important question of the evening, and I vigorously agreed with him asking it, but then I worry about his conclusion in passing, that he was “very keen to hear from anyone attempting to use AI effectively, and encountering difficulties because of regulatory structures.”

[And unpopular or contradictory a view as it may be, I find it deeply ethically problematic for the Chair of the CDEI to be someone who had a joint venture that commercially exploited confidential data from the NHS without public knowledge, and whose sale to the Department of Health was described by the Public Accounts Committee as a “hole and corner deal”. That was the route towards care.data, which his co-founder later led for NHS England. The company was then bought by Telstra, where Mr Kelsey went next on leaving NHS England. The whole commodification of the confidentiality of public data, without regard for public trust, is still a barrier to sustainable UK data policy.]

Sue Daley (techUK) agreed this year needs to be the year we see action, and that the report is a call to action on issues that warrant further discussion.

  • Business wants to do the right thing, and we need to promote it.
  • We need two things — confidence and vigilance.
  • We’re not starting from scratch; she talked about GDPR as the floor, not the ceiling. A starting point.

[I’m not quite sure what she was after here, but perhaps it was the suggestion that data regulation is fundamental in AI regulation, with which I would agree.]

What is the gap that needs filled, she asked? Gap analysis is what we need next, avoiding duplication of effort; we need to avoid complexity and duplication of work with other bodies. Some of the big, profound questions need to be answered if we are to position the UK as the place where companies want to come.

Sue was the only speaker who went on to talk about the education system, and the need to frame what skills a generation will need for a future world, ‘to thrive in the world we are building for them.’

[The Silicon Valley driven entrepreneur narrative that the education system is broken, is not an uncontroversial position.]

She finished with the hope that young people watching BBC Icons the night before would see Alan Turing [winner of the title] and say: yes, I want to be part of that.

Listening to Reema Patel, representative of the Ada Lovelace Institute, was the reason I didn’t leave early, and so missed my evening class. Everything she said resonated, and it was some of the best I have heard in the recent UK debate on AI.

  • Civic engagement: the role of the public is as yet unclear, with not one homogeneous public but many publics.
  • The sense of disempowerment is important, with a disconnect between policy and the decisions made about people’s lives.
  • Transparency and literacy are key.
  • Accountability is vague but vital.
  • What does the social contract look like on the use of people’s data?
  • Data may not only be about an individual and under their own responsibility, but about others too; what that means for data rights, data stewardship, and the articulation of how they connect with one another, is lacking in the debate.
  • Legitimacy: if people don’t believe it is working for them, it won’t work at all.
  • Ensuring tech design is responsive to societal values.

2018 was a terrible year she thought. Let’s make 2019 better. [Yes!]


Comments from the floor and questions included Professor Noel Sharkey, who spoke about the reasons why it is urgent to act, especially where technology is unfair and unsafe and already in use. He pointed to Compass (Durham police), and to predictive policing using AI and facial recognition with 5% accuracy, and said that the Met was not taking these flaws seriously. Liberty produced a strong report on it, out this week.

Caroline, from Women in AI, echoed my own comments on the need to get urgent review in place of these technologies used with children in education and social care [in particular where used for prediction of child abuse and for interventions in family life].

Joanna J Bryson added to the conversation on accountability, to say that people are not following existing software and audit protocols; someone just needs to go and see if people did the right thing.

The basic question of accountability is to ask whether any flaw is the fault of a corporation, of due diligence, or of the users of the tool. Telling people that this is the same problem as any other software makes it much easier to find solutions for accountability.

Tim Clement-Jones asked how many fronts we can fight on at the same time. If government has appeared to exempt itself from some of these issues, and created a weak framework for itself on handling data in the Data Protection Act, then, critically, he also asked: is the ICO adequately enforcing on government and public accountability, at local and national levels?

Sue Daley also reminded us that politicians need not know everything, but need to know what the right questions are to ask: what are the effects this has on my constituents, in employment, on my family? And while she also suggested that not using the technology could be unethical, a participant countered that it is not the worst thing to have to slow technology down and ensure it is safe before we all go along with it.

My takeaways of the evening, included that there is a very large body of women, of whom attendees were only a small part, who are thinking, building and engineering solutions to some of these societal issues embedded in policy, practice and technology. They need heard.

It was genuinely electric and empowering, to be in a room dominated by women, women reflecting diversity of a variety of publics, ages, and backgrounds, and who listened to one another. It was certainly something out of the ordinary.

There was a subtle but tangible tension on whether or not  regulation beyond what we have today is needed.

In regulating the human behaviour that becomes encoded in AI, we need to ensure that the ethics of human behaviour, reasonable expectations and fairness are not conflated with the technology itself [i.e. a question of ‘is AI good or bad’], but with how it is designed, trained, employed and audited, and with whether it should be used at all.

This was the most effective group challenge I have heard to date to counter the usual assumed inevitability of a mythical omnipotence. Perhaps, Julia Powles, this is the beginnings of a robust, bold, imaginative response.

Why there are not more women or people from minorities working in the sector was a really interesting, if short, part of the discussion. Why should young women and minorities want to go into an environment that they can see is hostile, in which they may not be heard, and where we still hold *them* responsible for making work work?

And while there were many voices lamenting the skills and education gaps, there were probably fewer who might see the solution more simply, as I do. Schools are foreshortening Key Stage 3 by a year, replacing a breadth of subjects with an earlier, compulsory three-year GCSE curriculum which includes RE and PSHE, but which means that at 12, many children have to choose between GCSE courses in computer science / coding, a consumer-style iMedia, or no IT at all, for the rest of their school life. This either-or content is incredibly short-sighted; surely some blend of non-examined digital skills should be offered to all through to 16, at least in parallel importance with RE or PSHE.

I also still wonder about all that incredibly bright and engaged people are not talking about, not solving, and missing in policy making, while caught up in AI. We need to keep thinking broadly, and keep human rights at the centre of our thinking on machines. Anaïs Nin wrote over 70 years ago about the risk that growth in technology would expand our potential for connectivity through machines, but diminish our genuine connectedness as people.

“I don’t think the [American] obsession with politics and economics has improved anything. I am tired of this constant drafting of everyone, to think only of present day events”.

And as I wrote nearly three years ago, we still seem to have no vision for sustainable public policy on data, nor for establishing a social contract for its use, as Reema said, which should underpin the UK AI debate. Meanwhile, the current changing national public policies in England on identity and technology are becoming catastrophic.

Challenging the unaccountable and the ‘inevitable’ in today’s technology and AI debate, is an urgent call to action.

I look forward to hearing how Women Leading in AI plan to make it happen.


References:

Women Leading in AI website: http://womenleadinginai.org/
WLiAI Report: 10 Principles of Responsible AI
@WLinAI #WLinAI

image credits 
post: creative commons Mark Dodds/Flickr
event photo:  / GemServ

Policy shapers, product makers, and profit takers (1)

In 2018, ethics became the new fashion in UK data circles.

The launch of the Women Leading in AI principles of responsible AI has prompted me to try and finish and post these thoughts, which have been on my mind for some time. If two parts of 1K words is tl;dr for you, then in summary, we need more action on:

  • Ethics as a route to regulatory avoidance.
  • Framing AI and data debates as a cost to the Economy.
  • Reframing the debate around imbalance of risk.
  • Challenging the unaccountable and the ‘inevitable’.

And in the next post on:

  • Corporate Capture.
  • Corporate Accountability, and
  • Creating Authentic Accountability.

Ethics as a route to regulatory avoidance

In 2019, the calls to push aside old wisdoms for new, for everyone to focus on the value-laden words of ‘innovation’ and ‘ethics’, appear an ever louder attempt to reframe regulation and law as barriers to business, and to ask that they be cast aside.

On Wednesday evening, at the launch of the Women Leading in AI principles of responsible AI, the chair of the CDEI said in closing, he was keen to hear from companies where, “they were attempting to use AI effectively and encountering difficulties due to regulatory structures.”

In IBM’s own words to government recently,

“A rush to further regulation can have the effect of chilling innovation and missing out on the societal and economic benefits that AI can bring.”

The vague threat is very clear: if you regulate, you’ll lose. But the societal and economic benefits are just as vague.

So far, many talking about ethics are trying to find a route to regulatory avoidance. ‘We’ll do better,’ they promise.

In Ben Wagner’s recent paper, Ethics as an Escape from Regulation: From ethics-washing to ethics-shopping, he asks how to ensure this does not become the default engagement with ethical frameworks or rights-based design. He sums up: “In this world, ‘ethics’ is the new ‘industry self-regulation’.”

Perhaps it’s ingenious PR to make sure that what is in effect self-regulation, right across the business model, looks like it comes imposed from others, from the very bodies set up to fix it.

But as I think about in part 2, is this healthy for UK public policy and the future not of an industry sector, but a whole technology, when it comes to AI?

Framing AI and data debates as a cost to the Economy

Companies, organisations and individuals arguing against regulation are framing the debate as if regulation would come at a great cost to society and the economy. But we rarely hear what effect they expect on their own company, or what the expected cost/benefit is for them. It is disingenuous to have only part of that conversation. In fact the AI debate would be richer were it included. If companies think their innovation or profits are at risk from non-use, or from regulated use, and there is risk to the national good associated with these products, we should be talking about all of that.

And in addition, we can talk about use and non-use in society. Too often, the whole debate is intangible. Show me real costs, real benefits. Real risk assessments. Real explanations that speak human. Industry should show society what’s in it for them.

You don’t want it to ‘turn out like GM crops’? Then learn their lessons on transparency and trustworthiness, and avoid the hype. And understand that sometimes there is simply tech that people do not want.

Reframing the debate around imbalance of risk

And while we often hear about the imbalance of power associated with using AI, we also need to talk about the imbalance of risk.

While a small false positive rate for a company product may be a great success for them, or for a Local Authority buying the service, it might at the same time, mean lives forever changed, children removed from families, and individual reputations ruined.
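To put made-up numbers on that asymmetry (the rates and volumes below are purely hypothetical, not drawn from any real product):

```python
# Hypothetical screening tool: a "small" false positive rate still means
# a large absolute number of people wrongly flagged once applied at scale.
families_screened = 100_000   # assumed annual volume
false_positive_rate = 0.01    # the vendor's "99% accurate" headline

families_wrongly_flagged = int(families_screened * false_positive_rate)
print(families_wrongly_flagged)  # 1000 families a year wrongly flagged
```

For the vendor that is a rounding error in a sales deck; for each of those families it may be an investigation, a record that follows a child, or worse.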

And where company owners may see no risk from the product they assure is safe, there are intangible risks that need factored in, for example in education where a child’s learning pathway is determined by patterns of behaviour, and how tools shape individualised learning, as well as the model of education.

Companies may change business model, ownership, and move on to other sectors after failure. But with the levels of unfairness already felt in the relationship between the citizen and State — in programmes like Troubled Families, Universal Credit, Policing, and Prevent — where use of algorithms and ever larger datasets is increasing, long term harm from unaccountable failure will grow.

Society needs a rebalance of the system urgently to promote transparent fairness in interactions, including but not only those with new applications of technology.

We must find ways to reframe how this imbalance of risk is assessed, and is distributed between companies and the individual, or between companies and state and society, and enable access to meaningful redress when risks turn into harm.

If we are to do that, we need first to separate truth from hype, public good from self-interest and have a real discussion of risk across the full range from individual, to state, to society at large.

That’s not easy against a non-neutral backdrop and scant sources of unbiased evidence and corporate capture.

Challenging the unaccountable and the ‘inevitable’.

In 2017 the Care Quality Commission reported on online services in the NHS, and found serious concerns of unsafe and ineffective care. It has a cross-regulatory working group.

By contrast, no one appears to oversee that risk, or the embedded use of automated tools involved in decision-making or decision support, in children’s services or education. These are areas where AI, cognitive behavioural science and neuroscience are already in use, without ethical approval, without parental knowledge, and without any transparency.

Meanwhile, as all this goes on, many academics are busy debating how to fix algorithmic bias, accountability and transparency.

Few are challenging the narrative of the ‘inevitability’ of AI.

Julia Powles and Helen Nissenbaum recently wrote that many of these current debates are an academic distraction, removed from reality. It is underappreciated how deeply these tools are already embedded in UK public policy. “Trying to “fix” A.I. distracts from the more urgent questions about the technology. It also denies us the possibility of asking: Should we be building these systems at all?” [1]

Challenging the unaccountable and the ‘inevitable’ is the title of the conclusion of the Women Leading in AI report on principles, and makes me hopeful.

“There is nothing inevitable about how we choose to use this disruptive technology. […] And there is no excuse for failing to set clear rules so that it remains accountable, fosters our civic values and allows humanity to be stronger and better.”

[1] Powles, J. and Nissenbaum, H. (2018) The Seductive Diversion of ‘Solving’ Bias in Artificial Intelligence, Medium.

Next: Part 2 – Policy shapers, product makers, and profit takers, on:

  • Corporate Capture.
  • Corporate Accountability, and
  • Creating Authentic Accountability.

Policy shapers, product makers, and profit takers (2)

Corporate capture

Companies are increasingly in controlling positions of the tech narrative in the press. They fund neutral third-sector organisations’ and think tanks’ research. They support organisations advising on online education. They are closely involved in politics. And they sit, increasingly, within the organisations set up to lead the technology vision, advising government on policy and UK data analytics, or on social media, AI and ethics.

It is all subject to corporate capture.

But is this healthy for UK public policy and the future not of an industry sector, but a whole technology, when it comes to AI?

If a company’s vital business interests seem unfazed by the risk and harm they cause to individuals — from people who no longer trust the confidentiality of the system to measurable harms — why should those companies sit on public policy boards set up to shape the ethics they claim we need, to solve the problems and restore loss of trust that these very same companies are causing?

We laud people in these companies as co-founders and forward thinkers on new data ethics institutes. They are invited to sit on our national boards, or create new ones.

What does that say about the entire board’s respect for the law which the company breached? It is hard not to see it signal acceptance of the company’s excuses or lack of accountability.

Corporate accountability

The same companies whose work has breached data protection law in multiple ways, seemingly ‘by accident’, on national data extractions, are the companies that cross the t’s and dot the i’s on even the simplest conference call, and demand everything is said in strictest confidence. Meanwhile their everyday business practices ignore millions of people’s lawful rights to confidentiality.

The extent of commercial companies’ influence on these boards is opaque. To allow this ethics bandwagon to be driven by the corporate giants surely undermines genuine rights-based values, and the long-term integrity of the bodies they appear to serve.

I am told that these global orgs must be in the room and at the table, to use the opportunity to make the world a better place.

These companies already have *all* the opportunity. Not only monopoly positions on their own technology, but the datasets at scale which underpin it, excluding new entrants to the market. Their pick of new hires from universities. The sponsorship of events. The political lobbying. Access to the media. The lawyers. Bottomless pockets to pay for it all. And seats at board tables set up to shape UK policy responses.

It’s a struggle for power, and a stake in our collective future. The status quo is not good enough for many parts of society, and to enable Big Tech or big government to maintain that simply through the latest tools, is a missed chance to reshape for good.

You can see it in the make-up of many tech boards, and their pervasive white male bias. We hear it echoed in London think tank conferences, even independent tech design agencies, or set out in some Big Tech reports. All seemingly unconnected, but often funded by the same driving sources.

These companies are often the ones that made it worse to start with, and the very ethics issues the boards have been set up to deal with are at the core of their business models, and of their own making.

The deliberate infiltration of influence on online safety policy for children, or global privacy efforts is very real, explicitly set out in the #FacebookEmails, for example.

We will not resolve these fundamental questions as long as the companies whose business depends on them steer national policy. The odds will be ever in their favour.

At the same time, some of these individuals are brilliant. In all senses.

So what’s the answer. If they are around the table, what should the UK public expect of their involvement, and ensure in whose best interests it is? How do we achieve authentic accountability?

Whether it be social media, data analytics, or AI in public policy, can companies be safely permitted to be policy shapers if they wear all the hats; product maker, profit taker, *and* process or product auditor?

Creating Authentic Accountability

At minimum we must demand responsibility for their own actions from board members who represent or are funded by companies.

  1. They must deliver on their own product problems first before being allowed to suggest solutions to societal problems.
  2. There should be credible separation between informing policy makers, and shaping policy.
  3. There must be total transparency of funding sources across any public sector boards, of members, and those lobbying them.
  4. Board members must be meaningfully held accountable for continued company transgressions on rights and freedoms, not only harms.
  5. Oversight of board decision making must be decentralised, transparent and available to scrutiny and meaningful challenge.

While these new bodies may propose solutions that include public engagement strategies, transparency, and standards, few propose meaningful oversight. The real test is not what companies say in their ethical frameworks, but in what they continue to do.

If they fail to meet legal or regulatory frameworks, minimum accountability should mean no more access to public data sets and losing positions of policy influence.

Their behaviour needs to go above and beyond meeting the letter of the law, rather than scraping by or working around rights-based protections. They need to put people ahead of profit and self-interest. That’s what ethics should mean, not be a PR route to avoid regulation.

As long as companies think the consequences of their platforms and actions are tolerable and a minimal disruption to their business model, society will be expected to live with their transgressions, and our most vulnerable will continue to pay the cost.


This is part 2 of thoughts on Policy shapers, product makers, and profit takers — data and AI. Part 1 is here.

Ethically problematic

Five years ago, researchers at the Manchester University School of Social Sciences wrote, “It will no longer be possible to assume that secondary data use is ethically unproblematic.”

Five years on, other people’s use of the language of data ethics puts social science at risk. Event after event, we are witnessing the gradual dissolution of the value and meaning of ‘ethics’, into little more than a buzzword.

Companies and organisations are using the language of ‘ethical’ behaviour blended with ‘corporate responsibility’ modelled after their own values, as a way to present competitive advantage.

Ethics is becoming shorthand for, ‘we’re the good guys’. It is being subverted by personal data users’ self-interest. Not to address concerns over the effects of data processing on individuals or communities, but to justify doing it anyway.

An ethics race

There’s certainly a race on for who gets to define what data ethics will mean. We have at least three new UK institutes competing for a voice in the space. Digital Catapult has formed an AI ethics committee. Data charities abound. Even Google has developed an ethical AI strategy of its own, in the wake of their Project Maven.

Lessons learned in public data policy should be clear by now. There should be no surprises how administrative data about us are used by others. We should expect fairness. Yet these basics still seem hard for some to accept.

The NHS Royal Free Hospital in 2015 was rightly criticised, because it tried “to commercialise personal confidentiality without personal consent,” as reported in Wired recently.

“The shortcomings we found were avoidable,” wrote Elizabeth Denham in 2017, when the ICO found six ways the Google DeepMind and Royal Free deal did not comply with the Data Protection Act. The price of innovation, she said, didn’t need to be the erosion of fundamental privacy rights underpinned by the law.

If the Centre for Data Ethics and Innovation is put on a statutory footing where does that leave the ICO, when their views differ?

It’s why the idea of DeepMind funding work in Ethics and Society seems incongruous to me. I wait to be proven wrong. In their own words, “technologists must take responsibility for the ethical and social impact of their work“. Breaking the law however, is conspicuous by its absence, and the Centre must not be used by companies, to generate pseudo lawful or ethical acceptability.

Do we need new digital ethics?

Admittedly, not all laws are good laws. But if recognising and acting under the authority of the rule-of-law is now an optional extra, it will undermine the ICO, sink public trust, and destroy any hope of achieving the research ambitions of UK social science.

I am not convinced there is any such thing as digital ethics. The claimed gap in an ability to get things right in this complex area too often appears only after people get caught doing something wrong. Technologists abdicate accountability saying “we’re just developers,” and sociologists say, “we’re not tech people.”

These shrugs of the shoulders by third-parties, should not be rewarded with more data access, or new contracts. Get it wrong, get out of our data.

This lack of acceptance of responsibility creates a sense of helplessness. We can’t make it work, so let’s make the technology do more. But even the most transparent algorithms will never be accountable. People can be accountable, and it must be possible to hold leaders to account for the outcomes of their decisions.

But it shouldn’t be surprising no one wants to be held to account. The consequences of some of these data uses are catastrophic.

Accountability is the number one problem to be solved right now. It includes openness about data errors, uses, outcomes, and policy. Are commercial companies with public sector contracts checking that data are accurate, and corrected with the people the data are about, before applying them in predictive tools?

Unethical practice

As Tim Harford in the FT once asked about Big Data uses in general: “Who cares about causation or sampling bias, though, when there is money to be made?”

Problem area number two, whether researchers are working towards a profit model or chasing grant funding, is this:

How can data users make unbiased decisions about whether they should use the data? We have the same bodies deciding on data access that oversee its governance. Conflict of self-interest is built in by default, and the allure of new data territory is tempting.

But perhaps the UK’s key public data ethics problem is that policy is currently too often about the system goal, not about improving the experience of the people using the systems. Not using technology as a tool, as if people mattered. Harmful policy can generate harmful data.

Secondary uses of data are intrinsically dependent on the ethics of the data’s operational purpose at collection. Damage-by-design is evident right now across a range of UK commercial and administrative systems. Metrics of policy success, and the associated data, may simply be wrong.

Some of the damage is done by collecting data for one purpose and using it operationally for another, in secret. Until this modus operandi changes, no one should think that “data ethics will save us”.

Some of the most ethical research aims try to reveal these problems. But we need to also recognise not all research would be welcomed by the people the research is about, and few researchers want to talk about it. Among hundreds of already-approved university research ethics board applications I’ve read, some were desperately lacking. An organisation is no more ethical than the people who make decisions in its name. People disagree on what is morally right. People can game data input and outcomes and fail reproducibility. Markets and monopolies of power bias aims. Trying to support the next cohort of PhDs and impact for the REF, shapes priorities and values.

Individuals turn into data, and data become regnant.” Data are often lacking in quality and completeness and given authority they do not deserve.

It is still rare to find informed discussion among the brightest and best of our leading data institutions, about the extensive everyday real world secondary data use across public authorities, including where that use may be unlawful and unethical, like buying from data brokers. Research users are pushing those boundaries for more and more without public debate. Who says what’s too far?

The only way is ethics? Where next?

The latest academic-commercial mash-ups on why we need new data ethics, in a new regulatory landscape where the established is seen as past it, are a dangerous catch-all ‘get out of jail free’ card.

Ethical barriers are out of step with some of today’s data politics. The law is being sidestepped, and regulation diminished by a lack of enforcement against gratuitous data grabs from the Internet of Things, while social media data are seen as a free-for-all. Data access barriers are unwanted. What is left to prevent harm?

I’m certain that we first need to take a step back if we are to move forward. Ethical values are founded on human rights that existed before data protection law. Fundamental human decency, rights to privacy, and to freedom from interference, common law confidentiality, tort, and professional codes of conduct on conflict of interest, and confidentiality.

Data protection law emphasises data use. But too often its first principles of necessity and proportionality are ignored. Ethical practice would ask more often, should we collect the data at all?

Although GDPR requires new safeguards to ensure that appropriate technical and organisational measures are met to control and process data, and there is a clearly defined Right to Object, I have yet to see a single event give this any thought.

Let’s not pretend secondary use of data is unproblematic, while uses are decided in secret. Calls for a new infrastructure actually seek workarounds of regulation. And human rights are dismissed.

Building a social license between data subjects and data users is unavoidable if use of data about people hopes to be ethical.

The lasting solutions are underpinned by law, and ethics. Accountability for risk and harm. Put the person first in all things.

We need more than hopes and dreams and talk of ethics.

We need realism if we are to get a future UK data strategy that enables human flourishing, with public support.

Notes of desperation or exasperation are increasingly evident in discourse on data policy, and start to sound little better than ‘we want more data at all costs’. If so, the true costs would be lasting.

Perhaps then it is unsurprising that there are calls for a new infrastructure to make it happen, in the form of Data Trusts. Some thoughts on that follow too.


Part 1. Ethically problematic

Ethics is dissolving into little more than a buzzword. Can we find solutions underpinned by law, and ethics, and put the person first?

Part 2. Can Data Trusts be trustworthy?

As long as data users ignore data subjects’ rights, Data Trusts have no social license.


Elliot, M., Purdam, K. and Mackey, E. (2013) Data Horizons: New Forms of Data For Social Research, School of Social Sciences, The University of Manchester, CCSR Report 2013-3, 12/6/2013.

The power behind today’s AI in public services

Thinking about whether education in England is preparing us for the jobs of the future, means also thinking about how technology will influence it.

Time and again, thinking and discussion about these topics is siloed. At the Turing Institute, the Royal Society, the ADRN and the EPSRC, in government departments’ discussions on data, and within education practitioner and public circles, we are all having similar discussions about data and ethics, but with little ownership and no goals for future outcomes. If government doesn’t get it, or have time for it, or policy lacks ethics by design, is it in the public interest for private companies, Google et al., to offer a fait accompli?

There is lots of talk about Machine Learning (ML), Artificial Intelligence (AI) and ethics. But what is being done to ensure that real values (respect for rights, human dignity, and autonomy) are built into practice in public services delivery?

In most recent data policy it is entirely absent. The Digital Economy Act s33 risks enabling, through the removal of inter- and intra-departmental data protections, an unprecedented expansion of public data transfers, with “untrammelled powers”. Powers without the codes of practice promised over a year ago. That has fallout for the trustworthiness of the legislative process, and for data practices across public services.

Predictive analytics is growing, but is poorly understood by the public and within the public sector.

There is already dependence on computers in aspects of public sector work, and their interactions with people in sensitive situations demand better knowledge of how systems operate and can be wrong. Debt recovery and social care, to take two known examples.

Risk-averse staff appear to choose not to question the outcome of ‘algorithmic decision-making’, or do not have the ability to do so. There is reportedly no analysis training for practitioners to understand the basis or bias of conclusions. The potential is that, instead of making us more informed, decision-making by machine makes us humans less clever.

What does it do to professionals, if they feel therefore less empowered? When is that a good thing if it overrides discriminatory human decisions? How can we tell the difference and balance these risks if we don’t understand or feel able to challenge them?

In education, what is it doing to children whose attainment is profiled, predicted, and acted on, to target extra or less focus from school staff who have no ML training, and without the informed consent of pupils or parents?

If authorities use data in ways the public do not expect, such as to identify homes of multiple occupancy without informed consent, they will fail to deliver future uses for good. The ‘public interest’, ‘user need’ and ethics can come into conflict, according to your point of view. The public, data protection law and ethics all object to harms from the use of data. This type of application has the potential to be mind-blowingly invasive and to reveal all sorts of other findings.

Widely informed thinking must be made into meaningful public policy for the greatest public good

Our politicians are caught up in the General Election and buried in Brexit.

Meanwhile, the commercial companies taking first rights to AI, to capitalise on existing commercial advantage, could potentially strip public assets, use up our personal data and public trust, and leave the public with little public good. We are already used by global data players, and by machine-based learning companies, without our knowledge or consent. That knowledge can be used to profit business models that pay little tax into the public purse.

There are valid macroeconomic arguments about whether private spend and investment are preferable to a state’s ability to do the same. But these companies make more than enough to do it all. Does not paying just amounts of tax signal a failure of commitment to the wider community, and is it a red flag for a company’s commitment to the public good?

What that public good should look like, depends on who is invited to participate in the room, and not to tick boxes, but to think and to build.

The Royal Society’s Report on AI and Machine Learning published on April 25, showed a working group of 14 participants, including two Google DeepMind representatives, one from Amazon, private equity investors, and academics from cognitive science and genetics backgrounds.

Our #machinelearning working group chair, professor Peter Donnelly FRS, on today’s major #RSMachinelearning report https://t.co/PBYjzlESmB pic.twitter.com/RM9osnvOMX

— The Royal Society (@royalsociety) April 25, 2017

If we are going to form objective policies, the inputs that form their basis must be informed, but they must also be well balanced, and be seen to be balanced. Not as an add-on, but in the same room.

As Natasha Lomas in TechCrunch noted, “Public opinion is understandably a big preoccupation for the report authors — unsurprisingly so, given that a technology that potentially erodes people’s privacy and impacts their jobs risks being drastically unpopular.”

“The report also calls on researchers to consider the wider impact of their work and to receive training in recognising the ethical implications.”

What are those ethical implications? Who decides which matter most? How do we eliminate recognised discriminatory bias? What should data be used for and AI be working on at all? Who is it going to benefit? What questions are we not asking? Why are young people left out of this debate?

Who decides what the public should or should not know?

AI and ML depend on data. Data is often talked about as a panacea for the problems of working better together. But data alone does not make people better informed, in the same way that people fail if they don’t feel it is their job to pick up the fax. A fundamental building block of our future public and private prosperity is understanding data and how we, and the AI, interact. What is the data telling us, how do we interpret it, and how do we know it is accurate?

How and where will we start to educate young people about data and ML, if not about their own data and its use by government and commercial companies?

The whole of Chapter 5 in the report is very good as a starting point for policy makers who have not yet engaged with the area. Privacy, while summed up too briefly in the conclusions, is scattered throughout.

Blind spots remain, however.

  • Over-willingness to accommodate existing big private players, as their expertise leads design and development, and a desire to ‘re-write regulation’.
  • Slowness to react to needed regulation in the public sector (caught up in Brexit), while commercial drivers and technology change forge ahead.
  • ‘How do we develop technology that benefits everyone’ must not only think of the UK, but of the global South, especially the bias in how AI is being taught, and broad socio-economic barriers in application.
  • Predictive analytics plus professional application equals an unwillingness to question the computer result. In children’s social care this is already driving a damaging upturn in the family courts (s31).
  • Data and technology knowledge and ethics training must be embedded across the public sector, not only for postgraduate students in machine learning.
  • Harms being done to young people today and potential for intense future exploitation, are being ignored by policy makers and some academics. Safeguarding is often only about blocking in case of liability to the provider, stopping children seeing content, or preventing physical exploitation. It ignores exploitation by online platform firms, and app providers and games creators, of a child’s synthesised online life and use. Laws and government departments’ own practices can be deeply flawed.
  • Young people are left out of discussions which, after all, are about their future. [They might have some of the best ideas, we miss at our peril.]

There is no time to waste

Children and young people have the most to lose while their education, skills, jobs market, economy, culture, care, and society go through a series of gradual but seismic shifts in purpose, culture, and acceptance, before finding new norms post-Brexit. They will also gain the most if the foundations are right. One of these must be getting age verification right in GDPR, not allowing it to enable a massive data grab of child-parent privacy.

Although the RS Report considers young people in the context of a future workforce who need skills training, they are otherwise left out of this report.

“The next curriculum reform needs to consider the educational needs of young people through the lens of the implications of machine learning and associated technologies for the future of work.”

Yes it does, but it must give young people, and the implications of ML for their future, broader consideration than the classroom or workplace alone.

Facebook has targeted vulnerable young people, it is alleged, to facilitate predatory advertising practices. Some argue that emotive computing or MOOCs belong in the classroom. Who decides?

We are not yet talking about the effects of teaching technology to learn, and its effect on public services and interactions with the public. Questions that Sam Smith asked in Shadow of the smart machine: Will machine learning end?

At the end of this Information Age we are at a point when machine learning, AI and biotechnology are potentially life-enhancing, or could have catastrophic effects, if indeed AI will cause people “more pain than happiness”, as described by Alibaba’s founder Jack Ma.

The conflict between commercial profit and public good, what commercial companies say they will do and actually do, and fears and assurances over predicted outcomes is personified in the debate between Demis Hassabis, co-founder of DeepMind Technologies, (a London-based machine learning AI startup), and Elon Musk, discussing the perils of artificial intelligence.

Vanity Fair reported that “Elon Musk began warning about the possibility of A.I. running amok three years ago. It probably hadn’t eased his mind when one of Hassabis’s partners in DeepMind, Shane Legg, stated flatly, ‘I think human extinction will probably occur, and technology will likely play a part in this.’”

Musk was of the opinion that A.I. was probably humanity’s “biggest existential threat.”

We are not yet joining up multi disciplinary and cross sector discussions of threats and opportunities

Jobs, shifts in the skill sets education needs, how we think, interact, value each other, and accept or reject ownership and power models; and, later, threats from the technology itself. Conversely, we are not yet talking about the opportunities that these seismic shifts offer in real terms, or how and why to accept, reject or regulate them.

Where private companies are taking over personal data given in trust to public services, it is reckless for the future of public interest research to assume there is no public objection. How can we object, if not asked? How can children make an informed choice? How will public interest be assured to be put ahead of private profit? If it is intended on balance to be all about altruism from these global giants, then they must be open and accountable.

Private companies are shaping how and where we find machine learning and AI gathering data about our behaviours in our homes and public spaces.

SPACE10, an innovation hub for IKEA is currently running a survey on how the public perceives and “wants their AI to look, be, and act”, with an eye on building AI into their products, for us to bring flat-pack into our houses.

As the surveillance technology built into the Things in our homes attached to the Internet becomes more integral to daily life, authorities are now using it to gather evidence in investigations; from mobile phones, laptops, social media, smart speakers, and games. The IoT so far seems less about the benefits of collaboration, and all about the behavioural data it collects and uses to target us to sell us more things. Our behaviours tell much more than how we act. They show how we think inside the private space of our minds.

Do you want Google to know how you think, and to have control over that? These are the companies of the world with access to massive amounts of data, now using that data to teach AI how to ‘think’. What is AI learning? And how much should the State see or know about how you think, or try to predict it?

Who cares, wins?

It is not overstated to say society and future public good of public services, depends on getting any co-dependencies right. As I wrote in the time of care.data, the economic value of data, personal rights and the public interest are not opposed to one another, but have synergies and co-dependency. One player getting it wrong, can create harm for all. Government must start to care about this, beyond the side effects of saving political embarrassment.

Without joining up all aspects, we cannot limit harms and make the most of benefits. There is nuance and unknowns. There is opaque decision making and secrecy, packaged in the wording of commercial sensitivity and behind it, people who can be brilliant but at the end of the day, are also, human, with all our strengths and weaknesses.

And we can get this right, if data practices get better, with joined up efforts.

Our future society, as our present, is based on webs of trust, on our social networks on- and offline, that enable business, our education, our cultural, and our interactions. Children must trust they will not be used by systems. We must build trustworthy systems that enable future digital integrity.

The immediate harm that comes from blind trust in AI companies is not their AI, but the hidden powers that commercial companies have to nudge public and policy maker behaviours and acceptance, towards private gain. Their ability and opportunity to influence regulation and future direction outweighs most others. But lack of transparency about their profit motives is concerning. Carefully staged public engagement is not real engagement but a fig leaf to show ‘the public say yes’.

The unwillingness by Google DeepMind, when asked at their public engagement event, to discuss their past use of NHS patient data, or the profit model plan or their terms of NHS deals with London hospitals, should be a warning that these questions need answers and accountability urgently.

As TechCrunch suggested after the event, this is all “pretty standard playbook for tech firms seeking to workaround business barriers created by regulation.” Calls for more data, might mean an ever greater power shift.

Companies that have already extracted and benefited from personal data in the public sector, have already made private profit. They and their machines have learned for their future business product development.

A transparent accountable future for all players, private and public, using public data is a necessary requirement for both the public good and private profit. It is not acceptable for departments to hide their practices, just as it is unacceptable if firms refuse algorithmic transparency.

“Rebooting antitrust for the information age will not be easy. It will entail new risks: more data sharing, for instance, could threaten privacy. But if governments don’t want a data economy dominated by a few giants, they will need to act soon.” [The Economist, May 6]

If the State creates a single data source of truth, or a private tech giant thinks it can side-step regulation and gets it wrong, their practices screw up public trust. It harms public interest research, and with it our future public good.

But will they care?

If we care, then across public and private sectors, we must cherish shared values and better collaboration. Embed ethical human values into development, design and policy. Ensure transparency of where, how, who and why my personal data has gone.

We must ensure that as the future becomes “smarter”, we educate ourselves and our children to stay intelligent about how we use data and AI.

We must start today, knowing how we are used by both machines, and man.


First published on Medium for a change.

Mum, are we there yet? Why should AI care.

Mike Loukides drew similarities between the current status of AI and children’s learning in an article I read this week.

The children I know are always curious to know where they are going, how long it will take, and how they will know when they get there. They often ask others for guidance.

Loukides wrote that if you look carefully at how humans learn, you see surprisingly little unsupervised learning.

If unsupervised learning is a prerequisite for general intelligence, but not the substance, what should we be looking for, he asked. It made me wonder is it also true that general intelligence is a prerequisite for unsupervised learning? And if so, what level of learning must AI achieve before it is capable of recursive self-improvement? What is AI being encouraged to look for as it learns, what is it learning as it looks?
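For readers who haven’t met the terms, here is a rough sketch of the distinction Loukides draws, using scikit-learn and made-up numbers; it is illustrative only. Supervised learning is handed the ‘right answers’ by a human, while unsupervised learning is left to find structure on its own.

```python
# Toy illustration only: tiny made-up data, standard scikit-learn estimators.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.cluster import KMeans

X = np.array([[1.0, 1.1], [0.9, 1.0], [4.0, 4.2], [4.1, 3.9]])
y = np.array([0, 0, 1, 1])  # labels supplied by a human 'teacher'

supervised = LogisticRegression().fit(X, y)            # learns from the labels
unsupervised = KMeans(n_clusters=2, n_init=10).fit(X)  # finds groups unaided

print(supervised.predict([[4.0, 4.0]]))  # predicts the label it was taught
print(unsupervised.labels_)              # the grouping it discovered itself
```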

What is AI looking for and how will it know when it gets there?

Loukides says he can imagine a toddler learning some rudiments of counting and addition on his or her own, but can’t imagine a child developing any sort of higher mathematics without a teacher.

I suggest a different starting point. I think children develop on their own, given a foundation. And if the foundation is accompanied by a purpose, to understand why they should learn to count and why they should want to, and if they have the inspiration, incentive and assets, they’ll soon go off on their own and outstrip your level of knowledge. That may or may not be with a teacher, depending on what is available, the cost, and how far they get compared with what they want to achieve.

It’s hard to learn something from scratch by yourself if you have no boundaries to set knowledge within and search for more, or to know when to stop when you have found it.

You’ve only to start an online course, get stuck, and try to find the solution through a search engine to know how hard it can be to find the answer if you don’t know what you’re looking for. You can’t type in search terms if you don’t know the right words to describe the problem.

I described this recently to a fellow codebar-goer, more experienced than me, and she pointed out something much better to me. Don’t search for the solution or describe what you’re trying to do, ask the search engine to find others with the same error message.

In effect she said, your search is wrong. Google knows the answer, but can’t tell you what you want to know, if you don’t ask it in the way it expects.
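A small, made-up example of the difference her advice makes: the literal error message is a far better query than a description of the goal.

```python
# A made-up illustration: search the exact error message, not a description
# of what you were trying to do.
ages = ["12", 7]
total = ages[0] + ages[1]
# Running this raises:
#   TypeError: can only concatenate str (not "int") to str
# Pasting that exact message into a search engine finds everyone else who
# hit the same problem; searching "why won't my program add my numbers"
# mostly does not.
```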

So what will AI expect from people, and will it care if we don’t know how to interrelate? How does AI best serve humankind, and defined from whose point of view? Will AI serve only those who think most closely in AI-style steps and language? How will it serve those who don’t know how to talk about it, or with it? AI won’t care if we don’t.

If as Loukides says, we humans are good at learning something and then applying that knowledge in a completely different area, it’s worth us thinking about how we are transferring our knowledge today to AI and how it learns from that. Not only what does AI learn in content and context, but what does it learn about learning?

His comparison of a toddler learning from parents — who in effect are ‘tagging’ objects through repetition of words while looking at images in a picture book — made me wonder how we will teach AI the benefit of learning? What incentive will it have to progress?

“the biggest project facing AI isn’t making the learning process faster and more efficient. It’s moving from machines that solve one problem very well (such as playing Go or generating imitation Rembrandts) to machines that are flexible and can solve many unrelated problems well, even problems they’ve never seen before.”

Is the skill to enable “transfer learning” what will matter most?
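For readers unfamiliar with the phrase, here is a minimal sketch of what transfer learning tends to look like in practice today, assuming the Keras library and an image-classification task; the dataset names are placeholders, not a real project.

```python
# Illustrative sketch of transfer learning with Keras: reuse a network
# pre-trained on ImageNet, and train only a small new 'head' for a new task.
import tensorflow as tf

base = tf.keras.applications.MobileNetV2(
    input_shape=(160, 160, 3), include_top=False, weights="imagenet")
base.trainable = False  # keep the general-purpose visual features as they are

model = tf.keras.Sequential([
    base,
    tf.keras.layers.GlobalAveragePooling2D(),
    tf.keras.layers.Dense(1, activation="sigmoid"),  # the new, specific task
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])

# model.fit(new_task_images, new_task_labels, epochs=5)  # placeholder data
```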

For AI to become truly useful, we as a global society need to better understand *where* it might best interface with our daily lives, and most importantly *why*. And consider *who* is teaching AI, and who is being left out in the crowdsourcing of AI’s teaching.

Who is teaching AI what it needs to know?

The natural user interfaces for people to interact with today’s more common virtual assistants (Amazon’s Alexa, Apple’s Siri, Viv, and Microsoft’s Cortana) are not just providing information to the user; through their use, those systems are learning. I wonder what percentage of today’s population is using these assistants, how representative they are, and what our AI assistants are being taught through their use? Tay was a swift lesson learned for Microsoft.

In helping shape what AI learns, and what range of language it will use to develop its reference words and knowledge, society co-shapes what AI’s purpose will be, and what, for AI providers, the point of selling it is. So will this technology serve everyone?

Are providers counter-balancing what AI is currently learning from crowdsourcing, if the crowd is not representative of society?

So far we can only teach machines to make decisions based on what we already know, and what we can tell them to decide quickly against pre-known references, using lots of data. Will your next image captcha teach AI to separate the sloth from the pain au chocolat?

One of the task items for machine processing is better searches. Measurable, goal-driven tasks have boundaries, but who sets them? When does a computer know if it has found enough to make a decision? If the balance of material about the Holocaust on the web, for example, were written by Holocaust deniers, will AI know who is right? How will AI know what is trusted, and by whose measure?

What will matter most is surely not going to be how to optimise knowledge transfer from human to AI — that is the baseline knowledge of supervised learning — and it won’t even be for AI to know when to use its skill set in one place and when to apply it elsewhere in a different context; so-called learning transfer, as Mike Loukides says. But rather, will AI reach the point where it cares?

  • Will AI ever care what it should know and where to stop or when it knows enough on any given subject?
  • How will it know or care if what it learns is true?
  • If in the best interests of advancing technology or through inaction  we do not limit its boundaries, what oversight is there of its implications?

Online limits will limit what we can reach in Thinking and Learning

If you look carefully at how humans learn online, I think that rather than seeing surprisingly little unsupervised learning, you see a lot of unsupervised questioning. It is often in the questioning done in private that we discover, and through discovery that we learn. Valuable discoveries are often made, whether in science or in maths, and important truths are found where there is a need to challenge the status quo. Imagine if Galileo had given up.

The freedom to think freely and to challenge authority is vital to protect, and one reason why I and others are concerned about the compulsory web monitoring starting on September 5th in all schools in England, and its potential chilling effect. Some are concerned about who might have access to these monitoring results today or in future; if stored, could they be opened up to employers or academic institutions?

If you tell children they cannot use these search terms, or be curious about *this* subject, without repercussions, it is censorship. I find the idea bad enough for children, but for us as adults it’s scary.

As Frankie Boyle wrote last November, we need to consider what our internet history is:

“The legislation seems to view it as a list of actions, but it’s not. It’s a document that shows what we’re thinking about.”

Children think and act in ways that they may not as an adult. People also think and act differently in private and in public. It’s concerning that our private online activity will become visible to the State in the IP Bill — whether photographs that captured momentary actions in social media platforms without the possibility to erase them, or trails of transitive thinking via our web history — and third-parties may make covert judgements and conclusions about us, correctly or not, behind the scenes without transparency, oversight or recourse.

Children worry about lack of recourse and repercussions. So do I. Things done in passing, can take on a permanence they never had before and were never intended. If expert providers of the tech world such as Apple Inc, Facebook Inc, Google Inc, Microsoft Corp, Twitter Inc and Yahoo Inc are calling for change, why is the government not listening? This is more than very concerning, it will have disastrous implications for trust in the State, data use by others, self-censorship, and fear that it will lead to outright censorship of adults online too.

By narrowing our parameters what will we not discover? Not debate?  Or not invent? Happy are the clockmakers, and kids who create. Any restriction on freedom to access information, to challenge and question will restrict children’s learning or even their wanting to.  It will limit how we can improve our shared knowledge and improve our society as a result. The same is true of adults.

So in teaching AI how to learn, I wonder how the limitations that humans put on its scope (otherwise how would it learn what the developers want?), combined with showing it ‘our thinking’ through search terms, and the limits on that if users self-censor under surveillance, will shape what AI helps us with in future. Will it be the things that could help the most people, the poorest people, or will it serve people like those who programme the AI, and who use the search terms and languages it already understands?

Who is accountable for the scope of what we allow AI to do or not? Who is accountable for what AI learns about us, from our behaviour data if it is used without our knowledge?

How far does AI have to go?

The leap for AI will be if and when AI can determine what it doesn’t know, and sees a need to fill that gap. To do that, AI will need to discover a purpose for its own learning, indeed for its own being, and be able to do so without limitation from the framework humans shaped for it. How will AI know what it needs to know, and why? How will it know what it knows is right, and which sources to trust? Against what boundaries will AI decide what it should engage with in its learning, who from, and why? Will it care? Why will it care? Will it find meaning in its reason for being? Why am I here?

We assume AI will know better. We need to care, if AI is going to.

How far are we from a machine that is capable of recursive self-improvement, asks John Naughton in yesterday’s Guardian, referencing work by Yuval Harari suggesting artificial intelligence and genetic enhancements will usher in a world of inequality and powerful elites. As I was finishing this, I read his article and found myself nodding, as discussion of the implications of new technology focuses too much on the technology and too little on society’s role in shaping it.

AI at the moment has a very broad meaning to the general public. Is it living with life-supporting humanoids?  Do we consider assistive search tools as AI? There is a fairly general understanding of “What is A.I., really?” Some wonder if we are “probably one of the last generations of Homo sapiens,” as we know it.

If the purpose of AI is to improve human lives, who defines improvement and who will that improvement serve? Is there a consensus on the direction AI should and should not take, and how far it should go? What will the global language be to speak AI?

As AI learning progresses, every time AI turns to ask its creators, “Are we there yet?”,  how will we know what to say?

image: Stephen Barling flickr.com/photos/cripsyduck (CC BY-NC 2.0)