
The “new normal” is not inevitable. (1/2)

Today Keir Starmer talked about us having more control in our lives. He said, “markets don’t give you control – that is almost literally their point.”

This week we've seen it embodied in a speech given by Oracle co-founder Larry Ellison at the company's Financial Analyst Meeting 2024. He said that AI is on the verge of ushering in a new era of mass behavioural surveillance of both police and citizens. Oracle, he suggested, would be the technological backbone for such applications, keeping everyone "on their best behaviour" through constant, real-time, machine-learning-powered monitoring (LE FAQs 1:09:00).

Ellison's sense of unquestionable entitlement to decide that *his* company should be the one to control how all citizens (and police) behave-by-design, and his omission of any consideration of a democratic mandate for that, should shock us. Not least because he is wrong in some of his claims. (There is no evidence that this kind of digital dystopia makes the difference he claims to school safety, in particular given the numbers of people already known to the school.)

How can society trust this direction of travel, in how our behaviour is shaped and how corporations impose their choices? How can a government promise society more control over our lives, and yet enable a digital environment, one which plays a large part in our everyday life, over which we seem to have ever less control?

The new government sounds keen on public infrastructure investment as a route to structural transformation. But the risk is that cost constraints mean they seek the results expected from the same plays as an industrial development strategy of old, only now using new technology and tools. It's a big mistake. Huge. And nothing less than national democracy is at stake, because individuals cannot meaningfully hold corporations to account.

The economic and political context in which an industrial strategy is defined now sits behind paywalls, without parliamentary consensus, oversight or societal legitimacy in a formal democratic environment. The constraints on businesses' power were once more tangible and localised, and their effects easier to see in one place. Power has moved considerably from government to corporations in the time Labour was out of it. We are now more dependent on being users of multiple private techno-solutions to everyday things, often ones we hate using, from paying for the car park, to laundry, to job hunting. All with indirect effects on national security and on service provision at scale, as well as direct everyday effects for citizens and, increasingly, our disempowerment and lack of agency in our own lives.

LinkedIn this week chose not to ask users at all, in what has become the OpenAI modus operandi of take first and ask forgiveness later, before grabbing our personal data from the platform to train "content creation AI models." (And then it did a U-turn.)

Data Protection law is supposed to offer people protection from such misuse, but without enforcement, ever more companies are starting to copy each other, rinse and repeat.

Convention 108 requires respect for rights and fundamental freedoms, in particular the right to privacy, and says special categories of data may not be processed automatically unless domestic law provides appropriate safeguards. Data must be obtained fairly. Many emerging [generative] AI companies have disregarded these fundamentals of European data protection law: purpose limitation ignored for incompatible uses, no relationship between the data subject and the company, a lack of accuracy, and people not even being actively offered a right to object when the processing should in fact rest on consent. That means unfair and unlawful processing. If we are to accept that anyone at all can take any personal data online and use it for an entirely different purpose, turning it into commercial products while ignoring all this, then frankly the Data Protection authorities may as well close. Are these commercial interests simply so large that they believe they can get away with steam-rollering over democratic voice and human rights, as well as the rule of (data protection) law? Unless there is consistent 'cease and desist' type enforcement, it seems to be rapidly becoming the new normal.

If instead regulators were to bring meaningful enforcement that is dissuasive, as the law intends, what would change? What will shift practice on facial recognition in the world as foreseen by Larry Ellison, and shift public policy towards sourcing responsibly? How is democracy to be saved from technocratic authoritarianism going global? If the majority of people living in democracies are never asked for their views, and have changes imposed on their lives that they do not want, how do we raise a right to object and take control?

While institutions such as Oracle grow ever more influential in our political systems, and their financial interests lie in ever more, and ever larger, AI models, the interests of our affected communities are not represented in state decisions at national or global levels.

While tools are being built to resist content scraping from artists, what is there for faces and facts, or even errors, about our lives?

Ted Chiang asked in the New Yorker in 2023 whether an alternative is possible to the current direction of travel. "Some might say that it's not the job of A.I. to oppose capitalism. That may be true, but it's not the job of A.I. to strengthen capitalism, either. Yet that is what it currently does." The greatest current risk of AI is not what we imagine from I, Robot, he suggested, but "A.I.-supercharged corporations destroying the environment and the working class in their pursuit of shareholder value."

I remember giving a debate talk as a teen thirty years ago, about the risk of rising sea levels in Vanuatu. It is a reality that causes harm. Satellite data indicates the sea level there has risen by about 6 mm per year since 1993. Someone told me in an interesting recent Twitter exchange that, when it comes to climate impacts and AI, they are "not a proponent of 'reducing consumption'". The reality of climate change in the UK today only hints at the long-term consequences for the world from the effects of migration and extreme weather events. Only by restraining consumption might we change those knock-on effects of climate change in anything like the necessary timeframe, or the next thirty years will worsen ever more quickly.
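
As a rough aside, that rate is easy to put in context with a back-of-the-envelope sum. The sketch below assumes only the ~6 mm per year figure above and a 1993 to 2024 window; the real record is not perfectly linear.

```python
# Back-of-the-envelope cumulative sea-level rise at Vanuatu, assuming the
# ~6 mm/year satellite-era rate held constant from 1993 to 2024.
# (An illustrative simplification; the real trend varies year to year.)
rate_mm_per_year = 6
years = 2024 - 1993                      # ~31 years of satellite altimetry
total_mm = rate_mm_per_year * years
print(f"{total_mm} mm, roughly {total_mm / 10:.0f} cm since 1993")
# -> 186 mm, roughly 19 cm since 1993
```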

But instead of hearing meaningful assessment of what we need in public policy, we hear politicians talk about growth as the goal, and that bigger can only be better. More AI. More data centres. What about more humanity in this machine-led environment?

While some argue for an alternative, responsible, rights-respecting path as the only sustainable path forwards, like Meeri Haataja, Chief Executive and Founder of Saidot, Helsinki, Finland, in her August letter to the FT, some of the largest companies appear to be suggesting this week, publishing ads in various European press, that because they struggle to follow the law (like Meta), Europe needs a new one. And to paraphrase, they suggest it's for our own good.

And that's whether we like it or not. You might not choose to use any of Meta's products, but you might still be being used by them, your activity online turned into advertising market data that can be sold. Researchers at the University of Oxford analysed a million smartphone apps and found that "the average one contains third‑party code from 10 different companies that facilitates this kind of tracking. Nine out of 10 of them sent data to Google. Four out of 10 of them sent data to Facebook. In the case of Facebook, many of them sent data automatically without the individual having the opportunity to say no to it." Reuse for training AI is far more explicit, and wrong, and "it is for Meta [or any other company] to ensure and demonstrate ongoing compliance." We have rights, and the rule of law, and at stake are our democratic processes.

"The balance of the interests of working people" must include respect for their fundamental rights and freedoms in the digital environment, as well as supporting the interests of economic growth; the two are mutually achievable, not mutually exclusive.

But in the UK we put the regulator on a leash in 2015, constrained by a duty towards “economic growth”. It’s a constraint that should not apply to industry and market regulators, like the ICO.

While some companies plead to be let off for their bad behaviour, others expect to profit from encouraging the state to increase the monitoring of ours, or ask that the law be written ever further in their favour. Regulators need to stand tall and stand up to it, and the government needs to remove their leash.

Even in 1959, Labour MPs included housewives concerned about being misled by advertisers and manufacturers (06:40). Many of the electoral issues have stayed the same over 65 years. But the power grab going on in this information age is unprecedented.

We need not accept this techno-authoritarianism as a "new normal" and inevitable. If indeed, as Starmer concluded, "Britain belongs to you", then it needs MPs to act like it and defend fundamental rights and freedoms, upholding values like the rule of law even with companies that believe it does not apply to them. With a growing swell of nationalism, and plenty who may not believe Britain belongs to all of us but rather that 'Tomorrow belongs to Me', it is indeed a time when "great forces demand a decisive government prepared to face the future."


See also Part 2: Farming out our Children. AI AI Oh.

The video referenced above is of Dr. Sasha Luccioni, the research scientist and climate lead at HuggingFace, an open-source community and machine-learning platform for AI developers, and is part 1 of the TED Radio Hour episode, Our tech has a climate problem.

Farming out our children. AI AI Oh. (2/2)

Today Keir Starmer talked about us having more control in our lives. "Taking back control is a Labour argument", he said. So let's see it in education tech policy, where parents told us in 2018 that less than half felt they had sufficient control of their child's digital footprint.

Not only has the UK lost control of which companies control large parts of the state education infrastructure and its delivery, the state is *literally* giving away control of our children's lives, recorded in identifiable data at national level, and since 2012 that has included giving it to journalists, think tanks, and companies.

Why it matters is less about the data per se than about what is done with it without our permission, and how that affects our lives.

Politicians' love affair with AI (undefined) seems to be as ardent as under the previous government. The State appears to have chosen to further commercialise children's lives in data, having announced towards the end of the school summer holidays that the DfE and DSIT will give pupils' assessment data to companies for AI product development. I get angry about this, because the data is badly misunderstood: it is not a product to pass around, but the stories of children's lives in data, and that belongs to them to control.

Are we asking the right questions today about AI and education? In 2016, in a post for Nesta, Sam Smith foresaw the algorithmic fiasco that would happen in the summer of 2020, pointing out that exam-marking algorithms, like any other decisions, have unevenly distributed consequences. What prevents that happening daily, but behind closed doors and in closed systems? The answer is, nothing.

Both the adoption of AI in education and education about AI are unevenly distributed. Driven largely by commercial interests, some are co-opting teaching unions for access to the sector; others, more cautious, have focused on the challenges of bias, discrimination and plagiarism. As I recently wrote in Schools Week, the influence of corporate donors and their interests in shaping public sector procurement, such as the Tony Blair Institute's backing by Oracle owner Larry Ellison, therefore demands scrutiny.

Should society allow its public sector systems and laws to be shaped primarily to suit companies? The users of the systems are shaped by how those companies work, so who keeps the balance in check?

In a 2021 reflection here on World Children's Day, I asked the question, Man or Machine, who shapes my child? Three years later, I am still concerned about the failure to recognise and address the redistribution not only of pupils' agency but of teachers' authority: from individuals to companies (pupils and the teacher don't decide what is the 'right' thing to do next, the 'computer' does); from public interest institutions to companies (company X determines the curriculum content of what the computer does and how, not the school); and from State to companies (accountability for outcomes falls through the gap in outsourcing activity to the AI company).

Why it matters is that these choices influence not only how we teach and learn, but how children feel about it and how they develop.

The human response to surveillance (and that is what much of AI relies on: massive data-veillance and dashboards) is a result of the chilling effect of being 'watched' by the known or unknown persons behind the monitoring. We modify our behaviours to be compliant with their expectations. We try not to stand out from the norm, to protect ourselves from the resulting effects.

The second reason we modify our behaviours is to be compliant with the machine itself. Thanks to the lack of a responsible human in the interaction mediated by the AI tool, we are forced to change what we do to comply with what the machine can manage. How AI is changing human behaviour is not confined to where we walk, meet, play and are overseen in outdoor or indoor spaces. It is in how we respond to it and, ultimately, how we think.

In the simplest examples, using voice assistants shapes how children speak, and in prompting generative AI applications we can see how we are forced to adapt how we think, to phrase the questions best suited to getting the output we want. We are changing how we behave to suit machines. How we change behaviour is therefore determined by the design choices of the company behind the product.

There is limited public debate yet on the effects of this for education, on how children act, interact, and think using machines, and no consensus in the UK education sector on whether it is desirable to introduce these companies, and the steering they bring, into teaching and learning and, as a result, into the future of society.

And since writing that in 2021, I would go further. The neo-liberal approach to education, with its emphasis on the efficiency of human capital and productivity, on individualism and personalisation, all about producing 'labour market value' and measurable outcomes, is commonly at the core of AI teaching and learning platforms.

Many tools dehumanise children into data dashboards, rank and spank their behaviours and achievements, punish outliers and praise norms, and expect nothing but strict adherence to rules (sometimes incorrect ones, like mistakes in maths apps). As some companies have expressly said, the purpose of this is to normalise such behaviours ready to be the employees of the future, and the reason their tools are free is to normalise their adoption for life.

AI, by the normalisation of values built into tools by design, is even seen by some as encouraging fascistic solutions to social problems.

But the purpose of education is not only about individual skills and producing human capital to exploit. Education is a vital gateway to rights and the protection of a democratic society. When talking about AI and learners, education must not only be about skills as an economic driver, treating people as human capital, but must include rights: championing the development of a child's personality to their fullest potential; intercultural understanding; digital citizenship on dis- and misinformation and discrimination; and the promotion and protection of democracy and the natural world. "It shall promote understanding, tolerance and friendship among nations, racial or religious groups, and shall further the activities of the United Nations for the maintenance of peace."

Peter Kyle, the UK DSIT Secretary of State, said last week that, "more than anything else, it is growth that will shape those young people's future." But what will be used to power all this growth in AI, at what environmental and social costs, and will we get a say?

Don't forget, in this project announcement the Minister said, "This is the first of many projects that will transform how we see and use public sector data." That's our data, about us. And when it comes to schools, that's not only the millions of learners who've left already, but those who are school children today. Are we really going to accept turning them into data fodder for AI without a fight? As Michael Rosen summed up so perfectly in 2018, "First they said they needed data about the children to find out what they're learning… then the children became data." If this is to become the new normal, where is the mechanism for us to object? And why this, now, in such a hurry?

Purpose limitation should also prevent retrospective reuse of learners' records and data, but it has not so far prevented the distribution of general identifying and sensitive data from the NPD at national level, or from edTech in schools. The project details, scant as they are, suggest parents were asked for consent in this particular pilot, but the Faculty AI notice seems legally weak for schools, and when it comes to using pupil data to build AI products, the question is whether consent can ever be valid, since it cannot be withdrawn once given, and the nature of being 'freely given' is affected by the power imbalance.

So far there is no field to record an opt-out in any schools' Information Management Systems, though many discussions suggest it would be relatively straightforward to make that happen. However, it's important to note that DSIT's own public engagement work on that project says opt-in is what those parents told the government they would expect. And there is a decade of UK public engagement on data telling government that opt-in is what we want.

The regulator has been silent so far on the DSIT/DfE announcement, despite the lack of fair processing and failures on Articles 12, 13 and 14 of the GDPR being among the key findings of its 2020 DfE audit. I can use a website to find children's school photos, scraped without our permission. What about our school records?

Will the government consult before commercialising children's lives in data to feed AI companies and 'the economy', or any of the other "many projects that will transform how we see and use public sector data"? How is it different from the existing ONS, ADR, or SAIL databank access points and processes? Will the government evaluate the impact of increasing surveillance in schools on child development, behaviour or mental health? Will MPs get an opt-in, or even an opt-out, of the commercialisation of their own school records?

I don’t know about ‘Britain belongs to us‘, but my own data should.


See also Part 1: The New Normal is Not Inevitable.

Automated suspicion is always on

In the Patrick Ness trilogy, Chaos Walking, the men can hear each other's every thought, but not the women's.

That exposure of their bodily data and thought means privacy is almost impossible, and there is no autonomy over their own bodily control of movement or of action. Any man who tries to block access to his thoughts is treated with automatic suspicion.

It has been on my mind since last week’s get together at FIPR. We were tasked before the event to present what we thought would be the greatest risk to rights [each pertinent to the speaker’s focus area] in the next five years.

Wendy Grossman said at the event and in her blog, “I’d look at the technologies being deployed around European and US borders to surveil migrants. Migrants make easy targets for this type of experimentation because they can’t afford to protest and can’t vote. “Automated suspicion,” Euronews.next calls it. That habit of mind is dangerous.” Those tools often focus on control of humans’ bodies. They infringe on freedom of movement.

In education, technology companies sell automated suspicion detection tools to combat plagiarism and cheating in exams. Mood detection to spot outliers in concentration. Facial detection to bar the excluded from premises or the lunch queue, or normalise behavioural anomalies, control physical attendance and mental presence. Automated suspicion is the opposite of building trusted human relationships.

I hadn’t had much space to think in the weeks before the event, between legislation, strategic litigation and overdue commitments to reports, events, and to others. But on reflection, I failed to explain why the topic area I picked above all others matters. It really matters.

It is the combination of the growth in children's bodily data processing and the SafetyTech deployed in schools. It's not only because such tools normalise the surveillance of everything children do, send, share or search for on a screen, or that many enable the taking of covert webcam photos, or even the profiles and labels they can create on terrorism and extremism, or that they can out LGBTQ+ teens. It is that at their core lies automated suspicion and automated control. Not only of bodily movement and actions, but of thought. Without any research into, or challenge to, what that does to child development or children's experience of social interactions and of authority.

First let’s take suspicion.

Suspicion of harms to self, harms to others, harms from others.

The software / systems / tools inspect the text or screen content users enter into devices (including text the users delete, and text before it is encrypted), assuming a set of risks all of the time. When a potential risk is detected, the tools can capture and store a screenshot of the user's screen. Depending on the company design and the option bought, human company moderators may or may not first review the screenshots (recorded on a rolling basis, also 'without' any trigger, so as to have context ahead of the event) and text captures to verify the triggered events before sending them to the school's designated safeguarding lead. An estimated 1% of all triggered material might be sent on to a school to review and choose whether or not to act on. But regardless of that, the children's data (including screenshots, text, and redacted text) may be stored for more than a year by the company before being deleted. Even content not seen as necessary is kept: "content which poses no risk on its own but is logged in case it becomes relevant in the future".
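
To make the flow above easier to picture, here is a minimal, purely illustrative sketch of such a keyword-trigger pipeline in Python. It is not any vendor's actual code: the keyword list, the rolling context buffer, the company-side moderation step and the retention period are all assumptions drawn only from the description in this post.

```python
from collections import deque
from datetime import datetime, timedelta

RISK_KEYWORDS = {"example_risk_term"}        # hypothetical watchlist, not a real vendor list
RETENTION = timedelta(days=400)              # "more than a year", per the description above

context_buffer = deque(maxlen=50)            # rolling capture kept "without" any trigger
stored_events = []                           # everything the company retains

def on_user_input(text: str, screenshot: bytes) -> None:
    """Inspect each keystroke/screen capture, as the tools described above do."""
    context_buffer.append((datetime.utcnow(), text, screenshot))
    if any(word in text.lower() for word in RISK_KEYWORDS):
        event = {
            "time": datetime.utcnow(),
            "text": text,
            "screenshot": screenshot,
            "context": list(context_buffer),  # pre-trigger context is swept up too
        }
        stored_events.append(event)
        if human_moderator_confirms(event):   # only ~1% is said to reach the school
            send_to_safeguarding_lead(event)

def purge_expired() -> None:
    """Delete retained events only once the long retention window has passed."""
    cutoff = datetime.utcnow() - RETENTION
    stored_events[:] = [e for e in stored_events if e["time"] > cutoff]

def human_moderator_confirms(event) -> bool:   # stub: company-side review step
    return False

def send_to_safeguarding_lead(event) -> None:  # stub: school-side notification
    pass
```

The point of the sketch is how much is swept up before any human judgement: the pre-trigger context, the roughly 99% that never reaches a school, and everything retained for the full window.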

Predictive threat, automated suspicion

In-school technology is not only capturing what is done by children but what they say they do, or might do, or think of doing. SafetyTech enables companies and school staff to police what children do and what they think, and it is quite plainly designed to intervene in actions and thoughts before things happen. It is predictively policing pupils in schools.

Safeguarding-in-schools systems were already one of my greatest emerging concerns, but I suspect that, coinciding with recent wars, keywords on topics seen as connected to the Prevent programme will be matching at their highest rate since 2016, and the risks that being wrong brings will have increased with it. But while we now have various company CEOs talking about shared concerns, not least the outing of LGBTQ students, as the CDT reported this year in the U.S., and a whistleblower who wanted to talk about the sensitive content staff can see from the company side, there is not yet appetite to fix this across the sector. The ICO returned our case for sectoral attention, with no enforcement. DfE guidance still ignores the at-home, out-of-hours contexts, and those systems that can enable school staff or company staff to take photos of the children without anyone knowing. We've had lawyers write letters and submitted advice in consultations, and yet it's ignored to date.

Remember the fake bomb detectors that were re-badged golf ball finders? That's the potential scenario we've got in education in "safeguarding in schools" tech. Automated decision making in black boxes that no one has publicly tested, that no one can see inside, and we've no data on its discriminatory effects through language matching, or on false negatives or false positives, and the harms it is or is not causing. We have risk-averse institutions made vulnerable to scams. It may be utterly brilliant technology, with companies falling over themselves to offer independent testing that proves it 'works'. I've just not seen any.

Some companies themselves say they need better guidance and agree there are significant gaps. Opendium, a leading provider of internet filtering and monitoring solutions, blogged about views expressed at a 2019 conference held by the Police Service's Counter Terrorism Internet Referral Unit that schools need better advice.

Freedom of Thought

But it's not just about what children do; it's about any mention of what they *might* do, or their opinions of themselves, of others, or of anything else. We have installed systems of thought surveillance into schools, looking for outliers or 'extremists' in different senses, including its now everyday sense, underpinned by the Prevent programme and British Values. These systems do not only expose and create controls over children's behaviours in what they do, but in their thoughts, their searches, what they type and share and send, or even what they don't send and delete.

Susie Alegre, human rights lawyer, describes freedom of thought as "protected absolutely in international human rights law. This means that, if an activity interferes with our right to think for ourselves inside our heads (the so-called "forum internum") it can never be justified for any reason. The right includes three elements:

the right to keep our thoughts private
the right to keep our thoughts free from manipulation, and
the right not to be penalised for our thoughts.”

These SafetyTech systems don’t respect any of that. They infringe on freedom of thought.

Bodily data and contextual collapse

Depending on the company, SafetyTech may be built on keyword matching technology commonly used in the gaming tech industry.

Gaming data collected from children is a whole field in its own right: bodily data from haptics, and neuro data. Personal data from immersive environments that in another sector would clearly be classified as "health" data and, in the gaming sector too, will fall under the same "special category" or "sensitive data" heading due to its nature, not its context. But it is being collected at scale by companies that aren't used to dealing with the demands of professional confidentiality and the concept of 'first do no harm' that the health sector is founded on. Perhaps we're not quite at the everyday-for-everyone, Ready Player One stage yet, but for those in communities who are creating a vast amount of data about themselves, the questions over its oversight, its retention, and perhaps its redistribution to authorities, in particular the police, should be of urgent consideration. And those tools are on the way into the classroom.

At school level, the enormous growth in the transfer of bodily data is not yet about haptics but about bodily harm. A vast sector has grown up to support the digitisation of children's safety: physical harms noticed by staff, harms to children picked up at home, or accidents and incidents recorded at school, often including marking full-body outlines with where the injury has been.

The issues here, again, are in part created by taking this data beyond the physical environment of a child's direct care and beyond the digital firewalls of child protection agencies and professionals. There are no clear universal policies on sealed records, i.e. not releasing the data of children-at-risk or those who undergo a name change, once it's been added into school information management systems or into commercial company products like CPOMS, MyConcern, or Tootoot.

Similarly, there is no clear national policy on the onward distribution into the National Pupil Database of the records of children in need (CiN) of child protection, which in my opinion are inadequately shielded. The CiN census is a statutory social care data return made by every Local Authority to the Department for Education (DfE). It captures information about all children who have been referred to children's social care, regardless of whether further action is taken or not.

As of September 2022, there were only 70 individuals flagged for shielding, and that includes both current and former pupils across the entire database. There were 23 shielded pupil records collected by the Department via the January 2022 censuses alone (covering early years, schools and alternative provision).

No statement or guidance is given directly to settings about excluding children from returns to the DfE. As of September 2022, there were 2,538,656 distinct CiN (any 'child in need' referred to children's social care services within the year) / LAC ([state] looked after child) child records (going back to 2006), regardless of at-risk status, able to be matched to some home address information via other (non-CiN/LAC) sources, all included in the NPD. The data is highly sensitive and detailed, including "categories of abuse", not only monitoring and capturing what has been done to children, but what is done by children.

Always on, always watching

The challenge for rights work in this sector is not primarily a technical problem but one of mindset. Do you think this is what schools are for? Are they aligned with the aims of education? One SafetyTech company CEO at a conference certainly marketed their tool as something that employers want children to get used to, to normalise the gaze of authority and monitoring of your attention span. In real Black Mirror stuff, you could almost hear him say, “their eyeballs belong to me for fifteen million merits”.

Monitoring in-class attendance is moving not only towards checking whether you are physically in school, but whether you are present and focused as well.

Education is moving towards an always-on mindset for many, whether it be data monitoring and collection with the stated aims of personalising learning, or the claims by companies that have trialled mood and emotion tech on pupils in England. Facial scanning is sold as a way of seeing if the class mood is "on point" with learning. Are they 'engaged'? After Pippa King spotted a live trial in the wild starting in UK schools, we at Defend Digital Me had a chat with one company CEO who agreed, after discussion and the ICO blogpost on 'emotion tech' hype, to stop that product rollout and cut it altogether from their portfolio. Under the EU AI Act it would soon be banned too, to protect children from its harms (children in the UK would have been included were Britain still under EU law, but post-Brexit, they're not).

The Times Education Commission reported in 2021 that Priya Lakhani told one of its oral evidence sessions that Century Tech "decided against using bone-mapping software to track pupils' emotions through the cameras on their computers. Teachers were unhappy about pupils putting their cameras on for safeguarding reasons but there were also moral problems with supplying such technology to autocratic regimes around the world."

But would you even consider this in an educational context at all?

Apps that blame and shame behaviours using RAG scores, exposed to peers on wall-projected charts, are certainly already here. How long before such 'emotion' and 'mood' tech emerges in Britain, seeking a market beyond the ban in the EU, joined up with tools that can blame and shame for lapses in concentration?

Is this simply the world now, in which children are supposed to normalise third-party bodily surveillance and behavioural nudge?

That same kind of thinking on 'estimation', 'safety' and 'blame' might well soon be seen in the eye-scanning of drivers in "advanced driver distraction warning systems". Keeping drivers 'on track' may be one area where we will be expected to get used to our eyeballs being monitored, but will it be used to differentiate and discriminate between drivers for insurance purposes, or to redirect blame for accidents? What about monitoring workers at computer desks, with smoking breaks and distraction costing you in your wage packet?

Body and Mind belong ‘on track’ and must be overseen

This routine monitoring of your face is expanding at pace in policing, but policing the everyday to restrict access is going to affect the average person potentially far more than the use of facial detection and recognition in every public space. Your face is your passport, and the computer can say no. Age as the gatekeeper of identity for participation in public and private spaces is already very much here online, and will be expanded online in the UK by the Online Safety Act (noting other countries have realised its flaws and foolishness). Age verification and age assurance, if given any weight, will inevitably lead to the balkanisation of the Internet, to throttling of content through prioritisation of who is permitted to do or see what, and to control of content moderation.

In UK night clubs, age verification is being normalised through facial recognition. Soon, if the Data Protection and Digital Information Bill passes as drafted, the only permitted Digital ID for what are (for now) purposes limited to rental and employment checks will be the accredited government ID. But scope creep will inevitably move from what is possible to what is required, across every aspect of our lives where identity is made an obligation for proof of eligibility.

Why all this matters is that we see the same direction of travel over and over again. Once "the data" is collected and retained, there is an overwhelming desire down the line to say: well, now we've got it, how can we use it? Increasingly that means joining it all up. And then passing it around to others. And the DPDI Bill takes away the safeguards around that over time (see KC opinion para 20, p.6).

It is something data protection law and a lack of enforcement are already failing to protect us from adequately, because excessive data retention should be impossible under the data minimisation principle and purpose limitation, but controllers argue that linked data 'is not new data'. What we should see instead is enforcement against the excessive retention of data that creates 'new knowledge' going beyond our reasonable expectations. Without it, we see the government and companies gaining ever greater power to intervene in the lives of the data subjects, the people. The draft new law does the opposite.
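
To make concrete why 'linked data is not new data' is such a weak argument, here is a small illustrative sketch with entirely made-up records. Neither dataset reveals much on its own; joining them on a shared identifier produces an inference that neither controller held before.

```python
# A minimal sketch (entirely invented data): two separate, fairly innocuous
# datasets become something new once linked on a shared identifier.

attendance = {  # held by one controller: pupil ID -> days absent this term
    "pupil_042": 14,
    "pupil_117": 1,
}

referrals = {  # held by another: pupil ID -> social care referral category
    "pupil_042": "CiN",   # child in need referral
    "pupil_117": None,
}

# The join: a profile neither controller could produce on its own.
linked = {
    pid: {"days_absent": days, "referral": referrals.get(pid)}
    for pid, days in attendance.items()
}

for pid, profile in linked.items():
    if profile["days_absent"] > 10 and profile["referral"]:
        # An inference that exists only because the records were linked.
        print(f"{pid}: flagged as a high-absence child with a social care referral")
```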

Who decides what ‘on track’ looks like?

School SafetyTech is therefore the current embodiment of my greatest area of concern for children's rights in educational settings. It is an overlapping technology that monitors both what you do and when, and claims to be able to put the thinking behind it in context. Tools in schools are moving towards prediction and interventions, and combinations of bodily control, thought, mood and emotion. They are shifting from the server to the device, and go with you everywhere your phone goes. 'Interventions' bring a whole new horizon of potential infringements of rights and outcomes, and questions of who decides what can be used for what purposes in a classroom, in loco parentis.

Filtering and monitoring technology in school "SafetyTech" blocks content and profiles the user over time. This monitoring of bodily behaviours, actions and thoughts leads to staff acting on automated suspicion. It can lead to imposing control of bodily movement and of thoughts and actions. It is adopted at scale for millions of children and students across the UK, without oversight or published universal safety standards.

This is not a single technology, it’s a market and a mindset.

Who decides what is 'suitable', what is 'on track', and where 'intervention' is required, is built into the design. It is not a problem of technology causing harm, but of social and political choices and values embodied in technology that can be used to cause harm. For example, in identifying and enabling the persecution of Muslim students who are fasting during Ramadan, based on their dining records. In the UK we have all the same tools already in place.

Who does any technology serve? is a question we have not yet resolved in education in England. The best interests of the child, the teacher, the institution, the State, or the company that built it? Interests and incentives may overlap or may be contradictory. But who decides, and who is given the knowledge of how that was decided? As tech is increasingly designed to run without any human intervention, the effects of the automated decisions can, in turn, be significant, and happen at speed and scale.

Patrick Ness coined the phrase, "The Noise is a man unfiltered, and without a filter, a man is just chaos walking". Controlling chaos may be a desirable government aim, but at what cost to whose freedoms?

On #IWD2022 gender bias in #edTech

I’m a mother of three girls at secondary school. For international women’s day 2022 I’ve been thinking about the role of school technology in my life.

Could some of it be improved to stop baking-in gender discrimination norms to home-school relationships?

Families come in all shapes and sizes, and not every family has defined Mum and Dad roles. I wonder if edTech could be better at supporting families if it offered the choice of a multi-parent-per-child relationship by default?

School-home communications rarely come home in school bags anymore; they arrive digitally, and are routinely sent to one parent per child. If something needs actioning, it's typically going to one parent, not both. The design of digital tools can lock in the responsibility for action to a single nominated person. Schools send the edTech company the 'pupil parent contact' email but, at least in my experience, don't ever ask what it should be after it's been collected once. (And don't do a good job of communicating data rights each time before doing so either, but that's another story.)

Whether it's learning updates with report cards about the child, or weekly newsletters, changes to school clubs, closures, events or other 'things you should know', I filter the emails I get daily from a number of different accounts for relevance, and forward them on to Dad.

To administer cashless payments to school for contributions to art, cooking, science and technology lessons, school trips, other extras or to manage my child’s lunch money, there is a single email log-in and password for a parent role allocated to the child’s account.

And it might be just my own unrepresentative circle of friends, but it’s usually Mum who’s on the receiving end of demands at all hours.

In case of illness, work commitments, or otherwise being unable to carry on as usual, it's no longer as easy for a second designated parent to automatically pick up or share the responsibilities.

One common cashless payment system’s approach does permit more than one parent role, but it’s manual and awkward to set up. “For a second parent to have access it is necessary for the school to send a second letter with a second temporary username and password combo to activate a second account. In short, the only way to do this is to ask your school.”

Some messaging services allow a school-to-multiple-parent email, but the message itself often forms an individual, not group, thread with the teacher, i.e. designed for a class, not a family.

Some might suggest it is easy enough to set up automatic email forwarding, but again this pushes the onus back onto the parent, and doesn't solve the problem of only one person being able to perform transactions.

I wonder what difference it would make to overall parental engagement if one-way communications tools offered a second email address by default?

What if, for financial management, edTech permitted an option to have a 'temporary re-route' to another email address, or a default second role with notification to the other that something had been paid?

Why can't one parent, once confirmed with secure access to the child-parent account, add a second parent role? This need not be a parent, but could be another relation managing the outgoing money. You can only make outgoing payments to the school, or withdraw money to the same single bank account it came from, so fraud isn't likely.
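
For what it's worth, the design change being asked for here is small. Below is a hypothetical sketch of a child record that holds a list of guardian contacts and roles by default, rather than a single 'pupil parent contact' field; the names and fields are mine, not any real MIS or payment system's schema.

```python
from dataclasses import dataclass, field

@dataclass
class GuardianContact:
    name: str
    email: str
    can_pay: bool = False           # may authorise cashless payments
    gets_newsletters: bool = True   # receives school communications

@dataclass
class ChildRecord:
    child_name: str
    guardians: list[GuardianContact] = field(default_factory=list)  # many, not one

    def add_guardian(self, added_by: GuardianContact, new: GuardianContact) -> None:
        """Let an already-verified guardian add a second role, as suggested above."""
        if added_by not in self.guardians:
            raise PermissionError("only an existing, verified guardian may add another")
        self.guardians.append(new)

    def notification_list(self) -> list[str]:
        """Every guardian who opted in gets the same school communication."""
        return [g.email for g in self.guardians if g.gets_newsletters]

# Usage: both parents receive communications; either can manage payments.
parent_a = GuardianContact("Parent A", "a@example.com", can_pay=True)
child = ChildRecord("Pupil X", guardians=[parent_a])
child.add_guardian(parent_a, GuardianContact("Parent B", "b@example.com", can_pay=True))
print(child.notification_list())   # ['a@example.com', 'b@example.com']
```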

I wonder what research looking at each of these tools would find, if it assessed whether there is a gender divide built into the default admin?

What could it improve in work-life balance for staff and families, if emails were restricted to send or receive in preferred time windows?

Technology can be amazing and genuinely make life easier for some. But not everyone fits the default, and I believe the defaults are rarely built to best suit users, but rather the institutions that procure them. In many cases edTech isn't working well for the parents who make up its main user base.

If I were designing these, they'd be school-based rather than third-party cloud-based, distributed systems centred on the child. I think we can do better, not only for women, but for everyone.


PS When my children come home from school today, I'll be showing them the Gender Pay Gap Bot @PayGapApp thread, with its explanations of mode, mean and median. It's worth a look.
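
For anyone doing the same at home, the three averages the bot thread explains fit in a few lines; the pay-gap figures below are invented purely for illustration.

```python
from statistics import mean, median, mode

# Hypothetical median hourly pay gaps (%) reported by a handful of employers.
pay_gaps = [12.0, 3.5, 12.0, 27.8, 9.1, 12.0, 0.0]

print("mean:  ", round(mean(pay_gaps), 1))   # arithmetic average of all values
print("median:", median(pay_gaps))           # middle value when sorted
print("mode:  ", mode(pay_gaps))             # most frequently occurring value
```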

Facebook View and Ray-Ban glasses: here’s looking at your kid

Ray-Ban (EssilorLuxottica) is selling glasses with 'Facebook View'. Questions have already been asked about whether they can be lawful in Europe, including in the UK, in particular as regards enabling the processing of children's personal data without consent.

The Italian data authority has asked the company to explain via the Irish regulator:

  • the legal basis on which Facebook processes personal data;
  • the measures in place to protect people recorded by the glasses, children in particular;
  • questions of anonymisation of the data collected; and
  • the voice assistant connected to the microphone in the glasses.

While the first questions in Europe may be bound to data protection law and privacy, there are also questions of why Facebook has gone ahead despite the example of Google Glass, which was removed from the market in 2013. You can see a pair displayed in a surveillance exhibit at the Victoria and Albert Museum (September 2021).

"We can't wait to see the world from your perspective", says Ray-Ban Chief Wearables Officer Rocco Basilico in the promotional video together with Mark Zuckerberg. I bet. But not as much as Facebook.

With cameras and microphones built in, up to around 30 videos or 500 photos can be stored on the glasses and shared with the Facebook companion app. While the teensy light on one corner is supposed to be an indicator that recording is in progress, the glasses look much like any others, indistinguishable in the Ray-Ban range. You can even buy them as prescription glasses, which intrigues me as to how that recording looks on playback, or shared via the companion apps.

While the Data Policy doesn’t explicitly mention Facebook View in the wording on how it uses data to “personalise and improve our Products,” and the privacy policy is vague on Facebook View, it seems pretty clear that Facebook will use the video capture to enhance its product development in augmented reality.

"We believe this is an important step on the road to developing the ultimate augmented reality glasses", says Mark Zuckerberg (05:46).

The company needs a lawful basis to be able to process the data it receives for those purposes. It determines those purposes, and is therefore a data controller for that processing.

In the supplemental policy the company says that "Facebook View is intended solely for users who are 13 or older." Data Protection law does not care about the age of the product user, but it does regulate the basis under which a child's data may be processed, and that child may be the user setting up an account. It is also concerned about the data of the children who are recorded. By recognising the legal limitations on who can be an account owner, the company has a bit of a self-own here on what the law says about children's data.

Personal privacy may have weak protection in data protection laws that offer the wearer exemptions for domestic** or journalistic purposes, but neither the user nor the company can avoid the fact that processing video and audio recordings may happen without (a) adequately informing the people whose data is processed, or (b) appropriate purpose limitation for any processing that Facebook the company performs, across all of its front-end apps and platforms or back-end processes.

I’ve asked Facebook how I would, as a parent or child, be able to get a wearer to destroy a child’s images and video or voice recorded in a public space, to which I did not consent. How would I get to see that content once held by Facebook, or request its processing be restricted by the company, or user, or the data destroyed?

Testing the Facebook ‘contact our DPO’ process as if I were a regular user, fails. It has sent me round the houses via automated forms.

Facebook is clearly wrong here on privacy grounds but if you can afford the best in the world on privacy law, why would you go ahead anyway? Might they believe after nearly twenty years of privacy invasive practice and a booming bottom line, that there is no risk to reputation, no risk to their business model, and no real risk to the company from regulation?

It's an interesting partnership, since Ray-Ban has no history in understanding privacy, and Facebook has a well-known, controversial one. Reputational risk shared will not be reputational risk halved. And EssilorLuxottica has a share price to consider. I wonder if they carried out any due diligence risk assessment for their investors?

If and when enforcement catches up and the product is withdrawn, regulators must act as the FTC did on the development of a product (in that case algorithms) from "ill-gotten data" (In the Matter of Everalbum and Paravision, Commission File No. 1923172).

Destroy the data, destroy the knowledge gained, and remove it from any product development to date. All "Affected Work Product."

Otherwise any penalty Facebook gets from this debacle will be just the cost of doing business, having bought itself a very nice training dataset for its AR product development.

Ray-Ban, of course, will take all the reputational hit if found enabling strangers to take covert video of our kids. No one expects any better from Facebook. After all, we all know, Facebook takes your privacy, seriously.


Reference:  Rynes: On why your ring video doorbell may make you a controller under GDPR.

https://medium.com/golden-data/rynes-e78f09e34c52 (Golden Data, 2019)

Judgment of the Court (Fourth Chamber), 11 December 2014 František Ryneš v Úřad pro ochranu osobních údajů Case C‑212/13. Case file

exhibits from the Victoria and Albert museum (September 2021)

Thoughts on the Online Harms White Paper (I)

“Whatever the social issue we want to grasp – the answer should always begin with family.”

Not my words, but David Cameron’s. Just five years ago, Conservative policy was all about “putting families at the centre of domestic policy-making.”

Debate on the Online Harms White Paper, thanks in part to media framing of its own departmental making, is almost all about children. But I struggle with the debate that leaves out our role as parents almost entirely, other than as bereft or helpless victims ourselves.

I am conscious, wearing my other hat at defenddigitalme, that not all families are the same, and not all children have families. Yet it seems counter to conservative values, for a party that traditionally places the family at the centre of policy, to leave parents out or absolve them of responsibility for their children's actions and care online.

Parental responsibility cannot be outsourced to tech companies, nor can we simply accept that it's too hard to police our children's phones. If we as parents are concerned about harms, it is our responsibility to enable access to what is not harmful, and to be aware of and educate ourselves and our children about what is. We are aware of what they read in books. I cast an eye over what they borrow or buy. I play a supervisory role.

Brutal as it may be, the Internet is not responsible for suicide. It's just not that simple. We cannot bring children back from the dead. We certainly can, as society and policy makers, try to create the conditions in which harms are not normalised and do not become more common, and seek to reduce risk. But few would suggest social media is a single source of children's mental health issues.

What policy makers are trying to regulate is, in essence, not a single source of online harms but 2.1 billion users' online behaviours.

It follows that seeing social media as a single source of attributable fault per se is equally misplaced. A one-size-fits-all solution is going to be flawed, but everyone seems to have accepted its inevitability.

So how will we make the least bad law?

If we are to have sound law that can be applied around what is lawful, we must reduce the substance of debate by removing what is already unlawful and has appropriate remedy and enforcement.

Debate must also try to be free from emotive content and language.

I strongly suspect the language around ‘our way of life’ and ‘values’ in the White Paper comes from the Home Office. So while it sounds fair and just, we must remember reality in the background of TOEIC, of Windrush, of children removed from school because their national records are being misused beyond educational purposes. The Home Office is no friend of child rights, and does not foster the societal values that break down discrimination and harm. It instead creates harms of its own making, and division by design.

I’m going to quote Graham Smith, for I cannot word it better.

“Harms to society, feature heavily in the White Paper, for example: content or activity that:

“threatens our way of life in the UK, either by undermining national security, or by reducing trust and undermining our shared rights, responsibilities and opportunities to foster integration.”

Similarly:

“undermine our democratic values and debate”;

“encouraging us to make decisions that could damage our health, undermining our respect and tolerance for each other and confusing our understanding of what is happening in the wider world.”

This kind of prose may befit the soapbox or an election manifesto, but has no place in or near legislation.”

[Cyberleagle, April 18, 2019, Users Behaving Badly – the Online Harms White Paper]

My key concern in this area is that from a feeling of 'it is all awful' stems the sense that 'any regulation will be better than now', and with it comes a real risk of entrenching current practices that would not be better than now, and in fact need fixing.

More monitoring

The first is today's general monitoring of school children's Internet content for risk and harms, which creates unintended consequences and very real harms of its own — at the moment, without oversight.

In yesterday's House of Lords debate, Lord Haskel said,

"This is the practicality of monitoring the internet. When the duty of care required by the White Paper becomes law, companies and regulators will have to do a lot more of it." [April 30, HOL]

The Brennan Centre yesterday published its research on US schools' spending on social media monitoring software from 2013-18, and highlighted some of the issues:

"Aside from anecdotes promoted by the companies that sell this software, there is no proof that these surveillance tools work [compared with other practices]. But there are plenty of risks. In any context, social media is ripe for misinterpretation and misuse." [Brennan Centre for Justice, April 30, 2019]

That monitoring software focuses on two things —

a) seeing children through the lens of terrorism and extremism, and b) harms caused by them to others, or as victims of harms by others, or self-harm.

It is nearly the same list of 'harms' topics that the White Paper covers. Co-driven by the same department interested in it in schools — the Home Office.

These concerns are set in the context of the direction of travel of law and policy making, its own loosening of accountability and process.

It was preceded by a House of Commons discussion on Social Media and Health, led by the former Minister for Digital, Culture, Media and Sport, who seems to feel more at home in that sphere than in health.

His unilateral award of funds to the Samaritans for work with Google and Facebook on a duty of care, while the very same is still under public consultation, is surprising to say the least.

But it was his response to this question which points to the slippery slope such regulations may lead us down. The freedom of speech champions should be most concerned not so much by what is potentially in any legislation ahead, but by the direction of travel and the debate around it.

"Will he look at whether tech giants such as Amazon can be brought into the remit of the Online Harms White Paper?"

He replied that "Amazon sells physical goods for the most part and surely has a duty of care to those who buy them, in the same way that a shop has a responsibility for what it sells. My hon. Friend makes an important point, which I will follow up."

Mixed messages

The Center for Democracy and Technology recommended in its 2017 report, Mixed Messages? The Limits of Automated Social Media Content Analysis, that the use of automated content analysis tools to detect or remove illegal content should never be mandated in law.

Debate so far has demonstrated broad gaps in knowledge, between what is wanted and what is possible. If behaviours are to be stopped because they are undesirable rather than unlawful, we open up a whole can of worms if it is not done with the greatest attention to detail.

Lord Stevenson and Lord McNally both suggested that pre-legislative scrutiny of the Bill, and more discussion would be positive. Let’s hope it happens.

Here’s my personal first reflections on the Online Harms White Paper discussion so far.

Six suggestions:

Suggestion one: 

The Law Commission Review, mentioned in the House of Lords debate, may provide what I had been thinking of crowdsourcing, and now may not need to: a list of laws that the Online Harms White Paper related discussion reaches into, so that we can compare what is needed in debate versus what is being sucked in. We should aim to curtail emotive discussion of broad risk and threat that people experience online. This would enable the themes which are already covered in law to be avoided, and focus debate on the gaps. It would make for much tighter and more effective legislation. For example, the Crown Prosecution Service offers Guidelines on prosecuting cases involving communications sent via social media, but a wider list of law is needed.

Suggestion two:
After (1) defining what legislation is lacking, definitions must be very clear, narrow, and consistent across other legislation. Not for the regulator to determine ad-hoc and alone.

Suggestion three:
If children's rights are to be so central in discussion of this paper, then their wider rights, including privacy and participation, access to information and freedom of speech, must be included in debate. This should include academic, research-based evidence of children's experience online when making the regulations.

Suggestion four:
Internet surveillance software in schools should be publicly scrutinised. A review should establish the efficacy, boundaries and oversight of policy and practice as regards Internet monitoring for harms, and should not embed even more of it without that. Boundaries should be put into legislation for clarity and consistency.

Suggestion five:
Terrorist activity or child sexual exploitation and abuse (CSEA) online are already unlawful and should not need additional Home Office powers. Great caution must be exercised here.

Suggestion six: 
Legislation could and should encapsulate accountability and oversight for micro-targeting and algorithmic abuse.


More detail behind my thinking, follows below, after the break. [Structure rearranged on May 14, 2019]



Women Leading in AI — Challenging the unaccountable and the inevitable

Notes [and my thoughts] from the Women Leading in AI launch event of the Ten Principles of Responsible AI report and recommendations, February 6, 2019.

Speakers included Ivana Bartoletti (GemServ), Jo Stevens MP, Professor Joanna J Bryson, Lord Tim Clement-Jones, Roger Taylor (Chair, Centre for Data Ethics and Innovation), Sue Daley (techUK), and Reema Patel (Nuffield Foundation and Ada Lovelace Institute).

Challenging the unaccountable and the ‘inevitable’ is the title of the conclusion of the Women Leading in AI report Ten Principles of Responsible AI, launched this week, and this makes me hopeful.

“There is nothing inevitable about how we choose to use this disruptive technology. […] And there is no excuse for failing to set clear rules so that it remains accountable, fosters our civic values and allows humanity to be stronger and better.”

Ivana Bartoletti, co-founder of Women Leading in AI, began the event, hosted at the House of Commons by Jo Stevens, MP for Cardiff Central, and spoke brilliantly of why it matters right now.

Everyone’s talking about ethics, she said, but it has limitations. I agree with that. This was by contrast very much a call to action.

It was nearly impossible not to cheer as she set out, without any of the usual bullshit, the reasons why we need to stop “churning out algorithms which discriminate against women and minorities.”

Professor Joanna J Bryson took up multiple issues, among them:

  • why innovation ‘flashes in the pan’ are not sustainable, and not what we’re looking for in things that work for us [society].
  • the power dynamics of data, noting that Facebook, Google et al are global assets and also global problems, and flagging the UK consultation on taxation open now.
  • that it is critical we do not have another nation with access to all of our data.

She challenged the audience to think about the fact that inequality is higher now than it has been since World War I. That the rich are getting richer and that imbalance of not only wealth, but of the control individuals have in their own lives, is failing us all.

This big-picture thinking, while zooming in on detailed social, cultural, political and tech issues, fascinated me most that evening. It apparently frustrated the man next to me, who said to me at the end, ‘but they haven’t addressed anything on the technology.’

[I wondered if that summed up neatly, some of why fixing AI cannot be a male dominated debate. Because many of these issues for AI, are not of the technology, but of people and power.] 

Jo Stevens, MP for Cardiff Central, hosted the event and was candid about politicians’ level of knowledge and the need to catch up on some of what matters in the tech sector.

We grapple with the speed of tech, she said. We’re slow at doing things and tech moves quickly. It means that we have to learn quickly.

While discussing how regulation is not something AI tech companies should fear, she suggested that a constructive framework, one that protects society against some of the problems we see, is necessary and just, because self-regulation has failed.

She talked about their enquiry, which began with “fake news” and disinformation but has grown to include:

  • wider behavioural economics,
  • how it affects democracy,
  • understanding the power of data,
  • disappointment with social media companies, who understand the power they have and fail to be accountable.

She wants to see something that changes the way big business works, in the way that employment regulation challenged exploitation of the workforce and unsafe practices in the past.

The bias (conscious or unconscious) and power imbalance have some similarity with the effects on marginalised communities — women, BAME, people with disabilities — and she was looking forward to seeing the proposed solutions, and welcomed the principles.

Lord Clement-Jones, as Chair of the Select Committee on Artificial Intelligence, picked up the values highlighted in the Committee’s 2018 report, AI in the UK: ready, willing and able?

Right now there are so many different bodies and groups, in parliament and beyond, looking at this [AI / Internet / The Digital World], he said, so it was good that the topic was timely and front and centre, with a focus on women, diversity and bias.

He highlighted the importance of maintaining public trust. How do you understand bias? How do you know how algorithms are trained, and understand the issues? He fessed up to being a big fan of DotEveryone and their drive for better ‘digital understanding’.

[Though sometimes this point is over-complicated by suggesting individuals must understand how the AI works, the consensus of the evening was common-sense — and aligned with the Working Party 29 guidance — that data controllers must ensure they explain clearly and simply to individuals how the profiling or automated decision-making process works, and what its effect is for them.]

The way forward he said includes:

  • Designing ethics into algorithms up front.
  • Data audits need to be diverse in order to embody fairness and diversity in the AI.
  • Questions of the job market and re-skilling.
  • The enforcement of ethical frameworks.

He also asked how far such bodies will act in different debates. Deciding who decides on that is still a debate to be had.

For example, aware of the social credit agenda and scoring in China, we should avoid the same issues. He also agreed with Joanna that international cooperation is vital, and said it is important that we are not disadvantaged in this global technology. He expected that we [the Government Office for AI] will soon promote a common set of AI ethics at the G20.

Facial recognition and AI are examples of areas that require regulation for safe use of the tech and to weed out those using it for the wrong purposes, he suggested.

However, on regulation he held back. We need to be careful about too many regulators, he said. We’ve got the ICO, FCA, CMA, OFCOM, you name it, we’ve already got it, and they risk tripping over one another. [What I thought when the CDEI was created, para 31.]

We [the Lords Committee] didn’t suggest yet another regulator for AI, he said; instead the CDEI should grapple with those issues and encourage ethical design in micro-targeting, for example.

Roger Taylor (Chair of the CDEI), after saying it felt as if the WLinAI report were homework someone had left on his desk, supported the WLinAI principles as important, and agreed it was time for practical things and for working out what needs to be done.

Can our existing regulators do their job and cover AI? he asked, suggesting new regulators will not be necessary. Bias, he rightly recognised, already exists in our laws and in bodies with public obligations, and in how AI is already operating;

  • CV sorting. [problematic IMO; see Amazon, US teachers]
  • Policing.
  • Creditworthiness.

What evidence is needed, what process is required, and what is needed to assure that we know how it is actually operating? Who gets to decide whether this is fair or not? While these are complex decisions, they are ultimately not for technicians, but a decision for society, he said.

[So far so good.]

Then he made some statements which were rather more ambiguous. The standards expected of the police will not be the same as those for marketeers micro-targeting adverts at you, for example.

[I wondered how and why.]

Start-up industries pay more to Google and Facebook than they do in taxes, he said.

[I wondered how and why.]

When we think about a knowledge economy, the output of our most valuable companies is increasingly ‘what is our collective truth? Do you have this diagnosis or not? Are you a good credit risk or not? Even who you think you are — your identity will be controlled by machines.’

What can we do as one country [to influence these questions on AI], in what is a global industry? He believes, a huge amount. We are active in the financial sector, the health service, education, and social care — and while we are at the mercy of large corporations, even large corporations obey the law, he said.

[Hmm, I thought, considering the Google DeepMind-Royal Free agreement that didn’t, and venture capitalists not renowned for their ethics who nonetheless advise on some of the current data / tech / AI boards. I am sceptical of corporate capture in UK policy making.]

The power to use systems to nudge our decisions, he suggested, is one that needs careful thought. The desire to use the tech to help make decisions is built into what is actually wrong with the technology that enables us to do so. [With this I strongly agree, and there is too little protection from nudge in data protection law.]

The real question here is, “What is OK to be owned in that kind of economy?” he asked.

This was arguably the neatest and most important question of the evening, and I vigorously agreed with him asking it, but then I worried about his conclusion in passing, that he was “very keen to hear from anyone attempting to use AI effectively, and encountering difficulties because of regulatory structures.”

[And unpopular or contradictory a view as it may be, I find it deeply ethically problematic for the Chair of the CDEI to be someone who had a joint venture that commercially exploited confidential data from the NHS without public knowledge, and whose sale to the Department of Health was described by the Public Accounts Committee as a “hole and corner deal”. That was the route towards care.data, which his co-founder later led for NHS England. The company was then bought by Telstra, where Mr Kelsey went next on leaving NHS England. The whole commodification of the confidentiality of public data, without regard for public trust, is still a barrier to sustainable UK data policy.]

Sue Daley (Tech UK) agreed this year needs to be the year we see action, and the report is a call to action on issues that warrant further discussion.

  • Business wants to do the right thing, and we need to promote it.
  • We need two things: confidence and vigilance.
  • We’re not starting from scratch; she talked about GDPR as the floor, not the ceiling. A starting point.

[I’m not quite sure what she was after here, but perhaps it was the suggestion that data regulation is fundamental in AI regulation, with which I would agree.]

What is the gap that needs filling, she asked? Gap analysis is what we need next, avoiding duplication of effort and complexity in work with other bodies. The big, profound questions need to be answered to position the UK as the place where companies want to come.

Sue was the only speaker who went on to talk about the education system, and the need to frame what skills a generation will need ‘to thrive in the world we are building for them.’

[The Silicon Valley driven entrepreneur narrative that the education system is broken, is not an uncontroversial position.]

She finished with the hope that young people watching BBC Icons the night before would see Alan Turing [winner of the title] and say, yes, I want to be part of that.

Listening to Reema Patel, representative of the Ada Lovelace Institute, was the reason I didn’t leave early, and so missed my evening class. Everything she said resonated, and was some of the best I have heard in the recent UK debate on AI.

  • Civic engagement: the role of the public is as yet unclear, and there is not one homogeneous public but many publics.
  • The sense of disempowerment is important, with disconnect between policy and decisions made about people’s lives.
  • Transparency and literacy are key.
  • Accountability is vague but vital.
  • What does the social contract look like on people using data?
  • Data may be not only about an individual and under their own responsibility, but about others too; what that means for data rights, data stewardship and the articulation of how they connect with one another is lacking in the debate.
  • Legitimacy: if people don’t believe it is working for them, it won’t work at all.
  • Ensuring tech design is responsive to societal values.

2018 was a terrible year she thought. Let’s make 2019 better. [Yes!]


Comments and questions from the floor included Professor Noel Sharkey, who spoke about why it is urgent to act, especially where technology is unfair and unsafe and already in use. He pointed to Compass (Durham police) and predictive policing, and to facial recognition used with 5% accuracy, and said the Met was not taking these flaws seriously. Liberty produced a strong report on this, out this week.

Caroline, from Women in AI, echoed my own comments on the need for an urgent review of these technologies as used with children in education and social care [in particular where used for prediction of child abuse and interventions in family life].

Joanna J Bryson added to the conversation on accountability, saying people are not following existing software and audit protocols; someone just needs to go and see whether people did the right thing.

The basic question of accountability is to ask whether any flaw is the fault of the corporation, of due diligence, or of the users of the tool. Telling people that this is the same problem as any other software makes it much easier to find solutions to accountability.

Tim Clement-Jones asked how many fronts we can fight on at the same time, given that government has appeared to exempt itself from some of these issues and created a weak framework for itself on handling data in the Data Protection Act. Critically, he also asked: is the ICO adequately enforcing on government and public accountability, at local and national levels?

Sue Daley also reminded us that politicians need not know everything, but do need to know the right questions to ask: what are the effects this has on my constituents, on employment, on my family? And while she also suggested that not using the technology could be unethical, a participant countered that it is not the worst thing to have to slow technology down and ensure it is safe before we all go along with it.

My takeaways of the evening included that there is a very large body of women, of whom attendees were only a small part, who are thinking, building and engineering solutions to some of these societal issues embedded in policy, practice and technology. They need to be heard.

It was genuinely electric and empowering, to be in a room dominated by women, women reflecting diversity of a variety of publics, ages, and backgrounds, and who listened to one another. It was certainly something out of the ordinary.

There was a subtle but tangible tension on whether or not regulation beyond what we have today is needed.

While regulating the human behaviour that becomes encoded in AI, we need to ensure that the ethics of human behaviour, reasonable expectations and fairness are not conflated with the technology itself [i.e. a question of whether AI is good or bad], but are addressed in how it is designed, trained, employed and audited, and in whether it should be used at all.

This was the most effective group challenge I have heard to date to the usual assumed inevitability of a mythical omnipotence. Perhaps, Julia Powles, this is the beginning of a robust, bold, imaginative response.

Why there are not more women or people from minorities working in the sector was a really interesting, if short, part of the discussion. Why should young women and minorities want to go into an environment that they can see is hostile, in which they may not be heard, while we still hold *them* responsible for making work work?

And while there were many voices lamenting the skills and education gaps, there were probably fewer who might see the solution more simply, as I do. Schools are foreshortening Key Stage 3 by a year, replacing a breadth of subjects with an earlier, compulsory three-year GCSE curriculum which includes RE and PSHE, but means that at 12, many children have to choose between a GCSE course in computer science / coding, a consumer-style iMedia course, or no IT at all for the rest of their school life. This either-or approach is incredibly short-sighted, and surely some blend of non-examined digital skills should be offered to all through to 16, at least in parallel importance with RE or PSHE.

I also still wonder about all the things that incredibly bright and engaged people are not talking about, not solving, and missing in policy making while caught up in AI. We need to keep thinking broadly, and keep human rights at the centre of our thinking on machines. Anaïs Nin wrote over 70 years ago about the risk that growth in technology would expand our potential for connectivity through machines, but diminish our genuine connectedness as people.

“I don’t think the [American] obsession with politics and economics has improved anything. I am tired of this constant drafting of everyone, to think only of present day events”.

And as I wrote about nearly three years ago, we still seem to have no vision for sustainable public policy on data, or for establishing a social contract for its use, as Reema said, which underpins the UK AI debate. Meanwhile, the current changing national public policies in England on identity and technology are becoming catastrophic.

Challenging the unaccountable and the ‘inevitable’ in today’s technology and AI debate, is an urgent call to action.

I look forward to hearing how Women Leading in AI plan to make it happen.


References:

Women Leading in AI website: http://womenleadinginai.org/
WLiAI Report: 10 Principles of Responsible AI
@WLinAI #WLinAI


Policy shapers, product makers, and profit takers (2)

Corporate capture

Companies are increasingly in control of the tech narrative in the press. They fund neutral third-sector orgs’ and think tanks’ research. They support organisations advising on online education. They are closely involved in politics. And they sit, increasingly, within the organisations set up to lead the technology vision, advising government on policy and UK data analytics, or on social media, AI and ethics.

It is all subject to corporate capture.

But is this healthy for UK public policy and the future not of an industry sector, but a whole technology, when it comes to AI?

If a company’s vital business interests seem unfazed by the risk and harm they cause to individuals — from people who no longer trust the confidentiality of the system to measurable harms — why should those companies sit on public policy boards set up to shape the ethics they claim we need, to solve the problems and restore loss of trust that these very same companies are causing?

We laud people in these companies as co-founders and forward thinkers on new data ethics institutes. They are invited to sit on our national boards, or create new ones.

What does that say about the entire board’s respect for the law which the company breached? It is hard not to see it signal acceptance of the company’s excuses or lack of accountability.

Corporate accountability

The same companies whose work has breached data protection law in multiple ways, seemingly ‘by accident’, on national data extractions, are those that cross the t’s and dot the i’s on even the simplest conference call, and demand everything is said in strictest confidence. Meanwhile their everyday business practices ignore millions of people’s lawful rights to confidentiality.

The extent of commercial companies’ influence on these boards is opaque. To allow this ethics bandwagon to be driven by the corporate giants surely undermines genuine rights-based values, and the long-term integrity of the bodies they appear to serve.

I am told that these global orgs must be in the room and at the table, to use the opportunity to make the world a better place.

These companies already have *all* the opportunity. Not only monopoly positions on their own technology, but the datasets at scale which underpin it, excluding new entrants to the market. Their pick of new hires from universities. The sponsorship of events. The political lobbying. Access to the media. The lawyers. Bottomless pockets to pay for it all. And seats at board tables set up to shape UK policy responses.

It’s a struggle for power, and a stake in our collective future. The status quo is not good enough for many parts of society, and to enable Big Tech or big government to maintain that simply through the latest tools, is a missed chance to reshape for good.

You can see it in many tech boards’ make-up, and pervasive white male bias. We hear it echoed in London think tank conferences, even independent tech design agencies, or set out in some Big Tech reports. All seemingly unconnected, but often funded by the same driving sources.

These companies are often those that made it worse to start with, and the very ethics issues the boards have been set up to deal with are at the core of their business models and of their making.

The deliberate infiltration of influence on online safety policy for children, or global privacy efforts is very real, explicitly set out in the #FacebookEmails, for example.

We will not resolve these fundamental questions as long as the companies whose businesses depend on them steer national policy. The odds will be ever in their favour.

At the same time, some of these individuals are brilliant. In all senses.

So what’s the answer? If they are around the table, what should the UK public expect of their involvement, and how do we ensure in whose best interests they act? How do we achieve authentic accountability?

Whether it be social media, data analytics, or AI in public policy, can companies be safely permitted to be policy shapers if they wear all the hats; product maker, profit taker, *and* process or product auditor?

Creating Authentic Accountability

At minimum we must demand responsibility for their own actions from board members who represent or are funded by companies.

  1. They must deliver on their own product problems first before being allowed to suggest solutions to societal problems.
  2. There should be credible separation between informing policy makers, and shaping policy.
  3. There must be total transparency of funding sources across any public sector boards, of members, and those lobbying them.
  4. Board members must be meaningfully held accountable for continued company transgressions on rights and freedoms, not only harms.
  5. Oversight of board decision making must be decentralised, transparent and available to scrutiny and meaningful challenge.

While these new bodies may propose solutions that include public engagement strategies, transparency, and standards, few propose meaningful oversight. The real test is not what companies say in their ethical frameworks, but in what they continue to do.

If they fail to meet legal or regulatory frameworks, minimum accountability should mean no more access to public data sets and losing positions of policy influence.

Their behaviour needs to go above and beyond meeting the letter of the law, not scraping by or working around rights-based protections. They need to put people ahead of profit and self-interest. That is what ethics should mean; it should not be a PR route to avoid regulation.

As long as companies think the consequences of their platforms and actions are tolerable and a minimal disruption to their business model, society will be expected to live with their transgressions, and our most vulnerable will continue to pay the cost.


This is part 2 of thoughts on Policy shapers, product makers, and profit takers — data and AI. Part 1 is here.

The Future of Data in Public Life

What it means to be human is going to be different. That was the last word of a panel of four excellent speakers, guided by the sparkling wit and charm of chair Timandra Harkness, at tonight’s Turing Institute event, hosted at the British Library, on the future of data.

The first speaker, Bernie Hogan, of the Oxford Internet Institute, spoke of Facebook’s emotion experiment, and the challenges of commercial companies’ ownership and concentration of knowledge, as well as their decisions controlling what content you get to see.

He also explained simply what an API is, in human terms: like a plug in a socket, but instead of electricity you get a flow of data, and the data controller decides which data can come out of the socket.

And he brilliantly brought in a thought experiment: what would it mean to be able to go back in time to the Nuremberg trials and regulate not only medical ethics, but the data ethics of indirect and computational uses of information? How would it affect today’s thinking on AI and machine learning, and where we are now?

“Available does not mean accessible, transparent does not mean accountable”

Charles, from the Bureau of Investigative Journalism, who had also worked for Trinity Mirror using data analytics, introduced some of the issues that large datasets pose for the public.

  • People rarely have the means to do any analytics well.
  • Even if open data are available, they are not necessarily accessible, due to the sheer volume of data, the constraints of common software (such as Excel), and time constraints.
  • Without the facts they cannot go see a [parliamentary] representative or community group to try and solve the problem.
  • Local journalists often have targets for the number of stories they need to write, and target number of Internet views/hits to meet.

Putting data out there is transparency, but it is not accountability if we cannot turn information into knowledge that can benefit the public.

“Trust, is like personal privacy. Once lost, it is very hard to restore.”

Jonathan Bamford, Head of Parliamentary and Government Affairs at the ICO, took us back to why we need to control data at all. Democracy. Fairness. The balance of people’s rights, like privacy and Freedom of Information, against the power of data holders. The awareness that the power of authorities and companies will affect the lives of ordinary citizens. And he said that even early on there was a feeling that there was a need to regulate who knows what about us.

The third generation of Data Protection law he said, is now more important than ever to manage the whole new era of technology and use of data that did not exist when previous laws were made.

But, he said, the principles stand true today. Don’t be unfair. Use data for the purposes people expect. Security of data matters. As do rights to see the data people hold about us. Make sure data are relevant, accurate, necessary and kept for a sensible amount of time.

And even if we think that technology is changing, he argued, the principles will stand, and organisations need to consider them before they act, treating privacy as a fundamental human right by default, and practising data protection by design.

After all, we should remember the Information Commissioner herself recently said,

“privacy does not have to be the price we pay for innovation. The two can sit side by side. They must sit side by side.

It’s not always an easy partnership and, like most relationships, a lot of energy and effort is needed to make it work. But that’s what the law requires and it’s what the public expects.”

“We must not forget, evil people want to do bad things. AI needs to be audited.”

Joanna J. Bryson was brilliant in her multifaceted talk, summing up how data will affect our lives. She explained how implicit biases work, how we reason and make decisions, and how some of the ways we think show up in Internet searches. She showed, in practical ways, how machine learning is shaping our future in ways we cannot see. And she said that firms asserting that they do these things fairly and openly, and that regulation no longer fits new tech, “is just hoo-hah”.

She talked about the exciting possibilities and good uses of data, but also that “we must not forget, evil people want to do bad things. AI needs to be audited.” She summed up: we will use data to predict ourselves. And she said:

“What it means to be human is going to be different.”

That is perhaps the crux of this debate. How do data and machine learning, with their mining of massive datasets and uses for ‘prediction’, affect us as individual human beings, and our humanity?

The last audience question addressed inequality. Solutions like transparency, subject access, accountability, and understanding biases and how we are used will never be accessible to all. It needs a far greater digital understanding across all levels of society. How can society both benefit from and be involved in the future of data in public life? The conclusion was that we need more faith in public institutions working for people at scale.

But what happens when those institutions let people down, at scale?

And some institutions do let us down. Such as over plans for how our NHS health data will be used. Or when our data are commercialised without consent, breaking data protection law. Why do 23 million people not know how their education data are used? The government itself does not use our data in ways we expect, at scale. School children’s data used in immigration enforcement fails to be fair, is not the purpose for which the data were collected, and causes harm and distress when it is used in direct interventions including “to effect removal from the UK” and “create a hostile environment.” There can be a lack of commitment to independent oversight in practice, compared to what is promised by the State. Or no oversight at all after data are released. And ethics among researchers using data are inconsistent.

The debate was less about the Future of Data in Public Life, and much more about how big data affects our personal lives. Most of the discussion was around how we understand the use of our personal information by companies and institutions, and how we will ensure democracy, fairness and equality in future.

One audience member’s question went unanswered: how do we protect ourselves from the harms we cannot see, or protect the most vulnerable, who are least able to protect themselves?

“How can we future proof data protection legislation and make sure it keeps up with innovation?”

That audience question is timely given the new Data Protection Bill. But what legislation means in practice, I am learning rapidly, can be very different from what is written down in law.

One additional tool in data privacy and rights legislation is up for discussion, right now, in the UK. If it matters to you, take action.

NGOs could be enabled to make complaints on behalf of the public under article 80 of the General Data Protection Regulation (GDPR). However, the government has excluded that right from the draft UK Data Protection Bill launched last week.

“Paragraph 53 omits from Article 80 (representation of data subjects) ‘where provided for by Member State law’ from paragraph 1, and paragraph 2” [Data Protection Bill Explanatory Notes, paragraph 681, p84/112]. Article 80(2) gives Member States the option to provide for NGOs to take action independently on behalf of the many people who may have been affected.

If you want that right, a right others will be getting in other countries in the EU, then take action. Call your MP or write to them. Ask for Article 80, the right to representation, in UK law. We need to ensure that our human rights continue to be enacted and enforceable to the maximum if “what it means to be human is going to be different.”

For the Future of Data has never been more personal.

The Queen’s Speech, Information Society Services and GDPR

The Queen’s Speech promised new laws to ensure that the United Kingdom retains its world-class regime protecting personal data. And the government proposes a new digital charter to make the United Kingdom the safest place to be online for children.

Improving online safety for children should mean one thing. Children should be able to use online services without being used by them and by the people and organisations behind them. It should mean that their rights to be heard are prioritised in decisions about them.

As Sir Tim Berners-Lee is reported as saying, there is a need to work with companies to put “a fair level of data control back in the hands of people”. He rightly points out that today terms and conditions are “all or nothing”.

There is a gap in discussions that we fail to address when we think of consent to terms and conditions, or “handing over data”. It is that this assumes these are always, and can always be, conscious acts.

For children, the question of whether accepting Ts&Cs gives them control, and whether it is meaningful, becomes even more moot. What are they agreeing to? Younger children cannot give free and informed consent. After all, most privacy policies routinely include phrases such as, “If we sell all or a portion of our business, we may transfer all of your information, including personal information, to the successor organization,” which means in effect that “accepting” a privacy policy today is a blank cheque for anything tomorrow.

The GDPR requires terms and conditions to be laid out in policies that a child can understand.

The current approach to legislation around children and the Internet is heavily weighted towards protection from seen threats. The threats we need to give more attention to, are those unseen.

By 2024 more than 50% of home Internet traffic will be used by appliances and devices, rather than just for communication and entertainment…The IoT raises huge questions on privacy and security, that have to be addressed by government, corporations and consumers. (WEF, 2017)

Our lives, as measured in our behaviours and opinions, purchases and likes, are connected by trillions of sensors. My parents may have described using the Internet as going online. Today’s online world no longer means our time is spent ‘on the computer’; we are online, all day every day. Instead of going to a desk and booting up through a long phone cable, we have wireless computers in our pockets and in our homes, with functionality built in to enable us to do other things; make a phone call, make toast, and play. In a smart city surrounded by sensors under pavements and in buildings, with cameras and tracking everywhere we go, we are living ever more inside an overarching network of cloud computers that store our data. And from all that data, decisions are made: which adverts to show us, on which network sites, what we get offered and do not, and our behaviours and our conscious decision-making may be nudged quite invisibly.

Data about us, whether uniquely identifiable or not, are all too often collected passively: IP address, linked sign-ins that extract friends lists, and so on, and some providers decide we can either accept that or not use the thing at all. It’s part of the deal. We get the service; they get to trade our identity, like Top Trumps, behind the scenes. But we often don’t see it, and under GDPR such consent should not be tied to a contractual requirement. I.e. ‘agree or don’t get the service’ is not an option.

From May 25, 2018 there will be special “conditions applicable to child’s consent in relation to information society services” in Data Protection law, applicable to the collection of data.

As yet, we have not had a debate in the UK about what that means in concrete terms, and if we do not have one soon, we risk it becoming an afterthought that harms more than it helps protect children’s privacy, and therefore their digital identity.

I think of five things needed by policy shapers to tackle it:

  • An in-depth understanding of what ‘online’ and the Internet mean
  • A consistent understanding of what threat models and risks are connected to personal data, which today are underestimated
  • A grasp of why data privacy training is vital to safeguarding
  • A willingness to confront the idea that user regulation as a stand-alone step will create a better online experience for users, when we know that perceived problems are created by providers or other site users
  • An end to siloed thinking that fails to be forward thinking or to join the dots of tactics across Departments into a cohesive, inclusive strategy

If the government’s new “major new drive on internet safety” involves the world’s largest technology companies in order to make the UK the “safest place in the world for young people to go online,” then we must also ensure that these strategies and papers join things up. Above all, a technical knowledge of how the Internet works needs to join the dots of risks and benefits, in order to form a strategy that will actually make children safe, skilled and able to see into their future.

When it comes to children, there is a further question over consent and parental spyware. Various walk-to-school apps, lauded by the former Secretary of State two years running, use spyware and can be used without a child’s consent. Guardian Gallery, which could be used to scan for nudity in photos on anyone’s phone that the ‘parent’ phone holder has access to install it on, can be made invisible on the ‘child’ phone. Imagine this in coercive relationships.

If these technologies and the online environment are not correctly assessed against “online safety” threat models for all parts of our population, then they fail to address the risk for the most vulnerable, who most need it.

What will the GDPR really mean for online safety improvement? What will it define as online services for remuneration in the IoT? And who will be considered as children, “targeted at” or “offered to”?

An active decision is required in the UK. Will 16 remain the default age needed for consent to access Information Society Services, or will we adopt 13 which needs a legal change?

Banal as these questions sound, they need close attention and clarity between now and May 25, 2018, if the UK is to be GDPR-ready and providers of online services are to know who counts as a child and how they should treat Internet access, participation and age [parental] verification.

How will the “controller” make “reasonable efforts to verify in such cases that consent is given or authorised by the holder of parental responsibility over the child”, “taking into consideration available technology”?

These are fundamental questions of what the Internet is and means to people today. And if the current government approach to security is anything to go by, safety will not mean what we think it will mean.

It will matter how these plans join up. Age verification was not being considered in UK law in relation to how we would derogate from the GDPR even as late as October 2016, despite age verification requirements already being in the Digital Economy Bill. It shows a lack of joined-up digital thinking across our government, and needs to be addressed with urgency to get into the next Parliamentary round.

In recent draft legislation I have yet to see the UK government address Internet rights and safety for young people as anything other than a protection issue, treating the online space in the same way as offline, ‘irl’, focused on stranger danger and sexting.

The UK Digital Strategy commits to the implementation of the General Data Protection Regulation by May 2018, and frames it as a business issue, labelling data as “a global commodity”; as such, its handling is framed solely as a requirement to ensure “that our businesses can continue to compete and communicate effectively around the world” and that adoption “will ensure a shared and higher standard of protection for consumers and their data.”

The Digital Economy Bill, despite being a perfect vehicle for this, failed to take on children’s rights, and in particular the GDPR requirements for consent, at all. It was clear that if we were to do any future digital transactions we would need to level up to GDPR, not drop to the lowest common denominator between it and existing laws.

It was utterly ignored. So were children’s rights to have their own views heard in the consultation on the GDPR derogations for children, with little chance of involvement from young people’s organisations, and less than a month to respond.

We must now get this right in any new Digital Strategy and bill in the coming parliament.