Google Family Link for Under 13s: children’s privacy friend or faux?

“With the Family Link app from Google, you can stay in the loop as your kid explores on their Android* device. Family Link lets you create a Google Account for your kid that’s like your account, while also helping you set certain digital ground rules that work for your family — like managing the apps your kid can use, keeping an eye on screen time, and setting a bedtime on your kid’s device.”


John Carr shared his blog post about Google Family Link today, the first I had read about the new US account currently in beta. In his post, with an eye on the GDPR, he asks: what is the right thing to do?

What is the Family Link app?

Family Link requires a US-based Google account to sign up, so outside the US we can’t read the full details. However, from what is published online, it appears to offer the following three key features:

“Approve or block the apps your kid wants to download from the Google Play Store.

Keep an eye on screen time. See how much time your kid spends on their favorite apps with weekly or monthly activity reports, and set daily screen time limits for their device. “

and

“Set device bedtime: Remotely lock your kid’s device when it’s time to play, study, or sleep.”

From the privacy and disclosure information, it appears there is not a lot of difference between a regular (over-13s) Google account and this one for under-13s. To collect data from under-13s it must be compliant with COPPA legislation.

If you google “what is COPPA”, the first result says: “The Children’s Online Privacy Protection Act (COPPA) is a law created to protect the privacy of children under 13.”

But does this Google Family Link do that? What safeguards and controls are in place for use of this app and children’s privacy?

What data does it capture?

“In order to create a Google Account for your child, you must review the Disclosure (including the Privacy Notice) and the Google Privacy Policy, and give consent by authorizing a $0.30 charge on your credit card.”

Google captures the parent’s verified real-life credit card data.

Google captures child’s name, date of birth and email.

Google captures voice.

Google captures location.

Google may associate your child’s phone number with their account.

And lots more:

Google automatically collects and stores certain information about the services a child uses and how a child uses them, including when they save a picture in Google Photos, enter a query in Google Search, create a document in Google Drive, talk to the Google Assistant, or watch a video in YouTube Kids.

What does it offer over regular “13+ Google”?

In terms of general safeguarding, it appears that SafeSearch is not on by default, but must be set and enforced by a parent.

Parents should “review and adjust your child’s Google Play settings based on what you think is right for them.”

Google rightly points out however that, “filters like SafeSearch are not perfect, so explicit, graphic, or other content you may not want your child to see makes it through sometimes.”

Ron Amadeo at Ars Technica wrote a review of the Family Link app back in February, and came to similar conclusions about its added safeguarding value:

“Other than not showing “personalized” ads to kids, data collection and storage seems to work just like in a regular Google account. On the “Disclosure for Parents” page, Google notes that “your child’s Google Account will be like your own” and “Most of these products and services have not been designed or tailored for children.” Google won’t do any special content blocking on a kid’s device, so they can still get into plenty of trouble even with a monitored Google account.”

Your child will be able to share information, including photos, videos, audio, and location, publicly and with others, when signed in with their Google Account. And Google wants to see those photos.

There are some things that parents cannot block at all.

Installs of app updates can’t be controlled, which leaves a questionable grey area. Many apps are built on a classic bait and switch – start with a free version, and the upgrade then contains paid features. This is therefore something to watch for.

“Regardless of the approval settings you choose for your child’s purchases and downloads, you won’t be asked to provide approval in some instances, such as if your child: re-downloads an app or other content; installs an update to an app (even an update that adds content or asks for additional data or permissions); or downloads shared content from your Google Play Family Library. “

The child “will have the ability to change their activity controls, delete their past activity in “My Activity,” and grant app permissions (including things like device location, microphone, or contacts) to third parties”.

What’s in it for children?

You could argue that this gives children “their own accounts” and autonomy. But why do they need one at all? If I give my child a device on which they can download an app, then I approve it first.

If I am not physically aware of my under-13-year-old child’s Internet time, then I’m probably not a parent who’s going to care to monitor it much by remote app either. Is there enough insecurity around ‘what children under 13 really do online’, versus what I see or what they tell me as a parent, to warrant 24/7 built-in surveillance software?

I can use safe settings without this app. I can use a device time limiting app without creating a Google account for my child.

If parents want to give children an email address, yes, this allows them to have a device-linked Gmail account whose content you, as a parent, cannot access. But wait a minute, what’s this? Google can?

Google can read their emails and provide them with “personalised product features”. More detail is probably needed, but this seems clear:

“Our automated systems analyze your child’s content (including emails) to provide your child personally relevant product features, such as customized search results and spam and malware detection.”

And what happens when the under-13s turn 13? It’s questionable whether it is right for Google et al. to then be able to draw on a pool of ready-made customers’ data in waiting. Free from COPPA ad regulation. Free from COPPA privacy regulation.

Google knows when the child reaches 13 (the set-up requires a child’s date of birth, first and last name, and email address). And Google will inform the child directly when they become eligible to sign up to a regular account, free of parental oversight.

What a birthday gift. But is it packaged for the child or Google?

What’s in it for Google?

The parental disclosure begins,

“At Google, your trust is a priority for us.”

If it truly is, I’d suggest they revise their privacy policy entirely.

Google’s disclosure policy also makes parents read a lot before they fully understand the permissions this app gives to Google.

I do not believe Family Link gives parents adequate control of their children’s privacy at all, nor does it protect children from predatory practices.

While “Google will not serve personalized ads to your child“, your child “will still see ads while using Google’s services.”

Google also tailors the Family Link apps that the child sees (and begs you to buy), based on their data:

“(including combining personal information from one service with information, including personal information, from other Google services) to offer them tailored content, such as more relevant app recommendations or search results.”

Contextual advertising using “persistent identifiers” is permitted under COPPA, and is surely a fundamental flaw. It’s certainly one I wouldn’t want to see duplicated under GDPR. Serving up ads that are relevant to the content the child is using doesn’t protect them from predatory ads at all.

Google captures geolocators and knows where a child is and builds up their behavioural and location patterns. Google, like other online companies, captures and uses what I’ve labelled ‘your synthesised self’; the mix of online and offline identity and behavioural data about a user. In this case, the who and where and what they are doing, are the synthesised selves of under 13 year old children.

These data are made more valuable by the connection to an adult with spending power.

The Google Privacy Policy’s description of how Google services generally use information applies to your child’s Google Account.

Google gains permission via the parent’s acceptance of the privacy policy, to pass personal data around to third parties and affiliates. An affiliate is an entity that belongs to the Google group of companies. Today, that’s a lot of companies.

Google’s ad network consists of Google services, like Search, YouTube and Gmail, as well as 2+ million non-Google websites and apps that partner with Google to show ads.

I also wonder if it will undo some of the previous pro-privacy features on any linked child’s YouTube account if Google links any logged in accounts across the Family Link and YouTube platforms.

Is this pseudo-safe use a good thing?

In practical terms, I’d suggest this app is likely to lull parents into a false sense of security. Privacy safeguarding is not the default set up.

It’s questionable whether Google should adopt some sort of parenting role through an app. Parental remote control via an app isn’t an appropriate way to regulate whether my under-13-year-old is using their device, rather than sleeping.

It’s also got to raise questions about children’s autonomy at say, 12. Should I as a parent know exactly every website and app that my child visits? What does that do for parental-child trust and relations?

As for my own children, I see no benefit compared with letting them have supervised access as I do already. That is without compromising my debit card details, or being lulled into a false sense of safeguarding. Their online time is based on age-appropriate education and trust, and yes, I have to manage their viewing time.

That said, if there are people who think parents cannot do that, is the app a step forward? I’m not convinced. It’s definitely of benefit to Google. But for families it feels more like a sop to adults who feel a duty towards safeguarding children, but aren’t sure how to do it.

Is this the best that Google can do by children?

In summary it seems to me that the Family Link app is a free gift from Google. (Well, free after the thirty cents to prove you’re a card-carrying adult.)

It gives parents three key tools: app approval (accept, pay, or block), screen-time surveillance, and a remote switch-off of the child’s access.

In return, Google gets access to a valuable data set – a parent-child relationship with credit data attached – and can increase its potential targeted app sales. Yet Google can’t guarantee additional safeguarding, privacy, or benefits for the child while using it.

I think for families and child rights, it’s a false friend. None of these tools per se require a Google account. There are alternatives.

Children’s use of the Internet should not mean they are used and their personal data passed around or traded in hidden back room bidding by the Internet companies, with no hope of control.

There are other technical solutions to age verification and privacy too.

I’d ask, what else has Google considered and discarded?

Is this the best that a cutting edge technology giant can muster?

This isn’t designed to respect children’s rights as intended under COPPA or ready for GDPR, and it’s a shame they’re not trying.

If I were designing Family Link for children, it would collect no real identifiers. No voice. No locators. It would not permit others access to voice or images, nor need them linked. It would keep children’s privacy intact, and enable them, when older, to decide what they disclose. It would not target personalised apps/products at children at all.

GDPR requires active, informed parental consent for children’s online services. Consent must be revocable, personal data collection must be kept to the minimum necessary, and data must be portable. Privacy policies must be clear to children. This, in terms of GDPR readiness, is nowhere near ‘it’.

Family Link needs to re-do its homework. And this isn’t a case of ‘please revise’.

Google is a multi-billion dollar company. If they want parental trust, and want to be GDPR and COPPA compliant, they should do the right thing.

When it comes to child rights, companies must do or do not. There is no try.


image source: ArsTechnica

Notes on Not the fake news

Notes and thoughts from Full Fact’s event at Newspeak House in London on 27/3 to discuss fake news, the misinformation ecosystem, and how best to respond. The recording is here. The contributions and questions part of the evening begins at 55:55.


What is fake news? Are there solutions?

1. Clickbait: celebrity pull to draw online site visitors and traffic towards an advertising model – kill the business model
2. Mischief makers: Deceptive with hostile intent – bots, trolls, with an agenda
3. Incorrectly held views: ‘vaccinations cause autism’ despite the evidence to the contrary. How can facts reach people who only believe what they want to believe?

Why does it matter? The scrutiny of people in power matters – to politicians, charities, think tanks – as well as the public.

It is fundamental to remember that we do, in general, believe the public has a sense of discernment; however, there is also a disconnect between an objective truth and some people’s perception of reality. Can this conflict be resolved? Is it necessary to do so? If yes, when is it necessary, and who decides that?

There is a role for independent tracing of unreliable information, its sources and its distribution patterns and identifying who continues to circulate fake news even when asked to desist.

Transparency about these processes is in the public interest.

Overall, there is too little public understanding of how technology and online tools affect behaviours and decision-making.

The Role of Media in Society

How do you define the media?
How can average news consumers distinguish self-made and distributed content from established news sources?
What is the role of media in a democracy?
What is the mainstream media?
Does the media really represent what I want to understand? > Does the media play a role in failure of democracy if news is not representative of all views? > see Brexit, see Trump
What are news values and do we have common press ethics?

New problems in the current press model:

Failure of the traditional media organisations in fact checking; part of the problem is that the credible media is under incredible pressure to compete to gain advertising money share.

Journalism is under-resourced. Verification skills are lacking and tools can be time consuming. Techniques like reverse image search and verification take effort.

Press releases with numbers can be less easily scrutinised, so how do we ensure there is no misinformation through poor journalism?

What about confirmation bias and reinforcement?

What about friends’ behaviours? Can and should we try to break these links if we are not getting a fair picture? The Facebook representative was keen to push responsibility for the bubble entirely to users’ choices. Is this fair given the opacity of the model?
Have we cracked the bubble of self-reinforcing stories being the only stories that mutual friends see?
Can we crack the echo chamber?
How do we start to change behaviours? Can we? Should we?

The risk is that if people start to feel nothing is trustworthy, we trust nothing. This harms relations between citizens and state, organisations and consumers, professionals and public and between us all. Community is built on relationships. Relationships are built on trust. Trust is fundamental to a functioning society and economy.

Is it game over?

Will Moy assured the audience that there is no need to descend into blind panic and there is still discernment among the public.

Then, it was asked, is perhaps part of the problem that the Internet in its current construct is incapable of keeping this problem at bay? Is part of the solution re-architecting and re-engineering the web?

What about algorithms? Search engines start with word frequency and neutral decisions but are now much more nuanced and complex. We really must see how systems decide what is published. Search engines provide, but also restrict, our access to facts, and ‘no one gets past page 2 of search results’. Lack of algorithmic transparency is an issue, but will not be solved due to commercial sensitivities.

Fake news creation can be lucrative. Management models that rely on user moderation or comments to give balance can be gamed.

Are there appropriate responses to the grey area between trolling and deliberate deception through fake news that is damaging? In what context and background? Are all communities treated equally?

The question came from the audience whether the panel thought regulation would come from the select committee inquiry. The general response was that it was unlikely.

What are the solutions?

The questions I came away thinking about went unanswered, because I am not sure there are solutions as long as the current news model exists and is funded in the current way by current players.

I believe one of the things that permits fake news is the growing imbalance of money between the big global news distributors and independent and public interest news sources.

This loss of balance reduces our ability to decide for ourselves what we believe and what matters to us.

The monetisation of news through its packaging in between advertising has surely contaminated the news content itself.

Think of a Facebook promoted post – you can personalise your audience to a set of very narrow and selective characteristics. The bubble that receives that news is already likely to be connected by similar interest pages and friends, and the story becomes self-reinforcing, showing up in friends’ timelines.

A modern online newsroom moves content around the webpage according to what is getting the most views, and trending-topic lists encourage viewers to see what other people are reading – again, self-reinforcing.

There is also a lack of transparency of power. Where we see a range of choices from which we may choose to digest a range of news, we often fail to see one conglomerate funder which manages them all.

The discussion didn’t address at all the fundamental shift in “what is news” which has taken place over the last twenty years. In part, I believe the responsibility for the credibility level of fake news in viewers lies with 24/7 news channels. They have shifted the balance of content from factual bulletins, to discussion and opinion. Now while the news channel is seen as a source of ‘news’ much of the time, the content is not factual, but opinion, and often that means the promotion and discussion of the opinions of their paymaster.

Most simply, how should I answer the question that my ten-year-old asks – how do I know if something on the Internet is true or not?

Can we really say it is up to the public to each take on this role and where do we fit the needs of the vulnerable or children into that?

Is the term fake news the wrong approach and something to move away from? Can we move solutions away from target-fixation ‘stop fake news’ which is impossible online, but towards what the problems are that fake news cause?

Interference in democracy. Interference in purchasing power. Interference in decision making. Interference in our emotions.

These interferences with our autonomy are not something that the web is responsible for, but the people behind the platforms must be accountable for how their technology works.

In the mean time, what can we do?

“if we ever want the spread of fake news to stop we have to take responsibility for calling out those who share fake news (real fake news, not just things that feel wrong), and start doing a bit of basic fact-checking ourselves.” [Eliot Higgins, founder of Bellingcat, in the IB Times]

Not everyone has the time or capacity to do that. As long as today’s imbalance of money and power exists, truly independent organisations like Bellingcat and Full Fact have untold value.


The billed Google and Twitter speakers were absent because they were invited to a meeting with the Home Secretary on 28/3. Speakers were Will Moy, Director of Full Fact; Jenni Sargent, Managing Director of First Draft; and Richard Allan, Facebook EMEA Policy Director. The event was chaired by Bill Thompson.

Information society services: Children in the GDPR, Digital Economy Bill & Digital Strategy

In preparation for the General Data Protection Regulation (GDPR), there must be an active UK decision about policy for children and the Internet – the provision of ‘Information Society Services’ – in the coming months. From May 25, 2018, the age of digital consent for online services aimed at children will be 16 by default, unless UK law is made to lower it.

Age verification for online information services under the GDPR will mean capturing parent-child relationships. This could mean a parent’s email or credit card, unless other choices are made. What will that mean for children’s access to services, and for privacy? It is likely to offer companies an opportunity for a data grab, and to mean privacy loss for the public, as more data about family relationships will be created and collected than the content provider would otherwise get.

Our interactions create a blended identity of online and offline attributes which, as I suggested in a previous post, creates synthesised versions of our selves and raises questions on data privacy and security.

The goal may be to protect the physical child. The outcome will simultaneously expose children and parents, through increased personal data collection, to risks we would not otherwise face. Increasing the data collected increases the associated risks of loss, theft, and harm to identity integrity. How will legislation balance these risks and rights to participation?

The UK government has various work in progress before then that could address these questions.

But will they?

As Sonia Livingstone wrote in the post on the LSE media blog about what to expect from the GDPR and its online challenges for children:

“Now the UK, along with other Member States, has until May 2018 to get its house in order”.

What will that order look like?

The Digital Strategy and Ed Tech

The Digital Strategy commits to changes in National Pupil Data  management. That is, changes in the handling and secondary uses of data collected from pupils in the school census, like using it for national research and planning.

It also means giving data to commercial companies and the press: companies such as private tutor-pupil matching services and data intermediaries, and journalists at the Times and the Telegraph.

Access to NPD via the ONS VML would mean safe data use, in safe settings, by safe (trained and accredited) users.

Sensitive data — it remains to be seen how DfE intends to interpret ‘sensitive’ and whether that is the DPA1998 term or lay term meaning ‘identifying’ as it should — will no longer be seen by users for secondary uses outside safe settings.

However, a grey area on privacy and security remains in the “Data Exchange” which will enable EdTech products to “talk to each other”.

The aim of changes in data access is to ensure that children’s data integrity and identity are secure.  Let’s hope the intention that “at all times, the need to preserve appropriate privacy and security will remain paramount and will be non-negotiable” applies across all closed pupil data, and not only to that which may be made available via the VML.

This strategy is still far from clear or set in place.

The Digital Strategy and consumer data rights

The Digital Strategy commits, under the heading of “Unlocking the power of data in the UK economy and improving public confidence in its use”, to the implementation of the General Data Protection Regulation by May 2018. The Strategy frames this as a business issue, labelling data as “a global commodity”, and as such its handling is framed solely as a requirement to ensure “that our businesses can continue to compete and communicate effectively around the world” and that adoption “will ensure a shared and higher standard of protection for consumers and their data.”

The GDPR, as far as children go, is far more about the protection of children as people. It focuses on returning control over children’s own identity, and on being able to revoke control by others, rather than on consumer rights.

That said, there are data rights issues which are also consumer issues, and product safety failures posing real risk of harm.

Neither the Digital Economy Bill nor the Digital Strategy addresses these rights and security issues with any meaningful effect, particularly those posed by the Internet of Things.

In fact, the chapter on the Internet of Things and Smart Infrastructure [9/19] singularly misses out anything on security and safety:

“We want the UK to remain an international leader in R&D and adoption of IoT. We are funding research and innovation through the three year, £30 million IoT UK Programme.”

There was much more thoughtful detail in the 2014 Blackett Review on the IoT to which I was signposted today after yesterday’s post.

If it’s not scary enough for the public to think that their sex secrets and devices are hackable, perhaps it will kill public trust in connected devices more when they find strangers talking to their children through a baby monitor or toy. [BEUC campaign report on #Toyfail]

“The internet-connected toys ‘My Friend Cayla’ and ‘i-Que’ fail miserably when it comes to safeguarding basic consumer rights, security, and privacy. Both toys are sold widely in the EU.”

The digital skills and training section of the strategy doesn’t touch on any form of change management plan for existing sectors in which we expect machine learning and AI to change the job market. This is something the digital and industrial strategies must address hand in glove.

The tactics and training providers listed sound super, but there does not appear to be an aspirational strategy hidden between the lines.

The Digital Economy Bill and citizens’ data rights

While the rest of Europe has recognised in this legislation that a future-thinking digital world without boundaries needs future thinking on data protection and empowered citizens with better control of identity, the UK government appears intent on taking ours away.

To take only one example for children, the Digital Economy Bill in Cabinet Office led meetings was explicit about use for identifying and tracking individuals labelled under “Troubled Families” and interventions with them. Why, when consent is required to work directly with people, that consent is being ignored to access their information is baffling and in conflict with both the spirit and letter of GDPR. Students and Applicants will see their personal data sent to the Student Loans Company without their consent or knowledge. This overrides the current consent model in place at UCAS.

It is baffling that the government is relentlessly pursuing the Digital Economy Bill’s data copying clauses, which remove confidentiality by default and, through Chapter 2’s opening of the Civil Registry, will release our identities in birth, marriage and death data for third-party use without consent and without any safeguards in the bill.

Government has not only excluded important aspects of Parliamentary scrutiny in the bill, it is trying to introduce “almost untrammeled powers” (paragraph 21) that will “very significantly broaden the scope for the sharing of information” among “specified persons” – which applies “whether the service provider concerned is in the public sector or is a charity or a commercial organisation” – with non-specific purposes for which the information may be disclosed or used. [Reference: Scrutiny committee comments]

Future changes need future joined up thinking

While it is important to learn from the past, I worry that the effort some social scientists put into looking backwards is not matched by enthusiasm for looking ahead and making active recommendations for a better future.

Society appears to have its eyes wide shut to the risks of coercive control and nudge as research among academics and government departments moves in the direction of predictive data analysis.

Uses of administrative big data and publicly available social media data, for example in research and statistics, need further new regulation in practice and policy, but instead the Digital Economy Bill looks only at how more data can be got out of Department silos.

A certain intransigence about data sharing with researchers from government departments is understandable. What’s the incentive for DWP to release data showing its policy may kill people?

Westminster may fear it has more to lose from data releases, and doesn’t seek out the political capital to be had from good news.

The ethics of data science are applied patchily at best in government, and inconsistently in academic expectations.

Some researchers have identified this, but there seems little will to act:

 “It will no longer be possible to assume that secondary data use is ethically unproblematic.”

[Data Horizons: New forms of Data for Social Research, Elliot, M., Purdam, K., Mackey, E., School of Social Sciences, The University Of Manchester, 2013.]

Research and legislation alike seem hell-bent on the low-hanging fruit but miss out the really hard things. What meaningful benefit will come from spending millions of pounds on exploiting these personal data and opening our identities to risk, just to find out whether X course means people are employed in Y tax bracket 5 years later, versus course Z where everyone ends up a self-employed artist? What ethics will be applied to the outcomes of those questions asked, and why?

And while government is busy joining up children’s education data throughout their lifetimes, from age 2 across school, FE and HE, into their HMRC and DWP interactions, there is no public plan in the Digital Strategy for the employment market of the coming 10 to 20 years, when many believe, as do these authors in Scientific American, that “around half of today’s jobs will be threatened by algorithms. 40% of today’s top 500 companies will have vanished in a decade.”

What benefit will it have to know only what was, or for the plans around workforce and digital skills to list ad hoc tactics, but no strategy?

We must safeguard jobs and societal needs, but just teaching people to code is not a solution to a fundamental gap in what our purpose will be, and in our place as a world-leading tech nation after Brexit. We are going to have fewer talented people from across the world staying on after completing academic studies, because they’re not coming at all.

There may be investment in A.I. but where is the investment in good data practices around automation and machine learning in the Digital Economy Bill?

To do this Digital Strategy well, we need joined up thinking.

Improving online safety for children in The Green Paper on Children’s Internet Safety should mean one thing:

Children should be able to use online services without being used and abused by them.

This article arrived on my Twitter timeline via a number of people. Doteveryone CEO Rachel Coldicutt summed up various strands of thought I started to hear hints of last month at #CPDP2017 in Brussels:

“As designers and engineers, we’ve contributed to a post-thought world. In 2017, it’s time to start making people think again.

“We need to find new ways of putting friction and thoughtfulness back into the products we make.” [Glanceable truthiness, 30.1.2017]

Let’s keep the human in discussions about technology, and people first in our products

All too often in technology and even privacy discussions, people have become ‘consumers’ and ‘customers’ instead of people.

The Digital Strategy may seek to unlock “the power of data in the UK economy” but policy and legislation must put equal if not more emphasis on “improving public confidence in its use” if that long term opportunity is to be achieved.

And in technology discussions about AI and algorithms we hear very little about people at all. The discussions I hear seem siloed instead into three camps: the academics, the designers and developers, and the politicians and policy makers. And then comes the lowest circle: ‘the public’ and ‘society’.

It is therefore unsurprising that human rights have fallen down the ranking of importance in some areas of technology development.

It’s time to get this house in order.

Information. Society. Services. Children in the Internet of Things.

In this post, I think out loud about what improving online safety for children in The Green Paper on Children’s Internet Safety means ahead of the General Data Protection Regulation in 2018. Children should be able to use online services without being used and abused by them. If this regulation and other UK Government policy and strategy are to be meaningful for children, I think we need to completely rethink the State approach to what data privacy means in the Internet of Things.
[listen on soundcloud]


Children in the Internet of Things

In 1979 Star Trek: The Motion Picture created a striking image of A.I. as Commander Decker merged with V’Ger and the artificial copy of Lieutenant Ilia, blending human and computer intelligence and creating an integrated, synthesised form of life.

Ten years later, Sir Tim Berners-Lee wrote his proposal and created the world wide web, designing the way for people to share and access knowledge with each other through networks of computers.

In the 90s my parents described using the Internet as spending time ‘on the computer’, and going online meant from a fixed phone point.

Today our wireless computers, in our homes, pockets and school bags, have built-in added functionality to enable us to do other things with them at the same time: make toast, play a game, make a phone call. We live in the Internet of Things.

Although we talk about it as if it were an environment of inanimate appliances,  it would be more accurate to think of the interconnected web of information that these things capture, create and share about our interactions 24/7, as vibrant snapshots of our lives, labelled with retrievable tags, and stored within the Internet.

Data about every moment of how and when we use an appliance is captured at a rapid rate, or measured by smart meters, and shared within a network of computers. Computers that not only capture data but create, analyse and exchange new data about the people using them and how they interact with the appliance.

In this environment, children’s lives in the Internet of Things no longer involve a conscious choice to go online. Using the Internet is no longer about going online, but being online. The web knows us. In using the web, we become part of the web.

To the computers that gather their data, our children have simply become extensions of the things they use, about which data is gathered and sold by the companies who make and sell those things. Things whose makers can even choose who uses them or not, and how. In the Internet of Things, children have become things of the Internet.

A child’s use of a smart hairbrush becomes part of the company’s knowledge base of how the hairbrush works. A child’s voice is captured and becomes part of the database for the development and training of the doll or robot they play with.

Our biometrics, measurements of the unique physical parts of our identities, provide a further example of the offline self recently becoming physically incorporated into banking services. Over 1 million UK children’s biometrics are estimated to be in use in school canteens and library services through, often compulsory, fingerprinting.

Our interactions create a blended identity of online and offline attributes.

The web has created synthesised versions of our selves.

I say synthesised not synthetic, because our online self is blended with our real self and ‘synthetic’ gives the impression of being less real. If you take my own children’s everyday life as an example,  there is no ‘real’ life that is without a digital self.  The two are inseparable. And we might have multiple versions.

Our synthesised self is not only about our interactions with appliances and what we do, but who we know and how we think based on how we take decisions.

Data is created and captured not only about how we live, but where we live. These online data can be further linked with data about our behaviours offline, generated from trillions of sensors and physical network interactions with our portable devices. Our synthesised self is tracked from real-life geolocations. In cities surrounded by sensors under pavements, in buildings, cameras, mapping and tracking everywhere we go, our behaviours are converted into data and stored inside an overarching network of cloud computers, so that our online lives take on a life of their own.

Data about us, whether uniquely identifiable on its own or not, is created and collected actively and passively. Online site visits record IP addresses and use linked platform log-ins that can even extract friends lists without consent or affirmative action from those friends.

Using a tool like Privacy Badger from the EFF gives you some insight into how many sites create new data about your online behaviour once that synthesised self logs in, and then track your synthesised self across the Internet: how you move from page to page, with what referring and exit pages and URLs, what adverts you click on or ignore, platform types, number of clicks, cookies, invisible on-page gifs and web beacons. Data that computers see, interpret and act on better than we do.
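To make that concrete, here is a minimal sketch in Python (with made-up field names and a hypothetical tracker.example.com endpoint, not any real vendor’s analytics API) of the kind of data a single invisible beacon request can carry back to a tracking server each time a page is viewed:

    # Illustrative only: the sort of payload an invisible 1x1 "pixel" or web
    # beacon typically sends when a page loads. All field names are invented.
    from urllib.parse import urlencode

    beacon_payload = {
        "uid": "a1b2c3d4",                     # cookie or other persistent identifier
        "page": "/toys/smart-doll",            # page currently being viewed
        "ref": "https://example-search.com/?q=smart+doll",  # referring page
        "evt": "ad_click",                     # interaction: click, scroll, ignore...
        "ua": "Mozilla/5.0 (Android 7.0; Mobile)",           # device / platform type
        "ts": "2017-03-27T19:04:11Z",          # timestamp of the interaction
    }

    # The browser requests a tiny transparent gif; the query string is the data.
    beacon_url = "https://tracker.example.com/pixel.gif?" + urlencode(beacon_payload)
    print(beacon_url)

Multiply that single request by every page, app and device a child touches, day after day, and the scale of the synthesised self becomes clear.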

Those synthesised identities are tracked online,  just as we move about a shopping mall offline.

Sir Tim Berners-Lee said this week that there is a need to put “a fair level of data control back in the hands of people.” It is not just a need, but vital to our future flourishing, our very survival even. Data control is not about protecting a list of information or facts about ourselves and our identity for its own sake; it is about choosing who can exert influence and control over our lives, our choices, and the future of democracy.

And while today that ‘who’ may be companies, it is increasingly A.I. itself that has a degree of control over our lives, as decisions are machine made.

Understanding how the Internet uses people

We get the service, the web gets our identity and our behaviours. And in what is in effect a hidden slave trade, they get access to use our synthesised selves in secret, and forever.

This grasp of what the Internet is, what the web is, is key to getting a rounded view of children’s online safety. Namely, we need to get away from the sole focus of online safeguarding as about children’s use of the web, and also look at how the web uses children.

Online services use children to:

  • mine, exchange, repackage, and trade profile data, offline behavioural data (location, likes), and invisible Internet-use behavioural data (cookies, website analytics)
  • extend marketing influence in human decision-making earlier in life, even before children carry payment cards of their own,
  • enjoy the insights of parent-child relationships connected by an email account, sometimes a credit card, used as age verification or in online payments.

What are the risks?

Exploitation of identity and behavioural tracking not only puts our synthesised child at risk from exploitation, it puts our real-life child’s future adult identity and data integrity at risk. If we cannot know who holds the keys to our digital identity, how can we trust that systems and services will be fair to us, will not discriminate or defraud, or will not make errors that we cannot understand in order to correct them?

Leaks, losses and hacks abound, and manufacturers are slow to respond. Software that monitors children can also be used in coercive control. Organisations whose data are insecure can be held to ransom. Children’s products should do what we expect them to and nothing more; there should be “no surprises” in how data are used.

Companies tailor and target their marketing activity to those identity profiles. Our data is sold on in secret without consent to data brokers we never see, who in turn sell us on to others who monitor, track and target our synthesised selves every time we show up at their sites, in a never-ending cycle.

And from exploiting the knowledge of our synthesised self, decisions are made by companies that target their audience, select which search results or adverts to show us, or hide, on which network sites and how often, to actively nudge our behaviours quite invisibly.

Nudge misuse is one of the greatest threats to our autonomy, and with it democratic control of the society we live in. Who decides on the “choice architecture” that may shape another’s decisions and actions, and on what ethical basis? These authors once asked that question, and now seem to want to be the decision makers.

Thinking about Sir Tim Berners-Lee’s comments today on things that threaten the web, including how to address the loss of control over our personal data, we must frame it not as a user-led loss of control, but as autonomy taken by others: by developers, by product sellers, by the biggest ‘nudge controllers’, the Internet giants themselves.

Loss of identity is near impossible to reclaim. Our synthesised selves are sold into unending data slavery and we seem powerless to stop it. Our autonomy and with it our self worth, seem diminished.

How can we protect children better online?

Safeguarding must include ending the data slavery of our synthesised self. I think of six things needed by policy shapers to tackle it.

  1. Understanding what ‘online’ and the Internet mean and how the web works – i.e. what data does a visit to a web page collect about the user and what happens to that data?
  2. Threat models and risk must go beyond the usual real-life protection issues. Those posed by undermining citizens’ autonomy, loss of public trust, loss of control over our identity, and misuse of nudge – and the way some of these are intrinsic to the current web business model, site use or government policy – are unseen and underestimated.
  3. On user regulation (age verification / filtering) we must confront the idea that it works as a stand-alone step: it will not create a better online experience for the user when it does not prevent the misuse of our synthesised selves, and it may increase risks – regulation of misuse must shift the point of responsibility
  4. Meaningful training on data privacy and its role in children’s safeguarding must be mandatory for anyone in contact with children
  5. Siloed thinking must go. Forward thinking must join the dots across Departments into cohesive inclusive digital strategy and that doesn’t just mean ‘let’s join all of the data, all of the time’
  6. Respect our synthesised selves. Data slavery includes government misuse and must end if we respect children’s rights.

In the words of James T. Kirk, “the human adventure is just beginning.”

When our synthesised self is an inseparable blend of offline and online identity, every child is a synthesised child. And they are people. It is vital that government realises its obligation to protect rights to privacy, provision and participation under the Convention on the Rights of the Child, and addresses our children’s real online life.

Governments, policy makers, and commercial companies must not use children’s offline safety as an excuse in a binary trade off to infringe on those digital rights or ignore risk and harm to the synthesised self in law, policy, and practice.

If future society is to thrive we must do all that is technologically possible to safeguard the best of what makes us human in this blend; our free will.


Part 2 follows with thoughts specific to the upcoming regulations, the Digital Economy Bill and the Digital Strategy.

References:

[1] Internet of things WEF film, starting from 19:30

“What do an umbrella, a shark, a houseplant, the brake pads in a mining truck and a smoke detector all have in common?  They can all be connected online, and in this example, in this WEF film, they are.

“By 2024 more than 50% of home Internet traffic will be used by appliances and devices, rather than just for communication and entertainment…The IoT raises huge questions on privacy and security, that have to be addressed by government, corporations and consumers.”

[2] The government has today announced a “major new drive on internet safety”  [The Register, Martin, A. 27.02.2017]

[3] GDPR page 38 footnote (1) indicates the definition of Information Society Services as laid out in the Directive (EU) 2015/1535 of the European Parliament and of the Council of 9 September 2015 laying down a procedure for the provision of information in the field of technical regulations and of rules on Information Society services (OJ L 241, 17.9.2015, p. 1 and Annex 1)

image source: Startrek.com

A vanquished ghost returns as details of distress required in NHS opt out

It seems the ugly ghosts of care.data past were alive and well at NHS Digital this Christmas.

Old style thinking, the top-down patriarchal ‘no one who uses a public service should be allowed to opt out of sharing their records. Nor can people rely on their record being anonymised,‘ that you thought was vanquished, has returned with a vengeance.

The Secretary of State for Health, Jeremy Hunt, has reportedly  done a U-turn on opt out of the transfer of our medical records to third parties without consent.

That backtracks on what he said in Parliament on January 25th, 2014 on opt out of anonymous data transfers, despite the right to object in the NHS constitution [1].

So what’s the solution? If the new opt out methods aren’t working, then back to the old ones and making Section 10 requests? But it seems the Information Centre isn’t keen on making that work either.

All the data the HSCIC holds is sensitive and, as such, its release risks significant harm or distress to patients [2], so it shouldn’t be difficult to tell them to cease and desist when it comes to data about you.

But how is NHS Digital responding to people who make the effort to write directly?

Someone who “got a very unhelpful reply” is being made to jump through hoops.

If anyone asks that their hospital data should not be used in any format and passed to third parties, that’s surely for them to decide.

Let’s take the case study of a woman who spoke to me during the whole care.data debacle, who had been let down by the records system after rape. Her subsequent NHS records about her mental health care were inaccurate, and had led to her being denied the benefit of private health insurance at a new job.

Would she have to detail why selling her medical records would cause her distress? What level of detail is fair and who decides? The whole point is, you want to keep info confidential.

Should you have to state what you fear? “I have future distress about what you might do to me”? Once you lose control of data, it’s gone. Based on past planning secrecy and ideas for the future, like mashing up health data with retail loyalty cards as suggested at Strata in November 2013 [from 16:00] [2], no wonder people are sceptical.

Given the long list of commercial companies, charities, think tanks and others to which passing out our sensitive data puts us at risk, and given the Information Centre’s past record, HSCIC might be grateful they have only opt out requests to deal with, and not millions of medical ethics court summonses. So far.

HSCIC / NHS Digital has extracted our identifiable records and has given them away, including for commercial product use, and continues to give them away without informing us. We’ve accepted Ministers’ statements that a solution would be found. Two years on, patience wears thin.

“Without that external trust, we risk losing our public mandate and then cannot offer the vital insights that quality healthcare requires.”

— Sir Nick Partridge on publication of the audit report of 10% of 3,059 releases by the HSCIC between 2005-13

— Andy Williams said, “We want people to be certain their choices will be followed.”

Jeremy Hunt said everyone should be able to opt out of having their anonymised data used. David Cameron did too when the plan was  announced in 2012.

In 2014 the public was told there should be no more surprises. This latest response is not only a surprise but enormously disrespectful.

When you’re trying to rebuild trust, assuming that we accept that ‘is’ the aim, you can’t say one thing and do another. Perhaps the Department of Health doesn’t like the public’s answer to what the public wants from opt out, but that doesn’t make the DH view right.

Perhaps NHS Digital doesn’t want to deal with lots of individual opt out requests, that doesn’t make their refusal right.

Kingsley Manning recognised in July 2014, that the Information Centre “had made big mistakes over the last 10 years.” And there was “a once-in-a-generation chance to get it right.”

I didn’t think I’d have to move into the next one before they fix it.

The recent round of 2016 public feedback was the same as care.data 1.0: respect nuanced opt outs and you will have all the identifiable public interest research data you want. Solutions must be better for other uses, opt out requests must be respected without distressing patients further in the process, and anonymous must mean anonymous.

Pseudonymised data requests that go through the DARS process so that a Data Sharing Framework Contract and Data Sharing Agreement are in place are considered to be compliant with the ICO code of practice – fine, but they are not anonymous. If DARS is still giving my family’s data to Experian, Harvey Walsh, and co, despite opt out, I’ll be furious.

The [Caldicott 2] Review Panel found “that commissioners do not need dispensation from confidentiality, human rights & data protection law.”

Neither do our politicians, their policies or ALBs.


[1] https://www.england.nhs.uk/ourwork/tsd/ig/ig-fair-process/further-info-gps/

“A patient can object to their confidential personal information from being disclosed out of the GP Practice and/or from being shared onwards by the HSCIC for non-direct care purposes (secondary purposes).”

[2] Minimum Mandatory Measures http://www.nationalarchives.gov.uk/documents/information-management/cross-govt-actions.pdf p7

The perfect storm: three bills that will destroy student data privacy in England

Lords have voiced criticism and concern at plans for ‘free market’ universities that will prioritise competition over collaboration and private interests over social good. But while both Houses have identified the institutional effects, they are yet to discuss the effects on individuals of a bill in which “too much power is concentrated in the centre”.

The Higher Education and Research Bill sucks in personal data to the centre, as well as power. It creates an authoritarian panopticon of the people within the higher education and further education systems. Section 1, parts 72-74 creates risks but offers no safeguards.

Applicants and students’ personal data is being shifted into a  top-down management model, at the same time as the horizontal safeguards for its distribution are to be scrapped.

Through deregulation and the building of a centralised framework, these bills will weaken the purposes for which personal data are collected, and weaken existing requirements on consent for the uses to which the data may be put at national level. Without amendments, every student who enters this system will find their personal data used at the discretion of any future Secretary of State for Education, without safeguards or oversight, and forever. Goodbye privacy.

Part of the data extraction plans are for use in public interest research in safe settings with published purpose, governance, and benefit. These are well intentioned and this year’s intake of students will have had to accept that use as part of the service in the privacy policy.

But in addition and separately, the Bill will permit data to be used at the discretion of the Secretary of State, which waters down and removes nuances of consent for what data may or may not be used today when applicants sign up to UCAS.

Applicants today are told in the privacy policy they can consent separately to sharing their data with the Student Loans company for example. This Bill will remove that right when it permits all Applicant data to be used by the State.

This removal of today’s consent process denies all students their rights to decide who may use their personal data beyond the purposes for which they permit its sharing.

And it explicitly overrides the express wishes registered by the 28,000 applicants, 66% of respondents to a 2015 UCAS survey, who said as an example, that they should be asked before any data was provided to third parties for student loan applications (or even that their data should never be provided for this).

Not only can the future purposes be changed without limitation; when combined with other legislation, namely the Digital Economy Bill that is in the Lords at the same time right now, this shift will, by definition, pass personal data, together with DWP data and in connection with HMRC data, expressly to the Student Loans Company.

In just this one example, the Higher Education and Research Bill is being used as a man in the middle. But it will enable all data for broad purposes, and if those expand in future, we’ll never know.

This change, far from making more data available to public interest research, shifts the balance of power between state and citizen and undermines the very fabric of its source of knowledge; the creation and collection of personal data.

Further, a number of amendments have been proposed in the Lords to clause 9 (the transparency duty) which raise more detailed privacy issues for all prospective students – concerns that UCAS shares.

Why this lack of privacy by design is damaging

This shift takes away our control, and gives it to the State at the very time when ‘take back control’ is in vogue. These bills are building a foundation for a data Brexit.

If the public do not trust who will use their data and why, or are told that when they provide data they must waive any rights to its future control, they will withhold or fake data. 8% of applicants even said it would put them off applying through UCAS at all.

And without future limitation, what might be imposed is unknown.

This shortsightedness will ultimately cause damage to data integrity and the damage won’t come in education data from the Higher Education Bill alone. The Higher Education and Research Bill is just one of three bills sweeping through Parliament right now which build a cumulative anti-privacy storm together, in what is labelled overtly as data sharing legislation or is hidden in tucked away clauses.

The Technical and Further Education Bill – Part 3

In addition to entirely new Applicant datasets moving from UCAS to the DfE in clauses 73 and 74 of the  Higher Education and Research Bill,  Apprentice and FE student data already under the Secretary of State for Education will see potentially broader use under changed purposes of Part 3 of the Technical and Further Education Bill.

Unlike the Higher Education and Research Bill, it may not fundamentally change how the State gathers information on further education, but it has the potential to do so on use.

The change is a generalisation of purposes. Currently, subsection 1 of section 54 refers to “purposes of the exercise of any of the functions of the Secretary of State under Part 4 of the Apprenticeships, Skills, Children and Learning Act 2009”.

Therefore, the government argues, “it would not hold good in circumstances where certain further education functions were transferred from the Secretary of State to some combined authorities in England, which is due to happen in 2018.”

This is why clause 38 will amend that wording to “purposes connected with further education”.

Whatever the details of the reason, the purposes are broader.

Again, combined with the Digital Economy Bill’s open ended purposes, it means the Secretary of State could agree to pass these data on to every other government department, a range of public bodies, and some private organisations.

The TFE Bill is at Report stage in the House of Commons on January 9, 2017.

What could possibly go wrong?

These loose purposes, without future restrictions, definitions of the third parties the data could be given to or why, or any clear need to consult the public or parliament on future scope changes, are a repeat of similar legislative changes which have resulted in poor data practices using school pupil data in England, age 2-19, since 2000.

Policy makers should consider whether the intent of these three bills is to give out identifiable, individual-level, confidential data of young people under 18 for commercial use without their consent. Or to give journalists and charities access? Should it mean unfettered access by government departments and agencies, such as police and Home Office Removals Casework teams, without any transparent register of access, any oversight, or any accountability?

These are today’s uses by third parties of school children’s individual, identifiable and sensitive data from the National Pupil Database.

Uses of data not as statistics, but as named individuals, for interventions in individual lives.

If Home Secretaries past and present have put international students at the centre of plans to cut migration to the tens of thousands, and government refuses to take student numbers out of migration figures despite their being seen as irrelevant to the substance of the numbers debate, this should be deeply worrying.

Will the MOU between the DfE and the Home Office Removals Casework team be a model for access to all student data held at the Department for Education, even all areas of public administrative data?

Hoping that the data transfers to the Home Office won’t result in deportations of thousands that we would not predict today may be naive.

Under the new open wording, the Secretary of State for Education might even  decide to sell the nation’s entire Technical and Further Education student data to Trump University for the purposes of their ‘research’ to target marketing at UK students or institutions that may be potential US post-grad applicants. The Secretary of State will have the data simply because she “may require [it] for purposes connected with further education.”

And it is already too late to think that US buyers or others would not be interested.

In 2015 Stanford University made a request of the National Pupil Database for both academic staff and students’ data. It was rejected. We know this only from the third party release register. Without any duty to publish requests, approved users or purposes of data release, where is the oversight for use of these other datasets?

If these are not the intended purposes of these three bills, and if there should be any limitation on purposes of use and future scope change, then safeguards and oversight need to be built into the face of the bills to ensure data privacy is protected and to avoid repeating the same mistakes again.

Hoping that the decision is always going to be, ‘they wouldn’t approve a request like that’, is not enough to protect millions of students’ privacy.

The three bills are a perfect privacy storm

As other Europeans seek to strengthen the fundamental rights of their citizens to take back control of their personal data under the GDPR coming into force in May 2018, the UK government is pre-emptively undermining ours in these three bills.

Young people, and data dependent institutions, are asking for solutions to show what personal data is held where, used by whom, and for what purposes. That supports the benefits message and builds trust: showing that what you said you’d do with my data is what you did with my data. [1] [2]

The reality is that in post-truth politics it seems anything goes, on both sides of the Pond. So how will we trust what our data is used for?

Advice in 2015-16 from the cross-party Science and Technology Committee suggested the state of data privacy is unsatisfactory, “to be left unaddressed by Government and without a clear public-policy position set out”. We hear the need for data privacy debated in relation to consumer data, social media, and age verification. Addressing it is necessary to secure the public trust needed for long-term public benefit and for the economic value derived from data to be achieved.

But the British government seems intent on shortsighted legislation which does entirely the opposite for its own use: in the Higher Education Bill, the Technical and Further Education Bill and in the Digital Economy Bill.

These bills share what Baroness Chakrabarti said of the Higher Education Bill in its Lords second reading on the 6th December, “quite an achievement for a policy to combine both unnecessary authoritarianism with dangerous degrees of deregulation.”

Unchecked these Bills create the conditions needed for catastrophic failure of public trust. They shift ever more personal data away from personal control, into the centralised control of the Secretary of State for unclear purposes and use by undefined third parties. They jeopardise the collection and integrity of public administrative data.

To future-proof the immediate integrity of student personal data collection and use, the DfE’s reputation, and public and professional trust in DfE political leadership, action must be taken on safeguards and oversight, which should consider:

  • Transparency register: a public record of access, purposes, and the benefits to be achieved from use
  • Subject Access Requests: providing the public with ways to access copies of their own data
  • Consent procedures for collection should be strengthened, and must not say one thing and do another
  • The ability to withdraw consent from secondary purposes should be built in by design, looking ahead to GDPR from 2018
  • The legislative purpose of intended current use by the Secretary of State, and its boundaries, should be made clear
  • Future purpose and scope changes should require consultation – data collected today must not be used quite differently tomorrow without scrutiny and the ability to opt out (i.e. population-wide registries of religion, ethnicity, disability)
  • A review or sunset clause

If the legislation in these three bills passes without amendment, the potential damage to privacy will be lasting.


[1] http://www.parliament.uk/business/publications/written-questions-answers-statements/written-question/Commons/2016-07-15/42942/  Parliamentary written question 42942 on the collection of pupil nationality data in the school census starting in September 2016:   “what limitations will be placed by her Department on disclosure of such information to (a) other government departments?”

Schools Minister Nick Gibb responded on July 25th 2016:

“These new data items will provide valuable statistical information on the characteristics of these groups of children […] The data will be collected solely for internal Departmental use for the analytical, statistical and research purposes described above. There are currently no plans to share the data with other government Departments.”

[2] The December 15 publication of the MOU between the Home Office Casework Removals Team and the DfE reveals that the previous agreement “did state that DfE would provide nationality information to the Home Office”, but that this was changed “following discussions” between the two departments. http://schoolsweek.co.uk/dfe-had-agreement-to-share-pupil-nationality-data-with-home-office/

The agreement was changed on 7th October 2016 to not pass nationality data over. It makes no mention of not using the data within the DfE for the same purposes.

DeepMind or DeepMined? NHS public data, engagement and regulation repackaged

A duty of confidentiality and the regulation of medical records are as old as the hills. Public engagement on attitudes to this, in the context of the NHS, has been done and published by established social science and health organisations in the last three years. So why is Google DeepMind (GDM) talking about it as if it’s something new? What might assumed consent NHS-wide mean in this new context of engagement? Given the side effects for public health and medical ethics of a step-change towards assumed consent in a commercial product environment, is this ‘don’t be evil’ shift to ‘do no harm’ good enough? Has regulation failed patients?
My view from the GDM patient and public event, September 20.

Involving public and patients

Around a hundred participants joined the Google DeepMind public and patient event in September. Paul Wicks gave his view of it in the BMJ afterwards, and rightly started with the fact that the event was held in the aftermath of some difficult questions.

Surprisingly, none were addressed in the event presentations. No one mentioned data processing failings, the hospital Trust’s duty of confidentiality, or criticisms in the press earlier this year. No one talked about the five years of past data from across the whole hospital, or the monthly extracts, that were being shared and had first been extracted for GDM use without consent.

I was truly taken aback by the sense of entitlement that came across. The decision by the Trust to give away confidential patient records without consent earlier in 2015/16 was either forgotten or ignored, and until the opportunity for questions, the future model was presented unquestioningly: the model for an NHS-wide handheld gateway to your records that this week’s announcement embeds.

What matters on reflection is that the overall reaction to this ‘engagement’ is bigger than the one event, bigger than the concepts of tools they could hypothetically consider designing, or lack of consent for the data already used.

It’s a massive question of principle, a litmus test for future commercial users of big, even national population-wide public datasets.

Who gets a say in how our public data are used? Will the autonomy of the individual be ignored as standard, with consent assumed unless you opt out, and forgiveness asked for afterwards with a post-haste opt out tacked on?

Should patients just expect any hospital can now hand over all our medical histories in a free-for-all to commercial companies and their product development without asking us first?

Public and patient questions

Where data may have been used in the algorithms of the DeepMind black box, there was a black hole in addressing patient consent.

Public engagement with those who are keen to be involved, is not a replacement for individual permission from those who don’t want to be, and who expected a duty of patient-clinician confidentiality.

Tellingly, the final part of the event tried to capture our opinions on how to involve the public. Right off the bat, the first question was one of privacy. Most asked questions about issues raised to date, rather than looking to design the future. Ignoring those, and retrofitting a one-size-fits-all model under the banner of ‘engagement’, won’t work until they address the concerns of the people whose data they have already used, and the breach of trust that now jeopardises people’s future willingness to be involved, not only in this project but potentially in other research.

This event should have been a learning event for Google, a company which is good at learning and uses people to do it, by both man and machine.

But from their reaction to the media after this week’s announcement, it seems not all feedback or lessons learned are welcome.

Google DeepMind executives were keen to use patient case studies and had patients themselves do most of the talking, saying how important data is in treating kidney disease and in eye care, which I respect greatly. But there was very little apparent link between their experience and Google DeepMind at all, or the products created to date.

Google DeepMind has the data of every patient in the hospital in recent years, not only patients affected by this condition, and not only data from the people who will be supported directly by this app.

Yet Google DeepMind say this is “direct care”, not research. It is hard for this to be direct care when you are no longer under the hospital’s care. Implied consent for use of sensitive health data must align with the purposes for which it was given. It must be fair and lawful.

If data users don’t get that, or won’t accept it, they should get out of healthcare and our public data right now. Or heed the advice of critical friends and get it right, to be trustworthy in future.

What’s the plan ahead?

Beneath the packaging, this came across as a pitch on why Google DeepMind should get access to paid-for-by-the-taxpayer NHS patient data. They have no clinical background or duty of care. They say they want people to be part of a rigorous process, including a public/patient panel, but it’s a process they clearly want to shape and control, and for a future commercial model. Can a public panel be truly independent, and ethical, if profit plays a role?

Of course it’s rightly exciting for healthcare to see innovation and drives towards better clinical care, but not only the intent but how it gets done matters. This matters because it’s not a one-off.

The anticipation of ‘if only we could access the whole NHS data cohort’ was tangible in the room, and what a gift it would be to commercial companies and product makers. Wrapped in heart-wrenching stories. Stories of real patients, with real lives, who genuinely want improvement for all. Who doesn’t want that? But hanging on the coat-tails of Mr Suleyman were a range of commercial companies and third-party organisations asking for the same.

In order to deliver those benefits and avoid the risks, there is a well-established framework of regulation and oversight of UK practitioners, of the use of medical records, and of medical devices and tools: the General Medical Council, the Health and Social Care Information Centre (now called ‘NHS Digital’), the Confidentiality Advisory Group (CAG) and more all have roles to play.

Google DeepMind and the Trusts have stepped outwith that framework and been playing catch up not only with public involvement, but also with MHRA regulatory approval.

One of the major questions is around the invisibility of data science decisions that lead to direct interventions in people’s lives and deaths.

The ethics of data sciences in which decisions are automated, requires us to “guard against dangerous assumptions that algorithms are near-perfect, or more perfect than human judgement.”  (The Opportunities and Ethics of Big Data. [1])

If Google DeepMind now plans to share their API widely who will proof their tech? Who else gets to develop something similar?

Don’t be evil 2.0

Google DeepMind appropriated ‘do no harm’ as the health event motto, echoing the once favored Google motto ‘don’t be evil’.

However, they really needed to address the fact that the fragile trust of some patients in their clinicians has been harmed already, before DeepMind has even run an algorithm on the data, simply because patient data was given away without patients’ permission.

A former Royal Free patient spoke to me at the event and said they were shocked to have first read in the papers that their confidential medical records had been given to Google without their knowledge. Another said his mother had been part of the cohort and has concerns. Why weren’t they properly informed? The public engagement work they should, to my mind, be doing is with the individual patients of the London hospital whose data they have already been using without consent: explaining why they took their confidential medical records without telling them, and addressing their questions and real concerns. Not at a flash public event.

I often think in the name, they just left off the ‘e’. They are Google. We are the deep mined. That may sound flippant but it’s not the intent. It’s entirely serious. Past patient data was handed over to mine, in order to think about building a potential future tool.

There was a lot of ‘if’, future ambition, sweeping generalisations and ‘high-level sketches’ of what might be one day. You need moonshots to boost discovery, but losing the trust of even a few patients cannot be a casualty we casually accept. For the company there is no side effect. For patients, it could last a lifetime.

If you go back to the roots of health care, you could take the since-misappropriated Hippocratic Oath and quote not only “do no harm”, as Suleyman did, but the next part: “I will not play God.”

Patriarchal top down Care.data was a disastrous model of engagement that confused communication with ‘tell the public loudly and often what we want to happen, what we think best, and then disregard public opinion.’ A model that doesn’t work.

The recent public engagement event on the National Data Guardian’s work on consent models certainly appears, from the talks, to be learning those lessons. To get it wrong in commercial use would be disastrous.

The far greater risk from this misadventure is not company reputation, which seems to be Google DeepMind’s greatest concern. The risk that Google DeepMind seems prepared to take is one that comes not at its own cost, but at the cost of public trust in the hospitals and the NHS brand, in public health, and in its research.

Commercial misappropriation of patient data without consent could set back the restoration of public trust and the work towards a better model that has been in progress since the care.data car crash of 2013.

You might be able to abdicate responsibility if you think you’re not the driver. But where does the buck stop for contributory failure?

All this, says Google DeepMind, is nothing new. But Google isn’t other companies, and this is a massive pilot move by a corporate giant into first appropriating and then brokering access to NHS-wide data to make an as-yet opaque private profit. And being paid by the hospital trust to do so. Creating a data-sharing access infrastructure for the Royal Free is product development, and one that had no permission to use five years’ worth of patient records to do so.

The care.data catastrophe may have damaged public trust and data access for public interest research for some time, but in doing so it did commercial interests a massive favour. An assumption of ‘opt out’ rather than ‘opt in’ has become the NHS model. If the boundaries of what is assumed under that are changing, do the public still have no say in whether that is satisfactory? Because it’s not.

This example should highlight why an opt out model of NHS patient data is entirely unsatisfactory and cannot continue for these uses.

Should boundaries be in place?

So should boundaries be in place in the NHS before this spreads? Hell yes. If, as Mustafa said, it’s not just about developing technology but about the process, regulatory and governance landscapes, then we should be told why their existing use of patient data for the Streams app development steamrollered through the existing legal and ethical landscapes we have today. Those frameworks exist to protect patients from quacks and skullduggery.

This then becomes about the duty of the controller and rights of the patient. It comes back to what we release, not only how it is used.

Can a panel of highly respected individuals intervene to embed good ethics if plans conflict with the purpose of making money from patients? Where are the boundaries between private and public good? Where they quash consent, where are its limitations and who decides? What boundaries do hospital trusts think they have on the duty of confidentiality?

Responsibility lies with the hospitals, as the data controllers of information received through their clinicians.

What is next for Trusts? Giving an entire hospital patient database to supermarket pharmacies, because they too might make a useful tool? Mashing up your health data with your loyalty card? All under assumed consent, because product development is labelled “direct care” since it’s clearly not research? Ethically it must be opt in.

App development is not using data for direct care. It is product development. Post-truth packaging won’t fly. Dressing up the donkey by simply calling it by another name won’t transform it into a unicorn, no matter how much you want to believe in it.

“In some sense I recognise that we’re an exceptional company, in other senses I think it’s important to put that in the wider context and focus on the patient benefit that we’re obviously trying to deliver.” [TechCrunch, November 22]

We’ve heard the cry to focus on the benefit before. Right before care.data failed to communicate to 50m people what it was doing with their health records. Why does Google think they’re different? They don’t. They’re just another company normalising this, they say.

The hospitals meanwhile, have been very quiet.

What do patients want?

This was what Google DeepMind wanted to hear in the final 30 minutes of the event, but didn’t get to hear, as all the questions were about what they had done so far, and why.

There is already plenty of evidence of what the public wants on the use of their medical records, from public engagement work done around NHS health data use in workshops and surveys since 2013. Public opinion is pretty clear. Many say companies should not get NHS records for commercial exploitation without consent at all (in the ESRC public dialogues on data in 2013, the Royal Statistical Society’s data trust deficit with lessons for policy makers work with Ipsos MORI in 2014, and the Wellcome Trust one-way mirror work in 2016, as well of course as the NHS England care.data public engagement workshops in 2014).


All those surveys and workshops show the public have consistent concerns about their lack of control over who has access to their NHS data, for what purposes, and over unlimited scope or future use, and commercial use of their data is a red line for many people.

A red line which this Royal Free Google DeepMind project appeared to want to wipe out as if it had never been drawn at all.

I am sceptical that Google DeepMind has not done their research into existing public opinion on health data uses and research.

Those studies in public engagement already done by leading health and social science bodies state clearly that commercial use is a red line for some.

So why did they cross it without consent? Tell me why I should trust the hospitals to get this right with this company, and trust them not to get it wrong with others. Because Google’s the good guys?

If this event, and the thinking of ‘let’s get patients to front our drive towards getting more data’, sought to legitimise what they and these London hospitals are already getting wrong, I’m not sure that just ‘because we’re Google’, being big, bold and famous for creative disruption, is enough. There is a different game afoot. It will be a game-changer for patient rights to privacy if this scale of commercial product exploitation of identifiable NHS data becomes the norm, decided at will at a local level. No matter how terrific the patient benefit may be, hospitals can’t override patient rights.

If this steamrollers over consent and regulations, what next?

Regulation revolutionised, reframed or overruled

The invited speaker from Patients4Data spoke in favour of commercial exploitation as a benefit for the NHS but as Paul Wicks noted, was ‘perplexed as to why “a doctor is worried about crossing the I’s and dotting the T’s for 12 months (of regulatory approval)”.’

Appropriating public engagement is one thing. Appropriating what is seen as acceptable governance and oversight is another. If a new accepted model of regulation comes from this, we can say goodbye to the old one.  Goodbye to guaranteed patient confidentiality. Goodbye to assuming your health data are not open to commercial use.  Hello to assuming opt out of that use is good enough instead.

Trusted public regulatory and oversight frameworks exist for a reason. But they lag behind the industry and what some are doing. And if big players face no retribution for skipping around them and are then approved in hindsight, there’s not much incentive to follow the rules from the start. As TechCrunch suggested after the event, this is all “pretty standard playbook for tech firms seeking to workaround business barriers created by regulation.”

Should patients just expect any hospital can now hand over all our medical histories in a free-for-all to commercial companies without asking us first? It is for the Information Commissioner to decide whether the purposes of product design were what patients expected their data to be used for when they were treated five years ago.

The state needs to catch up fast. The next private appropriation of the regulation and oversight of AI collaborations has just begun. Until then, I believe civil society will not be ‘pedalling’ anything, but I hope it will challenge companies cheek by jowl in any race to exploit personal confidential data and universal rights to privacy [2] by redesigning regulation on company terms.

Let’s be clear. It’s not direct care. It’s not research. It’s product development. For a product on which the commercial model is ‘I don’t know‘. How many companies enter a 5 year plan like that?

Benefit is great. But if you ignore the harm you are doing in real terms to real lives, and don’t see it only because those affected haven’t talked to you, ask yourself why that is, not why you don’t believe it matters.

There should be no competition in what is right for patient care and data science and product development. The goals should be the same. Safe uses of personal data in ways the public expect, with no surprises. That means consent comes first in commercial markets.


[1] Olivia Varley-Winter, Hetan Shah, ‘The opportunities and ethics of big data: practical priorities for a national Council of Data Ethics.’ Theme issue ‘The ethical impact of data science’ compiled and edited by Mariarosaria Taddeo and Luciano Floridi. [The Royal Society, Volume 374, issue 2083]

[2] Universal rights to privacy: upcoming Data Protection legislation (GDPR), already in place and enforceable from May 25, 2018, requires additional attention to fair processing, consent, the right to revoke it, the right to access one’s own data, and the right to seek redress for inaccurate data. “The term “child” is not defined by the GDPR. Controllers should therefore be prepared to address these requirements in notices directed at teenagers and young adults.”

The Rights of the Child: data policy and practice about children’s confidential data will impinge on principles set out in the United Nations Convention on the Rights of the Child, Article 12, the right to express views and be heard in decisions about them, and Article 16, a right to privacy and respect for a child’s family and home life, if these data will be used without consent. Similar rights are included in the common law of confidentiality.

Article 8 of the Human Rights Act 1998, incorporating Articles 8.1 and 8.2 of the European Convention on Human Rights, provides that there shall be no interference by a public authority with the right to respect for private and family life that is not necessary and proportionate.

Judgment of the Court of Justice of the European Union in the Bara case (C‑201/14) (October 2015) reiterated the need for public bodies to legally and fairly process personal data before transferring it between themselves. Trusts need to respect this also with contractors.

The EU Charter of Fundamental Rights also protects the rights of individuals to privacy and to the protection of personal data, and Article 52 protects the essence of these freedoms.

Data for Policy: Ten takeaways from the conference

The knowledge and thinking on changing technology, the understanding of the computing experts and those familiar with data, must not stay within conference rooms and paywalls.

What role do data and policy play in a world of post-truth politics and press? How will young people become better informed for their future?

The Data for Policy conference this week brought together some of the leading names in academia and a range of technologists, government representatives, people from the European Commission and other global organisations, think tanks, civil society groups, companies, and individuals interested in data and statistics. Beyond the UK, speakers came from several other countries in Europe, from the US, South America and Australia.

The schedule was ambitious and wide-ranging in topics. There was brilliant thinking and applications of ideas. Theoretical and methodological discussions were outnumbered by the presentations that included practical applications or work in real-life scenarios using social science data, humanitarian data, urban planning, public population-wide administrative data from health, finance, documenting sexual violence and more. This was good.

We heard about lots of opportunities and applied projects where large datasets are being used to improve the world. But while I always come away from these events having learned something, and encouraged to learn more about what I didn’t, I do wonder if the biggest challenges in data and policy aren’t still the simplest.

No matter how much information we have, we must use it wisely. I’ve captured ten takeaways of things I would like to see follow. This may not have been the forum for it.

Ten takeaways on Data-for-Policy

1. Getting beyond the Bubble

All this knowledge must reach beyond the bubble of academia, beyond a select few white male experts in well-off parts of the world, and get into the hands and heads of the many. Ways to do this must include reducing the cost of, or changing the pathways of, academic print access. Event and conference fees are also a barrier to many.

2. Context of accessibility and control

There was little discussion of the importance of context. The nuance of most of these subjects was too much for the length of the sessions, but I didn’t hear any session mention the threats to data access and trust in data collection posed by surveillance, state censorship, or restriction of access to data or information systems, or the editorial control of knowledge and news by Facebook and co. There was no discussion of the influence of machine manipulators, how bots change news or numbers and create fictitious followings.

Policy makers and the public are influenced by the media, post-truth or not. Policy makers in the UK government recently wrote, in response to a challenge over a Statutory Instrument, that if Mumsnet wasn’t kicking up a fuss then they believed the majority of the public were happy. How are policy makers being influenced by press or social media statistics without oversight or regulation of their accuracy?

Increasing data and technology literacy in policy makers, is going to go far beyond improving an understanding of data science.

3. Them and Us

I feel a growing disconnect between those ‘in the know’ and those in ‘the public’. Perhaps that is a side-effect of my own understanding growing about how policy is made, but it goes wider. Those who talked about ‘the public’ did so without mention that attendees are all part of that public. Big data, are often our data. We are the public.

Vast parts of the population feel left behind already by policy and government decision-making; divided by income, Internet access, housing, life opportunities, and the ability to realise our dreams.

How policy makers address this gulf in the short and long term both matter as a foundation for what data infrastructure we have access to, how well trusted it is, whose data are included and who is left out of access to the information or decision-making using it.

Researchers being prevented from accessing data held by government departments, perhaps by those who fear it will be used to criticise rather than help improve the policy of the day, may be limiting our true picture of some of this divide and its solutions.

Equally, when data are used to implement top-down policy without public involvement, it seems a shame to ignore public opinion. I would like to have asked: does GDS, in its land survey work searching for free school sites, include surveys asking people whether they want a free school in their area at all?

4. There is no neutral

Global trust in politics is in tatters. Trust in the media is as bad. Neither appear to be interested across the world in doing much to restore their integrity.

All the wisdom in the world could not convince a majority in the June 23rd referendum that the UK should remain in the European Union. This unspoken context was perhaps an aside to most of the subjects of the conference, which went beyond the UK, but we cannot ignore that the UK is deep in political crisis on the world stage, and at home the Opposition seems to have gone into a tailspin.

What role do data and evidence have in post-truth politics?

It was clear in discussion that if I mentioned technology and policy in a political context, eyes started to glaze over. Politics should not interfere with the public interest, but it does, and cannot be ignored. In fact it is short-term political terms and the need for long-term vision that are perhaps most at odds in making good data policy plans.

The concept of public good is not uncomplicated. It is made more complex still if you factor in changes over time, and you cannot ignore that Trump or Turkey are not fictitious backdrops when considering who decides what the public good and policy priorities should be.

Researchers’ role in shaping the public good is not only about being ethical in their own research, but about having the vision to put safeguards in place for how the knowledge they create is used.

5. Ethics is our problem, but who has the solution?

While many speakers touched on the common themes of ethics and privacy in data collection and analytics, saying this is going to be one of our greatest challenges, few addressed how, or who is taking responsibility and accountability for making it happen in ways that are not left to big business and profit-making decision-takers.

It appears that ethics played a more central role last year. A year later, we now have two new ethics bodies in the UK, at the UK Statistics Authority and at the Turing Institute. How they will influence the wider ethics issues in data science remains to be seen.

Legislation and policy are not keeping pace with the purchasing power or potential of the big players, the Googles and Amazons and Microsofts, and a government that sees anything resulting in economic growth as good, is unlikely to be willing to regulate it.

How technology can be used and how it should be used still seems a far off debate that no one is willing to take on and hold policy makers to account for. Implementing legislation and policy underpinned with ethics must serve as a framework for giving individuals insight into how decisions about them were reached by machines, or the imbalance of power that commercial companies and state agencies have in our lives that comes from insights through privacy invasion.

6. Inclusion and bias

Clearly this is one event in a world of many events that address similar themes, but I do hope that the unequal balance in representation across the many diverse aspects of being human is being addressed elsewhere. A wider audience must be inclusive. The talk by Jim Waldo on retaining data accuracy while preserving privacy was interesting, as it showed how deidentified data can create bias in results if the data are very different from the original. Gaps in data, especially in big population data which excludes certain communities, weren’t something I heard discussed as much.

7. Commercial data sources

Government and governmental organisations appear to be starting to give significant weight to the use of commercial data and social media data sources. I guess any data seen as ‘freely available’ that can be mined seems valuable. I wonder, however, how this will shape the picture of our populations, with what measures of validity, and whether data are comparable and offer reproducibility.

These questions will matter in shaping policy and what governments know about the public. And equally, they must consider those communities, whether in the UK or in other countries, that are not represented in these datasets, and how these gaps bias decision-making.

8. Data is not a panacea for policy making

Overall my takeaway is the important role that data scientists have in reminding policy makers that data is only information. Nothing new. We may be able to access different sources of data in different ways, and process it faster or differently from the past, but we cannot rely on data of itself to solve the universal problems of the human condition. Data must be of good integrity to be useful and valuable. Data must be only one part of the library of resources used in planning policy. The limitations of data must also be understood. The uncertainties and unknowns can be just as important as evidence.

9. Trust and transparency

Regulation and oversight matter but cannot be the only solutions offered to concerns about shaping what is possible to do versus what should be done. Talking about protecting trust is not enough. Organisations must become more trustworthy if trust levels are to change; through better privacy policies, through secure data portability and rights to revoke consent and delete outdated data.

10. Young people and involvement in their future

What inspired me most were the younger attendees presenting posters, especially the PhD student using data to provide evidence of sexual violence in El Salvador and their passion for improving lives.

We are still not talking about how to protect and promote privacy in the Internet of Things, where sensors on every street corner in Smart Cities gather data about where we have been, what we buy and who we are with. Even our children’s toys send data to others.

I’m still as determined to convince policy makers that young people’s data privacy and digital self-awareness must be prioritised.

Highlighting the policy and practice failings in the niche area of the National Pupil Database serves only to invite ideas from others on how policy and practice could be better. Twenty million school children’s records is not a bad place to start making data practice better.

The questions that seem hardest to move forward are the simplest: how to involve everyone in what data and policy may bring for the future, and how not to leave out certain communities through carelessness.

If the public is not encouraged to understand how our own personal data are collected and used, how can we expect to grow great data scientists of the future? What uses of data put good uses at risk?

And we must make sure we don’t miss other things, while data takes up the time and focus of today’s policy makers and great minds alike.


care.data listening events and consultation: The same notes again?

If lots of things get said in a programme of events, and nothing is left around to read about it, did they happen?

The care.data programme 2014-15 listening exercise and action plan has become impossible to find online. That’s OK, you might think, the programme has been scrapped. Not quite.

You can give your views online until September 7th on the new consultation, “New data security standards and opt-out models for health and social care”  and/or attend the new listening events, September 26th in London, October 3rd in Southampton and October 10th in Leeds.

The Ministerial statement on July 6, announced that NHS England had taken the decision to close the care.data programme after the review of data security and consent by Dame Fiona Caldicott, the National Data Guardian for Health and Care.

But the same questions are being asked again around consent and the use of your medical data from primary and secondary care. What a very long questionnaire asks, in effect, is: do you want to keep your medical history private? You can answer only Q15 if you want.

Ambiguity again surrounds what constitutes “de-identified” patient information.

What is clear is that public voice seems to have been deleted or lost from the care.data programme along with the feedback and brand.

People spoke up in 2014, and acted. The opt out that 1 in 45 people chose between January and March 2014 was put into effect by the HSCIC in April 2016. Now it seems, that might be revoked.

We’ve been here before. There is no way that primary care data can be extracted without consent without causing further disruption and damage to public trust and public interest research. The future plans for linkage between all primary care data, secondary data and genomics for secondary uses are untenable without consent.

Upcoming events cost time and money and will almost certainly go over the same ground that hours and hours were spent on in 2014. However if they do achieve a meaningful response rate, then I hope the results will not be lost and will be combined with those already captured under the ‘care.data listening events’ responses.  Will they have any impact on what consent model there may be in future?

So what we gonna do? I don’t know, whatcha wanna do? Let’s do something.

Let’s have accredited access and security fixed. While there may now be a higher transparency and process around release, there are still problems about who gets data and what they do with it.

Let’s have clear future scope and control. There is still no plan to give the public rights to control or delete data if we change our minds about who can have it or for what purposes. And that is very uncertain. After all, they might decide to privatise or outsource the whole thing, as was planned for the CSUs.

Let’s have answers to everything already asked but unknown. The questions in the previous Caldicott review have still to be answered.

We have the possibility to see health data used wisely, safely, and with public trust. But we seem stuck with the same notes again. And the public seem to be the last to be invited to participate, and views, once gathered, seem to be disregarded. I hope to be proved wrong.

Might, perhaps, the consultation deliver the nuanced consent model discussed at public listening exercises that many asked for?

Will the care.data listening events feedback summary be found, and will its 2014 conclusions and the enacted opt out be ignored? Will the new listening event view make more difference than in 2014?

Is public engagement, engagement, if nobody hears what was said?

Mum, are we there yet? Why should AI care.

Mike Loukides drew similarities between the current status of AI and children’s learning in an article I read this week.

The children I know are always curious to know where they are going, how long will it take, and how they will know when they get there. They ask others for guidance often.

Loukides wrote that if you look carefully at how humans learn, you see surprisingly little unsupervised learning.

If unsupervised learning is a prerequisite for general intelligence, but not the substance, what should we be looking for, he asked. It made me wonder: is it also true that general intelligence is a prerequisite for unsupervised learning? And if so, what level of learning must AI achieve before it is capable of recursive self-improvement? What is AI being encouraged to look for as it learns, and what is it learning as it looks?

What is AI looking for and how will it know when it gets there?

Loukides says he can imagine a toddler learning some rudiments of counting and addition on his or her own, but can’t imagine a child developing any sort of higher mathematics without a teacher.

I suggest a different starting point. I think children develop on their own, given a foundation. And if the foundation is accompanied by a purpose (to understand why they should learn to count, and why they should want to), and if they have the inspiration, incentive and assets, they’ll soon go off on their own and outstrip your level of knowledge. That may or may not be with a teacher, depending on what is available, the cost, and how far they get compared with what they want to achieve.

It’s hard to learn something from scratch by yourself if you have no boundaries within which to place knowledge and search for more, or to know when to stop once you have found it.

You’ve only to start an online course, get stuck, and try to find the solution through a search engine to know how hard it can be to find the answer if you don’t know what you’re looking for. You can’t type in search terms if you don’t know the right words to describe the problem.

I described this recently to a fellow codebar-goer, more experienced than me, and she pointed out something much better. Don’t search for the solution or describe what you’re trying to do; instead, ask the search engine to find others with the same error message.

In effect she said, your search is wrong. Google knows the answer, but can’t tell you what you want to know, if you don’t ask it in the way it expects.
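
A minimal, hypothetical sketch of what she meant (the snippet, the error and the search strings below are invented for illustration, not taken from that conversation):

    # Hypothetical beginner snippet: the goal is to total some numbers,
    # but one entry is a string, so Python stops with an error.
    numbers = [1, 2, "3", 4]
    total = sum(numbers)
    # Running this ends in a traceback whose last line is:
    #   TypeError: unsupported operand type(s) for +: 'int' and 'str'
    #
    # Searching a description of the goal ("how do I add up my numbers")
    # rarely helps; pasting the exact message,
    #   "TypeError: unsupported operand type(s) for +: 'int' and 'str'"
    # into the search engine finds others who hit the same wall, and their fixes.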

So what will AI expect from people, and will it care if we don’t know how to interrelate? How does AI best serve humankind, and defined by whose point of view? Will AI serve only those who think most closely in AI-style steps and language? How will it serve those who don’t know how to talk about it, or with it? AI won’t care if we don’t.

If as Loukides says, we humans are good at learning something and then applying that knowledge in a completely different area, it’s worth us thinking about how we are transferring our knowledge today to AI and how it learns from that. Not only what does AI learn in content and context, but what does it learn about learning?

His comparison of a toddler learning from parents, who in effect are ‘tagging’ objects through repetition of words while looking at images in a picture book, made me wonder how we will teach AI the benefit of learning. What incentive will it have to progress?

“the biggest project facing AI isn’t making the learning process faster and more efficient. It’s moving from machines that solve one problem very well (such as playing Go or generating imitation Rembrandts) to machines that are flexible and can solve many unrelated problems well, even problems they’ve never seen before.”

Is the skill to enable “transfer learning” what will matter most?

For AI to become truly useful, we as a global society need to better understand *where* it might best interface with our daily lives, and most importantly *why*. And we need to consider *who* is teaching AI, and who is being left out in the crowdsourcing of AI’s teaching.

Who is teaching AI what it needs to know?

The natural user interfaces through which people interact with today’s more common virtual assistants (Amazon’s Alexa, Apple’s Siri, Viv, and Microsoft’s Cortana) are not just providing information to the user; through their use, those systems are learning. I wonder what percentage of today’s population is using these assistants, how representative they are, and what our AI assistants are being taught through their use. Tay was a swift lesson learned for Microsoft.

In helping shape what AI learns, and what range of language it will use to develop its reference words and knowledge, society co-shapes what AI’s purpose will be, and helps AI providers know what the point of selling it is. So will this technology serve everyone?

Are providers counter-balancing what AI is currently learning from crowdsourcing, if the crowd is not representative of society?

So far we can only teach machines to make decisions based on what we already know, and what we can tell them to decide quickly against pre-known references using lots of data. Will your next image captcha teach AI to separate the sloth from the pain-au-chocolat?

One of the task items for machine processing is better searches. Measurable, goal-driven tasks have boundaries, but who sets them? When does a computer know if it has found enough to make a decision? If the balance of material about the Holocaust on the web, for example, were written by Holocaust deniers, will AI know who is right? How will AI know what is trusted, and by whose measure?

What will matter most is surely not going to be how to optimise knowledge transfer from human to AI — that is the baseline knowledge of supervised learning — and it won’t even be for AI to know when to use its skill set in one place and when to apply it elsewhere in a different context; so-called learning transfer, as Mike Loukides says. But rather, will AI reach the point where it cares?

  • Will AI ever care what it should know and where to stop or when it knows enough on any given subject?
  • How will it know or care if what it learns is true?
  • If, in the best interests of advancing technology or through inaction, we do not limit its boundaries, what oversight is there of its implications?

Online limits will limit what we can reach in Thinking and Learning

If you look carefully at how humans learn online, I think rather than seeing surprisingly little unsupervised learning, you see a lot of unsupervised questioning. It is often in the questioning done in private that we discover, and through discovery that we learn. Valuable discoveries are often made, whether in science or in maths, and important truths are found where there is a need to challenge the status quo. Imagine if Galileo had given up.

The freedom to think freely and to challenge authority is vital to protect, and one reason why I and others are concerned about the compulsory web monitoring starting on September 5th in all schools in England, and its potential chilling effect. Some are concerned about who might have access to these monitoring results, today or in future; if stored, could they be opened to employers or academic institutions?

If you tell children they cannot use these search terms, or be curious about *this* subject, without repercussions, it is censorship. I find the idea bad enough for children, but for us as adults it’s scary.

As Frankie Boyle wrote last November, we need to consider what our internet history is:

“The legislation seems to view it as a list of actions, but it’s not. It’s a document that shows what we’re thinking about.”

Children think and act in ways that they may not as an adult. People also think and act differently in private and in public. It’s concerning that our private online activity will become visible to the State in the IP Bill — whether photographs that captured momentary actions in social media platforms without the possibility to erase them, or trails of transitive thinking via our web history — and third-parties may make covert judgements and conclusions about us, correctly or not, behind the scenes without transparency, oversight or recourse.

Children worry about lack of recourse and repercussions. So do I. Things done in passing can take on a permanence they never had before and were never intended to have. If expert providers of the tech world such as Apple Inc, Facebook Inc, Google Inc, Microsoft Corp, Twitter Inc and Yahoo Inc are calling for change, why is the government not listening? This is more than very concerning; it will have disastrous implications for trust in the State, data use by others, self-censorship, and fear that it will lead to outright censorship of adults online too.

By narrowing our parameters, what will we not discover? Not debate? Or not invent? Happy are the clockmakers, and the kids who create. Any restriction on the freedom to access information, to challenge and question, will restrict children’s learning, or even their wanting to. It will limit how we can improve our shared knowledge and improve our society as a result. The same is true of adults.

So in teaching AI how to learn, I wonder how the limitations that humans put on its scope (otherwise how would it learn what the developers want?), combined with showing it ‘our thinking’ through search terms, and the limits on that thinking if users self-censor due to surveillance, will shape what AI will help us with in future. Will it be the things that could help the most people, the poorest people, or will it be people like those who programme the AI and already use the search terms and languages it understands?

Who is accountable for the scope of what we allow AI to do or not? Who is accountable for what AI learns about us, from our behaviour data if it is used without our knowledge?

How far does AI have to go?

The leap for AI will be if and when AI can determine what it doesn’t know, and sees a need to fill that gap. To do that, AI will need to discover a purpose for its own learning, indeed for its own being, and be able to do so without limitation from the fact that humans shaped its framework for doing so. How will AI know what it needs to know and why? How will it know that what it knows is right, and which sources to trust? Against what boundaries will AI decide what it should engage with in its learning, who from, and why? Will it care? Why will it care? Will it find meaning in its reason for being? Why am I here?

We assume AI will know better. We need to care, if AI is going to.

How far are we from a machine that is capable of recursive self-improvement, asks John Naughton in yesterday’s Guardian, referencing work by Yuval Harari suggesting artificial intelligence and genetic enhancements will usher in a world of inequality and powerful elites. As I was finishing this, I read his article and found myself nodding: the implications of new technology are discussed with too much focus on the technology and too little on society’s role in shaping it.

AI at the moment has a very broad meaning for the general public. Is it living with life-supporting humanoids? Do we consider assistive search tools AI? There is only a fairly general understanding of “What is A.I., really?” Some wonder if we are “probably one of the last generations of Homo sapiens” as we know it.

If the purpose of AI is to improve human lives, who defines improvement and who will that improvement serve? Is there a consensus on the direction AI should and should not take, and how far it should go? What will the global language be to speak AI?

As AI learning progresses, every time AI turns to ask its creators, “Are we there yet?”,  how will we know what to say?

image: Stephen Barling flickr.com/photos/cripsyduck (CC BY-NC 2.0)

Thinking to some purpose