
Pirates and their stochastic parrots

It’s a privilege to have a letter published in the FT, as I do today, and thanks to the editors for all their work in making that happen.

I’m a bit sorry that it lost the punchline, which was supposed to bring a touch of AI humour about pirates and their stochastic parrots. Its rather key point was also cut, namely that:

“Nothing in current European laws, including Convention 108 for the UK, prevents companies developing AI lawfully.”

So for the record, and since the published version is behind a paywall (£), my agreed edited version was:

“The multi-signatory open letter advertisement, paid for by Meta, entitled “Europe needs regulatory certainty on AI” (September 19) was fittingly published on International Talk Like a Pirate Day.

It seems the signatories believe they cannot do business in Europe without “pillaging” more of our data and are calling for new law.

Since many companies lobbied against the General Data Protection Regulation or for the EU AI Act to be weaker, or that the Council of Europe’s AI regulation should not apply to them, perhaps what they really want is approval to turn our data into their products without our permission.

“Nothing in current European laws, including Convention 108 for the UK, prevents companies developing AI lawfully. If companies want more consistent enforcement action, I suggest Data Protection Authorities comply and act urgently to protect us from any pirates out there, and their greedy stochastic parrots.”

Prior to print, they asked to cut out a middle paragraph too.

“In the same week, LinkedIn sneakily switched on a ‘use me for AI development’ feature for UK users without telling us (paused the next day); Larry Ellison suggested at Oracle’s Financial Analyst Meeting that more AI should usher in an era of mass citizen surveillance, and our Department for Education has announced it will allow third parties to exploit school children’s assessment data for AI product building, and can’t rule out it will include personal data.”

It is in fact the cumulative effect of the recent flurry of AI activity by various parties, state and commercial, that deserves greater attention, rather than this Meta-led complaint alone. Who is grabbing what data and what infrastructure contracts, and creating what state dependencies and strengths, and to what end game? While some present the “AI race” as China or India versus the EU or the US to become AI “superpowers”, is what “Silicon Valley” offers, that their way is the only way, really a better offer?

It’s not, in fact, “Big Tech” I’m concerned about, but the arrogance of so many companies that, in the middle of regulatory scrutiny, would align themselves with one that would rather put out PR omitting the fact it is under that scrutiny, calling only for the law to be changed, and frankly misleading the public by suggesting it is all for our own good rather than talking about how it serves their own interests.

Who do they think they are to dictate what new laws must look like when they seem simply unwilling to stick to those we have?

Perhaps this open letter serves as a useful starting point to direct DPAs to the companies most in need of scrutiny around their data practices. They seem to be saying they want weaker laws or more consistent enforcement. Some are already well known for challenging both. Who could forget Meta’s (then Facebook’s) secret emotional contagion study involving children, in which friends’ postings were moved to influence moods, or the case of letting third parties, including Cambridge Analytica, access users’ data? Then there are the data security issues, the fine over international transfers and the antitrust issues. And there are the legal problems with their cookies. And all of this built from humble beginnings by the same founder of Facemash, “a prank website” to rate women as hot or not.

As Congressman Long reportedly told Zuckerberg in 2018, “You’re the guy to fix this. We’re not. You need to save your ship.”

The Meta-led ad called for “harmonisation enshrined in regulatory frameworks like the GDPR” and I absolutely agree. The DPAs need to stand tall and stand up to OpenAI and friends (ever dwindling in number, so it seems) and reassert the fundamental principles of data protection law, from the GDPR to Convention 108, to protect fundamental human rights. Our laws should do so whether companies like them or not. After all, it is often abuse of data rights by companies, and states, that populations need protection from.

Data protection ‘by design and by default’ is not optional under European data laws established for decades. It is not enough to argue that processing is necessary because you have chosen to operate your business in a particular way, nor that it is a necessary part of your chosen methods.

The Netherlands DPA is right to say scraping is almost always unlawful. A legitimate interest cannot simply be plucked from thin air by anyone who is neither an existing data controller nor a processor, who has no prior relationship with the data subjects, and who re-uses data posted online for purposes the data subjects could not reasonably expect, without informing them of the processing or offering an opt-out. Instead, the only possible basis for this kind of brand-new controller should be consent. Having to break the law hardly screams ‘innovation’.

Regulators do not exist to pander to wheedling, but to independently uphold the law in a democratic society in order to protect people, not to prioritise the creation of products. The principles are well established:

  • Lawfulness, fairness and transparency.
  • Purpose limitation.
  • Data minimisation.
  • Accuracy.
  • Storage limitation.
  • Integrity and confidentiality (security); and
  • Accountability.

In my view, it is the lack of dissuasive enforcement, as part of the checks and balances on big power like this, regardless of where it resides, that poses one of the biggest data-related threats to humanity.

Not AI, nor being “left out” of being used to build it for their profit.