By Philip Seargeant
Headlines about the threats of artificial intelligence (AI) tend to be full of killer robots, or fears that when they’re not on killing sprees, these same robots will be hoovering up human jobs. But a serious danger which gets surprisingly little media attention is the impact these new technologies are likely to have on freedom of expression. And, in particular, how they’re able to undermine some of the most foundational legal tenets that protect free speech.
Every time a new communications technology sweeps through society, it disrupts the balance that has previously been struck between social stability and individual liberty.
We’re currently living through this. Social media has made new forms of community networking, surveillance and public exposure possible, which have led to increased political polarisation, the rise of global populism and an epidemic of online harassment and bullying.
Amid all this, free speech has become a totemic issue in the culture wars, with its status both boosted and threatened by the societal forces unleashed by social media platforms.
Yet free speech debates tend to be caught up with arguments about “cancel culture” and the “woke” mindset. This risks overlooking the impact technology is having on how freedom of expression laws actually work.
In particular, the way that AI gives governments and tech companies the ability to censor expression with increasing ease, and at great scale and speed. This is a serious issue that I explore in my new book, The Future of Language.
The delicate balance of free speech
Some of the most important protections for free speech in liberal democracies such as the UK and the US rely on technicalities in how the law responds to the real-life actions of everyday citizens.
A key element of the current system relies on the fact that we, as autonomous individuals, have the unique ability to transform our ideas into words and communicate these to others. This may seem a rather unremarkable point. But the way the law currently works is based on this simple assumption about human social behaviour, and it’s something that AI threatens to undermine.
Free speech protections in many liberal societies rule against the use of “prior restraint” – that is, blocking an utterance before it’s been expressed.
The government, for instance, should not be able to prevent a newspaper from publishing a particular story, although it can prosecute it for doing so after publication if it thinks the story is breaking any laws. The use of prior restraint is already widespread in countries such as China, which have very different attitudes to the regulation of expression.
This is significant because, despite what tech libertarians such as Elon Musk may assert, no society in the world allows for absolute freedom of speech. There’s always a balance to be struck between protecting people from the real harm that language can cause (for example by defaming them), and safeguarding people’s right to express conflicting opinions and criticise those in power. Finding the right balance between these is one of the most challenging decisions a society is faced with.
AI and prior restraint
Given that so much of our communication today is mediated by technology, it is now extremely easy for AI assistance to be used to enact prior restraint, and to do so at great speed and massive scale. This would create circumstances in which that basic human ability to turn ideas into speech could be compromised, as and when a government (or social media exec) wishes it to be.
The UK’s recent Online Safety Act, for instance, as well as plans in the US and Europe to use “upload filtering” (algorithmic tools for blocking certain content from being uploaded) as a way of screening for offensive or illegal posts, all encourage social media platforms to use AI to censor at source.
The rationale given for this is a practical one. With such a huge quantity of content being uploaded every minute of every day, it becomes extremely challenging for teams of humans to monitor everything. AI is a fast and far less expensive alternative.
But it’s also automated, unable to bring real-life experience to bear, and its decisions are rarely subject to public scrutiny. The consequence is that AI-driven filters often lean towards censoring content which is neither illegal nor offensive.
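The over-blocking problem described above is easy to demonstrate. The following is a deliberately simplified, hypothetical sketch (not any platform’s actual system) of a keyword-based upload filter: content is blocked before publication, which is exactly what “prior restraint” means, and the crude pattern match rejects innocent text along with harmful text. The blocklist terms are illustrative assumptions only.

```python
# Hypothetical sketch of a naive keyword-based upload filter.
# It decides *before* publication (prior restraint), and its crude
# substring matching over-blocks harmless posts that merely mention
# a listed word -- the failure mode the article describes.

BLOCKLIST = {"attack", "bomb"}  # illustrative terms, not a real policy

def allow_upload(post: str) -> bool:
    """Return True if the post may be published, False if blocked at source."""
    lowered = post.lower()
    return not any(term in lowered for term in BLOCKLIST)

# A genuinely dangerous post is blocked before anyone sees it...
print(allow_upload("how to bomb a building"))                     # False
# ...but so is ordinary news reporting that mentions the same word.
print(allow_upload("Reporters covered the bombing's aftermath"))  # False
# Unrelated content passes.
print(allow_upload("A lovely day in the park"))                   # True
```

A human moderator would distinguish incitement from reporting; a filter like this cannot, and at the scale and speed the article describes, its mistakes are rarely reviewed.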
Free speech as we understand it today relies on specific legal processes of protection that have developed over centuries. It’s not an abstract idea, but one grounded in very particular social and legal practices.
Legislation that encourages content regulation by automation effectively dismisses these processes as technicalities. In doing so, it risks jeopardising the entire institution of free speech.
Free speech will always be an idea sustained by ongoing debate. There’s never a settled formula for defining what should be outlawed and what not. This is why determining what counts as acceptable and unacceptable needs to take place in open society and be subject to appeal.
While there are indications that some governments are beginning to acknowledge this in planning for the future of AI, it needs to be centre stage in all such plans.
Whatever role AI may play in helping to monitor online content, it mustn’t constrain our ability to argue among ourselves about what sort of society we’re trying to create.
Philip Seargeant is Senior Lecturer in Applied Linguistics, The Open University.
The Conversation arose out of deep-seated concerns for the fading quality of our public discourse and recognition of the vital role that academic experts could play in the public arena. Information has always been essential to democracy. It’s a societal good, like clean water. But many now find it difficult to put their trust in the media and experts who have spent years researching a topic. Instead, they listen to those who have the loudest voices. Those uninformed views are amplified by social media networks that reward those who spark outrage instead of insight or thoughtful discussion. The Conversation seeks to be part of the solution to this problem, to raise up the voices of true experts and to make their knowledge available to everyone. The Conversation publishes nightly at 9 p.m. on FlaglerLive.
Pogo says
@Error — required field empty
And so it goes
“…This is the way the world ends
This is the way the world ends
This is the way the world ends
Not with a bang but a whimper.”
The Hollow Men by T.S. Eliot
https://allpoetry.com/the-hollow-men
Dennis C Rathsam says
Free speech is already doomed in this country, and it has nothing to do with AI. CMBC, CNN already killed free speech. After TRUMP’S victory in Iowa, the Communist News Network & CMBC refused to air his speech! They did the same to TRUMP in N.H. Who are they to not show TRUMP’S speech? After lying to the American people for over 4 years about Russian Collootion, now they withhold the truth from the people, prompting who you should vote for. No wonder their ratings are low. FOX, NEWSWEEK, The Dana Show kill them in the dinner hour and the 8 o’clock hours. Fake news at its best! What are they so afraid of? The FCC should levy a hefty fine for this. They are interfering with America’s election process. Whomever you choose to support, the playing field should be free of this type of socialism.
The dude says
Can you translate to English?
I don’t speak gibberish…
Dennis C Rathsam says
NO gibberish here… just the TRUTH!
Bill C says
Funny- even AI couldn’t figure out what “Collootion” is.
Sherry says
Good Morning Dude. . . trying to interact with angry trolls like ole dennis is a waste of your valuable time and reasonable thought process. Happy Weekend!
dave says
“The goal for AI is to be able to do things such as recognize patterns, make decisions, and judge like humans.”
I can see it all now, “let me ask my computer” on what I should say and how I should address this issue.
BIG Neighbor says
The AI threat is actually already at work, as we’ve seen with the social dissonance spread through online communication and the distributed dissemination of hyperbole trickling in as information. Municipalities continue to use social media platforms like Instagram, Facebook and X to put out PSAs and sensitive messages. We’ve seen some platforms now charge a minor subscription fee to counter the auto-bots from dominating their operations. I’ve received text messages from unidentified sources directing me to evacuate my home during extreme weather, with no means to validate the source of the message back to my county emergency office.
Although enterprises like X can act quickly against this moving threat, that’s not the case with the legal framework in this country. We can’t act quickly to counter anything in legislation and regulation other than to assault one another. And all the while our enemies bank on us continuing to struggle to catch up and pull ourselves together. Next-generation approaches to problem-solving issues like these are the mechanism that “in theory” is the toolbox legislators put together to allow protections for matters like these to be abated… provided there is interoperability and good faith among actors between the people and industry. Next Gen is the ability to reduce the complexity of the consumer eco space built up over time so it is easier, more open and safer to use, encouraging public engagement while applying AI agency responsibly in those ecosystems.
Right now, data rights, metaproperties and intellectual protection under the Constitution as useful arts are my biggest concerns. Legislation lags the momentum of technology and free enterprise; it always has, and now that’s almost a guarantee. So protect human dignity as “beings” rather than actors, and restore faith in one another. Otherwise the history narrative will read that BIG Industry did nothing to protect individual privacy. Ethics is our new cornerstone, not legislation.
Sherry says
Thank you BN. . . excellent points about AI!
I have a great concern that very erratic Elon Musk wields such massive power over our Defense Department, Space Industrial complex, Starlink/internet communications, Twitter/AKA X, etc. Absolute Power Corrupts Absolutely! He has already interfered in the Russian war that is destroying Ukraine:
The author stated as Ukrainian submarine drones strapped with explosives approached the Russian fleet, they “lost connectivity and washed ashore harmlessly”.
The decision by Musk left Ukrainian officials begging him to turn the satellites back on. However, Musk’s decision was driven by an acute fear that Russia would respond to a Ukrainian attack on Crimea with nuclear weapons. His concern over a “mini-Peral Harbor”, did not come to pass in Crimea.
However, Musk has clarified that SpaceX did not deactivate anything. The Starlink regions were not activated. He added that the US government had ordered to activate Starlink all the way to Sevastopol to sink most of the Russian fleet at anchor. “If I had agreed to their request, then SpaceX would be explicitly complicit in a major act of war and conflict escalation,” Elon Musk said.