By Anthony Grayling and Brian Ball
New scientific understanding and engineering techniques have always impressed and frightened. No doubt they will continue to. OpenAI recently announced that it anticipates “superintelligence” – AI surpassing human abilities – this decade. It is accordingly building a new team, and devoting 20% of its computing resources to ensuring that the behaviour of such AI systems will be aligned with human values.
It seems they don’t want rogue artificial superintelligences waging war on humanity, as in James Cameron’s 1984 science fiction thriller, The Terminator (ominously, Arnold Schwarzenegger’s terminator is sent back in time from 2029). OpenAI is calling for top machine-learning researchers and engineers to help them tackle the problem.
But might philosophers have something to contribute? More generally, what can be expected of the age-old discipline in the new technologically advanced era that is now emerging?
To begin to answer this, it is worth stressing that philosophy has been instrumental to AI since its inception. One of the first AI success stories was
a 1956 computer program, dubbed the Logic Theorist, created by Allen Newell and Herbert Simon. Its job was to prove theorems using propositions from Principia Mathematica, a 1910 three-volume work by the philosophers Alfred North Whitehead and Bertrand Russell that aimed to reconstruct all of mathematics on one logical foundation.
Indeed, the early focus on logic in AI owed a great deal to the foundational debates pursued by mathematicians and philosophers.
One significant step was the German philosopher Gottlob Frege’s development of modern logic in the late 19th century. Frege introduced quantifiers and variables – rather than names of objects such as people – into logic. His approach made it possible to say not only, for example, “Joe Biden is president” but also to systematically express such general thoughts as “there exists an X such that X is president”, where “there exists” is a quantifier and “X” is a variable.
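In modern notation (a minimal illustration; the predicate symbol “President” is our own choice, not Frege’s), the two kinds of statement can be written as follows:

```latex
% Singular statement about a named individual ("Joe Biden is president"):
\mathrm{President}(\mathit{biden})

% Frege-style quantified generalisation ("there exists an X such that X is president"):
\exists x\, \mathrm{President}(x)
```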
Other important contributions came in the 1930s from the Austrian-born logician Kurt Gödel, whose completeness and incompleteness theorems concern the limits of what can be proved, and the Polish logician Alfred Tarski, whose proof of the “indefinability of truth” showed that “truth” in any standard formal system cannot be defined within that system itself – arithmetical truth, for example, cannot be defined within the system of arithmetic.
Finally, the abstract notion of a computing machine, introduced by the British pioneer Alan Turing in 1936, drew on these developments and had a huge impact on early AI.
It might be said, however, that even if such good old-fashioned symbolic AI was indebted to high-level philosophy and logic, “second-wave” AI, based on deep learning, derives more from the concrete engineering feats associated with processing vast quantities of data.
Still, philosophy has played a role here too. Take large language models, such as the one that powers ChatGPT, which produces conversational text. They are enormous models, with billions or even trillions of parameters, trained on vast datasets (typically comprising much of the internet). But at their heart, they track – and exploit – statistical patterns of language use. Something very much like this idea was articulated by the Austrian philosopher Ludwig Wittgenstein in the middle of the 20th century: “the meaning of a word”, he said, “is its use in the language”.
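To make the idea concrete, here is a minimal sketch – our own illustration, not anything resembling production LLM code – of a toy “bigram” model that simply counts which word follows which in a corpus and predicts the most frequent continuation. Real large language models learn far richer patterns with neural networks, but the underlying spirit, meaning as statistical patterns of use, is the same.

```python
from collections import Counter, defaultdict

# A tiny corpus standing in for "much of the internet".
corpus = "the cat sat on the mat the cat ate the fish".split()

# Count how often each word follows each other word (bigram statistics).
following = defaultdict(Counter)
for current_word, next_word in zip(corpus, corpus[1:]):
    following[current_word][next_word] += 1

def predict_next(word):
    """Return the most frequent continuation of `word` seen in the corpus."""
    candidates = following.get(word)
    return candidates.most_common(1)[0][0] if candidates else None

print(predict_next("the"))  # -> "cat", the word most often used after "the" here
```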
But contemporary philosophy, and not just its history, is relevant to AI and its development. Could an LLM truly understand the language it processes? Might it achieve consciousness? These are deeply philosophical questions.
Science has so far been unable to fully explain how consciousness arises from the cells in the human brain. Some philosophers even believe that this is such a “hard problem” that it lies beyond the scope of science, and may require a helping hand from philosophy.
In a similar vein, we can ask whether an image-generating AI could be truly creative. Margaret Boden, a British cognitive scientist and philosopher of AI, argues that while AI will be able to produce new ideas, it will struggle to evaluate them as creative people do.
She also anticipates that only a hybrid (neural-symbolic) architecture – one that combines logic-based techniques with deep learning from data – will achieve artificial general intelligence.
Human values
To return to OpenAI’s announcement, when prompted with our question about the role of philosophy in the age of AI, ChatGPT suggested to us that (amongst other things) it “helps ensure that the development and use of AI are aligned with human values”.
In this spirit, perhaps we can be allowed to propose that, if AI alignment is the serious issue that OpenAI believes it to be, it is not just a technical problem to be solved by engineers or tech companies, but also a social one. That will require input from philosophers, but also social scientists, lawyers, policymakers, citizen users and others.
Indeed, many people are worried about the rising power and influence of tech companies and their impact on democracy. Some argue we need a whole new way of thinking about AI – taking into account the underlying systems supporting the industry. The British barrister and author Jamie Susskind, for example, has argued it is time to build a “digital republic” – one which ultimately rejects the very political and economic system that has given tech companies so much influence.
Finally, let us briefly ask: how will AI affect philosophy? Formal logic in philosophy actually dates to Aristotle’s work in antiquity. In the 17th century, the German philosopher Gottfried Leibniz suggested that we may one day have a “calculus ratiocinator” – a calculating machine that would help us derive answers to philosophical and scientific questions in a quasi-oracular fashion.
Perhaps we are now beginning to realise that vision, with some authors advocating a “computational philosophy” that literally encodes assumptions and derives consequences from them. This ultimately allows factual and/or value-oriented assessments of the outcomes.
For example, the PolyGraphs project simulates the effects of information sharing on social media. This can then be used to computationally address questions about how we ought to form our opinions.
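To give a flavour of the approach – this is only an illustrative sketch under our own simplifying assumptions, not the PolyGraphs code – one can simulate a small network of agents who repeatedly blend their own credence in a claim with the credences of those they follow, and then inspect whether and how the group’s opinions converge.

```python
import random

random.seed(42)
num_agents = 10

# Each agent starts with a random credence (degree of belief) between 0 and 1.
opinions = [random.random() for _ in range(num_agents)]

# Each agent "follows" three randomly chosen agents on a toy social network.
neighbours = [random.sample(range(num_agents), 3) for _ in range(num_agents)]

# Repeated rounds of information sharing: each agent averages its own credence
# with the mean credence of the agents it follows.
for _ in range(20):
    opinions = [
        0.5 * opinions[i] + 0.5 * sum(opinions[j] for j in neighbours[i]) / 3
        for i in range(num_agents)
    ]

print([round(o, 2) for o in opinions])  # credences typically cluster after sharing
```

Varying the network structure or the updating rule then lets one ask, computationally, which arrangements tend to lead a community towards accurate and well-justified opinions.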
Certainly, progress in AI has given philosophers plenty to think about; it may even have begun to provide some answers.
Anthony Grayling is Professor of Philosophy at Northeastern University London. Brian Ball is Associate Professor of Philosophy, AI and Information Ethics at Northeastern University London.
JW says
Yes, philosophy is important today although it dates back thousands of years. Western philosophy is based on Greek philosophers like Socrates, Plato and Aristotle. These days schools (in the US) do not have time to teach philosophy or any form of critical thinking or rational inquiry of knowledge. SAT tests will do.
If you ask an average person what AI stands for, you will be surprised. But I admit that Artificial Intelligence is confusing.
First, Artificial means: not natural, real or true. Second, Intelligence is a human mental capacity to think critically, rationally or pragmatically; no such things as (gut) feelings or instinct.
The term AI is now exclusively used for the ability of digital computers or computer-controlled robots to perform tasks commonly associated with intelligent beings (i.e. people).
So, it is nothing more than a manipulative marketing tool, i.e. the next smartphone. Yes, it may help us to become more productive and profitable but it will cost us a lot of jobs!
So, enjoy sports and entertainment and a two-day work week, while we further limit education by banning more books, teaching fewer subjects (no history, etc.) and dropping culture and civics. No need to vote anymore! Wasn’t that what a presidential candidate promoted lately?
And let’s not forget that all this results in us, the people, becoming increasingly ignorant and less intelligent. Looks boring to me!
Sherry says
A thought provoking article for sure.
Years ago as a professional recruiter who placed AI engineers, database architects and analysts, programmers, etc. in Silicon Valley and Fortune 500 companies, I worried about the mindsets of those programming our future lives. I worried, in the 1990s, about their extremely narrow and linear views on systems architecture. Unfortunately, when project managers wanted to “speed things along”/maximize profits, they always trimmed the actual systems “user” architects on the front end and the “quality assurance”/testers on the back end of software development. “Let’s just leave it to the programmers”. Time after time, I saw systems “go live” that were massively flawed. Often they were “NOT user friendly”. . . meaning they were not “normal human intuitive”. Often they had “bugs” that at best were annoying and at worst crashed the software entirely.
BUT. . . Here’s the rub! In the 1990s those highly intelligent “geeks” were generally “ethical” and “trustworthy”. Many were from Asia and had strict upbringings regarding the protection of their honor and their family’s honor.
Here is my new worry, which I would equate to a ticking “time bomb” sitting on top of my old worries. This is where the “extremely critical” role of personal character/morals/principles comes into play. What happens during the development of those AI systems in this new politically driven era of massive moral decay and corruption, which is rapidly becoming acceptable to the mainstream? If an AI system, which learns from itself, is programmed by an “evil genius”, hell-bent on controlling power/money in an undiscoverable way. . . who/what is going to stop him? AI will most certainly propagate the inherent “evils”, while “perfecting” the deception and process. . . much to the detriment of humankind. Simply because AI has NO HUMAN SOUL!
Laurel says
Programmers are not end users and we end users are aggravated by this on a regular basis.
Adobe is genius software, but even intelligent users are occasionally stumped. By the time Microsoft figures it out, they change it. AI will probably work it out.
The future’s wide open!