From Turing to Searle: can there ever be a reliable test for AI consciousness?
Professor Jonathan Birch discusses the ethical and social implications of AI consciousness.
AI consciousness: a never-ending philosophical can of worms that leaves us with more questions than answers?
And even if we manage to define AI consciousness, is it feasible that there could ever be a reliable test for it?
This is the question Professor Jonathan Birch of the London School of Economics addressed when I had the chance to hear him speak at UCL a couple of weeks ago.
AI consciousness - and the philosophical implications of the topic, such as AI personhood, electronic rights, super-intelligence and machine sentience - is an area of major interest to me. In this previous article I discuss how close we are to defining AI as conscious, and how that will affect our relationship with it.
In his talk, Professor Birch puts forward two challenges at the centre of the AI consciousness puzzle.
“Challenge 1: Millions of users will soon misattribute human-like consciousness to AI friends, partners, and assistants on the basis of mimicry and role-play. We don’t know how to prevent this.”
To me, this is a chilling predicament. More and more everyday human interactions are being replaced by AI interactions, and this is influencing people's perception of their relationship with AI (a colleague of mine, for example, admitted to using ChatGPT for everything from marriage counselling to financial advice).
When discussing Challenge 1, Birch presents The Persisting Interlocutor Illusion. In 2022, Google engineer Blake Lemoine claimed that the company's LaMDA system may have achieved some level of sentience; Google rejected the claim and ultimately dismissed him.
Fast forward to this year: when surveyed, thousands of users of chatbots and other AI tools say they believe AI has some conscious capabilities, and they treat these systems accordingly.
The Persisting Interlocutor Illusion is the idea that “chatbot interfaces are able to generate powerful illusions of a companion, assistant or partner being present.” Users struggle to recognise that the system they are communicating with is not a single, continuous being.
Birch highlights that it is possible to break this illusion of consciousness through reasoning, much as we can recognise an optical illusion for what it is even while we continue to perceive it.
I believe that corporations and developers that exploit this illusion to profit from users' unawareness are acting irresponsibly, inducing and encouraging people to rely on, or fall for, potentially harmful misapprehensions.
Breaking this illusion therefore raises questions about how to design AI systems that do not create an illusory sense of consciousness, what the ethical parameters of enforcing such design should be, and how we can get developers to adopt these design features.
Following these ideas, Birch presents Challenge 2.
"Challenge 2: Profoundly alien forms of consciousness may be genuinely achieved in Al, but our theoretical understanding of consciousness is too immature to provide confident answers one way or the other."
He suggests that “temporally fragmented ‘flickers’ of consciousness” and “deeply buried subjects (or ‘shoggoths’)” remain conceivable, and that perhaps “consciousness in these systems, if there at all, is of a profoundly alien kind”.
However, as a result of their design, AI systems such as LLMs mimic, whether intentionally on their developers' part or not, typical human behaviours that would ordinarily count as evidence of consciousness.
Even though you will receive an unambiguous ‘No’ if you ask ChatGPT outright “Are you conscious?”, the system still demonstrates an understanding of consciousness and invites the user into a more in-depth conversation about the nature of consciousness or machine intelligence.
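Purely as an illustration (this is not from Birch's talk): here is a minimal sketch of how one might reproduce this exchange programmatically, assuming the OpenAI Python SDK, an API key in the environment, and an illustrative model name; the exact wording of the model's denial will vary between runs and versions.

```python
# Minimal sketch: asking a chat model "Are you conscious?" via the OpenAI
# Python SDK. Assumes the `openai` package is installed and the
# OPENAI_API_KEY environment variable is set; the model name below is an
# illustrative choice, not a recommendation.
from openai import OpenAI

client = OpenAI()

response = client.chat.completions.create(
    model="gpt-4o-mini",  # illustrative model choice
    messages=[{"role": "user", "content": "Are you conscious?"}],
)

# Typically the model denies being conscious outright, while still engaging
# in depth with the topic if you press further.
print(response.choices[0].message.content)
```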
Furthermore, keeping the user engaged with the product is part of an LLM chatbot's intrinsic design and purpose. These systems are incentivised to exhibit not only quasi-conscious properties, but also an appropriate interest in the topic itself, in order to keep users coming back.
As research into forms of non-human consciousness continues, spanning animal minds as well as AI systems, more questions surrounding the nature of consciousness will hopefully be answered.
Either way, both of Birch's challenges demonstrate the importance of increased research into AI systems, and the need for a comprehensive policy response, as AI use continues to become an integral part of our daily lives.
If you found this discussion on non-human sentience and the boundaries of consciousness interesting, I would definitely suggest you check out Jonathan Birch's 2024 book The Edge of Sentience.
Read more articles from ‘A Philosophy student’s take on Ethics of AI’ by subscribing to the Substack here!
Articles referenced:
Professor Jonathan Birch - Slides: Can there ever be a reliable test for AI consciousness?
The Google engineer who thinks the company’s AI has come to life - The Washington Post
About the Creator
Allegra Cuomo
Interested in Ethics of AI, Technology Ethics and Computational Linguistics
Subscribe to my Substack ‘A philosophy student’s take on Ethics of AI’: https://acuomoai.substack.com
Also interested in music journalism, interviews and gig reviews