AI consciousness: science fiction or a reality?
How close are we to defining AI as conscious, and how will it affect our relationship with it?
“Some believe that AI systems will soon become independently conscious, if they haven’t already.”
Waking up this morning and reading Pallab Ghosh's BBC In-Depth article, 'The people who think AI might become conscious', got me thinking about why I first became interested in discussions of ethical technology and the ethics of AI.
Those of you who have met me in person may know that in my penultimate year of high school, I took philosophy as one of my subjects. I did not really know what the subject would entail, and had no idea that it would be something I would not only be enraptured by, but would eventually go on to study at university.
Part of the course requirement was a piece of extended research on an area of philosophy of your choice. Having really enjoyed the ‘Minds and Machine’ module, I decided to discuss the question of AI consciousness and personhood, with the following research question:
Is Artificial Intelligence capable of possessing consciousness, and should it be granted personhood?
Having spent the first part of the essay discussing different definitions and philosophical positions on consciousness, I concluded that AI, at least in its current form, was not capable of possessing consciousness, at least not under the definitions of the philosophers I researched.
As a result, after discussing what levels of rights other non-human beings hold, I concluded that AI should not be granted the same personhood human beings have. Instead, it could potentially be granted a form of 'electronic personhood', which would help with discussions around whether we should hold AI accountable for certain actions, and with questions of AI and the law.
While my 3,000-word research piece, written as a 16-year-old high school student, barely scratched the surface of the questions of AI and consciousness, it opened my eyes to the vast and quickly changing field of technology ethics and ethical AI.
Flash forward a couple of years: conversation, discussion and research about these topics have multiplied exponentially, as has my own understanding of the field.
Therefore, seeing articles such as Ghosh's discussing the possibility of AI consciousness elicits a mixture of sentiments, ranging from awe at the research being done to fear of a Blade Runner-esque dystopian prospect.
Ghosh writes about his experience taking part in the 'Dreamachine' experiment at Sussex University's Centre for Consciousness Science, which aims to find out how we create our conscious experiences of the world. The machine does this by measuring the brain activity that results from the subject being exposed to a series of flashing lights.
The brain's response to these lights produces unique geometric patterns, which are then analysed to try and provide an explanation for how the human brain generates consciousness and how thoughts are processed, affecting our ability to view and interpret reality.
Trying to answer the centuries-old question of what explains consciousness is a mammoth task, and it is one of the biggest questions in science and philosophy. The Sussex team's approach, therefore, is to break the question down into smaller parts and try to uncover the truth behind the brain's different systems, hoping this will lead them closer to an explanation of consciousness.
The aim of this research is to understand more about the nature of consciousness, and then to apply that understanding to the question of quite how close AI is to gaining consciousness. And, through further discussion, to consider how the prospect of conscious AI will affect how humans interact with it.
Questions and concerns about AI and human interaction have been around for the best part of a century. However, with the success of LLMs (Large Language Models), everyday use of AI that demonstrates what many perceive as some sort of thinking has pushed the conversation about AI consciousness to the front of people's minds.
Leaders of the Sussex team hold the view that we are still at the point where we can decide what we want a potentially conscious AI future to look like. In contrast, some people in the tech sector believe that the AI in our computers and phones may already be conscious.
Citing reasons such as the fact that not even those who develop these AI systems truly know how they work, and arguments over whether AI already exhibits 'feelings', a significant number of people believe we have already reached certain levels of AI consciousness, even if not in the same way as humans.
Whether consciousness really can be boiled down to the presence of electrical activity or responses to different-coloured strobe lighting, or whether machines are only exhibiting an illusion of consciousness, the questions still remain.
The possibility of AI consciousness, and of AI being capable of feeling, will fundamentally affect how we view and treat these systems, and will inevitably affect our moral and ethical priorities and standpoints.
So much has already changed in the couple of years since I researched my first piece on the topic of AI consciousness and ethical technology. Let’s see how much things continue to change in the next couple of years.
About the Creator
Allegra Cuomo
Interested in Ethics of AI, Technology Ethics and Computational Linguistics
Subscribe to my Substack ‘A philosophy student’s take on Ethics of AI’: https://acuomoai.substack.com
Also interested in music journalism, interviews and gig reviews
Comments (2)
Such an important topic. Until recently this was all just a thought experiment. But now it is real. And important.
I can relate to your journey into AI ethics. I also got into it through philosophy in high school. It's fascinating how a simple module can spark such a deep interest. Your conclusion about 'electronic personhood' is thought-provoking. Do you think this concept could really change how we approach AI accountability? And how do you see the field evolving in the next few years?