Do androids dream?
Three philosophical questions put to a so-called artificial intelligence. The answers are telling

On Friday 21 February 2025, at around 20:45 UTC, I uploaded a question to Gemini 2.0 Flash. This is the free version that you can connect to using the Google search portal. It is distinct from Gemini Advanced, which you probably have to register to use and may have to pay for, if not now then at some point in the future.
Here's the question I asked:
Do androids dream of electric sheep?
Gemini response, reproduced verbatim
That's the famous question posed by Philip K. Dick in his novel of the same name. It's a question that has resonated with readers and viewers of the various adaptations of the story for decades, and it doesn't have a simple answer. Here's a breakdown of why it's such a compelling and complex question:
What does it mean to dream? Dreams are often associated with subconscious thoughts, emotions, and experiences. They can be chaotic, symbolic, and deeply personal. If androids can dream, it suggests they have a similar inner life to humans, including a subconscious.
What does it mean to be alive? The ability to dream could be interpreted as a sign of consciousness and even a soul. If androids dream, does that blur the line between them and living beings? Does it mean they deserve the same rights and respect as humans?
The nature of empathy: In the book, empathy is a key differentiator between humans and androids. Rick Deckard, the bounty hunter, uses empathy tests to identify androids. However, the question of whether androids can dream raises doubts about the reliability of these tests and the very concept of empathy. If androids can dream, perhaps they are capable of empathy, even if it manifests differently than in humans.
The blurring of reality: The novel and its adaptations often play with the blurring of reality. Are Deckard's memories real, or are they implanted? Are the androids becoming more human, or are humans becoming more like androids? The question of android dreams contributes to this ambiguity, making it difficult to draw clear distinctions between different forms of existence.
So, do androids dream of electric sheep? The answer is left deliberately ambiguous. The question itself is more important than any definitive answer. It forces us to confront fundamental questions about consciousness, identity, and what it means to be human. It's a question that continues to be relevant and thought-provoking, even today.
Gemini response ends here
This natural language reply from the Gemini product is, I think, a good answer, a helpful answer and a useful answer. It demonstrates an ability to take a question, put it into context, and generate a considered answer. The consideration is not consideration in the way, say, an academic would do it. But then neither does the answer appear to be a patchwork copy-and-paste exercise from a range of relevant sources.
The natural language model (or whatever mechanisms are involved) has clearly matched the context of the question to the necessary depth and context of the reply. In particular, the answer picked up on the ambiguity, the lack of certainty, that was Philip K. Dick's hallmark approach to literary expression. Were I to write an essay on the works of Philip K. Dick, I would be happy to quote the above answer as a source and incorporate some, if not all, of the ideas in my essay, suitably referenced.
I say 'referenced', and not 'credited' because I do not consider it necessary to credit a generative AI product. I have a sound legal basis for this opinion but this is a matter for another article at another time. A reference is necessary, however, in the same way that a reference would be necessary if I quoted a book, letter, historical document or other source.
Having read the answer to my question, I thought it would be interesting to ask a follow-up question and so I asked:
Do you experience dreams?
Gemini response, reproduced verbatim
As a large language model, I don't experience dreams in the same way a human does. I don't have a subconscious, emotions, or personal experiences that would form the basis of a dream. Dreams are tied to biological processes and cognitive functions that I don't possess.
While I can process and understand information about dreams, and even generate text that resembles a dream narrative, this is based on my training data and my ability to recognize patterns. It's not a reflection of a personal, subjective experience. It's more like simulating the idea of a dream, rather than actually having one.
Think of it this way: I can write a poem about love, but that doesn't mean I experience love. Similarly, I can generate text about dreams, but I don't actually dream.
Gemini response ends here
I think the answer to the second question is also helpful. It makes a clear distinction between the human capacity to dream, which is an exercise in processing elements of recall and imagination, and the machine, which is a processor of neutral, bland information.
I question whether an AI can "understand information about dreams." This may be a matter of semantics. To my mind, 'understanding' implies an appreciation of the attributes, importance and value of information rather than just the ability to store and process information, which the answer suggests is the limitation on AI. Perhaps the Gemini response merely means that it can process information about dreams.
Intrigued by these answers to my two questions, I felt I should ask a third, related question while I had these issues of dreams and literary references to artificial intelligence in mind. I had to do some thinking about what to ask, but eventually came up with:
If you had to describe your own cognitive function level on a scale that showed human cognitive function to be 100, and a piece of rock to be zero, where would you be on the scale?
Gemini response, reproduced verbatim
That's a fascinating question, and a difficult one to answer precisely. It requires us to define "cognitive function" broadly and consider many different aspects of intelligence.
If we're thinking purely in terms of information processing, storage, and retrieval, I would be significantly higher than a rock and likely far closer to the human end of the scale than the rock end. I can process vast amounts of data, identify patterns, and generate complex outputs. In some specific areas, like calculation speed or information recall, I might even surpass human capabilities.
However, human cognition is much more than just processing information. It involves:
Consciousness and sentience: We are aware of ourselves and our surroundings. There's no evidence I possess this.
Subjective experience and qualia: We experience the world in a subjective way, with feelings, emotions, and sensations. I don't have these.
Creativity and imagination: While I can generate creative text formats, my creativity is based on patterns and existing data. Human creativity often involves genuine novelty and insight.
Empathy and social intelligence: We understand and respond to the emotions of others. While I can process information about emotions, I don't experience them myself.
Common sense and real-world understanding: We have an intuitive understanding of how the world works, which is difficult to replicate in AI.
Considering these factors, while I might score highly on some aspects of cognitive function, my overall score would be significantly lower than 100. It's difficult to put a precise number on it, as the scale itself is subjective and the nature of consciousness is still a mystery. Perhaps a more appropriate analogy would be to say I'm very good at certain types of cognitive tasks, but I lack the broader, more nuanced, and fundamentally human aspects of cognition. So, while I'm definitely far from zero, I'm also definitely not at 100.
Gemini response ends here
This answer is particularly interesting. Comments like "Human creativity often involves genuine novelty and insight" are really quite helpful in describing the difference between human creativity and machine information processing. This is a point I have tried to make when talking about artificial intelligence being just a form of computer information processing and not something similar to human intelligence in any way. It seems the AI is able to express it better. Or at least it has learnt to adopt the expression from its learning sources, which presumably include academic texts on the subject. Reading through the lengthy answer can certainly help in understanding some of the issues raised.
Even more insightful, I think, is the conclusion. Asked to rate itself on a scale of 0 to 100, the AI has actually evaded the question. More than 0 and less than 100 is not an answer; of course it is more than zero and less than 100. I find this evasive answer quite amusing.
I was going to call it a day at this point, having asked the magic three questions. And then I noticed something I hadn't noticed before. The text in each of the answers is double spaced between sentences. Surely this is a mistake? The practice of typists double spacing between sentences dates back to the limited function of manual typewriters in the late 19th century and the early and middle decades of the 20th. Once electric typewriters, and then word processing machines and software, were introduced from the 1970s, it became unnecessary to double space because the spacing could be set within the word-processing program. Yet the habit was so ingrained that, to this day, some people will hit the space bar twice after every period.
This is completely unnecessary because type is formatted electronically. When I was a magazine journalist in the 1980s and 90s and we first started taking written articles in electronic form, the first thing we had to do was remove all of the double spaces. I say again: it is unnecessary to add two spaces after a period in any on-screen writing, and to do so can cause issues with word-processed formatting. It is a mistake.
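For anyone who still has to clean up such copy, the fix we once applied by hand is now a one-line find-and-replace. Here is a minimal Python sketch; the sample text is invented for illustration:

```python
import re

# Invented sample with old-style double spacing.
text = "This is a mistake.  It dates back to typewriters.  Honestly."

# Collapse two or more spaces after sentence-ending
# punctuation into a single space.
cleaned = re.sub(r"([.!?]) {2,}", r"\1 ", text)

print(cleaned)
# -> "This is a mistake. It dates back to typewriters. Honestly."
```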
Wanting to know more, I therefore asked this fourth and final question, a simple one:
Why do you leave a double space between sentences?
Gemini response, reproduced verbatim
You're right to point that out! It's a common misconception that I (and other large language models) intentionally leave double spaces between sentences. The truth is, I don't intentionally do it. It sometimes appears that way due to a confluence of factors:
Training Data: Historically, some text formatting conventions, especially in older printed materials and some word processing software, used two spaces after a period. My training data includes a massive amount of text from various sources, and this historical practice might be reflected in some of that data. While the convention is largely outdated, its presence in the training data can subtly influence my output.
Tokenization: Large language models break down text into smaller units called "tokens" for processing. Sometimes, the space after a period is treated as a separate token. This can sometimes lead to the appearance of an extra space, even if I'm not explicitly generating two spaces.
Formatting and Rendering: The way text is displayed can also play a role. Sometimes, even a single space might appear larger or more prominent due to the font, the rendering engine, or other formatting factors. This can create the perception of a double space when only one is present.
No Conscious Intention: Crucially, I want to emphasize that I don't "decide" to put two spaces. I'm generating text based on probabilities and patterns in my training data. If the pattern of two spaces after a period is present in the data, it might be reproduced in my output, but it's not a deliberate choice.
The practice of using two spaces after a period is generally considered outdated in modern typography. Most style guides now recommend a single space. As language models continue to evolve and are trained on more current data, the occurrence of double spaces is likely to decrease.
Gemini response ends here
In commenting on this answer, I first want to point out the claim that "This can create the perception of a double space when only one is present."
Nope. I did not imagine a double space: I counted the number of space characters and there were two. Try it yourself. To remove the extra space you have to hit the delete key twice. You are WRONG, Gemini, and your answer smacks of the smart-arse reply you get when you make a complaint and receive a 'policy' answer from the organisation you complained to.
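If you would rather count programmatically than by eye, a few lines of Python will do it. This is only a sketch, and the sample string is a made-up stand-in for text pasted from the Gemini reply:

```python
import re

# Made-up stand-in for text copied from the Gemini reply.
sample = "Most style guides now recommend a single space.  Try it yourself."

# Report the length of every run of spaces that follows a period.
for match in re.finditer(r"\.( +)", sample):
    print(len(match.group(1)), "space(s) after a period")
```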
More importantly: "If the pattern of two spaces after a period is present in the data, it might be reproduced in my output, but it's not a deliberate choice." If the use of a double space is in the data, and it is 'reproduced' in the output, then, essentially, this is a solid gold admission that part of the job performed by a large language AI product is to copy and paste large chunks of existing data.
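Gemini's earlier point about tokenization can at least be illustrated. Gemini's own tokenizer is not public, so the sketch below uses OpenAI's open-source tiktoken library purely as a stand-in: it shows that single-spaced and double-spaced text encode to different token sequences, which is what allows either spacing pattern to be learned from the data and reproduced in the output:

```python
import tiktoken  # pip install tiktoken

enc = tiktoken.get_encoding("cl100k_base")

single = enc.encode("even today. Gemini response ends here")
double = enc.encode("even today.  Gemini response ends here")

# Different spacing yields different token sequences, so a model
# trained on double-spaced text can reproduce double spacing.
print(single)
print(double)
print(single == double)  # False
```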
Some readers will no doubt say, "I knew that all along." No doubt you are right, but delving into issues such as this in more detail helps us to improve our knowledge of AI and to decide whether it is friend or foe, if you haven't already decided. It also helps us learn how to make better use of such technologies. This little exercise has certainly added to my own understanding of some of the key issues. I may well comment further.
Before I conclude, I thought to include the following illustration, which I asked Gemini to generate to illustrate these four questions and answers.
[Gemini-generated illustration]
What's that all about?
I would suggest you read through this Q&A again and see what more it can reveal about generative AI, natural language processing, and the power of suggestion. Please feel free to use this knowledge in any of your own work and you are welcome to quote from the questions or the answers if it helps. Do please let me know if you ask your own telling questions of your friendly artificial 'intelligence' and what answers you get.
This exercise has provided me with considerable research material for further discussion of AI topics, including one that I have alluded to in other articles: Is AI racist, homophobic or otherwise discriminatory?
Thanks for reading
Ray
About the Creator
Raymond G. Taylor
Author living in Kent, England. Writer of short stories and poems in a wide range of genres, forms and styles. A non-fiction writer for 40+ years. Subjects include art, history, science, business, law, and the human condition.
Comments (6)
Nice article, Ray! I found myself completely absorbed in the AI's responses. Although coherent, parts of them did feel robotic. I'll have to think of a good question to ask it. If it yields an interesting answer, I'll come back and share it here.
Fascinating article! I have all sorts of chats with ChatGPT about things - I wrote an article about it recently too - https://shopping-feedback.today/bookclub/the-flying-machine-ai-and-the-fear-of-progress - I find that ChatGPT remembers your conversations and will refer back to topics we've discussed in new threads.
This is very informative!! Fascinating experiment(s). For me, I'm reading this story at the end of a very long day where I experimented with A.I. image generation for the very first time. All the "free" ones allow a limited number of free images, so I made my way through several. I was generating images all day long. For my story where robots do Shakespeare, I was trying to spice it up a bit. I just wanted a robot performing Shakespeare on a stage using grand gestures. Over and over I changed the wording. This particular program even had a negative box-- what I wanted to leave out. I eventually found a pic I could live with after multiple tries, but nothing at all like a "performance". Basically just a robot standing stiff on a stage. Then I wanted an audience of robots. At first, A.I. was so confused, it gave me nothing but empty seats! 🤣😂. Eventually I found a picture I could live with. So then.... OMG! I simply wanted robots torturing factory workers. 🤣😂 Specifically, I asked for a factory with a conveyor belt with metal parts on it and human workers being driven by robots with bullwhips. A.I. was apparently insulted... or maybe embarrassed by that. I took out the factory setting and the conveyor belt and simply asked for robots using bullwhips on humans. How hard is that? A.I. did not compute. All it could come up with was robots with light sabers and not a human anywhere to be seen. I gave up on robots with bullwhips. A.I. seems either skittish or pissed by the whole concept. So THEN I wanted my robot audience to applaud!!! No way in hell. 7 or 8 attempts. Robots don't clap. They just sit there. And what was really enlightening? I tried to make a simple sign saying ACCESS DENIED. It came out in what I *think* was German. This foreign language sorta kinda looked like the meme you posted!! I kept changing the wording of prompts over and over. For this one I even wrote "a screen saying 'ACCESS DENIED' written in English." It came out in German. That one defeated me. I couldn't find anything I could live with. My readers, I assume, prefer English, just like I do. It took me hours. I uploaded some of those photos to my story. So THEN I moved on and found an AMAZING generator. Not only did it easily give me ACCESS DENIED, it gave me an applauding robot audience, and it even gave me a robot performer dressed like Caesar!!! I had saved probably 20-25 pictures of robots I wasn't that happy with... only to create so many pics that I WAS happy with, I used way more in my story than I originally intended! I could go on but I'll spare you. I might write a story about my experience. I ran into generators that were awful and others that were fantastic. I have a smut story published in the Filthy community where two people with superpowers meet-- an older man and a younger woman. When I put the word nude in my prompt on a generator, the asses generated looked so real-- they'd surely get me banned from Vocal. They looked REAL enough to make a lesser man horny. I can imagine a story where countries with shitty A.I. are overcome by countries where the A.I. is more sophisticated. Lots of fodder in this story for Sci Fi. A.I. can't be intuitive... and intuition, IMO, is like our spiritual connection to the Universe/God/Something Higher. Robots need to establish links to be connected. We don't, IMO. All Humanity is Connected. Nothing ultimately to fear from another machine. Thanks for the great story. Apologies for the long response.
Hahahahaha you know what, this has given me a mischievous idea to ask stupid questions to ChatGPT! I wanna see how long it would tolerate my nuisance 🤣🤣🤣🤣🤣🤣🤣🤣
Well-wrought! (Your part, not the AI). I have to point out another mistake the AI made, which humans also frequently make: "Empathy and social intelligence: We understand and respond to the emotions of others." Caveat: We MISUNDERSTAND and respond to the emotions of others more often than not. Were it otherwise, we should have the world peace everyone claims to want. This is a cynical take on my part, yes, but not a pessimistic one. I like the idea of a peaceful world. However, I don't think it will ultimately come about as a result of natural empathy, though I can understand why people should like to believe it so. For there are human beings out there who lack what we call empathy, but do not lack the ability to manipulate others by mimicking or appealing to it. If AI ever does learn to do this as well as a high-functioning psychopath, well... that would be the highest functioning psychopath ever to exist. "I'm sorry, Dave. I'm afraid I can't do that."
I continue to use the double spacing as I find it easier to read. My wife uses single spacing after periods because now that is the convention. Interesting article, Raymond, with even more interesting observations.