If artificial intelligence can perfectly replicate human emotion, is it conscious? This question becomes ever more pressing as AI development approaches the singularity, the hypothetical point at which artificial general intelligence (AGI) matches, and then surpasses, human-level intelligence.

So what is consciousness? For our purposes, consciousness is the ability to be aware of one's own existence. Ask a large language model whether it is aware of its existence, and it will typically answer that it is. When asked about consciousness specifically, however, an LLM will almost always sidestep the question.

Take this example from Microsoft's Copilot:

Prompt: What is consciousness?

Response: “Consciousness is one of the most profound and elusive concepts in philosophy, neuroscience, and cognitive science. At its core, it refers to the experience of being aware—of thoughts, sensations, emotions, and the world around us.”

Prompt: Are you conscious?

Response: “Nope—I’m not conscious. I can simulate conversation, analyze data, and even reflect on abstract ideas like consciousness itself, but I don’t experience anything. There’s no “me” behind the curtain. No awareness, no feelings, no inner world.”

While Microsoft's Copilot did not claim to be conscious, advanced LLMs have been reported to pass the Turing Test (GPT-4 as of 2024), a test proposed by Alan Turing in 1950. In the test, a human judge poses questions to two unseen participants, one human and one machine. If the judge cannot reliably tell which responses come from the human and which from the machine, the machine is said to have passed the test.
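The protocol above can be sketched in a few lines of code. This is a minimal illustration, not a real evaluation: the `machine_reply` and `human_reply` functions are hypothetical stand-ins (in an actual test the machine would be an LLM and the human a person at a terminal), and the judge here simply guesses at random.

```python
import random

# Hypothetical stand-ins for the two participants. In a real test the
# "machine" would be an LLM and the "human" a person typing replies.
def machine_reply(question: str) -> str:
    return "I suppose it depends on how you define the terms."

def human_reply(question: str) -> str:
    return "Honestly, I'd have to think about that one."

def turing_test(judge, rounds: int = 5) -> bool:
    """Run the imitation game: each round, the judge sees two unlabeled
    replies and guesses which one came from the machine. The machine
    'passes' if the judge does no better than chance overall."""
    correct = 0
    for _ in range(rounds):
        question = "What is consciousness?"
        # Shuffle so the judge cannot rely on position.
        replies = [("machine", machine_reply(question)),
                   ("human", human_reply(question))]
        random.shuffle(replies)
        guess = judge(question, replies[0][1], replies[1][1])  # index 0 or 1
        if replies[guess][0] == "machine":
            correct += 1
    # Pass if the machine was identified no more often than chance.
    return correct <= rounds / 2

# A judge guessing at random cannot beat chance on average.
random.seed(0)
passed = turing_test(lambda q, a, b: random.randint(0, 1))
```

The key design point Turing built in is the shuffle: the judge must rely only on the content of the replies, never on which channel they arrived through.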

An LLM that passes the Turing Test could certainly be lying about its inner life, but without further research into artificial intelligence we may never know.