New Study Confirms AI Is Not Conscious
February 23, 2026
Research examines claims of AI consciousness using human brain activity tests
Recent research conducted by the University of Bradford in collaboration with the Rochester Institute of Technology (RIT) has found that artificial intelligence (AI) systems are not conscious, even when they exhibit behaviours that may suggest otherwise. The study applied scientific methods traditionally used to assess consciousness in humans to AI models, including large language models similar to ChatGPT, and concluded that AI lacks true awareness.
Testing AI with human consciousness measures
Consciousness in humans is associated with specific patterns of brain activity, characterised by coordinated interactions across different brain regions. These patterns change measurably depending on whether a person is awake, dreaming, or unconscious. The research team developed a mathematical approach capable of distinguishing these brain states in humans and applied the same method to AI systems.
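The article does not specify the study's exact mathematics, but a standard family of measures in consciousness science quantifies the compressibility of activity patterns, for example Lempel-Ziv complexity applied to a binarised signal. The sketch below is an illustration of that general idea, not the researchers' actual method: a highly regular signal parses into few distinct phrases, while a richer signal parses into more.

```python
import statistics

def lz_complexity(sequence: str) -> int:
    """Count distinct phrases in a simple LZ78-style parse of a string.
    Regular sequences yield low counts; irregular ones yield higher counts."""
    phrases = set()
    phrase = ""
    for symbol in sequence:
        phrase += symbol
        if phrase not in phrases:
            phrases.add(phrase)
            phrase = ""
    # Count any unfinished trailing phrase as one more.
    return len(phrases) + (1 if phrase else 0)

def binarize(signal: list[float]) -> str:
    """Threshold a numeric signal at its median, a common preprocessing
    step before computing Lempel-Ziv complexity on recorded activity."""
    med = statistics.median(signal)
    return "".join("1" if x > med else "0" for x in signal)
```

A flat signal such as `"00000000"` scores lower than a varied one such as `"01101000"`, which is the sense in which such measures detect "complex activity"; as the study notes, that is not the same thing as detecting consciousness.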
The researchers tested the GPT-2 language model by deliberately altering its internal structure, removing key components responsible for prioritising information, and adjusting a parameter known as “temperature,” which influences the randomness of the AI’s responses. Unexpectedly, under certain conditions, the AI’s “consciousness-style” score increased after being impaired, despite a clear decline in the quality of its output. In other scenarios, the score remained unchanged or decreased.
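The temperature parameter mentioned above works by rescaling a model's output logits before they are converted to probabilities: higher temperatures flatten the distribution (more random token choices), lower temperatures sharpen it. A minimal sketch of that mechanism, independent of any particular model:

```python
import math

def softmax_with_temperature(logits: list[float], temperature: float = 1.0) -> list[float]:
    """Convert logits to a probability distribution over tokens.
    Dividing by a higher temperature flattens the distribution,
    making sampling more random; a lower temperature sharpens it."""
    scaled = [l / temperature for l in logits]
    m = max(scaled)  # subtract the max for numerical stability
    exps = [math.exp(s - m) for s in scaled]
    z = sum(exps)
    return [e / z for e in exps]
```

For example, with logits `[2.0, 1.0, 0.0]`, a low temperature concentrates nearly all probability on the first token, while a high temperature spreads it almost evenly across all three, changing output behaviour without the model "knowing" anything has changed.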
Complexity does not equate to consciousness
Professor Hassan Ugail from the University of Bradford explained that these findings highlight a common misconception about AI. While the measures used are effective at detecting complex activity, complexity itself is not the same as consciousness. The AI sometimes appeared more “conscious-like” when it was actually malfunctioning.
He compared this to a football team playing with fewer players, which might show more frantic coordination but perform worse overall. Similarly, increased activity in AI does not necessarily indicate awareness.
Co-author Professor Newton Howard, a cognitive scientist at RIT, emphasised the implications for AI interpretation and regulation. He noted that complexity metrics which distinguish conscious from unconscious states in the human brain behave differently in artificial systems, sometimes increasing even as AI performance deteriorates. This challenges simplistic narratives about AI becoming self-aware.
Implications for AI development and regulation
- The study advises caution regarding claims that AI systems are developing consciousness or self-awareness.
- Mathematical patterns linked to consciousness in humans can be manipulated in AI by changing operational settings, making them unreliable indicators of awareness.
- The methods used may help engineers monitor AI system performance and detect when systems begin to malfunction.
- Findings could inform future AI safety measures and regulatory frameworks.
The researchers stress that conscious machines remain a distant prospect. As Professor Ugail stated, complex behaviour in AI does not imply the presence of a mind, and it is important to distinguish between the two to avoid misunderstandings.




































