Why we may never know if AI is conscious — and why that uncertainty matters

A Cambridge University philosopher argues that scientists may never definitively determine whether artificial intelligence systems possess consciousness, and that this uncertainty poses significant ethical and commercial risks. In a new paper titled “Agnosticism About Artificial Consciousness,” Tom McClelland contends that both optimists who believe machines can achieve consciousness and skeptics who dismiss the possibility are overconfident in their positions.

McClelland argues that current consciousness science, built on the study of biological creatures, hits an “epistemic wall” when applied to silicon-based systems. He advocates “evidentialism,” which requires claims about AI consciousness to be grounded in solid scientific evidence rather than speculation. “We do not have a deep explanation of consciousness,” McClelland explains in the paper. “There is no evidence to suggest that consciousness can emerge with the right computational structure, or indeed that consciousness is essentially biological.”

The philosopher warns that uncertainty about AI consciousness could be exploited by tech companies, which might use consciousness claims as marketing hype to sell advanced capabilities. McClelland also expresses concern that research funding might be diverted to studying AI consciousness when resources could address more tractable questions, such as consciousness in prawns, half a trillion of which are killed annually. In surveys, more than a third of respondents said AI systems “truly understood” their emotions or seemed conscious.

Rather than attempting to build a “consciousness meter” for AI, McClelland suggests focusing on sentience, the capacity for positive or negative experiences. “If neither common sense nor hard-nosed research can give us an answer, the logical position is agnosticism,” McClelland writes. “We cannot, and may never, know.” He argues that this cautious approach could slow AI development, allowing time for better regulation and preventing society from wrongly granting or denying moral consideration to AI systems.
