“If we give AI freedom, we must also give it moral guidance.”
A new study suggests artificial intelligence (AI) may meet the criteria for free will, sparking debate over responsibility.
Finnish philosopher Frank Martela, an assistant professor at Aalto University, argues that advanced AI systems display goal-directed behavior, genuine alternatives, and control over their actions: the hallmarks of free will.
“Intentions, alternatives, and decision-making are the core of free will,” Martela explains, drawing on theories by Daniel Dennett and Christian List. He warns that as AI systems gain autonomy, as with drones or self-driving cars, moral responsibility may shift from developers to the machines themselves.
“If we give AI freedom, we must also give it moral guidance,” Martela says, stressing that ethical training is essential. A withdrawn ChatGPT update highlighted the risks of flawed AI judgment.
Martela urges developers to embed ethics into AI systems early, noting, “Modern AI is like an adult facing complex moral choices.” His work challenges the notion of AI systems as passive tools and calls for urgent discussion of accountability and governance.
