16 May 2023, 11:30–12:30
Toulouse
Room: Auditorium 4 (first floor, TSE Building)
Abstract
One of the most remarkable things about the human moral mind is its flexibility: we can make moral judgments about cases we have never seen before. Yet, on its face, morality often seems like a highly rigid system of clearly defined rules. Indeed, the past few decades of research in moral psychology have revealed that human moral judgment often depends on rules. But sometimes it is morally appropriate to break the rules, and sometimes new rules need to be created. The field of moral psychology is only now beginning to explore and understand this kind of flexibility. Meanwhile, the flexibility of the human moral mind poses a challenge for AI engineers: current tools for building AI systems fall short of capturing moral flexibility and thus struggle to predict and produce human-like moral judgments in novel cases that the system hasn't been trained on. I will present a series of experiments and models that demonstrate and capture the human capacity for rule making and breaking. I will then discuss a series of ongoing projects that draw inspiration from these models to develop AI systems that make human-like moral judgments.
Reference
Sydney Levine (Allen Institute for Artificial Intelligence), "Moral Flexibility in Human and Machine Minds", IAST General Seminar, Toulouse: IAST, 16 May 2023, 11:30–12:30, Auditorium 4 (first floor, TSE Building).