When Is It Acceptable to Break the Rules? Knowledge Representation of Moral Judgements Based on Empirical Data (Extended Abstract)
Loreggia A.
2025-01-01
Abstract
This paper explores how humans make contextual moral judgments in order to inform the development of AI systems capable of balancing rule-following with flexibility. We examine the limitations of rigid constraints in AI, which can block morally acceptable actions in specific contexts, whereas humans can override rules when appropriate. We propose a preference-based graphical model inspired by dual-process theories of moral judgment and conduct a study of human decisions about breaking the social norm of "no cutting in line." Our model outperforms standard machine learning methods in predicting human judgments and offers a generalizable framework for modeling moral decision-making across contexts. This short paper summarizes the main findings of our paper published in the journal Autonomous Agents and Multi-Agent Systems [2].
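
To give a concrete flavor of what a preference-based model of contextual norm-breaking might look like, the following is a minimal illustrative sketch, not the paper's actual model: the context features (reason for cutting, delay imposed on others), their values, and the conditional preference table are all hypothetical assumptions chosen here for illustration.

```python
# Toy conditional-preference model in the spirit of a preference-based
# graphical approach to contextual moral judgment.
# All feature names, values, and table entries below are HYPOTHETICAL,
# not taken from the published study.

from dataclasses import dataclass


@dataclass(frozen=True)
class Context:
    reason: str            # hypothetical: "medical_emergency", "running_late", "no_reason"
    delay_to_others: str   # hypothetical: "none", "minor", "major"


# Hypothetical conditional preference table: given the context,
# is breaking the "no cutting in line" norm preferred to complying?
PREFERS_BREAKING = {
    ("medical_emergency", "none"): True,
    ("medical_emergency", "minor"): True,
    ("medical_emergency", "major"): True,
    ("running_late", "none"): True,
    ("running_late", "minor"): False,
    ("running_late", "major"): False,
    ("no_reason", "none"): False,
    ("no_reason", "minor"): False,
    ("no_reason", "major"): False,
}


def acceptable_to_cut(ctx: Context) -> bool:
    """Return whether cutting in line is judged acceptable in this context."""
    return PREFERS_BREAKING[(ctx.reason, ctx.delay_to_others)]


if __name__ == "__main__":
    print(acceptable_to_cut(Context("medical_emergency", "major")))  # True
    print(acceptable_to_cut(Context("no_reason", "none")))           # False
```

The point of the sketch is only that conditioning the rule-break/comply preference on context features, rather than encoding "never cut in line" as a hard constraint, is what lets such a model reproduce the flexibility of human judgments.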


