Visualizing Logic Explanations for Social Media Moderation
Cerutti F.
2023-01-01
Abstract
Autonomous artificial moderators can be useful to monitor social media for content that violates platform policies, but such artificial moderators can be confidently wrong about their decisions. While creating an approach that makes no mistakes is effectively impossible, being able to generate explanations for any given decision can simplify the task of detecting when the system is wrong. In this work we present LiveEvents, a neuro-symbolic agent capable of generating explanations based on which rules have led to its decisions. We deliver these explanations via Cogni-Sketch, which provides users with an interactive visual representation, allowing them to easily understand the explanations given by the system.