Supporting Trustworthy Artificial Intelligence via Bayesian Argumentation
Federico Cerutti
2022-01-01
Abstract
This paper introduces argumentative-generative models for statistical learning, i.e., generative statistical models seen from a Bayesian argumentation perspective, and shows how they support trustworthy artificial intelligence (AI). Generative Bayesian approaches already show great promise for achieving robustness against adversarial attacks, a fundamental component of trustworthy AI. This paper shows how Bayesian argumentation can help us achieve transparent assessments of epistemic uncertainty and testability of models, two necessary ingredients for trustworthy AI. We also discuss the limitations of this approach, notably those traditionally linked to Bayesian methods.