Early Insights into Argumentation-Guided Causal Evaluation with the Help of LLMs
Cerutti F.; Giacomin M.; Lamperti G. F.; Zanella M.
2025-01-01
Abstract
The rapid growth of Deep Neural Networks (DNNs) has brought substantial advances in artificial intelligence across domains such as vision, language, and recommendation systems. However, this progress comes at a steep energy cost, with model training and deployment contributing significantly to global computational energy consumption. Understanding what drives this energy demand requires more than empirical correlation: it demands causal explanations. In this work, we investigate the causal factors underlying energy use in DNN training, using structure learning algorithms such as the PC algorithm to derive candidate causal graphs. Recognising the limitations of such methods, particularly their reliance on strong assumptions and finite data, we introduce a novel approach to evaluate each inferred link through formal argumentation. We treat each proposed causal relationship as a dialectical object, generating arguments and counterarguments that articulate its plausibility, underlying mechanisms, and possible confounders. We operationalise this reasoning using large language models in a zero-shot prompting setup, surfacing the evidential and conceptual assumptions behind each causal claim. This hybrid approach, combining causal discovery with structured argumentative evaluation, promotes interpretability and critical scrutiny in data-driven causal modelling. Preliminary results demonstrate its potential for rendering causal claims more transparent and contestable.
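
For context, a minimal sketch of the discovery stage described above, assuming the open-source causal-learn package; the dataset file and column names are illustrative assumptions, not the paper's actual variables:

# Sketch: derive a candidate causal graph over DNN-training variables
# with the PC algorithm. Assumes the causal-learn package
# (pip install causal-learn); the CSV file and columns are hypothetical.
import numpy as np
from causallearn.search.ConstraintBased.PC import pc

# Hypothetical dataset: one row per training run, columns such as
# [batch_size, model_params, epochs, gpu_power_draw, energy_kwh].
data = np.loadtxt("training_runs.csv", delimiter=",", skiprows=1)

# Run PC with Fisher-z conditional-independence tests at alpha = 0.05.
cg = pc(data, alpha=0.05, indep_test="fisherz")

# Each discovered edge is only a candidate causal link; the second stage
# subjects it to argumentative scrutiny.
for edge in cg.G.get_graph_edges():
    print(edge)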
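The zero-shot argumentative evaluation of a single inferred link could then take a form like the following sketch; the prompt template and the edge under scrutiny are hypothetical, not the authors' actual prompt:

# Sketch: frame one candidate causal link as a dialectical object and
# build a zero-shot prompt asking an LLM for arguments, counterarguments,
# and possible confounders. The template wording is hypothetical.

PROMPT_TEMPLATE = """You are evaluating a candidate causal claim
discovered by the PC algorithm from logs of DNN training runs.

Claim: '{cause}' causally influences '{effect}'.

1. Give the strongest argument supporting this claim, including a
   plausible mechanism.
2. Give the strongest counterargument, including possible confounders
   that could explain the observed dependence instead.
3. State the evidential and conceptual assumptions each side relies on.
"""

def build_prompt(cause: str, effect: str) -> str:
    return PROMPT_TEMPLATE.format(cause=cause, effect=effect)

if __name__ == "__main__":
    # Hypothetical edge from the discovery stage.
    print(build_prompt("batch_size", "energy_kwh"))
    # The resulting text would be sent unchanged (zero-shot) to any
    # chat-style LLM endpoint.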


