Using Surrogate Models and Variable Importance to Better Understand Random Forests Regression Fitting

Manlio Migliorati; Anna Simonetto
2023-01-01

Abstract

Interpretability mechanisms that help users better understand machine learning models are crucial for the acceptance of Artificial Intelligence. This manuscript reports our experience with interpreting random forest regression via surrogate models, i.e., models that try to replicate, in an interpretable framework, an original fit that is difficult to understand. It shows how, beyond classical R² analysis, the adequacy of surrogate models can be assessed via variable importance analysis.
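
The following is a minimal sketch of the workflow the abstract describes, assuming scikit-learn on synthetic data: a random forest is fitted, an interpretable surrogate (here a shallow decision tree, an illustrative choice rather than the authors' actual model) is trained to mimic the forest's predictions, fidelity is checked via R², and the variable importance rankings of the two models are compared. All parameters are assumptions for illustration.

# Sketch of surrogate-model interpretation of a random forest; all
# choices (data, surrogate type, hyperparameters) are illustrative.
import numpy as np
from sklearn.datasets import make_regression
from sklearn.ensemble import RandomForestRegressor
from sklearn.tree import DecisionTreeRegressor
from sklearn.metrics import r2_score

X, y = make_regression(n_samples=1000, n_features=8, noise=10.0, random_state=0)

# Black-box model: random forest regression.
rf = RandomForestRegressor(n_estimators=500, random_state=0).fit(X, y)

# Surrogate: an interpretable model trained on the forest's predictions,
# so it approximates the fitted forest rather than the raw data.
rf_pred = rf.predict(X)
surrogate = DecisionTreeRegressor(max_depth=4, random_state=0).fit(X, rf_pred)

# Classical adequacy check: R² of the surrogate against the forest's output.
print("Surrogate fidelity R²:", r2_score(rf_pred, surrogate.predict(X)))

# Additional adequacy check: do the two models rank variables similarly?
for name, imp in [("forest", rf.feature_importances_),
                  ("surrogate", surrogate.feature_importances_)]:
    print(name, "importance ranking:", np.argsort(imp)[::-1])

A surrogate with high R² but a markedly different importance ranking would suggest it mimics the forest's outputs without capturing which variables drive them, which is why the abstract proposes importance analysis as a complement to R².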
Files in this item:
Simonetto - Surrogate models.pdf (open access; license: Public Domain; 9.58 kB; Adobe PDF)
Documents in IRIS are protected by copyright and all rights are reserved, unless otherwise indicated.

Use this identifier to cite or link to this document: https://hdl.handle.net/11379/589745
