When data lie: Fairness and robustness in contested environments

Cerutti F.
2018-01-01

Abstract

Many important decisions historically made by humans are now being made by algorithms - often learnt from data - whose accountability measures and legal standards are far from satisfactory. While model transparency is important, it is neither necessary nor sufficient; accountability is arguably more important. However, accountability needs to take careful account of the weaknesses of the original data as well as the weaknesses of the model itself: indeed, robust datasets enable model robustness, and vice versa. In this paper we focus on unfair datasets as an example of such weaknesses. Fairness directly involves privacy concerns, since learning without fairness can emphasize certain features or directions that leak private information: for instance, a model may inadvertently reveal a person's age if age is a discriminating feature in the model's decision making. Moreover, we investigate the robustness of models in the presence of adversarial activity. Indeed, we should strengthen our models by estimating what an adversary will do, based on continuous dynamic learning, mindful of concealment and deception, and with a clear, explainable, insightful summary for the final decision makers. In this paper we discuss how models based on unfair datasets can hardly be robust, and how datasets used by weak models can hardly be fair.
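To make the leakage mechanism above concrete, here is a minimal sketch (not from the paper; the data, the model choice, and the over-40 threshold are all hypothetical) in which age is a strong discriminating feature of the task, so an adversary who observes only the model's output scores can recover it:

```python
# Hypothetical sketch of the attribute-inference leakage described above:
# when a protected attribute (age) is a strong discriminating feature, an
# adversary who sees only the model's output scores can recover it.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 2000

# Synthetic population: the task label depends heavily on age (plus noise).
age = rng.integers(18, 70, size=n)
other = rng.normal(size=n)
over_40 = (age > 40).astype(int)
label = over_40 ^ (rng.random(n) < 0.1)  # 10% label noise

X = np.column_stack([age, other])
task_model = LogisticRegression().fit(X, label)

# Attribute-inference attack: fit a second model that maps the task model's
# scores back to the protected attribute.
scores = task_model.predict_proba(X)[:, 1].reshape(-1, 1)
attacker = LogisticRegression().fit(scores, over_40)
print("age recovered from scores alone, accuracy:",
      attacker.score(scores, over_40))
```

Even though the attacker never sees the age column, its accuracy stays far above chance: the task model's scores act as a proxy for age, which is exactly the kind of private-information leakage the abstract warns about.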
ISBN: 9781510618176; 9781510618183


Use this identifier to cite or link to this document: https://hdl.handle.net/11379/528978

Citations
  • PMC: n/a
  • Scopus: 1
  • Web of Science: 0