Self-Aware Effective Identification and Response to Viral Cyber Threats

Baroni P.; Cerutti F.; Fogli D.; Giacomin M.; Gringoli F.; Guida G.
2021-01-01

Abstract

Artificial intelligence (AI) techniques can significantly improve cyber security operations if tasks and responsibilities are effectively shared between human and machine. AI techniques excel at some situational understanding tasks, for instance classifying intrusions. However, existing AI systems are often overconfident in their classifications, which reduces the trust of human analysts. Furthermore, sophisticated intrusions span long periods of time to reduce their footprint, and each decision to respond to a (suspected) attack can have unintended side effects. In this position paper we show how advanced AI systems that handle uncertainty and encompass expert knowledge can lessen the burden on human analysts. In detail: (1) Effective interaction with the analyst is key to the success of an intelligence support system. This involves two aspects: clear and unambiguous system-analyst communication, only possible if both share the same domain ontology and conceptual framework, and effective interaction, allowing the analyst to query the system for justifications of the reasoning path followed and the results obtained. (2) Uncertainty-aware machine learning and reasoning is an effective method for anomaly detection; it can provide human operators with alternative interpretations of the data together with an accurate assessment of their confidence, which helps reduce misunderstandings and build trust. (3) An event-processing algorithm that includes both a neural and a symbolic layer can help identify attacks spanning long intervals of time that would remain undetected with a purely neural approach. (4) Such a symbolic layer is crucial for the human operator to estimate the appropriateness of possible responses to a suspected attack by considering both the probability that an attack is actually occurring and the impact (intended and unintended) of a given response.
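As a concrete illustration of point (2), the sketch below shows an uncertainty-aware anomaly assessment that reports alternative interpretations of an event together with an explicit confidence estimate, rather than a single hard label. The ensemble-of-detectors setup and the entropy-based uncertainty score are illustrative assumptions for this sketch, not the specific method proposed in the paper.

```python
# Minimal sketch: combine several intrusion detectors and report both the
# ensemble probabilities and a normalised predictive-entropy score, so that
# disagreement between detectors is surfaced instead of hidden behind an
# overconfident label. (Illustrative assumption, not the paper's method.)
import numpy as np

def ensemble_assessment(member_probs, labels=("benign", "intrusion")):
    """member_probs: (n_members, n_classes) class probabilities produced by
    independent detectors for a single observed event."""
    member_probs = np.asarray(member_probs, dtype=float)
    mean_p = member_probs.mean(axis=0)                  # ensemble prediction
    entropy = -np.sum(mean_p * np.log(mean_p + 1e-12))  # predictive entropy
    max_entropy = np.log(len(labels))                   # entropy of a uniform guess
    return {
        "label": labels[int(mean_p.argmax())],
        "probabilities": {lab: round(float(p), 3) for lab, p in zip(labels, mean_p)},
        "uncertainty": round(float(entropy / max_entropy), 3),  # 0 = certain, 1 = no idea
    }

# Three hypothetical detectors disagree on the same flow: the report exposes
# the disagreement instead of silently returning the majority label.
print(ensemble_assessment([[0.9, 0.1], [0.4, 0.6], [0.7, 0.3]]))
```

Here the event comes out as "benign" with probability 0.667 but with uncertainty of roughly 0.92, which is exactly the kind of case the system should escalate to the analyst rather than classify silently.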
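Point (4) can likewise be made concrete with a small worked example: candidate responses are compared by their expected impact, weighing the probability that an attack is actually occurring against the intended and unintended costs of each response. All probabilities, response options, and cost figures below are hypothetical.

```python
# Minimal sketch: rank responses by expected impact. The figures are
# hypothetical; in practice they would come from the detection layer and
# from expert knowledge encoded in the symbolic layer.
p_attack = 0.35            # probability that an attack is actually underway
damage_if_ignored = 100.0  # estimated loss if a real attack runs unchecked

# response: (cost of its side effects, damage remaining even if it is applied)
candidates = {
    "monitor only":       (0.0,  90.0),
    "isolate subnet":     (20.0, 10.0),
    "shut down services": (60.0,  0.0),
}

baseline = p_attack * damage_if_ignored
print(f"{'do nothing':20s} expected cost {baseline:5.1f}")
for name, (side_effect_cost, residual_damage) in candidates.items():
    expected = side_effect_cost + p_attack * residual_damage
    print(f"{name:20s} expected cost {expected:5.1f}")
```

In this toy setting the intermediate response ("isolate subnet", expected cost 23.5) beats both doing nothing (35.0) and the most disruptive option (60.0): the trade-off between attack probability and unintended impact is made explicit for the operator.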
ISBN: 978-9916-9565-5-7

Use this identifier to cite or link to this document: https://hdl.handle.net/11379/551393

Citations: Scopus 3