Mapping the unmapped: deep learning ensembles and novel loss functions for scalable road segmentation from aerial imagery

Loreggia A.
2026-01-01

Abstract

Accurately analyzing road continuity in satellite imagery is a significant challenge, particularly when roads are obstructed by trees, shadows, or structures, which complicates identifying the directional vectors of road sections. Current research focuses mostly on optimizing deep learning architectures, even though segmentation performance depends strongly on the choice of loss function; within this field, however, loss functions designed specifically for road segmentation remain largely underexplored. In this paper, we address these challenges by introducing an ensemble of convolutional neural network architectures for road segmentation, trained with different loss functions to enhance segmentation performance. Our approach leverages the strengths of varied network designs and examines the impact of combining standard loss functions with novel variations inspired by topological analysis. With this methodology, our proposal outperforms state-of-the-art methods across multiple benchmark datasets. Additionally, we provide a detailed investigation of the role of loss functions in road segmentation, emphasizing their contribution to maintaining road continuity in challenging scenarios. This study offers valuable insights into designing loss functions tailored to road segmentation and opens new avenues for future research in this field. Our approach achieves performance comparable to that of current state-of-the-art methods while offering a fully reproducible, openly available implementation on GitHub. Both the code and the information needed to replicate our experiments are available at https://github.com/LorisNanni/Mapping-the-Unmapped-Deep-Learning-Ensembles-and-Novel-Loss-Functions-for-Scalable-Road-Segmentatio, to enable reliable future comparisons.
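The abstract's two central ingredients — combining a standard pixel-wise loss with a region-overlap term, and fusing an ensemble of networks trained under different losses — can be sketched generically. The snippet below is a minimal illustration, not the paper's actual formulation: the choice of BCE plus soft Dice, the mixing weight `alpha`, and simple probability averaging for the ensemble are all assumptions made here for clarity.

```python
import numpy as np

def bce_loss(pred, target, eps=1e-7):
    # Pixel-wise binary cross-entropy on a predicted probability mask.
    pred = np.clip(pred, eps, 1 - eps)
    return -np.mean(target * np.log(pred) + (1 - target) * np.log(1 - pred))

def soft_dice_loss(pred, target, eps=1e-7):
    # 1 - Dice coefficient: a region-overlap term that is less sensitive
    # to the extreme foreground/background imbalance of thin roads.
    inter = np.sum(pred * target)
    return 1.0 - (2.0 * inter + eps) / (np.sum(pred) + np.sum(target) + eps)

def combined_loss(pred, target, alpha=0.5):
    # Illustrative composite: weighted sum of a pixel-wise term and an
    # overlap term (alpha is a hypothetical mixing weight, not from the paper).
    return alpha * bce_loss(pred, target) + (1 - alpha) * soft_dice_loss(pred, target)

def ensemble_mask(prob_maps, threshold=0.5):
    # Fuse ensemble members by averaging their probability maps,
    # then thresholding to a binary road mask.
    return (np.mean(np.stack(prob_maps), axis=0) >= threshold).astype(np.uint8)
```

Averaging probabilities before thresholding lets members trained with different losses compensate for each other's failure modes, e.g. one member recovering a road segment occluded by tree cover that another member missed.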

Use this identifier to cite or link to this item: https://hdl.handle.net/11379/639373
