
Gender and Expression Analysis Based on Semantic Face Segmentation

Khan, Khalil (Software); Mauro, Massimo (Methodology); Migliorati, Pierangelo (Supervision); Leonardi, Riccardo (Conceptualization)

Date: 2017-01-01

Abstract

The automatic estimation of gender and facial expression is an important task in many applications. In this context, we believe that an accurate segmentation of the human face can provide good information about these mid-level features, due to the strong interaction between facial parts and these features. Following this idea, in this paper we present a gender and facial expression estimator based on a semantic segmentation of the human face into six parts. The proposed algorithm proceeds in several steps. Firstly, a database of face images was manually labeled to train a discriminative model. Then, three kinds of features, namely location, shape, and color, were extracted from uniformly sampled square patches. Using the trained model, facial images were then segmented into six semantic classes: hair, skin, nose, eyes, mouth, and background, using a Random Decision Forest (RDF) classifier. In the final step, a linear Support Vector Machine (SVM) classifier was trained for each considered mid-level feature (i.e., gender and expression) using the corresponding probability maps. The performance of the proposed algorithm was evaluated on different face databases, namely FEI and FERET. The simulation results show that the proposed algorithm compares favorably with the state of the art.
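
To make the two-stage pipeline described in the abstract concrete, the following is a minimal Python sketch assuming scikit-learn and scikit-image. The patch size, the use of HOG as the shape descriptor, mean RGB as the color descriptor, and normalized patch coordinates as the location feature are illustrative assumptions, not the authors' exact implementation; the actual feature design and forest parameters are those described in the paper.

```python
# Minimal sketch of the two-stage pipeline: (1) a Random Decision Forest
# segments the face into six classes from patch features, (2) a linear SVM
# predicts gender or expression from the resulting probability maps.
# Feature choices below (HOG, mean RGB, patch size 16) are assumptions.
import numpy as np
from skimage.feature import hog
from sklearn.ensemble import RandomForestClassifier
from sklearn.svm import LinearSVC

PATCH = 16  # hypothetical patch size, not taken from the paper

def patch_features(image, y, x):
    """Location + shape + color features for one square patch (illustrative)."""
    patch = image[y:y + PATCH, x:x + PATCH]
    loc = np.array([y, x], dtype=float) / max(image.shape[:2])   # location
    shape = hog(patch.mean(axis=2), pixels_per_cell=(8, 8),
                cells_per_block=(1, 1))                          # shape (HOG assumed)
    color = patch.reshape(-1, 3).mean(axis=0) / 255.0            # color (mean RGB assumed)
    return np.concatenate([loc, shape, color])

def sample_patches(image, step=PATCH):
    """Uniformly sampled square patch positions over the image grid."""
    h, w = image.shape[:2]
    return [(py, px) for py in range(0, h - PATCH + 1, step)
                     for px in range(0, w - PATCH + 1, step)]

def train_segmenter(images, label_maps):
    """Stage 1: RDF for six-class segmentation (hair, skin, nose, eyes, mouth, background)."""
    X, y = [], []
    for img, labels in zip(images, label_maps):
        for py, px in sample_patches(img):
            X.append(patch_features(img, py, px))
            y.append(labels[py + PATCH // 2, px + PATCH // 2])  # label of patch centre
    rdf = RandomForestClassifier(n_estimators=100)
    rdf.fit(np.array(X), np.array(y))
    return rdf

def probability_map(rdf, image):
    """Per-patch class probabilities, flattened into one feature vector.
    Assumes all face images share the same size so the vector length is fixed."""
    feats = [patch_features(image, py, px) for py, px in sample_patches(image)]
    return rdf.predict_proba(np.array(feats)).ravel()

def train_attribute_svm(rdf, images, attribute_labels):
    """Stage 2: one linear SVM per mid-level attribute (gender or expression)."""
    X = np.array([probability_map(rdf, img) for img in images])
    svm = LinearSVC()
    svm.fit(X, np.array(attribute_labels))
    return svm
```

Note that, as in the abstract, the SVM is fed the segmentation probability maps rather than raw pixels, so the mid-level attribute decision is driven by the layout of the facial parts.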
Year: 2017
ISBN: 9783319685472
File in this product:
KMML_ICIAP-2017_post-print.pdf — Post-print document, Creative Commons license, 1.75 MB, Adobe PDF (restricted access)
Documents in IRIS are protected by copyright and all rights are reserved, unless otherwise indicated.

Use this identifier to cite or link to this document: https://hdl.handle.net/11379/502705
Warning: the displayed data have not been validated by the university.

Citations
  • Scopus: 3
  • Web of Science (ISI): 2