Deep learning-based hand gesture recognition for collaborative robots
Nuzzi, Cristina (Methodology); Pasinetti, Simone (Validation); Lancini, Matteo (Collaboration Group Member); Docchio, Franco (Collaboration Group Member); Sansoni, Giovanna (Supervision)
2019-01-01
Abstract
This paper is a first step towards a smart hand-gesture recognition setup for collaborative robots, using a Faster R-CNN object detector to locate the hands accurately in RGB images. In this work, a gesture is defined as a combination of two hands, where one acts as an anchor and the other encodes the command for the robot. Additional spatial requirements are used to improve the model's performance and to filter out incorrect predictions made by the detector. As a first step, we use only four gestures.

Files in this record:
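The abstract describes pairing a detected anchor hand with a detected command hand and rejecting detections that violate spatial requirements. A minimal sketch of that pairing logic is shown below; the function name, input format, distance threshold, and score threshold are all illustrative assumptions, not the authors' implementation.

```python
# Illustrative sketch: pair hand detections into an (anchor, command)
# gesture and filter out implausible predictions with a simple spatial
# constraint. All names and thresholds are assumptions, not the paper's
# actual method.

def pair_gesture(detections, max_distance=300.0, min_score=0.5):
    """Return the best (anchor, command) detection pair, or None.

    `detections` is a list of dicts with keys:
      'label' -- class predicted by the detector ('anchor' or a command)
      'box'   -- (x1, y1, x2, y2) bounding box in image pixels
      'score' -- detector confidence in [0, 1]
    """
    def center(box):
        x1, y1, x2, y2 = box
        return ((x1 + x2) / 2.0, (y1 + y2) / 2.0)

    # Discard low-confidence predictions made by the detector.
    kept = [d for d in detections if d['score'] >= min_score]
    anchors = [d for d in kept if d['label'] == 'anchor']
    commands = [d for d in kept if d['label'] != 'anchor']

    best = None
    for a in anchors:
        ax, ay = center(a['box'])
        for c in commands:
            cx, cy = center(c['box'])
            dist = ((ax - cx) ** 2 + (ay - cy) ** 2) ** 0.5
            # Spatial requirement: the two hands must be close enough,
            # and among valid pairs the closest one wins.
            if dist <= max_distance and (best is None or dist < best[0]):
                best = (dist, a, c)
    return None if best is None else (best[1], best[2])
```

In this sketch a frame with an anchor hand near a command hand yields that pair, while a frame with no anchor, or with the hands too far apart, yields no gesture at all, mirroring the paper's idea of filtering incorrect detections through spatial constraints.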
File | Description | Type | License | Size | Format | Access
---|---|---|---|---|---|---
08674634.pdf | Main article | Full Text | DRM not defined | 1.29 MB | Adobe PDF | Open access (View/Open)
Documents in IRIS are protected by copyright, and all rights are reserved unless otherwise indicated.