Semantic Indexing of Soccer Audio-Visual Sequences: A Multimodal Approach Based on Controlled Markov Chains
Leonardi, Riccardo; Migliorati, Pierangelo; Prandini, Maria
2004-01-01
Abstract
Content characterization of sport videos is a subject of great interest to researchers working on the analysis of multimedia documents. In this paper, we propose a semantic indexing algorithm which uses both audio and visual information for salient event detection in soccer. The video signal is processed first by extracting low-level visual descriptors directly from an MPEG-2 bitstream. It is assumed that any instance of an event of interest typically affects two consecutive shots and is characterized by a different temporal evolution of the visual descriptors in the two shots. This motivates the introduction of a controlled Markov chain to describe such evolution during an event of interest, with the control input modeling the occurrence of a shot transition. After adequately training different controlled Markov chain models, a list of video segments can be extracted to represent a specific event of interest using the maximum likelihood criterion. To reduce the presence of false alarms, low-level audio descriptors are processed to order the candidate video segments in the list so that those associated with the event of interest are likely to be found in the very first positions. We focus in particular on goal detection, which represents a key event in a soccer game, using camera motion information as a visual cue and the "loudness" as an audio descriptor. The experimental results show the effectiveness of the proposed multimodal approach.
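The abstract's detection scheme can be illustrated with a toy sketch: a controlled Markov chain over quantized visual-descriptor states, where the control input selects a different transition matrix when a shot cut occurs, and candidate segments are classified by the maximum likelihood criterion. This is a minimal illustration under assumed names and toy parameters, not the authors' implementation; the state alphabet, the two event models, and all probabilities are invented for the example.

```python
import numpy as np

def log_likelihood(states, controls, pi, P):
    """Log-likelihood of a state sequence under a controlled Markov chain.

    states:   quantized descriptor states s_0..s_T (toy values here)
    controls: control inputs u_1..u_T (0 = same shot, 1 = shot transition)
    pi:       initial state distribution
    P:        dict mapping control value -> transition matrix
    """
    ll = np.log(pi[states[0]])
    for t in range(1, len(states)):
        # The control input chooses which transition matrix governs this step.
        ll += np.log(P[controls[t - 1]][states[t - 1], states[t]])
    return ll

# Toy 2-state models for two hypothetical event classes ("goal" vs "other").
pi = np.array([0.5, 0.5])
goal_model = {0: np.array([[0.9, 0.1], [0.2, 0.8]]),
              1: np.array([[0.3, 0.7], [0.6, 0.4]])}
other_model = {0: np.array([[0.6, 0.4], [0.5, 0.5]]),
               1: np.array([[0.5, 0.5], [0.5, 0.5]])}

states = [0, 0, 1, 1, 0]   # quantized camera-motion states (illustrative)
controls = [0, 0, 1, 0]    # one shot cut between the 2nd and 3rd samples

# Maximum likelihood criterion: score the segment under each event model.
scores = {"goal": log_likelihood(states, controls, pi, goal_model),
          "other": log_likelihood(states, controls, pi, other_model)}
best = max(scores, key=scores.get)
```

In the paper's pipeline such likelihood scores would produce a ranked list of candidate segments per event class, subsequently reordered by an audio "loudness" descriptor to push true events toward the top of the list.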
File | Description | Type | License | Size | Format
---|---|---|---|---|---
LMP_CSVT May-2004_full-text.pdf | LMP_CSVT May-2004_full-text | Full Text | Non-public (restricted access, authorized users only) | 295.38 kB | Adobe PDF
14tcsvt05-leonardi-proof.pdf | LMP_CSVT May-2004_post-print | Post-print (open access) | Creative Commons | 371.97 kB | Adobe PDF
Documents in IRIS are protected by copyright and all rights are reserved, unless otherwise indicated.