Scene Break Detection: A Comparison

LEONARDI Riccardo
Supervision
1998-01-01

Abstract

The automatic organization of video databases according to the semantic content of the data is a key aspect for efficient indexing and fast retrieval of audio-visual material. In order to generate indices that can be used to access a video database, a description of each video sequence is necessary. The identification of objects present in a frame and the tracking of their motion and interaction in space and time is attractive but not yet very robust. For this reason, since the early 90's, attempts have been made to segment a video into shots. For each shot a representative frame, called k-frame, is usually chosen, and the video can then be analysed through its k-frames. While abrupt scene changes are relatively easy to detect, it is more difficult to identify special effects, such as dissolves, that are applied at the editing stage to merge two shots. Unfortunately, these special effects are normally used to stress the importance of the scene change (from a content point of view), so they are highly relevant and should not be missed. Besides, in the case of dissolves and fades it is very important to determine precisely the beginning and the end of the transition. In this work, two new parameters are proposed that characterize the precision of the boundaries of special effects when the scene change involves more than two frames. They are combined with the common recall and precision parameters. Three algorithms for cut detection are considered: histogram-based, motion-based and contour-based. These algorithms are tested and compared on several video sequences. Results show that the best performance is achieved by the global histogram-based method, which uses color information.
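The abstract only names the competing detectors; as an illustration, a global color-histogram cut detector of the kind found to perform best can be sketched as follows. This is a minimal sketch, assuming OpenCV (cv2) for frame decoding and an L1 distance between normalized global histograms; the function name detect_cuts, the bin count and the threshold are illustrative choices, not taken from the paper.

import numpy as np
import cv2  # assumed available for video decoding


def detect_cuts(video_path, threshold=0.4, bins=16):
    """Return frame indices where the global color-histogram distance
    to the previous frame exceeds `threshold` (illustrative value)."""
    cap = cv2.VideoCapture(video_path)
    prev_hist = None
    cuts = []
    frame_idx = 0
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        # Global 3-D color histogram, normalized so frame-to-frame
        # distances are comparable regardless of resolution
        hist = cv2.calcHist([frame], [0, 1, 2], None,
                            [bins] * 3, [0, 256] * 3)
        hist = cv2.normalize(hist, hist).flatten()
        if prev_hist is not None:
            # A large jump in the histogram difference suggests an abrupt cut
            dist = float(np.abs(hist - prev_hist).sum())
            if dist > threshold:
                cuts.append(frame_idx)
        prev_hist = hist
        frame_idx += 1
    cap.release()
    return cuts

An evaluation along the lines described in the abstract would compare the returned indices against manually annotated shot boundaries, computing recall (detected true boundaries over all true boundaries) and precision (detected true boundaries over all detections), together with the two proposed parameters for the boundary accuracy of gradual transitions such as dissolves and fades.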
1998
ISBN: 0818683899
Files in this record:
File: LSL_WRIDE-1998_post-print.pdf
Access: open access
Description: LSL_WRIDE-1998_post-print
Type: Post-print document
License: Creative Commons
Size: 163.65 kB
Format: Adobe PDF

Use this identifier to cite or link to this document: https://hdl.handle.net/11379/34476

Citations
  • Scopus: 77
  • ISI (Web of Science): 57