Video Content Analysis Using Multimodal Information: For Movie Content Extraction, Indexing and Representation / Edition 1
Barnes and Noble
Current price: $109.99
*Product Information may vary - to confirm product availability, pricing, and additional information please contact Barnes and Noble
Video Content Analysis Using Multimodal Information: For Movie Content Extraction, Indexing and Representation covers content-based multimedia analysis, indexing, representation and applications, with a focus on feature films. It presents state-of-the-art techniques in the video content analysis domain, as well as many novel ideas and algorithms for movie content analysis based on the use of multimodal information.
The authors employ multiple media cues such as audio, visual and face information to bridge the gap between low-level audiovisual features and high-level video semantics. Based on sophisticated audio and visual content processing, such as video segmentation and audio classification, the original video is re-represented as a set of semantic video scenes or events, where each event is further classified as a two-speaker dialog, a multiple-speaker dialog, or a hybrid event. Moreover, desired speakers are simultaneously identified from the video stream using either a supervised or an adaptive speaker identification scheme. All of this information is then integrated to build the video's ToC (table of contents) as well as its index table. Finally, a video abstraction system, which can generate either a scene-based summary or an event-based skim, is presented by exploiting knowledge of both video semantics and video production rules.
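As a rough illustration of the scene/event organization the blurb describes, the sketch below (Python, with hypothetical class and field names that are not taken from the book) shows one way the resulting ToC and speaker index table could be modeled once events have been classified as two-speaker dialogs, multiple-speaker dialogs, or hybrid events.

```python
# Minimal, hypothetical sketch of a scene/event representation; names are
# illustrative only and do not reflect the authors' actual code.
from dataclasses import dataclass, field
from enum import Enum
from typing import Dict, List


class EventType(Enum):
    TWO_SPEAKER_DIALOG = "2-speaker dialog"
    MULTI_SPEAKER_DIALOG = "multiple-speaker dialog"
    HYBRID = "hybrid"


@dataclass
class Event:
    start_sec: float        # event boundary from video segmentation
    end_sec: float
    event_type: EventType   # result of event classification
    speakers: List[str]     # labels from (supervised or adaptive) speaker ID


@dataclass
class Scene:
    title: str
    events: List[Event] = field(default_factory=list)


def build_toc(scenes: List[Scene]) -> List[str]:
    """Render a simple table of contents: one entry per scene and event."""
    toc = []
    for i, scene in enumerate(scenes, 1):
        toc.append(f"Scene {i}: {scene.title}")
        for ev in scene.events:
            toc.append(f"  {ev.start_sec:.0f}-{ev.end_sec:.0f}s "
                       f"{ev.event_type.value} ({', '.join(ev.speakers)})")
    return toc


def build_index(scenes: List[Scene]) -> Dict[str, List[Event]]:
    """Index table: map each identified speaker to the events they appear in."""
    index: Dict[str, List[Event]] = {}
    for scene in scenes:
        for ev in scene.events:
            for spk in ev.speakers:
                index.setdefault(spk, []).append(ev)
    return index
```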
This monograph will be of great interest to research scientists and graduate-level students working in the area of content-based multimedia analysis, indexing, representation and applications, as well as related fields.