On the use of visual conceptual information for the indexing and retrieval of image regions
Deposited 2017-02-06T06:06:59Z (GMT)
To address the semantic gap, state-of-the-art automatic image annotation frameworks concatenate low-level image features (such as color and texture) into high-dimensional feature vectors in order to learn a set of high-level semantic categories. This research work investigates a multimedia indexing framework that establishes a correspondence between visual conceptual information representing image regions and a set of high-level semantic concepts. The computational models for extracting the visual color, texture, and shape concepts used in this thesis aim to capture aspects of human perception and understanding of the color, texture, and shape of image regions. Through models that map the low-level features into a set of high-level symbolic descriptors, image regions are described in a transparent and readable form. These symbolic descriptors, referred to in this thesis as visual concepts, are then used to learn a set of high-level semantic categories for classifying unannotated image regions. A main contribution is a framework for characterizing visual shape concepts that describe the shapes of image regions. The effectiveness of the visual conceptual information is demonstrated through a comparison with a baseline framework that operates directly on low-level image features to learn the same set of semantic categories. Describing image regions with visual concepts achieves promising results and outperforms the baseline model. The experimental results show that using visual conceptual information to describe image contents achieves the goal of this thesis: narrowing the semantic gap between low-level image features and high-level semantic categories.
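The two-stage pipeline the abstract describes can be illustrated with a minimal sketch: low-level features are first mapped to symbolic visual concepts, and those concepts then drive a semantic classifier. This is not the thesis implementation; the concept prototypes, category definitions, and nearest-prototype mapping below are hypothetical placeholders chosen only to make the idea concrete.

```python
# Illustrative sketch (hypothetical values, not the thesis models):
# stage 1 maps a low-level feature (mean RGB of a region) to a symbolic
# color concept; stage 2 classifies the region from its concept set.

# Hypothetical color-concept prototypes in RGB space.
COLOR_PROTOTYPES = {
    "red":   (200, 40, 40),
    "green": (40, 180, 60),
    "blue":  (50, 60, 200),
    "sand":  (210, 190, 140),
}

def color_concept(mean_rgb):
    """Map a region's mean RGB (low-level feature) to the nearest
    symbolic color concept (high-level, human-readable descriptor)."""
    def sq_dist(proto):
        return sum((a - b) ** 2 for a, b in zip(mean_rgb, proto))
    return min(COLOR_PROTOTYPES, key=lambda name: sq_dist(COLOR_PROTOTYPES[name]))

# Hypothetical semantic categories, each described by visual concepts.
CATEGORY_CONCEPTS = {
    "vegetation": {"green"},
    "sky":        {"blue"},
    "beach":      {"sand", "blue"},
}

def classify_region(concepts):
    """Assign the semantic category whose concept set overlaps most
    with the concepts extracted from the region."""
    return max(CATEGORY_CONCEPTS,
               key=lambda cat: len(CATEGORY_CONCEPTS[cat] & set(concepts)))

region_concepts = [color_concept((45, 190, 55))]  # e.g. a grassy region
print(region_concepts, classify_region(region_concepts))
```

The symbolic intermediate layer is what makes the description transparent: the region is indexed as "green" rather than as an opaque feature vector, and the category decision can be traced back to named concepts.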