In this paper, an approach to tackle the object-based image retrieval problem is proposed. The core technique is designed to adaptively and efficiently locate the salient blocks of objects of interest in each image. The salient blocks are then used as cues for representing the whole images when semantic-based searching is performed. Relevance feedback is seamlessly integrated in the retrieval process: in each iteration, the user is requested to select images relevant to the query concept, and salient blocks of the selected images are used as training examples. To guarantee the accuracy of salient block matching, the similarities of block regions are calculated within an optimised concept-specific multi-feature space, in which it is expected that the visual patterns of objects of interest can be effectively discriminated from irrelevant regions. This multi-feature space metric is learned from a group of representative salient blocks using a multi-objective optimisation approach. An empirical assessment of the proposed technique was conducted, and selected results show good performance of the proposed approach.

In this paper, we compare active learning strategies for indexing concepts in video shots. Active learning is simulated using subsets of a fully annotated dataset instead of actually calling for user intervention. Training is done using the collaborative annotation of 39 concepts from the TRECVID 2005 campaign, and performance is measured on the 20 concepts selected for the TRECVID 2006 concept detection task. The simulation allows exploring the effect of several parameters: the strategy, the annotated fraction of the dataset, the number of iterations, and the relative difficulty of concepts. Among the strategies compared, two respectively select the most probable and the most uncertain samples. For easy concepts, the "most probable" strategy is the best one when less than 15% of the dataset is annotated, and the "most uncertain" strategy is the best one when 15% or more is annotated. The two strategies are roughly equivalent for moderately difficult and difficult concepts.
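The relevance feedback loop described above (user marks relevant results, which become training examples that refine the next round of retrieval) can be sketched as follows. This is a minimal illustration only: the feature vectors, the Euclidean distance, and the Rocchio-style mean-of-relevant query update are stand-ins for the paper's salient-block features and learned concept-specific multi-feature metric, and the function names are my own.

```python
import numpy as np

def relevance_feedback_search(query_vec, database, select_relevant,
                              n_iters=3, top_k=5):
    """Iterative retrieval loop with user feedback.

    database: (n, d) array of per-image feature vectors (standing in
    for salient-block features). select_relevant: callback playing the
    user's role, returning indices of results judged relevant. The
    query is refined as the mean of the relevant examples (a simple
    Rocchio-style update, not the paper's learned metric).
    """
    query = np.asarray(query_vec, dtype=float)
    for _ in range(n_iters):
        dists = np.linalg.norm(database - query, axis=1)
        ranked = np.argsort(dists)[:top_k]          # current top results
        relevant = list(select_relevant(ranked))
        if not relevant:
            break                                   # no feedback: stop early
        query = database[relevant].mean(axis=0)     # refine the query
    return np.argsort(np.linalg.norm(database - query, axis=1))[:top_k]
```

With a toy 2-D database and a callback that deems anything near (5, 5) relevant, the loop pulls the query toward that cluster, so those images dominate the final ranking.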
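The two active learning selection strategies compared above differ only in how they rank the unlabelled pool. A minimal sketch (the function name and the distance-to-0.5 uncertainty heuristic are illustrative conventions, not taken from the paper):

```python
import numpy as np

def select_batch(scores, strategy, batch_size):
    """Pick the next unlabelled samples to send for annotation.

    scores: classifier probabilities for the target concept, shape (n,).
    'most_probable' takes the highest-scoring samples; 'most_uncertain'
    takes those closest to the decision boundary (score near 0.5).
    Returns indices into the unlabelled pool.
    """
    scores = np.asarray(scores, dtype=float)
    if strategy == "most_probable":
        order = np.argsort(-scores)               # descending probability
    elif strategy == "most_uncertain":
        order = np.argsort(np.abs(scores - 0.5))  # closest to 0.5 first
    else:
        raise ValueError(f"unknown strategy: {strategy}")
    return order[:batch_size]
```

For example, with scores `[0.9, 0.52, 0.1, 0.7]`, "most probable" picks indices 0 and 3, while "most uncertain" picks 1 and 3. This makes the reported crossover plausible: early on, "most probable" quickly gathers positive examples for rare concepts, while once a reasonable model exists, boundary samples become more informative.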