
Evaluating a Bayesian-like relevance feedback model with text-to-image search initialization

Publication at the Faculty of Mathematics and Physics | 2022

Abstract

Although interactive video retrieval systems often boost search effectiveness, their smart design and optimal usage remain a true challenge. Since verifying design choices or search strategies with real users is a tedious and unwieldy task, research efforts in the interactive video search area also focus on options for automatic evaluation.

This paper contributes to the area with an analysis of artificial user models for relevance-feedback-based video retrieval systems. Several studies were performed using SOMHunter, a state-of-the-art system built on the W2VV++ text-image search model.
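The abstract does not reproduce the scoring formulas, but the general idea of a Bayesian-like relevance feedback loop initialized by a text-to-image query can be sketched as follows. This is a minimal illustration assuming a PicHunter-style multiplicative update over a shared text-image embedding space; the function names, the softmax temperature, and the likelihood model are assumptions for the sketch, not the paper's exact definitions.

    import numpy as np

    def init_scores_from_text(text_emb, image_embs, temp=0.1):
        # Initialize per-image relevance scores from a text query. Both inputs
        # are assumed to be L2-normalized vectors in a joint text-image space
        # (e.g. from a W2VV++-like model), so dot products are cosine similarities.
        sims = image_embs @ text_emb                  # shape (N,)
        scores = np.exp(sims / temp)                  # temperature is an illustrative choice
        return scores / scores.sum()

    def feedback_update(scores, image_embs, displayed, selected, temp=0.1):
        # One Bayesian-like update: for every database image i, estimate the
        # likelihood that a user searching for i would have selected the given
        # subset of the displayed images, and multiply the prior score by it.
        likelihood = np.ones(len(scores))
        disp_embs = image_embs[displayed]             # shape (|D|, d)
        denom = np.exp((image_embs @ disp_embs.T) / temp).sum(axis=1)   # shape (N,)
        for s in selected:
            sim_to_sel = image_embs @ image_embs[s]   # shape (N,)
            likelihood *= np.exp(sim_to_sel / temp) / denom
        posterior = scores * likelihood
        return posterior / posterior.sum()

In such a loop, the text query only seeds the initial score distribution; subsequent display selections repeatedly reweight it, which is what the artificial user models described below are meant to simulate.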

First, a study without search guidelines was organized with 34 users trying to solve known-item search tasks in a simplified version of SOMHunter. The results of the study were thoroughly analyzed, and the collected data were used to train several artificial user models simulating relevance feedback.

The models were evaluated against a second study, in which 50 displays of images were annotated by real users. The most promising artificial user model, wPCU, was selected for simulations analyzing the performance of relevance-feedback-based browsing with different strategies.
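The abstract does not define the wPCU model, so the following toy simulator is only an illustration of how an artificial user can generate relevance feedback for automatic evaluation: it simply selects the displayed images closest to the known target in the embedding space. The actual wPCU model is trained on real-user data from the studies and is not reproduced here.

    import numpy as np

    def simulate_feedback(displayed, target_idx, image_embs, top_k=3):
        # Toy artificial user for known-item search: from the current display,
        # pick the top_k images most similar to the (known) target image.
        # Illustrative only; it does not reproduce the trained wPCU model.
        target = image_embs[target_idx]
        sims = image_embs[displayed] @ target
        best = np.argsort(-sims)[:top_k]
        return [displayed[i] for i in best]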

In a third study, 17 real users strictly following the recommended search strategy achieved a 70% success rate on average for a new set of challenging known-item search tasks. Furthermore, a similar performance on the same set of tasks was predicted by the wPCU model trained with data from the first study.

The results and future challenges are thoroughly discussed.