Divergent task interpretations are highly undesirable in interactive video retrieval evaluations: when a participating team pursues a partially incorrect goal, the evaluation results can become misleading.
In this paper, we propose a process for refining known-item and open-set queries, and for preparing the assessors who judge the correctness of submissions to open-set queries. Our findings from recent years show that a well-designed methodology leads to objective improvements in query quality as well as subjective participant satisfaction with query clarity.