We present the results of the first evaluation of parallel manual discourse annotations in the Prague Dependency Treebank. We give an overview of the annotation process itself, describe the measurement of inter-annotator agreement, and, most importantly, classify and analyze the most common types of annotator disagreement and propose solutions for the next phase of the annotation.