We introduce interpolation of trained MSTParser models as a resource combination method for multi-source delexicalized parser transfer. We present both an unweighted method and a variant in which each source model is weighted by the similarity of the source language to the target language.
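As a minimal sketch of the weighted interpolation, assuming each trained MSTParser model for source language $s$ is represented by its feature weight vector $\mathbf{w}_s$ and $\lambda_s$ denotes the similarity of source language $s$ to the target language (notation ours, not fixed by the abstract):
\[
\mathbf{w}_{\mathrm{interp}} \;=\; \frac{\sum_{s \in S} \lambda_s \, \mathbf{w}_s}{\sum_{s \in S} \lambda_s},
\qquad \text{with } \lambda_s = 1 \text{ for all } s \text{ in the unweighted variant.}
\]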
Evaluation on the HamleDT treebank collection shows that the weighted model interpolation performs comparably to the weighted parse tree combination method, while being computationally much less demanding.