Evaluation plays a key role in the field of machine translation. In general, the evaluation of machine translation can be divided into two types: manual (human) and automatic (machine).
Professional human translators can understand and evaluate text with the best results in terms of measuring quality and analyzing errors. On the other hand, this approach has a number of disadvantages, including high time consumption, the subjectivity of the translator, and the financial costs of hiring professional translators. Automatic evaluation approaches are usually based on the correlation between the sentences or n-grams of a human reference translation and the machine translation.
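The n-gram correlation idea behind automatic metrics can be illustrated with a minimal sketch: count how many of the candidate translation's n-grams also appear in the reference, with counts clipped to the reference (the core of BLEU-style precision). This is an illustrative simplification, not a full metric implementation.

```python
from collections import Counter

def ngrams(tokens, n):
    """Return a multiset (Counter) of n-grams from a token list."""
    return Counter(tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1))

def ngram_precision(candidate, reference, n=2):
    """Fraction of candidate n-grams also found in the reference (clipped counts)."""
    cand, ref = ngrams(candidate.split(), n), ngrams(reference.split(), n)
    overlap = sum(min(count, ref[gram]) for gram, count in cand.items())
    total = sum(cand.values())
    return overlap / total if total else 0.0
```

For example, `ngram_precision("the cat sat on a mat", "the cat sat on the mat")` scores 0.6, since three of the five candidate bigrams occur in the reference.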
The aim of this paper is to capture the semantics of a human translation from English into Slovak and of the same text translated by the ETransL and DeepL translation engines, by extracting the keywords that represent the main phrases of the documents, in order to determine how much the machine translations differ from the reference human translation. Based on our results, the translations are equivalent from a semantic point of view, and the end user should understand text translated by ETransL and DeepL as well as the human translation.
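The comparison of keyword sets between a reference and a machine translation can be sketched as follows. This is a simplified stand-in, not the paper's actual extraction method: it takes the top-k most frequent non-stopword tokens as "keywords" (the stopword list is illustrative) and measures agreement with Jaccard overlap, where 1.0 means the two texts share the same key phrases.

```python
import re
from collections import Counter

# Illustrative English stopword list; a real pipeline would use a
# language-appropriate (e.g. Slovak) resource.
STOPWORDS = {"the", "a", "an", "of", "and", "to", "in", "is", "by"}

def keywords(text, k=5):
    """Top-k most frequent non-stopword tokens as a crude keyword set."""
    tokens = re.findall(r"[a-z]+", text.lower())
    counts = Counter(t for t in tokens if t not in STOPWORDS)
    return {word for word, _ in counts.most_common(k)}

def keyword_jaccard(text_a, text_b, k=5):
    """Jaccard overlap of the two keyword sets: |A ∩ B| / |A ∪ B|."""
    ka, kb = keywords(text_a, k), keywords(text_b, k)
    return len(ka & kb) / len(ka | kb) if ka | kb else 1.0
```

A score near 1.0 would support the conclusion that the machine translation preserves the main phrases of the reference; semantically divergent translations would share fewer keywords and score lower.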