Many computer science contests evaluate contestants' submissions automatically by running them on test data within given time and space limits. It turns out that the design of contemporary hardware and operating systems, which usually focuses on maximizing throughput, makes it surprisingly cumbersome to enforce such resource constraints consistently.
We discuss possible methods and their properties, with an emphasis on the precision and repeatability of results.