Forecasts of research results can aid the evaluation of their novelty and credibility by indicating whether those results should be regarded as surprising, and by helping to mitigate publication bias against null results. Further, surprising differences between forecasts and field results may help identify candidates for replication studies, an important task in ensuring research transparency.
We run a laboratory experiment in which non-experts forecast the results of two large field experiments on TV license fee collection, allowing us to evaluate how well they can predict those results. In our setting, forecasters successfully identified the most effective treatments, namely those invoking a deterrence motive, but struggled to predict the effects of "soft" behavioral treatments relative to the baseline. However, they were mostly correct when forecasting that the "soft" treatments would be equally effective relative to one another. Our results suggest that, despite the artificiality of the laboratory environment, forecasts generated there can, to some extent, improve the informativeness and interpretation of research results.