Recent advances in automated planning are leading towards the use of planning engines in a wide range of real-world applications. As the exploitation of planning techniques in applications increases, it becomes imperative to assess the robustness of planning engines with regard to poorly-engineered (or maliciously modified) knowledge models provided as input for the reasoning process.
In this work, to understand the impact of poorly-engineered knowledge on planning engines, we adopt the perspective of a hypothetical attacker interested in subtly manipulating such knowledge to introduce unnecessary overheads that slow down the planning process. This adversarial framing allows us to describe different types of knowledge engineering issues that cannot be detected by validating the models, and to measure their impact on the performance of a range of planning engines that take very different approaches to steps such as pre-processing and search.