
Deep Learning for Knowledge Extraction from UAV Images

Publication at Faculty of Mathematics and Physics |
2021

Abstract

We study possibilities and ways to increase automation, efficiency, and digitization of industrial processes by integrating knowledge gained from UAV (unmanned aerial vehicle) images with systems to support managerial decision-making. Here we present our results in the secondary wood processing industry.

First, we present a deployed solution for repeated estimation of the area and volume of wood stock areas from our UAV images of the customer's warehouse. Processing with the commercial software we use is time-consuming and requires human annotation each time aerial images are processed.

Second, we present a partial solution for computing woodpile areas in which the only human activity is the occasional annotation of training images for supervised learning of deep neural networks. Third, we discuss a multicriteria evaluation of possible improvements with respect to precision, frequency, and processing time.

The method uses UAVs to take images of woodpiles, deep neural networks for semantic segmentation (i.e., image classification at the pixel level), and an algorithm to improve the results. Our experiments compare several architectures, backbones, and hyperparameters on real-world data.
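To illustrate how a pixel-level segmentation yields an area estimate, the following is a minimal sketch, not the authors' implementation: it assumes a binary woodpile mask (a hypothetical network output) and a known ground sampling distance derived from the UAV's flight parameters.

```python
import numpy as np

def woodpile_area_m2(mask: np.ndarray, gsd_m: float) -> float:
    """Estimate the ground area covered by a woodpile.

    mask  : binary segmentation mask (1 = woodpile pixel), e.g. the
            per-pixel classification produced by a deep network.
    gsd_m : ground sampling distance in metres per pixel, obtained
            from flight altitude and camera parameters.
    """
    # Each foreground pixel covers gsd_m * gsd_m square metres.
    return float(mask.sum()) * gsd_m ** 2

# Hypothetical example: a 100x100 mask containing a 40x60-pixel
# woodpile region, imaged at 0.05 m/pixel.
mask = np.zeros((100, 100), dtype=np.uint8)
mask[10:50, 20:80] = 1
area = woodpile_area_m2(mask, 0.05)  # 2400 pixels * 0.0025 m^2 = 6.0 m^2
```

In practice the mask would come from the trained segmentation model rather than being constructed by hand, and the post-processing algorithm mentioned above would clean the mask before the area is computed.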

The feasibility of our approach for also calculating volumes, and the verification that it will function as envisioned, is demonstrated by a proof of concept. The exchange of knowledge with industrial processes is mediated by ontological comparison and the translation of OWL into UML.

Furthermore, the proof of concept shows the possibility of establishing communication between knowledge extractors operating on UAV images and managerial decision systems.