
On Visualizations in the Role of Universal Data Representation

Publication at Faculty of Mathematics and Physics | 2020

Abstract

The deep learning revolution changed the world of machine learning and boosted the AI industry as a whole. In particular, the most effective models for image retrieval are based on deep convolutional neural networks (DCNNs), which outperform traditional "hand-engineered" models by a wide margin.

However, this tremendous success came at a high cost: the exhaustive gathering of labeled data, followed by the design and training of the DCNN models. In this paper, we outline a vision of a framework for instant transfer learning, in which a generic pre-trained DCNN model serves as a universal feature extractor for visualized unstructured data in many (non-visual) domains.

The deep feature descriptors are then usable in similarity search tasks (database queries, joins) and in other parts of the data processing pipeline. The envisioned framework should enable practitioners to instantly use DCNN-based data representations in their new domains without the need for the costly training step.
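The pipeline described above — visualize the non-visual data, extract a feature descriptor from the resulting image, and compare descriptors for similarity search — can be sketched in miniature. This is a hypothetical illustration, not the paper's implementation: the `visualize`, `extract_features`, and `cosine` functions are invented here, the data is a toy time series, and the patch-pooling extractor is a trivial stand-in for the pre-trained DCNN the paper envisions.

```python
import numpy as np

def visualize(series, size=32):
    """Rasterize a 1-D series into a size x size binary line-plot image."""
    s = np.asarray(series, dtype=float)
    s = (s - s.min()) / (s.max() - s.min() + 1e-12)   # normalize to [0, 1]
    img = np.zeros((size, size))
    xs = np.linspace(0, size - 1, len(s)).astype(int)  # column per sample
    ys = ((1 - s) * (size - 1)).astype(int)            # top row = max value
    img[ys, xs] = 1.0
    return img

def extract_features(img, patch=8):
    """Stand-in feature extractor: mean-pool non-overlapping patches.
    The envisioned framework would use a generic pre-trained DCNN here."""
    h, w = img.shape
    pooled = img.reshape(h // patch, patch, w // patch, patch).mean(axis=(1, 3))
    return pooled.ravel()

def cosine(a, b):
    """Cosine similarity between two feature descriptors."""
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12))

# Toy similarity query over visualized time series.
t = np.linspace(0, 2 * np.pi, 64)
query   = extract_features(visualize(np.sin(t)))        # query: a sine wave
near    = extract_features(visualize(np.sin(t + 0.1)))  # slightly shifted sine
distant = extract_features(visualize(t))                # a linear ramp

# The near-duplicate series should rank above the dissimilar one.
print(cosine(query, near) > cosine(query, distant))
```

In a realistic instantiation, `extract_features` would forward the rasterized image through a pre-trained network and read off an intermediate activation vector; the cosine comparison and ranking logic would stay the same, which is what makes the representation usable directly in database queries and joins.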

Moreover, through this framework, the information visualization community could gain a versatile metric for measuring the quality of data visualizations, which is generally a difficult task.