A growing amount of data is stored in electronic health records, which are crucial for the clinical decision-making process. A large part of these data has a tabular form, consisting of numerical and categorical values originating from various laboratory examinations and sensors.
Unlike medical images and clinical notes, tabular data lack higher-level semantics, and their high dimensionality and heterogeneity make them difficult for humans to interpret. On the other hand, deep convolutional neural network (DCNN) models have shown superior performance in the visual medical domain.
In this paper, we propose visual representations of complex tabular medical data that are readable simultaneously by humans and machines. To show that these representations can effectively encode the semantics of a patient's data, we use them to fine-tune a DCNN to predict the disability level of patients suffering from multiple sclerosis.
Our experiments show that the visual models can match the performance of non-visual models. Moreover, the visual representations offer the added benefit of summarizing complex information about the patient's state for a human reader.
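To make the overall idea concrete, the following is a minimal, purely illustrative sketch of the general tabular-to-image approach described above: a patient's feature vector is tiled into a grayscale grid, upsampled to the input resolution of a pretrained CNN, and used to fine-tune a regression head for a disability score. The encoding layout, the choice of ResNet-18, and the EDSS-style target are assumptions for illustration, not the paper's actual pipeline.

```python
import numpy as np
import torch
import torch.nn as nn
from torchvision import models


def tabular_to_image(row, img_size=224):
    """Render one patient's feature vector as a square image.

    Assumes features are already scaled to [0, 1]; the vector is simply
    tiled into a grid and upsampled to the CNN input resolution.
    """
    n = int(np.ceil(np.sqrt(row.size)))
    grid = np.zeros(n * n, dtype=np.float32)
    grid[: row.size] = row
    img = torch.from_numpy(grid.reshape(n, n))[None, None]   # 1 x 1 x n x n
    img = nn.functional.interpolate(img, size=img_size, mode="nearest")
    return img.repeat(1, 3, 1, 1)                             # 3 channels for ImageNet models


# Fine-tune a pretrained ResNet-18 to predict a disability score (hypothetical EDSS target).
model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
model.fc = nn.Linear(model.fc.in_features, 1)                  # regression head

optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
loss_fn = nn.MSELoss()

# One illustrative training step on a synthetic patient record.
features = np.random.rand(120).astype(np.float32)              # placeholder tabular row
target = torch.tensor([[3.5]])                                 # placeholder disability value
pred = model(tabular_to_image(features))
loss = loss_fn(pred, target)
loss.backward()
optimizer.step()
```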