One of the biggest problems in automated diagnosis of psychiatric disorders from medical images is the lack of sufficiently large samples for training. Sample size is especially important in the case of highly heterogeneous disorders such as schizophrenia, where machine learning models built on relatively small numbers of subjects may suffer from poor generalizability.
Researchers have tried to address this problem through multicenter studies and consortium initiatives that combine data sets from multiple sites. The sharing of (raw) data that this requires is, however, often hindered by legal and ethical issues.
Moreover, in the case of very large samples, the computational burden may become prohibitive. Distributed learning could offer a solution to both problems.
In this paper we investigated the possibility of creating a meta-model by combining support vector machine (SVM) classifiers trained on local datasets, without the need to share medical images or any other personal data. Validation was performed in a four-center setup comprising 480 first-episode schizophrenia patients and healthy controls in total.
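As a minimal sketch of this idea, the Python code below shows how linear SVM classifiers trained locally at each site could be aggregated into a single meta-model without any raw data leaving a site. The aggregation rule shown here (sample-size-weighted averaging of weight vectors and intercepts) is an illustrative assumption, not necessarily the exact combination scheme used in the paper.

```python
# Sketch: combine locally trained linear SVMs into a meta-model.
# Only model parameters (weights, intercept, sample count) leave each site.
import numpy as np
from sklearn.svm import LinearSVC

def train_local_model(X_site, y_site):
    """Train a linear SVM on one site's data; return only its parameters."""
    clf = LinearSVC(C=1.0, max_iter=10000)
    clf.fit(X_site, y_site)
    return clf.coef_.ravel(), clf.intercept_[0], len(y_site)

def combine_models(local_models):
    """Average local weights and intercepts, weighted by local sample size.
    (Assumed aggregation rule for illustration.)"""
    sizes = np.array([n for _, _, n in local_models], dtype=float)
    sizes /= sizes.sum()
    w_meta = sum(s * w for (w, _, _), s in zip(local_models, sizes))
    b_meta = sum(s * b for (_, b, _), s in zip(local_models, sizes))
    return w_meta, b_meta

def predict_meta(w_meta, b_meta, X):
    """Apply the combined linear decision function sign(w.x + b)."""
    return (X @ w_meta + b_meta > 0).astype(int)
```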
We built SVM models to separate patients from controls based on three different kinds of imaging features derived from structural MRI scans, and compared models built on the joint multicenter data to the meta-models. The results showed that the combined meta-model was highly similar to the model built on all data pooled together and achieved comparable classification performance for all three imaging features.
Both similarity and performance were superior to those of the local models. We conclude that combining models is a viable alternative that facilitates data sharing and the creation of bigger and more informative models.
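To illustrate how the model similarity and classification performance described above could be quantified, the short sketch below assumes cosine similarity between linear weight vectors as the similarity measure and balanced accuracy on held-out data as the performance measure; both metrics are illustrative assumptions rather than the paper's confirmed evaluation protocol.

```python
# Sketch: compare a meta-model against a pooled-data model.
import numpy as np
from sklearn.metrics import balanced_accuracy_score

def model_similarity(w_a, w_b):
    """Cosine similarity between two linear decision-boundary weight vectors."""
    return float(w_a @ w_b / (np.linalg.norm(w_a) * np.linalg.norm(w_b)))

def evaluate(w, b, X_test, y_test):
    """Balanced accuracy of a linear model sign(w.x + b) on held-out data."""
    y_pred = (X_test @ w + b > 0).astype(int)
    return balanced_accuracy_score(y_test, y_pred)
```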