K-means is one of the most frequently used algorithms for unsupervised clustering in data analysis. The individual steps of the k-means algorithm, including nearest-neighbor search, efficient distance computation, and cluster-wise reduction, can be generalized to many other tasks in data analysis, visualization, and machine learning.
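For reference, the two phases named above can be sketched in plain Python (an illustrative sketch only; the function name `kmeans_step` and its signature are our own, not taken from any implementation discussed here):

```python
def kmeans_step(points, centroids):
    """One k-means iteration: assign each point to its nearest centroid
    (a nearest-neighbor search over the centroids using squared Euclidean
    distance), then recompute each centroid as the mean of its assigned
    points (a cluster-wise reduction). Illustrative pure-Python version;
    GPU implementations parallelize exactly these two phases."""
    k = len(centroids)
    dim = len(points[0])

    # Assignment phase: squared Euclidean distance to every centroid.
    assignments = []
    for p in points:
        dists = [sum((pi - ci) ** 2 for pi, ci in zip(p, c))
                 for c in centroids]
        assignments.append(dists.index(min(dists)))

    # Reduction phase: per-cluster coordinate sums and counts, then means.
    sums = [[0.0] * dim for _ in range(k)]
    counts = [0] * k
    for p, a in zip(points, assignments):
        counts[a] += 1
        for j in range(dim):
            sums[a][j] += p[j]
    new_centroids = [
        [s / counts[i] for s in sums[i]] if counts[i] else centroids[i]
        for i in range(k)
    ]
    return assignments, new_centroids
```

The assignment phase is embarrassingly parallel over points, while the reduction phase requires aggregating values across threads, which is why the two phases pose very different optimization problems on a GPU.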
The efficiency of the available implementations of these computation steps therefore directly affects many other applications. In this work, we examine their performance limits in the context of modern massively parallel GPU accelerators.
Despite the many papers published on this topic, we have found that crucial performance aspects of GPU implementations remain unaddressed, including optimizations for memory bandwidth, cache limits, and workload dispatching across problem instances of varying cluster count, dataset size, and dimensionality. We present a detailed analysis of the individual computation steps and propose several optimizations that improve the overall performance on contemporary GPU architectures.
Our open-source prototype achieves significant speedups over the current state-of-the-art implementations in virtually all practical scenarios.