
Lyrics or Audio for Music Recommendation?

Publication at Faculty of Mathematics and Physics, Central Library of Charles University | 2020

Abstract

Music recommender systems (RS) aim to help people find relevant, enjoyable music without having to sort through the enormous amount of available content. Music RS often rely on collaborative filtering methods, which, however, limits their predictive capability in cold-start situations or for users who deviate from mainstream music preferences.

Therefore, this paper evaluates various content-based music recommendation methods that may be combined with collaborative filtering to overcome such issues. Specifically, the paper focuses on the ability of lyrics-based embedding methods such as TF-IDF, word2vec, or BERT to estimate song similarity, compared to state-of-the-art audio- and metadata-based embeddings.
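To illustrate the simplest of the lyrics-based methods mentioned above, the sketch below computes TF-IDF vectors over tokenized lyrics and compares songs by cosine similarity. This is a minimal, self-contained illustration of the general technique, not the paper's implementation; the toy lyrics and function names are hypothetical.

```python
import math
from collections import Counter

def tfidf_vectors(docs):
    """Compute sparse TF-IDF vectors (dicts) for tokenized documents."""
    n = len(docs)
    df = Counter()                       # document frequency per term
    for doc in docs:
        df.update(set(doc))
    idf = {t: math.log(n / df[t]) for t in df}
    vecs = []
    for doc in docs:
        tf = Counter(doc)
        vecs.append({t: (tf[t] / len(doc)) * idf[t] for t in tf})
    return vecs

def cosine(a, b):
    """Cosine similarity between two sparse vectors stored as dicts."""
    dot = sum(w * b.get(t, 0.0) for t, w in a.items())
    na = math.sqrt(sum(w * w for w in a.values()))
    nb = math.sqrt(sum(w * w for w in b.values()))
    return dot / (na * nb) if na and nb else 0.0

# Hypothetical toy lyrics, pre-tokenized by whitespace
lyrics = [
    "love me tender love me sweet".split(),
    "love me true never let me go".split(),
    "highway to the danger zone".split(),
]
vecs = tfidf_vectors(lyrics)
sim_01 = cosine(vecs[0], vecs[1])   # overlapping vocabulary -> positive
sim_02 = cosine(vecs[0], vecs[2])   # no shared terms -> zero
```

In a recommender, the cosine scores over such lyric vectors would rank candidate songs by similarity to a seed song; word2vec or BERT embeddings would replace the TF-IDF vectors while the similarity step stays the same.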

Results indicate that audio- and lyrics-based methods perform similarly, which may favor lyrics-based approaches due to their much simpler processing. We also show that although lyrics-based methods do not outperform metadata-based approaches, they provide much more diverse yet reasonably relevant recommendations, which suits exploration-oriented music RS.