Conference Papers

Information Retrieval and User-Centric Recommender System Evaluation

Authors: Alan Said, Alejandro Bellogín, Arjen De Vries, Benjamin Kille
Source: Proceedings of the 21st Conference on User Modeling, Adaptation, and Personalization (UMAP'13)

Traditional recommender system evaluation focuses on raising the accuracy, or lowering the rating prediction error, of the recommendation algorithm. Recently, however, discrepancies between commonly used metrics (e.g. precision, recall, root-mean-square error) and the quality experienced by users have been brought to light. This project aims to address these discrepancies by developing novel means of recommender system evaluation that encompass both the qualities identified through traditional evaluation metrics and user-centric factors such as diversity, serendipity, and novelty, as well as by bringing further insight into the topic by analyzing and translating the problem of evaluation from an Information Retrieval perspective.
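The discrepancy the abstract describes can be illustrated with a minimal sketch (hypothetical items and genres, not from the paper): two recommendation lists can score identically on accuracy metrics such as precision and recall while differing sharply on a user-centric factor such as intra-list diversity.

```python
def precision_recall(recommended, relevant):
    # Standard accuracy metrics over a recommendation list.
    hits = len(set(recommended) & set(relevant))
    return hits / len(recommended), hits / len(relevant)

def intra_list_diversity(recommended, genres):
    # Fraction of item pairs whose genres differ (1.0 = fully diverse).
    pairs = [(a, b) for i, a in enumerate(recommended)
             for b in recommended[i + 1:]]
    if not pairs:
        return 0.0
    return sum(genres[a] != genres[b] for a, b in pairs) / len(pairs)

# Hypothetical ground truth and item metadata.
relevant = {"m1", "m2", "m3"}
genres = {"m1": "action", "m2": "action", "m3": "drama"}

list_a = ["m1", "m2"]  # two relevant items, same genre
list_b = ["m1", "m3"]  # two relevant items, different genres

print(precision_recall(list_a, relevant))      # (1.0, 0.666...)
print(precision_recall(list_b, relevant))      # (1.0, 0.666...) -- identical accuracy
print(intra_list_diversity(list_a, genres))    # 0.0
print(intra_list_diversity(list_b, genres))    # 1.0 -- very different diversity
```

An accuracy-only evaluation would rank both lists equally, even though a user may well perceive the second list as better; this is the kind of gap the project's evaluation methodology aims to capture.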