## Tighter Generalization Bounds for Matrix Completion Via Factorization Into Constrained Matrices

Ken-ichiro MORIDOMI, Kohei HATANO, Eiji TAKIMOTO

2018 · IEICE Transactions on Information and Systems
We prove generalization error bounds for classes of low-rank matrices with norm constraints for collaborative filtering tasks. Our bounds are tighter than known bounds that use only the rank or related quantities, since they take the additional L1 and L∞ constraints into account. We also show that our bounds on the Rademacher complexity of the classes are optimal.

Keywords: matrix completion, non-negative matrix factorization, collaborative filtering, Rademacher complexity, generalization error bound

## Introduction

Learning the preferences of users over a set of items is an important task in recommendation systems. In particular, the collaborative filtering approach is known to be quite effective and popular [1]-[3]. Simply put, collaborative filtering is an approach that infers a user's rating for an item the user has not yet rated from the existing ratings of other users. The approach is formulated as the matrix completion problem, that is, learning a user-item rating matrix from given partial entries of the matrix.

More formally, we consider a (true) rating matrix X ∈ R^{N×M} to be learned, where N and M are the numbers of users and items, respectively, and each component X_{i,j} corresponds to user i's rating for item j. The task is to find a hypothesis matrix X̂ ∈ R^{N×M} that approximates the true matrix X when only some of the components of X are given as a sample.

A common assumption in previous work is that the true matrix X ∈ R^{N×M} can be well approximated by a matrix of low rank (or low trace norm, as a convex relaxation of the rank constraint). In other words, we assume that our hypothesis matrix X̂ ∈ R^{N×M} can be decomposed as X̂ = UV^T for some U ∈ R^{N×K} and V ∈ R^{M×K} with a small number K, where K gives an upper bound on the rank of X̂. The generalization ability of algorithms (such as empirical risk minimization) using low-rank or low-trace-norm matrices has been intensively studied in the literature (see, e.g., [4]-[7]).

Recently, further additional constraints on the class of hypothesis matrices have turned out to be effective in practice. In particular, a major approach is to impose the constraints ... Generalization: The second one is a slightly generalized class, consisting of all matrices X̂ = UV^T ∈ R^{N×M} with U ∈ R^{N×K} and V ∈ R^{M×K} such that for every i ∈ [N] and j ∈ [M],
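To make the matrix completion setting concrete, here is a minimal sketch in NumPy. It is not the paper's algorithm: the function name, learning rate, step count, and the entrywise clipping bound are all illustrative assumptions. It fits X̂ = UV^T by gradient descent on the squared error over the observed entries only, with clipping of U and V standing in as a crude analogue of an L∞-type constraint on the factors.

```python
import numpy as np

def complete_matrix(X, mask, K=2, steps=2000, lr=0.05, max_entry=2.0, seed=0):
    """Return a rank-K approximation X_hat fitted to the observed entries.

    X         : (N, M) matrix; values outside `mask` are ignored.
    mask      : (N, M) boolean array, True where X is observed.
    max_entry : bound on |U_ik| and |V_jk| (illustrative entrywise constraint).
    """
    rng = np.random.default_rng(seed)
    N, M = X.shape
    U = rng.normal(scale=0.1, size=(N, K))
    V = rng.normal(scale=0.1, size=(M, K))
    for _ in range(steps):
        R = (U @ V.T - X) * mask        # residual on observed entries only
        gU, gV = R @ V, R.T @ U         # gradients of 0.5 * ||R||_F^2
        U -= lr * gU
        V -= lr * gV
        U = np.clip(U, -max_entry, max_entry)  # enforce entrywise bound
        V = np.clip(V, -max_entry, max_entry)
    return U @ V.T

# Usage: a rank-1 ground truth with some entries unobserved.
u = np.array([[1.0], [2.0], [3.0]])
v = np.array([[1.0], [0.5]])
X = u @ v.T
mask = np.array([[True, True], [True, False], [False, True]])
X_hat = complete_matrix(X, mask, K=1)
```

Only the residual on observed entries drives the updates, which is exactly what makes generalization to the unobserved entries a statistical question; the clipping bound illustrates the kind of extra constraint the bounds in this paper exploit, not the paper's precise constraint set.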

doi:10.1587/transinf.2017edp7339