
Recommender System Practice 0x0b: Matrix Factorization

source link: http://www.cnblogs.com/nomornings/p/14087718.html

Preface

The Recommender System Practice book was essentially covered up through the previous post; from here on, each algorithm gets its own dedicated post. In this one we introduce matrix factorization. Collaborative filtering algorithms (user-based and item-based) generally suffer from a serious problem: the head effect. Popular items easily appear similar to a large number of other items, while long-tail items, whose feature vectors are sparse, rarely show similarity to anything and are therefore rarely recommended.

The Matrix Factorization Algorithm

To address this problem, matrix factorization adds the notion of latent vectors on top of the co-occurrence matrix used in collaborative filtering, which also strengthens the model's ability to handle sparse data. The latent vectors of users and items are obtained by factorizing the collaborative filtering co-occurrence matrix. There are three main ways to factorize the matrix: eigendecomposition, singular value decomposition (SVD), and gradient descent. Eigendecomposition only applies to square matrices, and the user-item matrix is generally not square, so it does not fit. SVD solves the factorization problem neatly by keeping only the largest elements of the diagonal matrix, but its computational complexity reaches \(O(mn^2)\), which is clearly unusable in real-world business scenarios. Gradient descent has therefore become the main method for matrix factorization.
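
Concretely, matrix factorization approximates the \(m\times n\) user-item matrix as the product of two low-rank matrices, where \(k\ll m,n\) is the number of latent factors (the matrix notation here is mine, chosen to match the per-rating formulas below):

\[R_{m\times n}\approx P_{m\times k}Q_{n\times k}^T,\qquad \hat{r}_{ui}=q_i^Tp_u \]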

Gradient Descent

Anyone with a bit of deep learning background is no stranger to gradient descent, and introductory blog posts on it are countless, so I will not repeat the basics here. The objective function that gradient descent needs to optimize is

\[\min_{q^*,p^*}\sum_{(u,i)\in K}(r_{ui}-q_i^Tp_u)^2 \]

where \(r_{ui}\) is the rating user \(u\) gave item \(i\), \(p_u\) is the user vector, and \(q_i\) is the item vector.
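
The objective above omits regularization, but the experiment code below adds an L2 penalty (the lmbda constant), so for completeness here is the regularized objective and its per-rating gradients; this is the standard derivation, added by me, not from the original text:

\[\min_{q^*,p^*}\sum_{(u,i)\in K}(r_{ui}-q_i^Tp_u)^2+\lambda\left(\|q_i\|^2+\|p_u\|^2\right) \]

\[\frac{\partial L}{\partial p_u}=-2(r_{ui}-q_i^Tp_u)q_i+2\lambda p_u,\qquad \frac{\partial L}{\partial q_i}=-2(r_{ui}-q_i^Tp_u)p_u+2\lambda q_i \]

Each step moves \(p_u\) and \(q_i\) against these gradients, which is exactly what the gradient function in the experiment below computes (averaged over all observed ratings).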

One interesting point: since different users rate on different scales, we also need to remove the bias in user and item ratings. The usual approach is to add user and item bias terms:

\[r_{ui}=\mu + b_i + b_u + q_i^T p_u \]

where \(\mu\) is the global bias constant, \(b_i\) is the bias term of item \(i\), which can be initialized with the mean of all ratings the item has received, and \(b_u\) is the bias term of user \(u\), which can be initialized with the mean of all ratings the user has given.
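
As a rough illustration of those initializations, the bias terms can be estimated directly from the rating data. A minimal sketch, assuming a pandas DataFrame named ratings with user_id, anime_id, and rating columns (the names are mine, chosen to match the experiment below); I subtract the global mean so that \(b_u\) and \(b_i\) act as offsets around \(\mu\):

import pandas as pd

def init_biases(ratings):
    """Estimate the global, item, and user bias terms from rating means."""
    mu = ratings['rating'].mean()                             # global bias constant
    b_i = ratings.groupby('anime_id')['rating'].mean() - mu   # item bias offsets
    b_u = ratings.groupby('user_id')['rating'].mean() - mu    # user bias offsets
    return mu, b_i, b_u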

Pros and Cons

Pros:

  • Strong generalization ability; it alleviates the data sparsity problem to some extent.
  • Low space complexity: only the user and item vectors need to be stored, so space drops to \((m+n)k\).
  • Better extensibility: the final outputs are latent vectors for users and items, which can be plugged straight into deep learning models.

Cons:

  • User, item, and context features are not taken into account.
  • No recommendations can be made for a user without historical behavior (the cold-start problem).

Experiment

Let's implement the gradient descent algorithm for matrix factorization on the anime recommendation dataset. The corresponding code is given below:

# library imports
import numpy as np
import pandas as pd
from collections import Counter
from sklearn.model_selection import train_test_split
from scipy import sparse

lmbda = 0.0002  # L2 regularization strength


def encode_column(column):
    """ Encodes a pandas column with continous IDs"""
    keys = column.unique()
    key_to_id = {key: idx for idx, key in enumerate(keys)}
    return key_to_id, np.array([key_to_id[x] for x in column]), len(keys)


def encode_df(anime_df):
    """Encodes rating data with continuous user and anime ids"""

    anime_ids, anime_df['anime_id'], num_anime = encode_column(
        anime_df['anime_id'])
    user_ids, anime_df['user_id'], num_users = encode_column(
        anime_df['user_id'])
    return anime_df, num_users, num_anime, user_ids, anime_ids


def create_embeddings(n, K):
    """
    Creates a random numpy matrix of shape n, K with uniform values in (0, 11/K)
    n: number of items/users
    K: number of factors in the embedding 
    """
    return 11 * np.random.random((n, K)) / K


def create_sparse_matrix(df, rows, cols, column_name="rating"):
    """ Returns a sparse utility matrix"""
    return sparse.csc_matrix((df[column_name].values,
                              (df['user_id'].values, df['anime_id'].values)),
                             shape=(rows, cols))


def predict(df, emb_user, emb_anime):
    """ This function computes df["prediction"] without doing (U*V^T).
    
    Computes df["prediction"] by using elementwise multiplication of the corresponding embeddings and then 
    sum to get the prediction u_i*v_j. This avoids creating the dense matrix U*V^T.
    """
    df['prediction'] = np.sum(np.multiply(emb_anime[df['anime_id']],
                                          emb_user[df['user_id']]),
                              axis=1)
    return df


def cost(df, emb_user, emb_anime):
    """ Computes mean square error"""
    Y = create_sparse_matrix(df, emb_user.shape[0], emb_anime.shape[0])
    predicted = create_sparse_matrix(predict(df, emb_user,
                                             emb_anime), emb_user.shape[0],
                                     emb_anime.shape[0], 'prediction')
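    # Y and predicted share the same sparsity pattern, so their difference
    # (and hence the MSE below) is taken over observed ratings only.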
    return np.sum((Y - predicted).power(2)) / df.shape[0]


def gradient(df, emb_user, emb_anime):
    """ Computes the gradient for user and anime embeddings"""
    Y = create_sparse_matrix(df, emb_user.shape[0], emb_anime.shape[0])
    predicted = create_sparse_matrix(predict(df, emb_user,
                                             emb_anime), emb_user.shape[0],
                                     emb_anime.shape[0], 'prediction')
    delta = (Y - predicted)
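    # delta is sparse with nonzeros only at observed ratings, so the
    # sparse-dense products below accumulate errors over observed entries only,
    # e.g. (delta * emb_anime)[u] = sum over rated i of delta[u, i] * emb_anime[i].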
    grad_user = (-2 / df.shape[0]) * (delta * emb_anime) + 2 * lmbda * emb_user
    grad_anime = (-2 / df.shape[0]) * (delta.T *
                                       emb_user) + 2 * lmbda * emb_anime
    return grad_user, grad_anime


def gradient_descent(df,
                     emb_user,
                     emb_anime,
                     iterations=2000,
                     learning_rate=0.01,
                     df_val=None):
    """ 
    Computes gradient descent with momentum (0.9) for given number of iterations.
    emb_user: the trained user embedding
    emb_anime: the trained anime embedding
    """
    Y = create_sparse_matrix(df, emb_user.shape[0], emb_anime.shape[0])
    beta = 0.9
    grad_user, grad_anime = gradient(df, emb_user, emb_anime)
    v_user = grad_user
    v_anime = grad_anime
    for i in range(iterations):
        grad_user, grad_anime = gradient(df, emb_user, emb_anime)
        v_user = beta * v_user + (1 - beta) * grad_user
        v_anime = beta * v_anime + (1 - beta) * grad_anime
        emb_user = emb_user - learning_rate * v_user
        emb_anime = emb_anime - learning_rate * v_anime
        if (i + 1) % 50 == 0:
            print("\niteration", i + 1, ":")
            print("train mse:", cost(df, emb_user, emb_anime))
            if df_val is not None:
                print("validation mse:", cost(df_val, emb_user, emb_anime))
    return emb_user, emb_anime


def encode_new_data(valid_df, user_ids, anime_ids):
    """ Encodes valid_df with the same encoding as train_df.
    """
    df_val_chosen = valid_df['anime_id'].isin(
        anime_ids.keys()) & valid_df['user_id'].isin(user_ids.keys())
    valid_df = valid_df[df_val_chosen].copy()  # copy to avoid SettingWithCopyWarning
    valid_df['anime_id'] = np.array(
        [anime_ids[x] for x in valid_df['anime_id']])
    valid_df['user_id'] = np.array([user_ids[x] for x in valid_df['user_id']])
    return valid_df


anime_ratings_df = pd.read_csv("../dataset/anime/rating.csv")
print(anime_ratings_df.shape)
print(anime_ratings_df.head())

anime_ratings = anime_ratings_df.loc[
    anime_ratings_df.rating != -1].reset_index()[[
        'user_id', 'anime_id', 'rating'
    ]]
print(anime_ratings.shape)
anime_ratings.head()

print(Counter(anime_ratings.rating))

# Average number of ratings per user
print(np.mean(anime_ratings.groupby(['user_id']).count()['anime_id']))

train_df, valid_df = train_test_split(anime_ratings, test_size=0.2)

# resetting indices to avoid indexing errors in the future
train_df = train_df.reset_index()[['user_id', 'anime_id', 'rating']]
valid_df = valid_df.reset_index()[['user_id', 'anime_id', 'rating']]

anime_df, num_users, num_anime, user_ids, anime_ids = encode_df(train_df)
print("Number of users :", num_users)
print("Number of anime :", num_anime)
anime_df.head()

Y = create_sparse_matrix(anime_df, num_users, num_anime)

# to view matrix
Y.todense()

emb_user = create_embeddings(num_users, 3)
emb_anime = create_embeddings(num_anime, 3)
emb_user, emb_anime = gradient_descent(anime_df,
                                       emb_user,
                                       emb_anime,
                                       iterations=800,
                                       learning_rate=1)

print("before encoding:", valid_df.shape)
valid_df = encode_new_data(valid_df, user_ids, anime_ids)
print("after encoding:", valid_df.shape)

train_mse = cost(train_df, emb_user, emb_anime)
val_mse = cost(valid_df, emb_user, emb_anime)
print(train_mse, val_mse)
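
Once training finishes, the learned embeddings can be used directly for top-N recommendation. A minimal sketch (recommend_for_user is my own helper, not part of the original post; note that it does not exclude anime the user has already rated):

def recommend_for_user(user_idx, emb_user, emb_anime, top_n=10):
    """Return the encoded ids of the top_n anime with the highest predicted rating."""
    scores = emb_anime @ emb_user[user_idx]  # predicted rating for every anime
    return np.argsort(-scores)[:top_n]

print(recommend_for_user(0, emb_user, emb_anime))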

The experiment is still running; I will post the results here once they are out.
