Recommender System

Recommender System with Python Code Implementation-Part 2

This article is a continuation of Recommender System with Python Code Implementation-Part 1, where we discussed content-based filtering. Click on the link below for Part 1.

Recommender System with Python Code Implementation-Part 1

Now, we will discuss Collaborative and Hybrid filtering in detail.

Collaborative Filtering

Collaborative filtering uses the user data collected by the system to filter out relevant information. It is based on the idea that people who agreed in their evaluation of certain items are likely to agree again in the future.

The concept is simple: when we want to find a new web series or movie to watch, we’ll often ask our friends for recommendations. Naturally, we have greater trust in the recommendations from our like-minded friends.

Collaborative Filtering
Collaborative Filtering (Source)

Most collaborative filtering systems use the so-called similarity index-based method. In the neighborhood-based approach, several users are selected based on their similarity to the active user.

The inference for the active user is created by computing the weighted average of the selected users’ ratings. Collaborative filtering focuses on the relationship between users and items: the similarity of two items is determined by the similarity of the ratings given to both items by the users who have rated both.

Understand the Collaborative Filtering Algorithm

To experiment with the recommendation algorithm, you’ll need a dataset containing items and a set of users who have reacted to some of the items. Reactions can be either explicit (rating on a scale of 1 to 5, likes or dislikes) or implicit (time spent viewing the product, adding to the wish list).

While working with such data, you’ll most often see it in the form of a matrix consisting of the reactions given by a set of users to some items from a set of items. Each row contains the ratings given by one user, and each column contains the ratings received by one item.

A matrix with five users and five items looks like this:

      i1    i2    i3    i4    i5
u1    5           4     1
u2          3           3
u3    2     4           4     1
u4          4     4           5
u5    2     4           5     2

The above matrix depicts five users who have rated a few items on a scale of 1 to 5.

For example, the first user has given a rating of 5 to the first item and a rating of 1 to the fourth item.

In most cases, the cells in the matrix are empty, because the users only rate a few items. Not all users will rate or respond to all available items. A matrix with mostly empty cells is said to be sparse, and the opposite to that (a mostly filled matrix) is said to be dense.
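As a minimal sketch of this idea, the (user, item, rating) triples below (illustrative values, not the table above) can be pivoted into a user-item matrix, and the fraction of empty cells gives the sparsity:

```python
import numpy as np
import pandas as pd

# Ratings as (user, item, rating) triples; most user-item pairs are absent.
ratings = pd.DataFrame({
    "user_id": [1, 1, 2, 3, 3, 4, 5],
    "item_id": [1, 4, 2, 1, 2, 5, 4],
    "rating":  [5, 1, 3, 2, 4, 5, 2],
})

# Pivot into a user-item matrix; missing reactions become NaN.
matrix = ratings.pivot(index="user_id", columns="item_id", values="rating")

# Sparsity: fraction of cells with no rating.
sparsity = matrix.isna().sum().sum() / matrix.size
print(matrix)
print(f"sparsity: {sparsity:.2f}")
```

With 7 ratings spread over a 5×4 grid, 13 of 20 cells are empty, so this toy matrix is already 65% sparse; real rating matrices are far sparser.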

The Dataset

One of the best datasets to get started with is the MovieLens dataset collected by GroupLens Research.

Specifically, the MovieLens 100k dataset is a stable reference dataset with 100,000 ratings by 943 users for 1682 movies, with each user rating at least 20 movies.

This dataset consists of many files, containing information about the movies, the users, and the ratings given by users to the movies they have seen.

The following are of interest:

u.item: the list of movies

u.data: ratings given by users

The dataset looks like this:

user_id    item_id    rating    timestamp
196        242        3         881250949
186        302        3         891717742
22         377        1         878887116
244        51         2         880606923
166        346        1         886397596

This dataset contains 100,000 such ratings, which will be used to predict the ratings of a movie that the user has not seen.
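A quick sketch of loading this file with pandas; u.data is tab-separated with no header row. Here an in-memory sample stands in for the real file, so to use the full dataset you would pass the path to your local copy of u.data instead of the buffer:

```python
import io
import pandas as pd

# A small in-memory sample in the same tab-separated, headerless format
# as MovieLens 100k's u.data file.
sample = io.StringIO(
    "196\t242\t3\t881250949\n"
    "186\t302\t3\t891717742\n"
    "22\t377\t1\t878887116\n"
    "244\t51\t2\t880606923\n"
    "166\t346\t1\t886397596\n"
)
cols = ["user_id", "item_id", "rating", "timestamp"]
df = pd.read_csv(sample, sep="\t", names=cols)
print(df.head())
```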

Types of Collaborative Filtering

Let’s go through the types of collaborative filtering:

Collaborative Filtering Methods

Memory-based

Memory-based collaborative filtering uses the entire user-item dataset to generate predictions. The system utilizes statistical methods to find users whose rating history is similar to that of the active user.

Memory-Based Collaborative Filtering
Memory-Based Collaborative Filtering (Source)

This method is also known as nearest-neighbor or user-based collaborative filtering. To find the rating R that a user U would give to an item I, the approach would be:

  • Search for users similar to U who have rated item I.
  • Calculate the rating R based on the ratings of the users found in the previous step.

How to Find Similar Users Based on Ratings

Let’s take an example to understand the concept of similarity.

A dataset includes four users A, B, C, and D who have rated two movies. The ratings are as follows:

Ratings by A are [1.0, 4.0].

Ratings by B are [5.0, 4.0].

Ratings by C are [3.5, 1.0].

Ratings by D are [4.5, 2.0].

Ratings by users

(X-axis: Movie1, Y-axis: Movie2)

The above graph depicts the users and ratings given by them to Movie1 and Movie2.

The Euclidean distance between the points is used as the similarity measure; cosine similarity can be used as well. By calculating the Euclidean distance between the points, we can say that C and D are quite similar, as they have the minimum Euclidean distance between them.
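The distances for the four users above can be checked directly; this sketch computes all pairwise Euclidean distances (with a cosine-similarity helper included for comparison) and picks the closest pair:

```python
import numpy as np

# Each user's ratings for (Movie1, Movie2), as in the example above.
users = {
    "A": np.array([1.0, 4.0]),
    "B": np.array([5.0, 4.0]),
    "C": np.array([3.5, 1.0]),
    "D": np.array([4.5, 2.0]),
}

def euclidean(u, v):
    return float(np.linalg.norm(u - v))

def cosine_sim(u, v):
    return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v)))

# Pairwise Euclidean distances: the smallest distance marks the closest pair.
names = list(users)
pairs = {(a, b): euclidean(users[a], users[b])
         for i, a in enumerate(names) for b in names[i + 1:]}
closest = min(pairs, key=pairs.get)
print(closest, round(pairs[closest], 3))  # C and D, at distance sqrt(2)
```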

How to Calculate the Ratings

After finding a list of users similar to user U, you need to compute the rating R that U would give to a certain item I. Again, just as with similarity, you can do this in more than one way. The rating R that user U gives to item I can be expected to be close to the average rating given to I by the top 5 or top 10 users most similar to U.

The average rating given by the n similar users is the sum of their ratings divided by n. In the weighted average method, each rating is instead multiplied by a similarity factor (a measure of how similar that user is to U): the greater the similarity, the more that user’s rating counts. The final predicted rating for user U is the sum of the weighted ratings divided by the sum of the weights.
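Both averages can be sketched in a few lines; the neighbor ratings and similarity factors below are illustrative values:

```python
import numpy as np

# Ratings given to item I by the n most similar users, and each user's
# similarity to the active user U (illustrative values).
neighbor_ratings = np.array([4.0, 5.0, 3.0])
similarities = np.array([0.9, 0.8, 0.4])

# Plain average ignores how similar each neighbor is.
plain_avg = neighbor_ratings.mean()

# Weighted average: each rating weighted by its similarity factor,
# normalized by the sum of the weights.
weighted_avg = (similarities * neighbor_ratings).sum() / similarities.sum()

print(round(plain_avg, 3), round(weighted_avg, 3))
```

Note how the weighted prediction leans toward the ratings of the two most similar neighbors.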

User-Based vs Item-Based Collaborative Filtering

User-based computes the similarity between target users and other users.

Item-based computes the similarity between the items that target users rate/interact with and other items.

Model-Based

Model-based collaborative filtering develops a model based on user ratings to provide recommendations. That is, we extract some information from the dataset and use it as a “model” to make recommendations without using the entire dataset every time.

This approach offers both speed and scalability. It involves a step that reduces or compresses the large but sparse user-item matrix.

A user-item matrix consists of two dimensions:

  1. The number of users
  2. The number of items.

If the matrix is mostly empty, dimensionality reduction can improve the performance of the algorithm in terms of space and time.

This can be done using a variety of methods, such as matrix factorization or autoencoders.
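As a sketch of the matrix factorization idea, assuming a small toy matrix (0 meaning unrated) and a truncated SVD; real systems would use methods built for sparse data, such as ALS, but the compression principle is the same:

```python
import numpy as np

# A small user-item matrix for illustration (0 = unrated).
R = np.array([
    [5.0, 4.0, 0.0, 1.0],
    [4.0, 5.0, 0.0, 0.0],
    [0.0, 0.0, 5.0, 4.0],
    [1.0, 0.0, 4.0, 5.0],
])

# Truncated SVD compresses the matrix into k latent dimensions.
k = 2
U, s, Vt = np.linalg.svd(R, full_matrices=False)
R_hat = U[:, :k] * s[:k] @ Vt[:k, :]

# R_hat approximates R but is described by far fewer numbers:
# 4x2 user factors and 2x4 item factors instead of the full matrix.
print(np.round(R_hat, 2))
```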

Implementation of Collaborative Filtering

We are implementing a Book recommendation system. Click here for the dataset.

Here are the authors of the five books recommended by our recommendation system:

Code Output

Click here for the detailed code.

Pros of Collaborative Filtering

1. No domain knowledge is necessary

No domain knowledge is required as embeddings are learned automatically.

2. Serendipity

The model can help users discover new interests. The system may not know on its own that the user is interested in a given item, but the model might still recommend it because similar users are interested in that item.

3. Captures the change in user interests over time

Focusing solely on content offers no flexibility as the user’s perspective and preferences change; collaborative filtering adapts because it learns from ongoing interactions.

Cons of Collaborative Filtering

1. Synonyms

Collaborative filtering can’t differentiate between synonyms. Here, “synonyms” refer to similar items that are named or labeled differently. Collaborative filtering treats such products as distinct because it cannot detect the hidden association between them. For example, it can’t tell that “backpack” and “knapsack”, while appearing different, refer to the same thing.

Image: Collaborative Filtering Synonyms Example

2. Diversity and the long tail

As more users view and buy a popular product, it becomes even more popular, while new items remain in its shadow. In short, this approach creates a rich-get-richer effect for popular products, which leads to a lack of diversity.

3. Cold-start problem:

The model’s prediction for a given (user, item) pair is the dot product of their embeddings. So, if an item is not seen during training, the system cannot create an embedding for it and can’t query the model with this item. This problem is called the cold-start problem. However, projection in WALS can be used to mitigate the cold-start problem to some extent.

In projection in WALS, if a new item i0 was not seen in training but the system has a few interactions between it and users, the system can easily compute an embedding for this item without having to retrain the entire model.
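A rough numeric sketch of this projection idea (the embeddings and ratings below are made-up values, and this is only the fold-in step, not the full WALS algorithm): keeping the trained user embeddings fixed, the new item’s embedding is obtained from a small least-squares solve over its few observed interactions.

```python
import numpy as np

# Embeddings of the users who interacted with the new item,
# as learned by the factorization model (illustrative values).
user_emb = np.array([
    [0.9, 0.1],
    [0.2, 0.8],
    [0.7, 0.3],
])

# Ratings those users gave the new item i0 (not seen during training).
r_i0 = np.array([4.0, 1.0, 3.0])

# Projection: solve the least-squares problem  min ||U v - r||^2  for v,
# keeping all user embeddings fixed -- no retraining of the full model.
v_i0, *_ = np.linalg.lstsq(user_emb, r_i0, rcond=None)

# A predicted rating for any user is now just a dot product.
pred = user_emb @ v_i0
print(np.round(v_i0, 3), np.round(pred, 2))
```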

Hybrid Recommendation System

A hybrid recommendation system is a special type of recommendation system that provides recommendations to the user by combining two or more methods, such as a content-based filtering method and a collaborative filtering method. Combining these filtering methods helps overcome the challenges faced when using either of them separately.

 

Hybrid Recommendation System

Hybrid Recommendation System (Source)

As shown in the above diagram, the results are obtained by combining the results of the content-based filtering method and collaborative-based filtering method on different attributes like ranking, and the recommendations are made based on the top items on the list.

Hybrid Recommender System Approaches

The following are seven approaches to building a hybrid recommender system:

Weighted

In this approach, a mixture-of-experts framework is used for decision-level fusion: the rating for a given item is computed as the weighted sum of the ratings provided by a pool of recommenders.

Weighted

Weights are determined by training on the user’s previous ratings and can be adjusted as new ratings arrive.
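A minimal sketch of the weighted combination, assuming scores for the same three candidate items from two recommenders (all values and the fixed weights are illustrative; in practice the weights would be tuned on past ratings):

```python
import numpy as np

# Scores for the same candidate items from two recommenders
# (illustrative values): content-based and collaborative.
content_scores = np.array([0.2, 0.9, 0.5])
collab_scores = np.array([0.7, 0.4, 0.6])

# Weighted hybrid: a fixed convex combination of the two score lists.
w_content, w_collab = 0.4, 0.6
hybrid = w_content * content_scores + w_collab * collab_scores

# Recommend the item with the highest combined score.
best_item = int(np.argmax(hybrid))
print(np.round(hybrid, 2), best_item)
```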

Switching

In switching hybridization, the system switches to one of the recommenders according to a heuristic that reflects the recommender’s ability to generate good scores.

Switching

Switching hybrids can avoid method-specific issues, such as the new-item problem of collaborative recommenders, by switching to a content-based recommender.
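The switching logic can be sketched as a simple heuristic; the two recommender functions here are hypothetical stubs, and the rating-count threshold is just one possible criterion:

```python
def content_recommender(item_profile):
    # Hypothetical stand-in: would score items from their features alone.
    return "content-based recommendation"

def collaborative_recommender(ratings_count):
    # Hypothetical stand-in: would score items from user-item interactions.
    return "collaborative recommendation"

def switching_recommender(ratings_count, item_profile, min_ratings=5):
    """Pick a recommender by a simple heuristic: fall back to the
    content-based method when the item has too few ratings for
    collaborative filtering to score it reliably."""
    if ratings_count < min_ratings:
        return content_recommender(item_profile)
    return collaborative_recommender(ratings_count)

print(switching_recommender(2, {"genre": "thriller"}))   # few ratings
print(switching_recommender(40, {"genre": "thriller"}))  # enough ratings
```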

Mixed

Mixed hybrids combine recommendations of multiple systems rather than using them to predict the rating of an individual item. Mixed hybrids can also avoid issues such as the new item problem.

Mixed

In mixed hybridization, the individual performances do not affect the overall system’s performance in a local region.

Feature Combination

In feature combination, the rating produced by one system is fed into another as a recommendation feature.

Feature Combination

Cascade

Cascade systems apply an iterative refinement procedure for constructing a preference order among items.

Cascade

At every stage, a recommender takes the set of items given equal preference at the higher level and re-orders them into finer bins of equal preference, so each step yields a better refinement. Cascade systems are efficient and tolerant to noise due to the coarse-to-fine nature of the iteration.

Feature Augmentation

Feature augmentation systems are hierarchical hybrids consisting of a cascade of recommenders.

Feature Augmentation

However, unlike the cascade hybrids, feature augmentation systems make use of the rating and other information produced by the previous recommender in the cascade.

Meta-level

Meta-level hybrids feed the model built by one recommender into another as input. The learned model is denser in information than a single rating; hence, in meta-level hybrids, more information is carried from one recommender to the next.

Implementation: Hybrid Recommendation System

We will be implementing a hybrid recommendation system on the IMDB Dataset of 50K Movie Reviews.

Click here for the dataset, and the code.

Evaluation metrics

After understanding and successfully implementing the recommendation systems, it is equally important to evaluate the models. There are various metrics for evaluating a model, but here we will discuss four major ones.

These are as follows:

Mean Average Precision at K

Mean Average Precision@k (MAP@k) is a commonly used metric, especially where information retrieval is done and the ranking of documents is equally important. It is mainly used in Recommender Systems and Ranking Models.
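A small sketch of MAP@k, using the standard definition (average precision at the ranks where relevant items appear, averaged over users); the item ids are illustrative:

```python
def average_precision_at_k(recommended, relevant, k):
    """AP@k for one user: average of precision@i over the ranks i
    at which a relevant item appears in the top-k list."""
    hits, score = 0, 0.0
    for i, item in enumerate(recommended[:k], start=1):
        if item in relevant:
            hits += 1
            score += hits / i
    return score / min(len(relevant), k) if relevant else 0.0

def map_at_k(all_recommended, all_relevant, k):
    """MAP@k: mean of AP@k over all users."""
    aps = [average_precision_at_k(recs, rel, k)
           for recs, rel in zip(all_recommended, all_relevant)]
    return sum(aps) / len(aps)

# Two users, top-3 recommendations each (illustrative item ids).
recs = [["a", "b", "c"], ["d", "e", "f"]]
rels = [{"a", "c"}, {"f"}]
print(round(map_at_k(recs, rels, k=3), 4))
```

The first user’s relevant items appear at ranks 1 and 3 (AP = 5/6); the second user’s single relevant item appears at rank 3 (AP = 1/3), giving a MAP@3 of 7/12.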

Coverage

The coverage of a recommender system is a measure of the domain of items that the system can make recommendations for. The term “coverage” is associated with two concepts:

  1. the percentage of the items for which the system can generate a recommendation, and
  2. the percentage of the available items that have been recommended to users.
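The second notion above (catalog coverage) can be sketched directly; the catalog and recommendation lists are illustrative:

```python
def catalog_coverage(recommendation_lists, catalog):
    """Fraction of the catalog that appears in at least one
    user's recommendation list."""
    recommended = set().union(*map(set, recommendation_lists))
    return len(recommended & set(catalog)) / len(catalog)

catalog = ["m1", "m2", "m3", "m4", "m5"]
recs = [["m1", "m2"], ["m2", "m3"], ["m1", "m3"]]
print(catalog_coverage(recs, catalog))  # 3 of 5 items ever recommended
```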

Personalization

Personalization is a way to match the right type of service, item, or content to the right user. When approached correctly, it helps increase user engagement and the interactions people have with a service, product, website, or app.
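One common way to quantify personalization (an assumption here, since the text does not fix a formula) is the average pairwise dissimilarity between users’ recommendation lists: identical lists for everyone score 0, fully distinct lists score 1.

```python
from itertools import combinations

def personalization(recommendation_lists):
    """1 minus the average pairwise Jaccard overlap between users'
    recommendation lists: identical lists -> 0, disjoint lists -> 1."""
    overlaps = []
    for a, b in combinations(map(set, recommendation_lists), 2):
        overlaps.append(len(a & b) / len(a | b))
    return 1 - sum(overlaps) / len(overlaps)

same = [["m1", "m2"], ["m1", "m2"]]
mixed = [["m1", "m2"], ["m2", "m3"]]
print(personalization(same), round(personalization(mixed), 3))
```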

Intralist Similarity

Intralist similarity is the average similarity of all pairs of items in a user’s recommendation list, where similarity can be based on item attributes such as a movie’s genre. If the recommended items are very similar with respect to the selected features, this value is high; a lower value indicates a more diverse list.
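A minimal sketch of intralist similarity, assuming one-hot genre vectors as the item features (the items and genres are illustrative) and cosine similarity between items:

```python
import numpy as np
from itertools import combinations

# One-hot genre features for the candidate items (illustrative).
item_features = {
    "m1": np.array([1.0, 0.0, 0.0]),  # action
    "m2": np.array([1.0, 0.0, 0.0]),  # action
    "m3": np.array([0.0, 1.0, 0.0]),  # comedy
}

def cosine(u, v):
    return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v)))

def intralist_similarity(recommended):
    """Average pairwise cosine similarity of the recommended items'
    feature vectors; higher means a less diverse list."""
    sims = [cosine(item_features[a], item_features[b])
            for a, b in combinations(recommended, 2)]
    return sum(sims) / len(sims)

print(intralist_similarity(["m1", "m2"]))  # identical genres -> 1.0
print(round(intralist_similarity(["m1", "m2", "m3"]), 3))
```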

Conclusion

This article covered various topics related to recommender systems: what a recommender system is, its use cases, and its types (content-based filtering, collaborative filtering, and hybrid) with their code implementations.

Apart from this, we also learned their pros, and cons, and finally, discussed some evaluation metrics to evaluate the model.

Stay Tuned!!

Keep learning and keep implementing!!
