# Recommender Systems
## Formalization
- Set of users: $\mathscr{U}$
- Set of items: $\mathscr{F}$
- Set of ratings already recorded: $\mathscr{R}$
- For user $u \in \mathscr{U}$ and item $i \in \mathscr{F}$, rating: $r_{ui}$
- Subset of users who have rated an item $i$: $\mathscr{U}_{i}$ (likewise $\mathscr{F}_{u}$, the items rated by user $u$)
- $\mathscr{F}_{uv} = \mathscr{F}_{u} \cap \mathscr{F}_{v}$ (items rated by both $u$ and $v$)
- $\mathscr{U}_{ij} = \mathscr{U}_{i} \cap \mathscr{U}_{j}$ (users who have rated both $i$ and $j$)
- Set of possible values for a rating: $\mathcal{S}$
- Recommendation problem: Given $\mathscr{R}$,
learn $f: \mathscr{U} \times \mathscr{F} \rightarrow \mathcal{S}$ that predicts unknown ratings
Given $f$, the best item for the active user $u_a$ is the unrated item with the highest predicted rating
- $i^{*}=\operatorname{argmax}_{j \in \mathscr{F} \setminus \mathscr{F}_{u_a}} f\left(u_{a}, j\right)$
Alternatively
- The top-$N$ recommendation task: recommend a ranked list $L\left(u_{a}\right)$ of the $N$ best items
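A minimal sketch of this formalization, assuming a hypothetical `predict(user, item)` function as a stand-in for a trained $f$:

```python
def top_n(predict, user, all_items, rated_by_user, n=10):
    """Rank the items the user has not yet rated by predicted rating.

    `predict(user, item)` plays the role of f: U x F -> S above; it is
    an assumed stand-in for any trained rating model, not a fixed API.
    """
    candidates = [i for i in all_items if i not in rated_by_user]
    # Sort unrated items by predicted rating, highest first
    ranked = sorted(candidates, key=lambda i: predict(user, i), reverse=True)
    return ranked[:n]  # L(u_a); ranked[0] is the single best item i*
```

With `n=1` this reduces to the argmax above.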
## Evaluation
If ratings are available, one option is to use MAE/RMSE (on a held-out test set)
More commonly, however:
- Treat it as a ranking problem
- Use common ranking metrics like [[Precision and Recall]], [[Discounted Cumulative Gain]] etc.
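A small sketch of both evaluation styles, assuming held-out test ratings and a graded relevance dictionary are available (both assumptions for illustration):

```python
import math

def rmse(predicted, actual):
    """Root mean squared error over held-out (prediction, truth) rating pairs."""
    return math.sqrt(sum((p - a) ** 2 for p, a in zip(predicted, actual))
                     / len(actual))

def precision_at_n(recommended, relevant, n):
    """Fraction of the top-N recommended items that are actually relevant."""
    return len(set(recommended[:n]) & set(relevant)) / n

def dcg_at_n(recommended, relevance, n):
    """Discounted cumulative gain: graded relevance, discounted by log2 of rank."""
    return sum(relevance.get(item, 0.0) / math.log2(rank + 2)
               for rank, item in enumerate(recommended[:n]))
```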
## Challenges
- Cold start
- What items to recommend for new users?
	- How to infer preferences for something "new"?
- Modelling preferences: dynamic vs. static preferences, short- vs. long-term tastes
- Users do not retain the same tastes
- "I'm in the moood for something new"
- Exploration vs Exploitation
	- Recommend based on view/rating history, or recommend in order to learn more about preferences?
- Exploitation - recommend only items that are likely to interest a user
- What about a new user?
- Exploration - recommend other items
- Learn user's preferences over un-encountered items
	- Related: serendipity and diversity issues
	- Common approaches: [[Multi-Armed Bandits]] (see the sketch after this list)
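A minimal epsilon-greedy bandit sketch of the exploration/exploitation trade-off, treating each item as an arm; the epsilon value and the running-mean reward model are illustrative assumptions, not a prescribed setup:

```python
import random

def epsilon_greedy_recommend(avg_reward, items, epsilon=0.1):
    """Pick an item, trading off exploration against exploitation.

    `avg_reward[i]` is the running mean feedback (click, rating, ...)
    observed for item i; epsilon=0.1 is an illustrative choice.
    """
    if random.random() < epsilon:
        # Exploration: recommend a random item to learn the user's
        # preferences over items they have not encountered yet
        return random.choice(items)
    # Exploitation: recommend the item with the best observed reward so far
    return max(items, key=lambda i: avg_reward.get(i, 0.0))

def update_reward(avg_reward, counts, item, reward):
    """Incrementally update the running mean reward after user feedback."""
    counts[item] = counts.get(item, 0) + 1
    avg_reward[item] = avg_reward.get(item, 0.0) \
        + (reward - avg_reward.get(item, 0.0)) / counts[item]
```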
Other challenges: scalability, explainability, transparency, trust, etc.