In my last post, we discussed the recommendation algorithm, the merchandising layer, and the presentation layer, as well as the fact that I am working on a KPI-optimizing algorithm that differs from a collaborative filtering algorithm. In running A/B tests on these differing approaches, we’ve found that the sites divide into two types.
Type 1: The proportion of users who use the recommendations is the same for groups A (KPI-optimizing recommendations) and B (traditional item-based collaborative filtering recommendations).
Type 2: The proportion of users who use the recommendations is lower for the new KPI-optimizing algorithm than for the more traditional algorithm.
However, for both types of sites, the total revenue was the same or higher for the group that was seeing the recommendations from the new KPI-optimizing algorithm.
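As a sketch of how the usage-rate comparison between groups A and B might be checked for significance, here is a standard two-proportion z-test. The counts are entirely hypothetical, chosen to resemble a Type 2 site where fewer group-A users interact with the recommendations:

```python
import math

def two_proportion_z(used_a, total_a, used_b, total_b):
    """Z statistic for the difference in recommendation-usage rates
    between groups A and B, using the pooled standard error."""
    p_a = used_a / total_a
    p_b = used_b / total_b
    p_pool = (used_a + used_b) / (total_a + total_b)
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / total_a + 1 / total_b))
    return (p_a - p_b) / se

# Hypothetical counts: 9% of group A vs 11% of group B use the recommendations.
z = two_proportion_z(used_a=900, total_a=10_000, used_b=1_100, total_b=10_000)
print(round(z, 2))  # a large |z| means the usage rates genuinely differ
```

With these made-up numbers the difference is far outside what chance would explain, which is the kind of check behind calling a site "Type 2" rather than noise.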
Think about this for a minute. For sites of Type 2, a smaller percentage of users interact with the recommendations, yet the site’s revenue improves. This can happen in one of two ways: either the users who do use the recommendations convert more often and/or spend more per purchase, or the users who do not use the recommendations end up converting more often and/or spending more than those who do.
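The arithmetic behind this is a simple weighted average. The following worked example uses hypothetical per-visitor revenue figures to show how a lower usage rate can coexist with higher total revenue:

```python
# Revenue per visitor decomposes into the usage-weighted average of the
# two segments: users who interact with the recommendations and users
# who do not. All numbers here are hypothetical.
def revenue_per_visitor(usage_rate, rev_if_used, rev_if_not_used):
    return usage_rate * rev_if_used + (1 - usage_rate) * rev_if_not_used

# Group B: traditional item-based collaborative filtering.
b = revenue_per_visitor(0.11, rev_if_used=4.00, rev_if_not_used=1.00)

# Group A: KPI-optimizing recommendations. Fewer users interact (9% vs 11%),
# but both segments convert and spend slightly better.
a = revenue_per_visitor(0.09, rev_if_used=5.00, rev_if_not_used=1.20)

print(a > b)  # lower usage, higher revenue per visitor
```

The point of the sketch: a small lift in the much larger non-using segment can outweigh a drop in the usage rate.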
Recommendations Provide Alternatives
Isn’t this the opposite of what we’re trying to do with a recommendation system? Don’t we want the users to use the recommendations, as that will lead them to find the items that they want to buy? Yes and No. What happens if the user has already found the item they want to buy? It’s right there in front of them, on the page they’re looking at. Traditional item-based collaborative filtering recommendations at this point will show them alternatives to the item they’ve chosen, which may cause them to start browsing again, and possibly slow conversion to purchase. KPI-optimizing recommendations show different sorts of items – ones that in some sense “go with” the current item – but which may be less distracting to a user who has already found the item they want. Net result – users who don’t use the recommendations convert more.
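To make the "alternatives" point concrete, here is a minimal sketch of item-based collaborative filtering using cosine similarity over co-purchase sets. The data and item names are invented; the point is that items bought by largely the same users score as most similar, so the top recommendation for an item tends to be a close substitute rather than a complement:

```python
from collections import defaultdict
from math import sqrt

# Hypothetical purchase histories: camera_a and camera_b are substitutes
# bought by overlapping users; the tripod is a complement.
purchases = {
    "u1": {"camera_a", "camera_b"},
    "u2": {"camera_a", "camera_b"},
    "u3": {"camera_a", "camera_b", "tripod"},
    "u4": {"camera_a", "tripod"},
}

# Invert to the set of users per item.
item_users = defaultdict(set)
for user, items in purchases.items():
    for item in items:
        item_users[item].add(user)

def cosine(i, j):
    """Cosine similarity between two items' binary user vectors."""
    shared = len(item_users[i] & item_users[j])
    return shared / sqrt(len(item_users[i]) * len(item_users[j]))

# Rank the other items by similarity to camera_a.
sims = sorted(
    ((cosine("camera_a", j), j) for j in item_users if j != "camera_a"),
    reverse=True,
)
print(sims[0][1])  # the substitute, camera_b, outranks the tripod
```

A KPI-optimizing algorithm would instead score candidates by their expected contribution to the target KPI, which in practice tends to surface complementary "goes with" items rather than substitutes like this.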
I should also point out that the KPI-optimizing algorithm does work as intended: users who interact with its recommendations typically convert at a higher rate and with a higher average order value than users interacting with the recommendations coming from the more traditional algorithm.
That’s what’s going on with sites of Type 2. What about Type 1? Why did the proportion of users who used the recommendations not change on these sites? Answer – The merchandising layer. These sites had filters applied to the output coming from the recommendation algorithm that restricted what was shown to the user. So the distraction factor was still there, but the users were being distracted by items which had the highest expected KPI – net result, conversions and revenue were both up for these sites.
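The merchandising layer described above amounts to business-rule filters sitting between the algorithm's raw output and the page. Here is a hypothetical sketch of such a filter; the item fields and rules are assumptions, not the actual product's configuration:

```python
# Raw output from the recommendation algorithm, annotated with the
# fields the merchandising rules act on (all values hypothetical).
recs = [
    {"sku": "A1", "category": "shoes", "in_stock": True,  "expected_kpi": 0.9},
    {"sku": "B2", "category": "hats",  "in_stock": True,  "expected_kpi": 0.7},
    {"sku": "C3", "category": "shoes", "in_stock": False, "expected_kpi": 0.8},
]

def merchandise(recs, allowed_categories, limit=2):
    """Apply merchandising filters to raw recommendations:
    keep in-stock items from allowed categories, best expected KPI first."""
    kept = [r for r in recs
            if r["in_stock"] and r["category"] in allowed_categories]
    kept.sort(key=lambda r: r["expected_kpi"], reverse=True)
    return kept[:limit]

shown = merchandise(recs, allowed_categories={"shoes", "hats"})
print([r["sku"] for r in shown])  # C3 is filtered out: out of stock
```

Because the filter runs on whatever the algorithm emits, swapping the algorithm underneath changes *which* items pass the rules without necessarily changing *how many* users see and click them, consistent with the Type 1 behavior above.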
An Important Combination
It has often been argued that the particular algorithm behind a recommendation system has little effect on the overall system. We’ve found that the algorithm is in fact important, and can increase lift, but exactly how it does so is a complicated combination of the algorithm, the merchandising, and the psychology of the web-site user.