Can AI recommendation algorithms provoke new interests instead of recycling familiar preferences?

Problem

Recommendation algorithms driven by artificial intelligence provide convenience, but they also lock us into our established preferences and so inhibit the discovery of new interests. Netflix constantly proposes similar films; LinkedIn job opportunities replicate past roles; social media platforms connect us with others like ourselves; Tinder repeatedly surfaces the same person with a different name. We are trapped inside who we are.

  • For individuals, this is an ethical problem because human freedom is confined by our established preferences. Our ability to change is restricted by our own past.
  • For society, this is a solidarity problem because users are funneled into echo chambers of narrow perspectives and polarization. Diversity collapses into tribalism.
  • For platforms, this is an economic problem because bored users disengage and leave the platform.

Solution

Re-engineer AI recommendations to provoke curiosity and new interests instead of echoing those already established.

Challenge

On one side, recommendations must disrupt the flow of similarity: the logic of filtering for nearness or resemblance to past choices. They must be discontinuous with established interests.

On the other side, recommendations must be engaging: they must appeal to users despite being atypical and leading toward unexplored possibilities.

One crude way to approximate this effect is to sprinkle random offerings among conventional recommendations, but the challenge is to do better than random: to propose possibilities that are engaging despite their unfamiliarity, or even because of it.
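
As a point of comparison, here is a minimal sketch of that random-injection baseline, assuming recommendations and the catalog are plain lists of item identifiers (the function and parameter names are illustrative, not from any existing system):

```python
import random

def inject_random(conventional, catalog, k=10, novelty_rate=0.2):
    """Crude baseline: fill a fraction of the k slots with items drawn
    uniformly from the catalog, the rest with conventional picks."""
    n_random = int(k * novelty_rate)
    pool = [item for item in catalog if item not in conventional]
    picks = random.sample(pool, min(n_random, len(pool)))
    return conventional[:k - len(picks)] + picks
```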

Strategy

The industry standard for recommendations is collaborative filtering, meaning that recommendations are produced through a logic of resemblance. If two users have enjoyed similar movies in the past, and one of these users enjoys a new offering, that offering will be recommended to the other user.
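
For concreteness, a minimal sketch of user-based collaborative filtering over explicit ratings (the data layout, function names, and the choice of cosine similarity are illustrative assumptions, not a reference implementation):

```python
from math import sqrt

def cosine(a, b):
    """Cosine similarity between two {item: rating} dicts."""
    shared = set(a) & set(b)
    dot = sum(a[i] * b[i] for i in shared)
    norm = sqrt(sum(v * v for v in a.values())) * sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

def collaborative_recommend(user, ratings, k=5):
    """Logic of resemblance: find the most similar other user and
    recommend what they enjoyed that the target user has not seen."""
    _, nearest = max((cosine(ratings[user], ratings[u]), u)
                     for u in ratings if u != user)
    seen = set(ratings[user])
    candidates = sorted((r, i) for i, r in ratings[nearest].items() if i not in seen)
    return [i for _, i in reversed(candidates)][:k]
```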

The Curiosity Engine employs antagonistic filtering together with explainability. Antagonistic filtering begins by identifying users who are significantly different from one another. Then, among those divergent users, a strand of similarity is located: perhaps a single movie or two that both strongly enjoy. From there, the remaining interests of each can be recommended to the other.
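
A minimal sketch of that procedure, reusing the cosine helper above (the rating threshold, names, and fallback behavior are assumptions for illustration):

```python
def antagonistic_recommend(user, ratings, like_threshold=4, k=5):
    """Recommend across difference rather than resemblance:
    1. find the user *least* similar to the target user overall,
    2. locate a strand of similarity (items both rated highly),
    3. if such a bridge exists, recommend that user's other favorites."""
    _, antipode = min((cosine(ratings[user], ratings[u]), u)
                      for u in ratings if u != user)
    bridge = [i for i in set(ratings[user]) & set(ratings[antipode])
              if ratings[user][i] >= like_threshold
              and ratings[antipode][i] >= like_threshold]
    if not bridge:
        return [], []  # no shared strand; a real system would fall back
    seen = set(ratings[user])
    picks = sorted((r, i) for i, r in ratings[antipode].items()
                   if i not in seen and r >= like_threshold)
    return [i for _, i in reversed(picks)][:k], bridge
```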

The idea is that the recommendations will be unfamiliar, since they emerge from a different type of person, yet not simply random, since a genuine overlap exists between the two users. Finally, the reason for each offering is explained, which empowers users to control the experiment in novel preferences and so increases engagement.
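
The explanation can be rendered directly from the bridge items returned above; a hypothetical usage with toy ratings:

```python
def explain(recommendation, bridge):
    """Surface the reason for an antagonistic recommendation."""
    return (f"Recommended because someone with very different taste "
            f"shares your love of {', '.join(bridge)} and also "
            f"strongly enjoys {recommendation}.")

ratings = {
    "ana":  {"Alien": 5, "Heat": 4, "Amélie": 1},
    "bo":   {"Amélie": 5, "Alien": 5, "Tampopo": 5, "Heat": 1},
    "cara": {"Alien": 5, "Heat": 5, "Ronin": 4},
}
recs, bridge = antagonistic_recommend("ana", ratings)
for item in recs:
    print(explain(item, bridge))
# -> Recommended because someone with very different taste shares
#    your love of Alien and also strongly enjoys Tampopo.
```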

Technical conclusion: the strategy is provocation, not accuracy. It seeks to provoke new interests rather than to accurately reproduce past preferences.

Benefits

The Curiosity Engine project increases human freedom, contributes to an open and diverse society, and helps platforms retain their users.
