Recommendation algorithms

Recommendation algorithms select the content users are exposed to when they land on a platform or open an application. Such algorithms are ubiquitous on social media, especially for organizing news feeds. They also rely on state-of-the-art machine learning, as they typically require high-quality image, video and natural language processing CovingtonAS-16 NaumovMSHS+19 IeJWNA+19.
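As a rough illustration, the sketch below mimics the two-stage "candidate generation then ranking" structure described in CovingtonAS-16. The embeddings and scoring model are random placeholders introduced for this example only, not the systems actually deployed by platforms.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical learned embeddings: in production these would come from large
# neural networks trained on user histories and item features.
N_VIDEOS, DIM = 10_000, 32
video_embeddings = rng.normal(size=(N_VIDEOS, DIM))

def candidate_generation(user_embedding, k=100):
    """Stage 1: retrieve the k videos whose embeddings best match the user's."""
    scores = video_embeddings @ user_embedding
    return np.argsort(-scores)[:k]

def ranking(user_embedding, candidate_ids, k=10):
    """Stage 2: re-score the candidates with a (here, trivial) ranking model
    and keep the top k to display in the feed."""
    scores = video_embeddings[candidate_ids] @ user_embedding  # placeholder model
    order = np.argsort(-scores)[:k]
    return candidate_ids[order]

user = rng.normal(size=DIM)  # stand-in for a learned user representation
feed = ranking(user, candidate_generation(user))
print(feed)                  # the 10 video ids this toy system would recommend
```

Real pipelines replace each placeholder with heavily optimized models, but the overall shape (cheap retrieval over millions of items, then a more expensive ranking step) is the same.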

Addressing the ethics of such algorithms has been argued to be a priority cause area for doing good Vendrov-19 ElmhamdiHoang-19FR, mostly because of the scale of their side effects, but also because of the scale of the opportunities that aligned recommendation algorithms would provide Hoang-20 HoangFE-21.

Stakeholders

It is common to consider that recommendation algorithms must be designed for the user. In practice, the company developing these algorithms is also an important stakeholder, as are the creators whose content is shared (or not) by the algorithms.

However, MilanoTF-20 argues that this is a narrow perspective. Given the scale of the side effects and opportunities of personalized recommendations, in terms of disinformation, misinformation, radicalization, manipulation, cyberbullying, hate, addiction, attention span, loneliness, depression, suicides, biases, privacy, transparency, deplatforming and increased existential risks, society at large should be regarded as a key stakeholder. Put differently, it seems relevant to acknowledge that each of us has (moral) preferences over what algorithms will recommend to other users.

Tournesol🌻 can be regarded as a project to elicit, collect and model such human preferences.
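As an illustration of what "modeling preferences" can mean, the sketch below fits a score per video from pairwise comparisons with a simple Bradley-Terry-style model. The comparison data, learning rate and regularization are assumptions made for this toy example; it is not a description of Tournesol's actual algorithm.

```python
import numpy as np

# Toy pairwise comparisons: (preferred, other) means a contributor judged the
# first video more worth recommending than the second.
comparisons = [(0, 1), (0, 2), (1, 2), (2, 3), (1, 3)]
n_videos = 4

# Bradley-Terry-style model: P(i preferred over j) = sigmoid(s_i - s_j).
# The scores s are fitted by regularized gradient ascent on the log-likelihood.
scores = np.zeros(n_videos)
learning_rate, regularization = 0.1, 0.05

for _ in range(2000):
    grad = -regularization * scores  # pull scores gently towards 0
    for i, j in comparisons:
        p = 1.0 / (1.0 + np.exp(scores[j] - scores[i]))  # predicted P(i beats j)
        grad[i] += 1.0 - p
        grad[j] -= 1.0 - p
    scores += learning_rate * grad

print(np.round(scores, 2))  # higher score = judged more recommendable
```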

Can recommendation algorithms become superintelligent?

One controversial question is whether recommendation algorithms can become superintelligent.

ElmhamdiHoang-19FR argues that today's recommendation algorithms are already superhuman at their task, simply because of its scale. Indeed, they process billions of textual and audiovisual contents per day, which are usually filtered for copyright infringement, hate speech and other unsuitable content SmarterEveryDay-19. They also monitor, analyze and predict the daily habits of billions of humans.

Moreover, their task falls into the general framework of reinforcement learning IeJWNA+19. Recommendation algorithms observe vast amounts of complex data and take billions of recommendation decisions to maximize their rewards, which usually revolve around user engagement. In principle, artificial general intelligence (AGI) algorithms like AIXI would thus be the natural candidates for solving content recommendation.
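The reinforcement learning framing can be made concrete with a toy, stateless bandit simplification: the recommender repeatedly picks a video (action), observes whether the user engages with it (reward), and updates its estimates. The simulated user model and the epsilon-greedy policy below are assumptions for illustration, far simpler than the algorithms of IeJWNA+19.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy environment: each video has an unknown probability that the user engages
# with it (e.g. clicks or keeps watching). Real systems observe far richer state.
true_engagement = rng.uniform(0.05, 0.6, size=20)

def user_response(video_id):
    """Simulated reward: 1 if the user engages with the recommended video."""
    return float(rng.random() < true_engagement[video_id])

# Epsilon-greedy policy: mostly recommend the video with the best estimated
# engagement, sometimes explore a random one.
estimates = np.zeros(20)
counts = np.zeros(20)
epsilon = 0.1

for step in range(5000):
    if rng.random() < epsilon:
        action = int(rng.integers(20))
    else:
        action = int(np.argmax(estimates))
    reward = user_response(action)
    counts[action] += 1
    estimates[action] += (reward - estimates[action]) / counts[action]

print("recommended most:", int(np.argmax(estimates)),
      "| truly most engaging:", int(np.argmax(true_engagement)))
```

Production systems differ in almost every detail (they recommend slates rather than single items, use rich user state, and may optimize long-term value), but the loop of acting, observing engagement and updating is the reinforcement learning structure the paragraph above refers to.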

Perhaps most importantly, recommendation algorithms have become central to the business of social media companies. This induces enormous incentives to constantly optimize these algorithms, and to exploit state-of-the-art image, video and language processing algorithms FedusZS-21. The dismissal of Timnit Gebru from Google's AI ethics team is arguably concerning in this regard TechnologyReview-20.

Note that this question is also discussed in our article on YouTube.