Radicalization is a complex phenomenon that is difficult to define. Arguably, a clear sign of radicalization is increased division and polarization between two or more opposing sides, with heightened tension, aggression and hate. In such a case, at least one side may be argued to be undergoing radicalization.
There has long been controversy about the extent of online radicalization. Aral-20 provides a useful survey of relatively reliable results. However, much of the research on the topic seems much less compelling than its claimed results (as discussed below). In particular, some studies arguably fail to factor in cognitive biases, even though BailABBC+18 suggest that these are critical to online polarization. More recently, the Facebook Files revealed that Facebook's internal auditing detected increased polarization caused by its newsfeed algorithm, while a study by Twitter HKOBSH-21 concluded that Twitter's algorithm amplifies politicians compared to a chronological tweet ordering, with a bias favoring right-wing parties that varies strongly from one country to another.
Interestingly, the case of far-right radicalization has been extensively documented by former white supremacist Christian Picciolini, especially in his books Picciolini-17 Picciolini-20. His work also suggests pathways towards deradicalization.
PerceptionGap highlights exaggerated misconceptions that each side holds about the other side's preferences, which they call the perception gap. Interestingly, independent voters' estimates of a side's views fall in between that side's actual views and the other side's estimate of those views. It is noteworthy that news consumption and education do not appear to help Mendelberg-02 Muller-08 KahanPDS-17.
TuckerGBVS+18 surveys research on offline and online disinformation and polarization. It notes that the research is hard to analyze because of terminology inconsistencies across papers. Also, much of it relies on self-reporting, which seems unreliable given cognitive biases, especially on political questions. Moreover, the online environment is changing at a rapid rate, which means that past studies may become obsolete. Finally, much of it seems to come from the deliberative tradition of political theory, which they argue lacks quantitative methods.
This calls for caution when analyzing the results of polarization research, which is why we dedicate a "less compelling findings" section to findings that do not seem very reliable. In particular, the study of online polarization seems in need of unconventional, clever methods to probe what is going on in cyberspace.
By studying YouTube comments, RiberioOWAM-20 highlighted a pattern of radicalization: alt-right commenters used to be "intellectual dark web" commenters, who in turn used to be alt-lite commenters. They also found that alt-lite videos are much easier to reach than alt-right videos from intellectual dark web videos. This suggests that commenters may have moved from the intellectual dark web to the alt-right partly thanks to the YouTube recommender.
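The reachability measurement behind this kind of finding can be sketched as random walks along recommendation edges: starting from one community's videos, count how often a walk reaches another community within a few hops. Everything below (the graph, the labels, the parameters) is an illustrative toy example, not data or code from RiberioOWAM-20.

```python
import random

# Hypothetical toy recommendation graph: each video maps to the videos
# a recommender might suggest next. Labels mark each video's community.
edges = {
    "idw1": ["idw2", "lite1"],
    "idw2": ["lite1", "lite2"],
    "lite1": ["lite2", "alt1"],
    "lite2": ["alt1"],
    "alt1": ["alt1"],
}
label = {"idw1": "idw", "idw2": "idw",
         "lite1": "alt-lite", "lite2": "alt-lite",
         "alt1": "alt-right"}

def reach_probability(start, target_label, steps=3, walks=10_000, seed=0):
    """Fraction of random walks from `start` that hit a video carrying
    `target_label` within `steps` recommendation hops."""
    rng = random.Random(seed)
    hits = 0
    for _ in range(walks):
        node = start
        for _ in range(steps):
            node = rng.choice(edges[node])
            if label[node] == target_label:
                hits += 1
                break
    return hits / walks

p_lite = reach_probability("idw1", "alt-lite")
p_alt = reach_probability("idw1", "alt-right")
print(f"alt-lite reach: {p_lite:.2f}, alt-right reach: {p_alt:.2f}")
```

On this toy graph, alt-lite videos are reached more often than alt-right videos from an intellectual-dark-web start, mirroring the qualitative pattern reported in the study.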
ShoreBD-18 distinguish two user profiles: influencers (the top 1%) and typical users. Influencers tend to write more polarized content than they receive, while typical users tend to write less polarized content than they receive. (Note, though, that ShoreBD-18 arguably oversell the results with a misleading title.)
KKaakinenRNMKO18 showed that online hate is produced by rare but highly influential users. They also found an association between low offline social capital and hate production (which is consistent with CacioppoCacioppo-14 VanhalstGP-17 Kurzgesagt-19), as well as an association between high online social capital and hate production. The direction of causality in this latter case seems hard to diagnose.
MatzKNS-17 suggest the effectiveness of mass targeted persuasion through social media (more to come).
BailABBC+18 studied a one-week treatment in which Democrat and Republican Twitter users were paid to follow a bot retweeting tweets by leading opponents. They found increased polarization, mostly among Republicans. While the paper (brilliantly) points out limits to the generalizability of the findings, this line of study is clearly critical for the design of robustly beneficial algorithms.
BailGS-17 showed greater increases in polarization among older surveyed subjects. They also propose a model suggesting that the increase in polarization is greater for non-Internet users. Nevertheless, they observe increased polarization for all surveyed subjects. It remains to be explained why polarization has increased so much over the last decade.
Less compelling findings
There are a large number of studies with conflicting conclusions. Unfortunately, as these conflicts suggest, many of the studies are not very compelling, as there is often a gap between the empirical data and the claimed findings of the study.
Typically, "echo chambers" are sometimes measured by automated estimates of the sharing of content we agree with, and sometimes by self-reported exposure to content we disagree with. These two measures are very distinct. In fact, in the latter case, polarization may increase as echo chambers decrease, for instance if the content we are exposed to caricatures the opponents' ideas. Unfortunately, papers' titles and abstracts are often vague about their actual findings, which adds to the confusion. Worse, they sometimes claim definite conclusions despite the weaknesses of their analysis. Publication bias should make us even more careful.
The core problem is that studying online polarization is just very, very, very hard. The internet has arguably become as complex as the human brain (if not more!), with thousands of billions of human and algorithmic nodes, and millions of billions of evolving interactions between nodes via complex (private) messaging that often involves multimedia content. Such systems are not easily interpretable. In particular, online information is highly personalized, especially on social media or on YouTube. Today's greatest talents are needed to construct the appropriate tools to probe cyberspace.
Nevertheless, it seems worth listing a few findings, even if they are disputable, as long as their limits are highlighted, especially since they have nevertheless gained attention.
DuboisBlank-18 argue against echo chambers. But they rely on self-reported claims of being in echo chambers, which seems very unreliable given our cognitive biases on such matters. Their sample size also seems small (~350, further controlled for 6 variables). The paper does make an interesting point, though, about the flaws of studies focused on a single media.
HaimGB-17 NechushtaiLewis-19 showed evidence of little personalization for identical queries on Google Search and Google News. However, Allgaier-19 showed that YouTube Search gave very different answers to slightly different queries: "climate change" yielded videos in line with the scientific consensus, while "climate manipulation" returned climate change denialist videos.
MöllerTHE-18 argue that algorithms should not be blamed for a lack of diversity in recommendations. The study compares journalists' recommendations to classical recommendation algorithms (like matrix factorization) on a set of 1000 articles. But the study fails to factor in the impact of recommendation algorithms over time, which has been argued to be critical, as repeated exposure seems to take weeks to have a strong impact HohnholdOT-15. Also, the algorithms they studied seem very different from YouTube's algorithms, and the pool of content to recommend is vastly smaller than the pool of YouTube videos. These major limitations strongly undermine the radical claim of MöllerTHE-18.
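For readers unfamiliar with it, here is a minimal sketch of matrix factorization, the kind of classical baseline such studies compare against: user and item factor vectors are learned by stochastic gradient descent on observed ratings, then unobserved items are ranked by predicted score. The matrix size, sparsity and hyperparameters below are all illustrative assumptions, not the study's setup.

```python
import numpy as np

rng = np.random.default_rng(0)
n_users, n_items, k = 5, 8, 3

# Sparse user-item rating matrix; 0 means "unobserved".
R = rng.integers(0, 6, size=(n_users, n_items)) * (rng.random((n_users, n_items)) < 0.5)

P = 0.1 * rng.standard_normal((n_users, k))   # user factors
Q = 0.1 * rng.standard_normal((n_items, k))   # item factors

lr, reg = 0.02, 0.01
observed = list(zip(*np.nonzero(R)))
for _ in range(200):                          # SGD epochs
    for u, i in observed:
        err = R[u, i] - P[u] @ Q[i]           # prediction error on one rating
        P[u] += lr * (err * Q[i] - reg * P[u])
        Q[i] += lr * (err * P[u] - reg * Q[i])

# Recommend the unobserved item with the highest predicted score for user 0.
scores = P[0] @ Q.T
scores[R[0] > 0] = -np.inf
print("recommended item for user 0:", int(np.argmax(scores)))
```

The point relevant to the critique above: such a model only reranks a fixed, small pool of items from a static snapshot of ratings, so it cannot capture the long-term, repeated-exposure dynamics that HohnholdOT-15 argue matter.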
It is unfortunate that such limited evidence against filter bubbles has led to radical conclusions, e.g. Bruns-19 calling "echo chambers" and "filter bubbles" the "dumbest metaphor on the Internet". While these terms are indeed imperfect, such criticisms seem to miss out on the quality of exposure to opposing views. It seems quite likely that social media highlight the most extreme opposing views, which would in fact increase polarization. In this sense, the "bubble" would not only correspond to individuals being exposed only to their own views; it would also correspond to individuals being exposed mostly to the most caricatural and easily criticizable views of the opposite side.
One challenge in designing robustly beneficial recommendation algorithms is that humans suffer from numerous cognitive biases, like ingroup bias. LordRL-79 showed that subjects with strong views on capital punishment became more confident in their views when exposed to the same mixed evidence on capital punishment, no matter what those views were. BailABBC+18 seem to confirm this, as they even suggest that exposure to alternative views on social media can increase individuals' confidence in their own views (caveats apply!).
Moreover, YouTube seems quite different from other offline, online and social media, as its recommender algorithm can pull any video from a gigantic reservoir of highly biased content. The algorithm, which is responsible for 70% of views, seems to play a much more critical role than algorithms on other platforms cnet-18.