Unfortunately, there is growing evidence that disinformation has become prevalent on social media, with malicious activities from both governmental entities and private companies SmarterEveryDay-19 BradshawHoward-19.
In 2018, Google removed 2.3 billion ads that violated its advertising policies VentureBeat-19.
Between April and September 2019 alone, Facebook removed 3.2 billion fake accounts Reuters-19, along with millions of child abuse posts.
BradshawHoward-19 report that, in 2019, 70 countries were found to be using disinformation campaigns, and observe a concerning increasing trend over the last few years.
ElmasOOA-19 analyze and expose (lateral) astroturfing accounts, which are fake accounts used to manipulate trending topics on Twitter.
Huawei's disinformation campaign using fake Twitter accounts has also been exposed NYTimes-21.
Arguably, social media platforms can be compared to physical territories. In the absence of content moderation, those who invest the most will likely conquer the territory. Unfortunately, all sorts of entities currently have huge incentives, both economic and political, to conquer this territory. A flat territory thus seems likely to be invaded by non-benevolent forces.
Unfortunately, such territories are still relatively flat, in the sense that they do not favor more reliable and desirable content. This has concerning implications for global misinformation, with major associated risks in terms of climate change, public health, cyberbullying, national security and existential risks.
Another unfortunate feature of social media platforms is that they are extremely opaque to external entities, including academic researchers, as Google, Facebook and others do not seem to be under enough legal and social pressure to share their data (and algorithms) with auditors. This makes disinformation and misinformation very hard to study, and this may be even more so on more encrypted platforms.
More generally, there seems to be a tension between privacy and content moderation PhilosophyOfDataScience-20. Indeed, content moderation requires analyzing content; yet fully encrypted content cannot be analyzed. Aral-20 argues that too much focus may have been given to privacy, which has hindered the study of misinformation and of effective means to mitigate it.
There also seems to be a tension between antitrust (or deplatforming) and content moderation, as evidenced recently by the case of Parler NYTimes-21. Indeed, a profusion of alternatives to mainstream social media might allow radicalized subgroups to adopt platforms with limited content moderation and strong privacy, which may make disinformation and misinformation significantly harder to both monitor and counteract ElmhamdiHoang-19FR.