
Beyond Content Moderation: Additional Approaches to Addressing the Spread of Harmful Content on Social Media Platforms
Reducing the spread of harmful content on social media platforms (SMPs) has become a critical goal for regulators and governments around the world. Most of the conversation so far has focused on developing content moderation strategies that can be used to compel platforms to take down content deemed ‘illegal.’
In India, the Information Technology Rules (2021) frame SMPs as intermediaries and outline their content moderation responsibilities. The Rules also mandate grievance redressal mechanisms and place additional obligations on ‘significant social media intermediaries,’ including obligations to ensure that users do not post certain categories of harmful content and to swiftly remove any content deemed unlawful under the Rules.
This approach to limiting the spread of harmful content online places the onus on platforms to restrict or moderate content deemed harmful. Such content-moderation-focused approaches alone, however, will not be adequate, and they can result in restrictions on rights. The approach is limited because:
- Defining “harmful content” is difficult, and the definition shifts with time and context. Allowing governments and/or platforms to define it risks curtailing freedom of speech.
- Dominant social media platforms’ decisions about which content to show, moderate, and amplify cannot be seen in isolation from a business model that centres on behavioural and personalized advertising.
- Automated content moderation systems are fraught with challenges, particularly for non-English language content, and manual content moderation has been linked to psychological trauma and post-traumatic stress among human content moderators.
This form of content moderation assumes that platforms function as intermediaries rather than as content curators. Social media platforms do undoubtedly function as intermediaries, and the safe harbour protections they enjoy have tremendous value, but this framing discounts the fact that these platforms also play a curatorial role through algorithmic recommendation systems optimised to serve their business interests. These systems determine what content is shown to each user, usually on the basis of numerous personal data points. Recommendation algorithms therefore play a significant role in what users see and, by extension, can facilitate the spread of harmful content across users. Regulating them is pivotal to addressing the spread of harmful content online.
Given these limitations of current content moderation strategies, the report suggests that equal, if not more, policy attention should be paid to content recommendation algorithms that are responsible for algorithmic amplification of harmful content. Such measures could include:
- Increasing user control and agency: users should be able to know why and how content is recommended to them, and should have the option to opt in to, or out of, the use of such algorithms.
- Encouraging platforms to adopt technical solutions, known as circuit breakers, that limit the virality of harmful content.
- Mandating interoperability to encourage a more competitive environment where new platforms can emerge and compete with incumbents. This way no single platform would have control over the nature and direction of discourse through amplification and virality.
- Passing strict privacy legislation that limits what data platforms can collect and use for content recommendation. For example, such legislation could prohibit the collection and use of sensitive personal data such as health data.
- Addressing social media platforms’ underlying advertising-based business model, which incentivises the use of algorithmic recommendations and the spread of harmful content.
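To make the circuit-breaker proposal above more concrete, the sketch below shows one way such a mechanism might work in principle. This is an illustrative toy, not a description of any platform's actual system: the class name, the sliding-window approach, and the thresholds are all assumptions chosen for clarity.

```python
import time
from collections import defaultdict, deque


class ViralityCircuitBreaker:
    """Illustrative sketch of a virality circuit breaker: once a post's
    share velocity exceeds a threshold, algorithmic amplification is
    paused pending human review. All parameters are arbitrary
    placeholders, not values used by any real platform."""

    def __init__(self, max_shares_per_window=1000, window_seconds=3600):
        self.max_shares = max_shares_per_window
        self.window = window_seconds
        self.share_log = defaultdict(deque)  # post_id -> share timestamps
        self.paused = set()                  # posts awaiting review

    def record_share(self, post_id, now=None):
        now = time.time() if now is None else now
        log = self.share_log[post_id]
        log.append(now)
        # Drop shares that have fallen outside the sliding window.
        while log and now - log[0] > self.window:
            log.popleft()
        # Trip the breaker if share velocity exceeds the threshold.
        if len(log) > self.max_shares:
            self.paused.add(post_id)

    def may_amplify(self, post_id):
        """A recommendation system would consult this before boosting a post."""
        return post_id not in self.paused

    def clear(self, post_id):
        """A reviewer has cleared the post; resume amplification."""
        self.paused.discard(post_id)
```

In this sketch, tripping the breaker does not remove the post or prevent users from seeing it directly; it only withholds algorithmic amplification until review, which is what distinguishes circuit breakers from takedown-based moderation.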
None of these solutions is perfect in itself, and further research and experimentation are required from both researchers and governments. What is clear, however, is that the current content moderation regime must expand to address content recommendation and algorithmic amplification.