Problematizing digital content moderation and its impact on online harm reduction
In recent years, a shift in the selling and buying of illicit psychoactive drugs from darknet cryptomarkets to publicly accessible social media platforms has raised a number of research and policy questions. While emerging research on drugs and social media focuses on the specifics of user experiences in selling and buying controlled drugs and on the digitally mediated features enabling these activities, little has been written on the potential of social media for harm reduction work. This paper aims to address this gap by analyzing the content moderation policies of social media platforms such as Instagram and Facebook in order to examine the impact of prohibitionist policies as enabled and upheld on social media. While some platforms are less restrictive about harm reduction content (e.g., TikTok), others implement higher degrees of moderation, including removal of individual content and banning of entire profile pages (e.g., Instagram). Moreover, the increasingly automated nature of content moderation and algorithmic developments that personalize individually consumed content are blurring the boundaries between professional and peer-led knowledge and complicating access to and filtering of needed content. We argue that the potential for online harm reduction is ripe, and that a joint effort by professional harm reduction organizations, academic researchers, and social media companies can bear fruit in designing better content moderation policies.