Terrorists and violent extremists have historically used social media to share content, which search and recommender systems can algorithmically amplify in certain contexts. It has therefore been suggested that algorithms could be used to amplify counter-speech to target audiences as a countering violent extremism strategy. However, there is limited research on the effectiveness of this approach, or on the ethical principles that must underpin such campaigns in order to safeguard user well-being, maintain user agency and avoid counter-productive impacts such as reinforcing extremist views and increasing polarisation.
Through a mixed-methods empirical design, this research aims to address this gap by exploring the effectiveness and ethics of the algorithmic amplification of counter-speech.
This research is important to inform policymakers, practitioners and tech companies about how to ethically and effectively use algorithms to amplify counter-speech and dissuade individuals from engaging with extremist content. It is particularly timely given recent regulatory efforts such as the Online Safety Bill and the Digital Services Act. Furthermore, this research will identify where resources and future research can address existing challenges, aiding efforts to combat online extremism and mitigating the potential negative impacts associated with counter-speech.