Due to the ever-increasing volume of content, social media companies are employing machine learning and artificial intelligence (ML/AI) to scale content moderation. However, context is often key to detecting extremist propaganda and other violative content, and ML/AI struggles to interpret context. Groups across extremist ideologies are adept at gaming platforms' terms of service, for example by posting content that seems innocuous when taken at face value, or violative content that has been modified to evade detection. This is even more problematic for smaller companies, which lack the resources of the major platforms yet host the majority of this content.
Whilst some of the modifications employed by terrorist groups and sympathisers to evade detection are basic, others are more sophisticated, exploiting commercial or open-source anti-forensic methods and tools. The aim of this project is to study the application of digital forensics methods, together with machine learning techniques, to the automated detection of terrorist content that has been modified to evade identification. The project will draw on criminology frameworks as well as computer science to advance understanding of terrorists' use of online spaces and their propaganda dissemination strategies, informing the development of tools to detect such modified content.
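To illustrate the underlying technical problem (this sketch is not part of the project's methodology), even a trivial, visually imperceptible modification changes a file's cryptographic hash entirely, defeating exact-match detection against a database of known violative content, whereas a perceptual hash tolerates small changes. The toy average-hash on a synthetic 4x4 "image" below is a hypothetical illustration, not a production technique:

```python
import hashlib

def average_hash(pixels):
    # Perceptual hash sketch: one bit per pixel, set if the pixel
    # is brighter than the image's mean brightness.
    flat = [p for row in pixels for p in row]
    mean = sum(flat) / len(flat)
    return ''.join('1' if p > mean else '0' for p in flat)

def sha256_hex(pixels):
    # Cryptographic hash of the raw pixel bytes (exact-match fingerprint).
    return hashlib.sha256(bytes(p for row in pixels for p in row)).hexdigest()

def hamming(a, b):
    # Number of differing bits between two perceptual hashes.
    return sum(x != y for x, y in zip(a, b))

# Toy 4x4 grayscale "image" (values 0-255).
original = [[200, 200, 50, 50],
            [200, 200, 50, 50],
            [50, 50, 200, 200],
            [50, 50, 200, 200]]

# Slight, imperceptible modification: one pixel brightened by 5.
modified = [row[:] for row in original]
modified[0][0] += 5

# Exact-match hash breaks: the two digests no longer agree.
print(sha256_hex(original) == sha256_hex(modified))   # False
# Perceptual hash is unchanged: Hamming distance of 0.
print(hamming(average_hash(original), average_hash(modified)))  # 0
```

This is the kind of robustness gap that more sophisticated anti-forensic modifications are designed to exploit, by pushing a file's perceptual fingerprint just far enough from known hashes to fall outside a matching threshold.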