Automatic Moderation of Online Discussion Sites
Jean-Yves Delort, Bavani Arunasalam, and Cecile Paris
International Journal of Electronic Commerce
Volume 15, Number 3, Spring 2011, pp. 9.
Abstract: Online discussion sites are plagued with various types of unwanted content, such as spam and obscene or malicious messages. Both prevention-based and detection-based techniques have been proposed to filter inappropriate content from online discussion sites. Yet, even though prevention techniques have been widely adopted, detection of inappropriate content remains mostly a manual task. Existing detection techniques, which fall into rule-based and statistical techniques, suffer from various limitations. Rule-based techniques usually consist of manually crafted rules or blacklists of keywords. Both are time-consuming to create and tend to generate many false positives and false negatives. Statistical techniques typically use corpora of labeled examples to train a classifier to tell “good” and “bad” messages apart. Although statistical techniques are generally more robust than rule-based techniques, they are difficult to deploy because of the prohibitive cost of manually labeling examples.
In this paper we describe a novel classification technique to train a classifier from a partially labeled corpus and use it to moderate inappropriate content on online discussion sites. Partially labeled corpora are much easier to produce than completely labeled corpora, as they consist only of unlabeled examples and examples labeled with a single class (e.g., “bad”). We implemented and tested this technique on a corpus of messages posted on a stock message board and compared it with two baseline techniques. Results show that our method outperforms the two baselines and that it can be used to significantly reduce the number of messages that need to be reviewed by human moderators.
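To illustrate the general idea of learning from a partially labeled corpus, the sketch below trains a simple word-based scorer from a handful of “bad”-labeled messages and a pool of unlabeled messages, treating the unlabeled pool as a noisy proxy for the “good” class. This is a common positive-and-unlabeled baseline, not the paper’s actual method; the function names and toy messages are illustrative assumptions.

```python
# Hedged sketch: rank messages for moderation using a log-likelihood-ratio
# scorer trained on "bad" examples vs. an unlabeled pool (a simple PU baseline;
# NOT the paper's technique). All names and data here are illustrative.
from collections import Counter
import math


def train_pu_scorer(bad_msgs, unlabeled_msgs, smoothing=1.0):
    """Return a scoring function; higher scores suggest 'bad' content."""
    bad_counts = Counter(w for m in bad_msgs for w in m.lower().split())
    unl_counts = Counter(w for m in unlabeled_msgs for w in m.lower().split())
    vocab_size = len(set(bad_counts) | set(unl_counts))
    bad_total = sum(bad_counts.values())
    unl_total = sum(unl_counts.values())

    def score(msg):
        # Sum of per-word log-likelihood ratios with Laplace smoothing.
        s = 0.0
        for w in msg.lower().split():
            p_bad = (bad_counts[w] + smoothing) / (bad_total + smoothing * vocab_size)
            p_unl = (unl_counts[w] + smoothing) / (unl_total + smoothing * vocab_size)
            s += math.log(p_bad / p_unl)
        return s

    return score


# Toy data: a few moderator-flagged "bad" messages and an unlabeled pool.
bad = ["buy now scam pump", "scam stock pump dump"]
unlabeled = ["earnings look solid this quarter", "what do you think of the merger"]
score = train_pu_scorer(bad, unlabeled)

# Rank incoming messages so moderators review the most suspicious first.
incoming = ["merger looks solid", "pump and dump scam"]
ranked = sorted(incoming, key=score, reverse=True)
```

In a deployment of this kind, moderators would review only the top of the ranked list, which is how such a scorer can reduce the manual review load the abstract describes.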
Key Words and Phrases: automatic classifiers, automatic moderation, content filtering, online discussion sites