Troublemakers on online messaging services and forums, commonly known as trolls, are a persistent problem for websites. Trolling epidemics and ultra-aggressive users can drive away longtime members of a site, even when moderators are hard at work keeping things peaceful on the boards.
A new algorithm is in development, built on research from Cornell and Stanford universities, that detects the behavior of users on a message board and flags them as troublemakers based on the comments they make. The study found that the topics certain users view are also a marker for antisocial behavior.
“We find that such users tend to concentrate their efforts in a small number of threads, are more likely to post irrelevantly, and are more successful at garnering responses from other users,” the researchers said in a pre-publication paper on antisocial behavior online.
The study examined the behavior of users who were eventually banned from a message board, looking at the pages they viewed and the comments they wrote up until the moment they were kicked off the board.
“Studying the evolution of these users from the moment they join a community up to when they get banned, we find that not only do they write worse than other users over time, but they also become increasingly less tolerated by the community,” the study said.
The commenting service that provided the data was Disqus; the investigation studied around 35 million posts by almost 2 million users. Of those users, some 50,000 trolls were banned from one of the three Disqus-powered websites studied.
Another indicator was the language and text of the comments themselves: banned users' posts resembled one another far more than average, non-banned users' posts did. Banned users also concentrated more of their time in individual threads.
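To make those signals concrete, here is a rough sketch in Python of the two measures described above: how self-similar a user's comments are, and how concentrated their posting is in individual threads. The TF-IDF and cosine-similarity approach and the function names are illustrative assumptions, not details taken from the paper.

```python
# Illustrative sketch only -- not the researchers' actual pipeline.
from collections import Counter

from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity


def self_similarity(posts):
    """Mean pairwise cosine similarity between one user's posts.

    Banned users in the study wrote posts that resembled each other
    more than the average user's posts did.
    """
    if len(posts) < 2:
        return 0.0
    vectors = TfidfVectorizer().fit_transform(posts)
    sims = cosine_similarity(vectors)
    n = len(posts)
    # Average the off-diagonal entries (each unordered pair counted once).
    return (sims.sum() - n) / (n * (n - 1))


def thread_concentration(thread_ids):
    """Fraction of a user's posts that fall in their single busiest thread."""
    counts = Counter(thread_ids)
    return max(counts.values()) / len(thread_ids)
```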
Once the data was compiled, the researchers built a prediction model that proved 80% successful at identifying troublesome users from only their first five posts; when the first ten posts were used to sniff out trolls, the success rate rose to 82%.
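Here is a minimal sketch of what such an early-warning classifier could look like, assuming the helper functions above and a hypothetical user record with `posts` and `thread_ids` fields. The model and features shown are placeholders: the 80% and 82% figures above apply to the paper's own model and data, not to this code.

```python
# Illustrative sketch, assuming the self_similarity and
# thread_concentration helpers defined earlier.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split


def early_features(user, n_posts=5):
    """Simple features computed from a user's first few posts only."""
    posts = user["posts"][:n_posts]
    threads = user["thread_ids"][:n_posts]
    return [
        self_similarity(posts),
        thread_concentration(threads),
        np.mean([len(p.split()) for p in posts]),  # average post length
    ]


def train_ban_predictor(users, was_banned):
    """Fit a logistic regression predicting eventual bans from early posts."""
    X = np.array([early_features(u) for u in users])
    X_train, X_test, y_train, y_test = train_test_split(
        X, was_banned, test_size=0.2, random_state=0
    )
    model = LogisticRegression().fit(X_train, y_train)
    print(f"held-out accuracy: {model.score(X_test, y_test):.2f}")
    return model
```

In practice, a moderator-facing system would more likely act on the model's predicted probability as a quiet flag rather than as an automatic ban, which is the direction described next.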
Future versions of Disqus may go on to use such a system, although it is more likely that a silent ‘flagging’ feature will be offered to moderators to avoid overzealous automatic bans, which could prove far too annoying and drive users away.
Via: The Guardian
Source: Cornell University Library