Few game companies are taking negative online behaviour as seriously as Riot Games. The League of Legends developer has begun trials of near-instant punishments for players who create a negative gameplay experience, going so far as to automate the system to deal out bans as quickly as 15 minutes after a match ends.
The system, which is currently in place in the EU, will use player reports to help the instant feedback system identify and punish the kinds of verbal harassment the community actively rejects: homophobia, racism, sexism, death threats, and other forms of excessive abuse. These harmful communications will be punished with two-week or permanent bans within fifteen minutes of a game's end.
When a player is reported by their peers, the automated system examines the case against community-driven standards of behaviour and then hands out the appropriate punishment. However, Riot Games has not explained exactly what constitutes excessive abuse, or how the community will shape those standards of behaviour. Fortunately, affected players will receive a snapshot of the chatlog that got them banned in the first place, albeit with the names of the other players involved removed.
While this may work for English-speaking LoL players, there is a question of how the system will handle the many other languages used across the player base. Riot Games has not said whether the system will cover non-English communications, although that could be left to localisation teams once any bugs in the system are worked out.
The MOBA genre is often cited as home to some of the worst online behaviour, although this is not necessarily true. Most online games have their fair share of trolls and problematic players, but it is good to see a developer take active steps to make the environment more welcoming for everyone.
[Source: Riot Games]