Username Guardian is a tool for detecting toxicity in usernames. Pass any username to the API and it is subjected to a linguistic analysis that identifies disruptive, profane, sexual, or otherwise inappropriate language. With Username Guardian you can protect your communities and reduce the risk of a toxic username setting the tone for the ensuing discourse within the community.
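As a rough sketch of how such a check might be wired up, the snippet below builds a username-check request. The endpoint URL, parameter names, and response shape are assumptions for illustration, not the documented API.

```python
import json
from urllib import parse, request

API_URL = "https://api.samurailabs.ai/username-guardian"  # assumed endpoint


def build_query(username: str, language: str = "en") -> str:
    """Build the query string for a single username check."""
    return parse.urlencode({"username": username, "language": language})


def check_username(username: str, language: str = "en") -> dict:
    """Send a username for analysis and return the parsed JSON verdict.

    Requires network access and, in practice, an API key.
    """
    url = f"{API_URL}?{build_query(username, language)}"
    with request.urlopen(url) as resp:
        return json.load(resp)
```

In practice the response would then be inspected for the detected toxicity categories before deciding whether to accept the username.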
Username Guardian uses a neuro-symbolic approach to dissect each username into its constituent parts, identify disruptive content, and provide detailed information on what makes the username toxic. Importantly, Username Guardian can handle text manipulation techniques such as:
This way, the Username Guardian is able to decipher even deeply hidden offensive meanings.
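One common manipulation of this kind is leetspeak, where digits stand in for letters, as in the N00B5H1T example below. The toy normalizer here is purely illustrative of that idea and is not the actual algorithm:

```python
# Map common digit substitutions back to letters (illustrative subset only).
LEET_MAP = str.maketrans({"0": "O", "1": "I", "3": "E", "4": "A", "5": "S", "7": "T"})


def normalize_leet(username: str) -> str:
    """Undo common digit-for-letter substitutions before analysis."""
    return username.upper().translate(LEET_MAP)


print(normalize_leet("N00B5H1T"))  # -> NOOBSHIT
```

The real pipeline goes much further than a character map, but normalization of this sort is what lets hidden offensive terms surface for analysis.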
Username Guardian then classifies toxic content into one of nine toxicity categories:
For instance, for the input DuckSick96, the pipeline of the Guardian consists of the following steps:
The input N00B5H1T triggers the following pipeline:
With Username Guardian, you can configure which types of usernames are not allowed in your community, according to your own guidelines. If you run an app for kids, most likely all four types of toxic usernames should be ruled out. If your use case is a dating community for adults, you can allow sexually explicit usernames while blocking those that are offensive, profane, or inappropriate. Since a username can be classified into more than one category, Username Guardian can tell which sexually explicit usernames are at the same time offensive and should be filtered out.
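Such a per-community policy could be expressed as a simple set intersection, assuming the API returns a list of category labels per username (the field name "categories" is an assumption):

```python
# Policy for a hypothetical adult dating community: sexual usernames are
# allowed, but offensive, profane, and inappropriate ones are blocked.
BLOCKED_CATEGORIES = {"offensive", "profane", "inappropriate"}


def is_allowed(verdict: dict) -> bool:
    """Reject a username if any detected category is blocked by policy."""
    return not (set(verdict.get("categories", [])) & BLOCKED_CATEGORIES)


# A username flagged as both sexual and offensive is still rejected:
print(is_allowed({"categories": ["sexual", "offensive"]}))  # False
print(is_allowed({"categories": ["sexual"]}))               # True
```

Because verdicts can carry multiple categories, the policy check stays the same whichever combination of categories a community chooses to block.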
Username Guardian can prevent a user from registering a toxic username, or it can scan an existing user base to spot toxic names. It helps you enforce your policies against toxic behavior and proactively prevent breaches, resulting in increased safety and trust in your service.
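The second mode, scanning an existing user base, might look like the sketch below. The checker is stubbed out here; the "toxic" verdict field is an assumption for illustration:

```python
def scan_user_base(usernames, check_username):
    """Return the subset of usernames flagged as toxic by the checker."""
    return [name for name in usernames if check_username(name).get("toxic")]


# Stubbed verdicts standing in for real API responses:
fake_verdicts = {
    "DuckSick96": {"toxic": True},
    "friendly_fox": {"toxic": False},
}
flagged = scan_user_base(fake_verdicts, lambda name: fake_verdicts[name])
print(flagged)  # ['DuckSick96']
```

For a large user base you would batch or rate-limit the calls rather than checking names one at a time.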
Research conducted independently by Samurai Labs and Riot Games confirms the correlation between toxic usernames and toxic behavior: users with toxic names are much more likely to send violent messages or to harass others. Preventing users from choosing such a username sends a strong signal about the rules being enforced and may deter them from further disruptive behavior.
Username Guardian allows community owners and moderators to automate this aspect of moderation effectively, without relying on manual review, reports from other users, or low-precision keyword methods.
Language - ISO-639-1 Code
English - en
Polish - pl
Spanish - es
To detect toxicity in usernames in all available languages, set the language parameter to all. If no value is provided for the language parameter, the default is en.
For more information, visit our website: https://www.samurailabs.ai/#UNM