
Riot will record your voice conversations in Valorant and evaluate them for toxicity

Riot plans to go further in combating toxicity in Valorant.


Riot continues its campaign against toxicity. In the latest update, the developers reveal the next steps they plan to take. Most were to be expected, but there was also some interesting news.

For example, Riot wants to impose penalties in real time before the end of the game:

More immediate, real-time text moderation: While we currently have automatic detection of “zero tolerance” words when typed in chat, the resulting punishments don’t occur until after a game has finished. We’re looking into ways to administer punishments immediately after they happen.

But that's not all.

It also now becomes clear why the privacy policy changed last year: Riot will analyze what you say, at least according to the official announcement:

‘Last year Riot updated its Privacy Notice and Terms of Service to allow us to record and evaluate voice comms when a report for disruptive behavior is submitted, starting with VALORANT. As a complement to our ongoing game systems, we also need clear evidence to verify violations of behavioral policies before we can take action, as well as help us share with players on why a particular behavior may have resulted in a penalty.

As of now, we are targeting a beta launch of the voice evaluation system in North America/English-only later this year to start, then we will move into a more global solution once we feel like we’ve got the tech in a good place to broaden those horizons. Please note that this will be an initial attempt at piloting a new idea leveraging brand new tech that is being developed, so the feature may take some time to bake and become an effective tool to use in our arsenal. We’ll update you with concrete plans about how it’ll work well before we start collecting voice data in any form.’

The evaluation will most likely focus on detecting key words. Beyond that, other systems will change as well:

  • Generally harsher punishments for existing systems: For some of the existing systems today to detect and moderate toxicity, we’ve spent some time at a more “conservative” level while we gathered data (to make sure we weren’t detecting incorrectly). We feel a lot more confident in these detections, so we’ll begin to gradually increase the severity and escalation of these penalties. It should result in quicker treatment of bad actors.
  • Improvements to existing voice moderation: Currently, we rely on repeated player reports on an offender to determine whether voice chat abuse has occurred. Voice chat abuse is significantly harder to detect compared to text (and often involves a more manual process), but we’ve been taking incremental steps to make improvements. Instead of keeping everything under wraps until we feel like voice moderation is “perfect” (which it will never be), we’ll post regular updates on the changes and improvements we make to the system. Keep an eye out for the next update on this around the middle of this year.

All of this will be rolled out to Valorant gradually across different regions.