Combining ML and human moderators to rid a review platform of offensive content
With a daily audience of millions of users, the platform publishes about 150K reviews every day.
3.5M MAU
30M visits per month
Challenge
The platform's popularity attracts all kinds of abuse: spam, fraud, hate speech, vandalism, and more. This puts the platform's moderation system under constant stress.

On the one hand, offensive content repels users (driving them to other platforms) and creates legal risks for the company. On the other hand, frequent bans make the platform less informative and discourage users from contributing: spending time writing a valid review only to have it rejected is a disappointing user experience.

With all of that in mind, the platform felt it was essential to build a consistent and transparent moderation process, while at the same time making it difficult for malicious users to trick the system.
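As an illustration of how such a hybrid setup typically works, an ML classifier can score every incoming review and route only the ambiguous cases to human moderators, so the pipeline scales with traffic while people handle the judgment calls. This is a minimal sketch with hypothetical names and thresholds, not the platform's actual implementation:

```python
from dataclasses import dataclass

# Hypothetical thresholds; real values would be tuned on labeled data.
AUTO_REJECT_THRESHOLD = 0.95   # model is confident the review is abusive
AUTO_APPROVE_THRESHOLD = 0.05  # model is confident the review is clean

@dataclass
class Review:
    review_id: str
    text: str

def moderate(review: Review, abuse_score: float) -> str:
    """Route a review based on the model's abuse probability.

    High-confidence predictions are handled automatically; only
    ambiguous reviews go to the human moderation queue.
    """
    if abuse_score >= AUTO_REJECT_THRESHOLD:
        return "reject"        # blocked before publication
    if abuse_score <= AUTO_APPROVE_THRESHOLD:
        return "publish"       # published immediately
    return "human_review"      # queued for a human moderator

# Usage with a hypothetical model score:
decision = moderate(Review("r-123", "Great service, friendly staff"), abuse_score=0.02)
print(decision)  # -> "publish"
```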
Results
The system scales automatically and withstands spam attacks.
More than 30,000 inappropriate reviews are blocked before publication every day.
The cost of moderation per review dropped by 50%.
The number of complaints to support fell by 35%.
Average moderation speed increased by 70%.
Moderation accuracy stays consistently above 97%.
Thanks to fast and highly precise moderation, the service was able to maintain a healthy community and provide a safe platform for discussing a variety of social issues. At the same time, it was able to operate in more than 200 cities and 3 culturally diverse countries without fear of offending users or letting anyone misuse the platform.
