Transparency Report
Effective Date: 2021-04-12
Introduction
ASKfm aims to be a safe place where people can receive personal attention and express themselves freely by asking and answering questions. Our most important task is to keep the platform secure and protected, so that people get what they came for without encountering malicious content or threats.
This report outlines the findings of the analysis of 2020 data.
Safety Value
The ASKfm team is dedicated to making the user experience as positive as possible. Our content philosophy is set out in the Community Guidelines. We understand the current realities that shape the safety requirements for the content we are responsible for. For that reason, in April 2020 we reworked the ASKfm Terms of Use and clarified the rules for user-generated content posted on the platform. Our Safety Center provides safeguarding information and a product feature guide that explains safety settings, privacy settings, and the reporting and blocking functions. On top of these activities, we began rechecking historical media content that did not comply with the current safety requirements.
Moderation
2020 also brought internal improvements to the Moderation team and its processes. We focus on both the quality and the speed of moderators' decisions. Per daily active user, moderation checked 45% more violation reports in 2020 than in 2019, and 240% more profile reports.
- In 2020, moderation processed 370,408 profile reports. When we review a profile, we can clear individual content items, send a warning message if the profile has accumulated a high rate of violations during recent visits to the website, or ban the whole profile. Any user can contact our support team via the contact form or by email for an explanation of why their profile was banned.
- This diagram shows the percentage of users banned in 2020, broken down by the main categories we define on ASKfm:
- This diagram shows the top ban reasons by country in 2020.
- 2,663,214 text reports were processed in 2020. This is 34% more than in 2019.
- We cleared 697,507 text items, 7% fewer than in 2019. Even after tightening our Terms of Use and introducing new directives, cleared items decreased, which we regard as a positive trend in the user-generated content posted on our platform.
- Our pre-moderation tools triggered 915,532 times in 2020, helping us detect harmful content automatically. We use a pattern system that flags web links, words, and expressions as hurtful or suspicious.
- In 2020 we enriched our pattern list with 33,734 new patterns. When we identify a new threat on the platform, we add variations of it as patterns to prevent it from reappearing. We also create patterns dedicated to exceptional world events, to detect text content that could pose a danger to our users.
- This diagram shows pre-moderation pattern results by type, relative to DAU, in 2020.
- In 2020 we processed 46% more text items (relative to DAU) flagged by our text patterns. Text patterns highlight suspicious words and expressions in different languages and pass them on for manual review.
- Of the text items found by text patterns, 36.08% were cleared (as opposed to ignored) in 2020, down from 51.97% in 2019. This is another positive trend in the user-generated content on our platform.
- We use hash lists to detect suspicious media content. 45,823,816 media items were cleared in 2020, 60% more than in 2019. This result reflects our extensive recheck of old content. We will continue this effort in 2021 and plan to automate the process.
- Hash list hits in 2020 are shown in this diagram
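The two automated mechanisms above, text patterns and media hash lists, can be illustrated with a minimal sketch. The pattern format, the hash algorithm, and all names below (`SUSPICIOUS_PATTERNS`, `KNOWN_BAD_MEDIA_HASHES`, `flag_text`, `flag_media`) are illustrative assumptions, not ASKfm's actual implementation; production systems typically also use perceptual rather than exact hashing for media.

```python
import hashlib
import re

# Hypothetical pattern list: regular expressions that flag suspicious
# words and web links for manual review (the real pattern format is not public).
SUSPICIOUS_PATTERNS = [
    re.compile(r"https?://\S*badsite\.example", re.IGNORECASE),
    re.compile(r"\bexample-slur\b", re.IGNORECASE),
]

# Hypothetical hash list of known bad media, keyed by SHA-256 hex digest.
KNOWN_BAD_MEDIA_HASHES = {
    # sha256(b"test")
    "9f86d081884c7d659a2feaa0c55ad015a3bf4f1b2b0b822cd15d6c15b0f00a08",
}

def flag_text(text: str) -> bool:
    """Return True if any pattern matches, so the item goes to manual review."""
    return any(p.search(text) for p in SUSPICIOUS_PATTERNS)

def flag_media(data: bytes) -> bool:
    """Return True if the media's hash appears on the hash list."""
    return hashlib.sha256(data).hexdigest() in KNOWN_BAD_MEDIA_HASHES

print(flag_text("check this out: http://badsite.example/page"))  # True
print(flag_media(b"test"))  # True
```

In this shape, flagged text still goes to a human moderator, while an exact hash match can be actioned automatically, which matches the division of labor the report describes.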
External Requests
In 2020 we received 243 requests from official organizations or their representatives to take action against malicious content on the platform. All requests were processed, and action was taken in accordance with our platform policy.
We are committed to escalating issues that threaten our users' safety and may require Law Enforcement intervention. In 2020 we reported 69 cases involving top-priority threats to Law Enforcement; 3 of them were extremism cases.
ASKfm is open to cooperation with official organizations against threats to others or self, threats of violence, illegal activity, and content harmful to minors online.
Our email for content issues is abuse@ask.fm.