Discourse: Google Play user-generated content policy violation

Hey everyone,

I’m looking for some advice here. We have an app, available on Google Play and Apple’s App Store, with an embedded Discourse community. It has been like this for several years now and the community itself is strong.

Occasionally, when you submit an app update to Google, they will review it manually and go through your entire app looking for policy violations. This has now happened to us, specifically regarding our community features and their user-generated content policy.

Initially, they rejected the app because the community (Discourse) didn’t have these features:

  1. Providing an in-app function that allows users to report/flag potential violating content
  2. Providing an in-app function that allows users to remove/block abusive users

As you can see, this isn’t about our specific community, but more about the features in Discourse itself. I responded to this by showing them a video of how to flag posts and how to mute/ignore users, then I submitted the app again.

They have now rejected the app again, and they have added this issue:

  1. Providing an in-app function that allows users to report/flag other users for potential violations

I don’t agree with this, as 99% of the issues in a discussion forum will come from the discussion, so flagging offending posts will handle most of it. But it’s not like I can argue against Google’s policy.

To my knowledge, there is no “button” for reporting a user in Discourse. I plan on sending them a video to show that any user can reach out to our team of moderators and report a user through a private message. I don’t know if they will accept this.

I’m also posting this here to raise a bit of awareness. Google Play has recently been on a mission to revamp their User Generated Content policy, so this might affect other Android apps that use Discourse in some way.

Full violation detail

Issue with your app
Your app contains or features User Generated Content (UGC) that isn’t compliant with the User Generated Content policy.

Issue details

We found an issue in the following area(s):

  • In-app experience: Please see attached screenshot com.sociosoft.sobertime-InAppExperience-321.png

Please note that user-generated content is content that users contribute to an app, and which is visible to or accessible by at least a subset of the app’s users. This includes, but is not limited to user profiles, comments, media, posts, etc.

As such, our guidelines require that apps containing UGC content, whether or not it is the app’s primary purpose, MUST have the following features/functionalities:

  • A user-friendly, in-app system for reporting objectionable UGC and taking action against that UGC where appropriate. This includes:
    • Providing an in-app function that allows users to report/flag other users for potential violations
    • Providing an in-app function that allows users to report/flag potential violating content
    • Providing an in-app function that allows users to remove/block abusive users

For more information, you can review our [e-learning course on UGC] before submission.

During our review of your app, we found objectionable content and/or potentially missing UGC features. We kindly ask that you re-check your app and make sure that ALL these functionalities are in place for your users. We also ask that you review the User Generated Content policy to ensure your app features ALL the requirements outlined in the help article.

If your app is missing one or more of the UGC features, make sure you add them to your app before resubmission.

We recommend that all required functionalities are labeled and/or designed in a way that is clear for the users to avoid confusion. You can self-resolve this issue by ensuring all required UGC features are properly implemented. If your issue has already been resolved, OR if you have updated your app on Play Console and submitted them for review, no further action is required from you and you do not need to reach out to us.

If you’ve reviewed the UGC policy and feel our decision may have been in error because all the UGC required features in your app actually exist and are reasonably identifiable to your users, please reach out to our policy support team.


We have a client who recently encountered the same thing, and they took the following approach (successfully):

  • Allow users to report/flag other users for potential violations: use the Custom Wizard plugin to build such a feature.
  • Allow users to report/flag potentially violating content: this exists via the post flagging feature. You need to set “min trust to flag posts” to 0 to make sure that the Google test team sees the flag feature (see the sketch after this list).
  • Allow users to remove/block abusive users: this exists via the “mute user” feature, see /my/preferences/users
  • All users MUST agree to the terms of service of the app: implement using a custom field, see How to make users to explicitly agree to ToS - #4 by neil
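For that flag threshold, here is a minimal sketch of changing the site setting over the Discourse admin API instead of the admin UI. The forum URL and API key are placeholders, and the setting name has changed across Discourse versions (newer releases use group-based settings such as flag_post_allowed_groups), so check /admin/site_settings on your install first:

    import requests

    BASE_URL = "https://forum.example.com"  # placeholder: your Discourse URL
    HEADERS = {
        "Api-Key": "YOUR_ADMIN_API_KEY",  # generated under /admin/api/keys
        "Api-Username": "system",
    }

    # Lower the flagging threshold so brand-new (trust level 0) accounts,
    # including Google's test account, can see and use the flag button.
    resp = requests.put(
        f"{BASE_URL}/admin/site_settings/min_trust_to_flag_posts",
        headers=HEADERS,
        data={"min_trust_to_flag_posts": "0"},
    )
    resp.raise_for_status()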

Thanks for this!

  • Allow users to report/flag other users for potential violations: We’re having trouble with this one, so I appreciate the Custom Wizard plugin suggestion. We haven’t used it before and we’ll try it out. For now, I have added a “Moderators” link that users can tap to message the moderators and report a user. I sent them a video of this. Hopefully this is acceptable, since it is an in-app feature; users just have to type out a reason (but they would have to do that anyway).

  • Allow users to report/flag potentially violating content: I believe we passed this test. Their communication is terrible, though, so it’s hard to tell. For the first rejection, they actually did flag a random post using the test account and attached a screenshot of it, yet still rejected the app for this reason. Weird.

  • Allow users to remove/block abusive users: They didn’t know how to do this, so I sent them a video on how to mute or ignore a user.

  • All users MUST agree to the terms of service of the app: They haven’t complained about this yet. We have a system in place for it, but we may follow your approach if they give us trouble.

I’ll update this when I get feedback from them.


I think this is a question of clearly explaining the existing flagging functionality in Discourse to the reviewer in detail. I’ve had to do this before for Apple’s reviewers; it’s possible that Google’s reviewers need even more specific screenshots/details included.

Core certainly provides options for this; that’s what flagging is: a way for users to report violations. We don’t let people flag a user from the user’s profile, but posts can certainly be flagged.

And once a regular user flags another one, admins/moderators can very easily suspend/block that user globally. You might need to show some of those screenshots to the reviewer, but the features in Discourse for this are quite solid.

By default, the signup form includes a statement that users agree to the ToS and privacy policy when creating an account.

And over the next few months, we will also add the ability for sites to require a consent checkbox that users must tick, both when signing up and when logging in (for example, if the policy changes so much that admins need to request a fresh acceptance from all users).


Today is day 18 of not being able to publish our app.

A few things we’ve learned:

  • The review team is actually outsourced to another country where English is not the first language
  • When you appeal a rejection, it is reviewed by a Google employee and their English seems fine

Our app was initially rejected (by the outsourced company) due to Google Play’s User Generated Content Policy. I created a Google Drive folder with documents, screenshots and videos showing how Discourse has the required capabilities. I then submitted my appeal. The appeal was accepted by Google and they told me to resubmit the app, which I did. The review team then again rejected the app update due to Google Play’s User Generated Content Policy. I submitted another appeal and Google again accepted the appeal and told me to resubmit the app. I again resubmitted the app and now we’re waiting on the review team to review it.

There’s clearly a communication issue here, as Google seems fine with the functionality, but the outsourced review team does not. In my opinion, Discourse does have the required capabilities, and this comes down to either a language or comprehension issue on the reviewer’s side.

Just a guess, but there might also be a system in place that rewards or punishes the outsourced reviewer for correct or incorrect app reviews, so there might be reluctance on their side to admit that your app is actually compliant. I’ve dealt with them on other apps too where they, for example, insisted for two months that I list the subscription details of an in-app purchase that was just a once-off purchase.

For anyone else running into this, here are the issues mentioned and how we dealt with them:

1. Providing an in-app function that allows users to report/flag other users for potential violations
We have a link in our community, built with the Nav Links Component, that links to the Moderators group. I recorded a video showing how you can tap this link and then message the moderator group to report a user directly. Google accepted this, as the only requirement is that the function be in-app; it doesn’t have to be a “report user” button.
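For anyone who prefers to see it concretely, the moderator-message flow boils down to a private message addressed to the moderators group. Here is a minimal sketch via the Discourse API, with placeholder URL, key, and message text (older Discourse versions take target_usernames instead of target_recipients):

    import requests

    BASE_URL = "https://forum.example.com"  # placeholder: your Discourse URL
    HEADERS = {
        "Api-Key": "USER_API_KEY",        # placeholder: a regular user's key
        "Api-Username": "reporting_user",  # placeholder username
    }

    # Open a private message addressed to the moderators group -- the same
    # thing the in-app "Moderators" link lets a user do by hand.
    resp = requests.post(
        f"{BASE_URL}/posts.json",
        headers=HEADERS,
        data={
            "title": "Reporting a user",
            "raw": "I would like to report @some_user for abusive behavior.",
            "archetype": "private_message",
            "target_recipients": "moderators",
        },
    )
    resp.raise_for_status()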

2. Providing an in-app function that allows users to remove/block abusive users
Discourse has a mute/ignore system. Just make sure your site settings allow even new users at trust level 0 to mute/ignore users. I showed them a video of how to tap on a user to open their profile card, then go into the user’s profile to ignore/mute them. Google accepted this.
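As a rough illustration of what this does under the hood, the profile’s notification dropdown sets a per-user notification level. A sketch of the same call via the API, under the assumption that your Discourse version exposes the notification_level endpoint the profile UI uses (placeholder URL, key, and usernames; verify against your install):

    import requests

    BASE_URL = "https://forum.example.com"  # placeholder: your Discourse URL
    HEADERS = {
        "Api-Key": "USER_API_KEY",       # placeholder: a regular user's key
        "Api-Username": "ordinary_user",  # placeholder username
    }

    # Mute another user, mirroring the profile's notification dropdown.
    # "ignore" is also accepted (the UI pairs it with an expiry date).
    resp = requests.put(
        f"{BASE_URL}/u/abusive_user/notification_level.json",
        headers=HEADERS,
        data={"notification_level": "mute"},
    )
    resp.raise_for_status()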

3. Providing an in-app function that allows users to report/flag potential violating content
Discourse has its flag functionality for this, and it covers the requirement. It seems the review team didn’t understand this. What’s funny is that they actually did flag a random post using their test account, and even included a screenshot showing the flagged post, but still brought this issue up. I sent them a video showing how to flag posts. Google accepted this.
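Flagging can also be exercised through the API, which is handy for demonstrating that the feature works for a trust level 0 account. A minimal sketch, assuming a stock install where post_action_type_id 4 maps to the “inappropriate” flag (check your instance’s post action types if they have been customized):

    import requests

    BASE_URL = "https://forum.example.com"  # placeholder: your Discourse URL
    HEADERS = {
        "Api-Key": "USER_API_KEY",       # placeholder: a regular user's key
        "Api-Username": "ordinary_user",  # placeholder username
    }

    # Flag post 123 as inappropriate. In a stock install,
    # post_action_type_id 4 is the "inappropriate" flag.
    resp = requests.post(
        f"{BASE_URL}/post_actions.json",
        headers=HEADERS,
        data={"id": 123, "post_action_type_id": 4},
    )
    resp.raise_for_status()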

4. All users MUST accept the app’s terms of use/user policy provided by the developer before app usage
For this one, they sent a screenshot showing the sign-in form where they were signing in with their test account. I’m assuming they meant to indicate that users don’t have to agree to the terms on each sign-in. Reading the policy requirement, I interpret it as users only needing to accept the terms when they initially sign up.
As @pmusaraj mentioned, Discourse’s default sign-up form already mentions the terms of service and privacy policy. To ensure compliance, we also have a mandatory user field that the user must accept when signing up.
I sent them a video showing that you can’t create an account without ticking the box, and Google accepted it.

5. App’s terms of use/user policy MUST define objectionable content and behaviors
We already clearly define what acceptable and objectionable content in our community is, both in our terms of service and in our FAQ. This is a detailed set of guidelines that our moderators worked on together. I sent them a video of how to access the terms (About - FAQ / Terms of Service) and also sent them direct links to both pages. Google accepted this.


Our update was accepted by the review team today, so the above should be sufficient for anyone facing this in the future.
