A plugin that checks all images uploaded to Discourse via the Google Cloud Vision API and rejects any that are inappropriate according to configurable likelihood levels.
Documentation:
A Ruby Gem is provided by Google:
https://googleapis.dev/ruby/google-cloud-vision/latest/index.html
https://googleapis.dev/ruby/google-cloud-vision/latest/Google/Cloud/Vision.html
https://cloud.google.com/vision/docs/detecting-safe-search
This plugin should hook into the main image upload process of Discourse for all images (posts, avatars, profile backgrounds, etc.) and reject images that contain disallowed content:
puts "Adult: #{safe_search.adult}" puts "Spoof: #{safe_search.spoof}" puts "Medical: #{safe_search.medical}" puts "Violence: #{safe_search.violence}" puts "Racy: #{safe_search.racy}"
Each category reports one of the following likelihood values, in increasing order:

```ruby
['UNKNOWN', 'VERY_UNLIKELY', 'UNLIKELY', 'POSSIBLE', 'LIKELY', 'VERY_LIKELY']
```
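Per the SafeSearch guide linked above, obtaining that annotation with the gem looks roughly like this (the file path is a placeholder):

```ruby
require "google/cloud/vision"

# The client reads credentials from GOOGLE_APPLICATION_CREDENTIALS by default.
image_annotator = Google::Cloud::Vision.image_annotator

# Run SafeSearch detection on a local file; the path is a placeholder.
response = image_annotator.safe_search_detection(image: "path/to/upload.png")
safe_search = response.responses.first.safe_search_annotation
# safe_search.adult / .spoof / .medical / .violence / .racy each return
# one of the likelihood values listed above.
```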
The minimum likelihood at which an upload is rejected should be configurable per category (Adult, Spoof, Medical, Violence, Racy) in the plugin settings.
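Because the likelihood values are ordered, each per-category check reduces to an index comparison. A minimal sketch, assuming hypothetical `SiteSetting` names such as `vision_reject_adult_min_likelihood` (the real names would be declared in the plugin's settings.yml):

```ruby
module ::VisionGuard
  LIKELIHOODS = %w[UNKNOWN VERY_UNLIKELY UNLIKELY POSSIBLE LIKELY VERY_LIKELY].freeze
  CATEGORIES  = %i[adult spoof medical violence racy].freeze

  # True when any category's detected likelihood meets or exceeds the
  # configured minimum rejection level for that category.
  def self.reject?(safe_search)
    CATEGORIES.any? do |category|
      detected = LIKELIHOODS.index(safe_search.public_send(category).to_s)
      # Hypothetical setting name, e.g. "vision_reject_adult_min_likelihood".
      minimum = LIKELIHOODS.index(
        SiteSetting.public_send("vision_reject_#{category}_min_likelihood")
      )
      detected && minimum && detected >= minimum
    end
  end
end
```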
Rejected images should be discarded immediately and must never be stored anywhere.
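One way to meet both requirements (scan every image upload, never persist rejected files) is to intercept Discourse's `UploadCreator` before it stores anything. This is only a sketch: the `create_for` signature, the `@file`/`@filename` internals, `FileHelper.is_supported_image?`, and the i18n key should all be verified against the Discourse version you target.

```ruby
# plugin.rb (sketch): module name and i18n key are illustrative.
after_initialize do
  module ::VisionSafeSearchCheck
    def create_for(user_id)
      # @filename / @file are UploadCreator internals; verify against core.
      if FileHelper.is_supported_image?(@filename)
        annotation = Google::Cloud::Vision.image_annotator
          .safe_search_detection(image: @file.path)
          .responses.first.safe_search_annotation
        if ::VisionGuard.reject?(annotation) # threshold check sketched above
          upload = Upload.new
          upload.errors.add(:base, I18n.t("upload.images.content_violation"))
          return upload # rejected before anything touches storage
        end
      end
      super
    end
  end

  ::UploadCreator.prepend(::VisionSafeSearchCheck)
end
```

Rejecting inside `create_for` means a disallowed image is never written to disk or the database, which also covers the instant-removal requirement above.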
Image formats supported by Google Cloud Vision: