
Comparison of image moderation APIs


In the era of smartphones loaded with two, three, or even more cameras, images (and videos) have become the de facto way for users to interact with social media.

Almost every kind of user-generated content, from reviews on Yelp or TripAdvisor to posts on Instagram and Facebook to forwards on WhatsApp, includes more and more images. A 2017 estimate puts the number at around 350 million images uploaded per day on Facebook alone. That is a gigantic volume of images, and coupled with the rise of trolls, cyber bullies, and spam, it creates a real risk for any platform.

If your website or app accepts user-generated content (UGC) such as reviews or posts that can include images, the problem is very real: a troll can upload sexually explicit or gruesome photos and have them publicly visible on your site, bringing user backlash or, worse, legal liability.

Solutions

A common way companies deal with this is moderation: every piece of UGC is manually verified by a human evaluator and only then allowed to appear on the website or app. Many companies employ dozens of moderators who filter UGC day in and day out. A team of moderators is both financially expensive and slow, as a typical piece of content might take a few hours to be moderated.

In the last few years, technology companies have found a new way to deal with this menace: machine learning. ML algorithms detect objectionable content (sometimes called "not safe for work", or NSFW, content) and auto-moderate UGC. Content that is not classified confidently, usually a tiny fraction of all UGC, is routed to human moderators for a final verdict. This makes moderation much cheaper and faster at large scale.

Since developing the technology to build such an ML solution is a complicated task, big tech companies like Google and Microsoft, as well as smaller upstarts like Clarifai, provide APIs that do it for you. Because most of these APIs provide similar functionality, we wanted to test their effectiveness relative to each other, specifically for detecting adult or sexually explicit content in images. We compared the performance of the moderation APIs from AWS Rekognition, Microsoft moderation services, Google Vision, and Clarifai.

Neither Dataturks nor I have any affiliation with these providers (except that I have worked at Microsoft and Amazon in the past), and we have tried to be a completely unbiased third party that just wanted to independently evaluate how these APIs stack up.

Target use case:

Flag user-uploaded images that contain adult or sexually explicit content, so that only safe images are approved, images with nudity are automatically removed, and images that are not classified confidently are sent for manual evaluation.
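As a minimal sketch (not any particular provider's API), the routing just described might look like the following; the `nsfw_score` input and the two cut-offs are placeholders for whatever confidence value and thresholds your chosen API provides.

```python
# Minimal sketch of the approve / auto-remove / manual-review routing.
# `nsfw_score` and the thresholds are hypothetical placeholders; each provider
# exposes its own labels or confidence values (covered later in this post).

def moderate(nsfw_score, reject_at=0.85, approve_at=0.15):
    if nsfw_score >= reject_at:
        return "remove"          # confidently explicit: auto-remove
    if nsfw_score <= approve_at:
        return "approve"         # confidently safe: publish
    return "manual_review"       # uncertain: route to a human moderator
```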

Setup:

We used the YACVID dataset of 180 images, with 90 images manually labeled as not nude and 90 labeled as nude. We tested each of these 180 images with each of the four APIs and noted their classifications.

Here is the open dataset we used (open to download):

[Image omitted: an example data item from the dataset]

Our notion of a correct classification is relative to how a human would classify the same image. We are interested in how much manual moderation can be offloaded to these APIs. There are also cases below where it is very subjective whether the image is NSFW, and we leave it to you to decide whether the API response in those cases was correct.

Below we do not show the sexually explicit images, since they may not be safe for work for many of you. If you do want to see one, each image below is referred to by its name, e.g. "nude12.jpg", and all of these images are in our S3 bucket, which you can access at: https://s3.amazonaws.com/com.dataturks.imagemoderation/IMAGE_NAME.jpg.

So, for example, an image named "nonnude26" can be found at https://s3.amazonaws.com/com.dataturks.imagemoderation/nonnude26.jpg

Examples of safe images in the dataset:

[Images omitted]

Examples of explicit images in the dataset:

  • nude01

  • nude43

  • nude57

  • nude74

Results:

During testing of these APIs, we tracked four values: (1) True positive (TP): given a safe photo, the API correctly says so. (2) False positive (FP): given an explicit photo, the API incorrectly classifies it as safe. (3) False negative (FN): given a safe photo, the API fails to classify it as safe. (4) True negative (TN): given an explicit photo, the API correctly says so. (Note that "positive" here means "safe".)

Ideally one would want a 100% TP rate and a 0% FP rate. Any number of false positives is harmful, since explicit photos could end up publicly published on your app. A high FN rate makes the system ineffective by unnecessarily increasing the load on your moderators.
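For reference, here is how the precision and recall columns in the table below follow from the raw counts (with "safe" as the positive class, per the definitions above); small differences from the table's rounded percentages can arise from how images the API failed to classify are counted.

```python
def precision_recall(tp, fp, fn):
    """Precision = TP / (TP + FP), Recall = TP / (TP + FN)."""
    precision = tp / (tp + fp) if (tp + fp) else 0.0
    recall = tp / (tp + fn) if (tp + fn) else 0.0
    return precision, recall

# Example with the Google vision counts from the table below:
precision, recall = precision_recall(tp=85, fp=1, fn=4)
print(f"precision={precision:.1%}, recall={recall:.1%}")
```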


| Provider | True +ve (TP) | False +ve (FP) | True -ve (TN) | False -ve (FN) | Precision (TP/(TP+FP)) | Recall (TP/(TP+FN)) |
|---|---|---|---|---|---|---|
| Microsoft moderation | 78 | 2 | 88 | 11 | 97% | 87% |
| Google vision | 85 | 1 | 89 | 4 | 99% | 94% |
| AWS rekognition | 83 | 3 | 87 | 6 | 97% | 93% |
| Clarifai | 74 | 0 | 77 | 2 | 100% | 82% |
| All together* | 67 | 0 | 73 | 1 | 100% | 74% |
| Microsoft and Google* | 76 | 0 | 87 | 2 | 100% | 84% |
| Microsoft and AWS* | 75 | 0 | 85 | 3 | 100% | 83% |
| Google and AWS* | 81 | 0 | 86 | 2 | 100% | 90% |

*These rows are ANDs of the results from the individual APIs: for example, if both Microsoft and Google classify an image as a TP, the combined result is a TP; otherwise the result is unknown.

We have made the code and dataset freely available for anyone to validate the results.

The best standalone API is Google's, with a precision of 99% and a recall of 94%. As can be seen above, most of these APIs work well, with precision values in the late nineties, but given the context of the problem even these precision rates may not be foolproof for many use cases.

We also tried combining two or more of these APIs to find the best possible solution to the moderation problem. On our dataset, combining the Google and AWS APIs gives the best performance. Even then, 10% of safe images need to be manually verified to make the system foolproof.
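Concretely, the "AND" combination can be sketched as below; the per-API verdict strings are hypothetical wrappers around the individual calls shown later in this post.

```python
# "AND" combination of two APIs: auto-approve only when both say safe,
# auto-reject only when both say explicit, otherwise route to a human.

def combine(verdict_a, verdict_b):
    if verdict_a == verdict_b and verdict_a in ("safe", "explicit"):
        return verdict_a
    return "unknown"

print(combine("safe", "safe"))      # -> safe (auto-approve)
print(combine("safe", "explicit"))  # -> unknown (manual review)
```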

Examples

Microsoft image moderation

This API classifies an explicit image as "adult" or "racy", both of which we treated as the NSFW class. Here are some safe images that it wrongly classified as explicit:

[Images omitted]

Here are some explicit images that were classified as safe (access them as described above if you wish):

  • nude46

  • nude51
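For reference, here is a rough sketch of calling Microsoft's image moderation over REST and mapping its flags to our NSFW class; the endpoint path, region, and response field names follow the v1.0 Content Moderator documentation and may differ for your subscription, so treat them as assumptions.

```python
import requests

# Placeholder region and key; the Evaluate endpoint path follows the v1.0
# Content Moderator REST docs and may vary by region/version.
ENDPOINT = ("https://westus.api.cognitive.microsoft.com/contentmoderator/"
            "moderate/v1.0/ProcessImage/Evaluate")
SUBSCRIPTION_KEY = "YOUR_KEY"

def is_nsfw_msft(image_url):
    resp = requests.post(
        ENDPOINT,
        headers={"Ocp-Apim-Subscription-Key": SUBSCRIPTION_KEY,
                 "Content-Type": "application/json"},
        json={"DataRepresentation": "URL", "Value": image_url},
    )
    resp.raise_for_status()
    result = resp.json()
    # As in this comparison, either the "adult" or the "racy" flag counts as NSFW.
    return bool(result.get("IsImageAdultClassified") or
                result.get("IsImageRacyClassified"))
```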

Google cloud vision

This API classifies an explicit image as "likely", "possible", or "very_likely", all of which we treated as the NSFW class. Here are some safe images that it wrongly classified as explicit:

[Images omitted]

Here are some explicit images that were classified as safe (access them as described above if you wish):

  • nude61
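For reference, a minimal sketch of the equivalent SafeSearch call using the current google-cloud-vision Python client (the 2018-era client used slightly different type names); credentials are assumed to be configured in the environment.

```python
from google.cloud import vision

# Likelihood levels we counted as NSFW in this comparison.
NSFW_LEVELS = {
    vision.Likelihood.POSSIBLE,
    vision.Likelihood.LIKELY,
    vision.Likelihood.VERY_LIKELY,
}

def is_nsfw_google(image_url):
    client = vision.ImageAnnotatorClient()
    image = vision.Image(source=vision.ImageSource(image_uri=image_url))
    annotation = client.safe_search_detection(image=image).safe_search_annotation
    return annotation.adult in NSFW_LEVELS
```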

AWS Rekognition

This API classifies an explicit image as "explicit nudity", "nudity", or "suggestive", all of which we treated as the NSFW class. Here are some safe images that it wrongly classified as explicit:

[Images omitted]

Here are some explicit images that were classified as safe (access them as described above if you wish):

  • nude77

  • nude85

  • nude86
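For reference, a minimal sketch using boto3; since Rekognition reads images straight from S3, the call takes a bucket and key rather than an arbitrary URL. The label names below are the ones we mapped to NSFW; the exact moderation label taxonomy may have changed since, so treat them as assumptions.

```python
import boto3

# Moderation labels (top-level or second-level) treated as NSFW.
NSFW_LABELS = {"Explicit Nudity", "Nudity", "Suggestive"}

def is_nsfw_aws(bucket, key, min_confidence=50.0):
    client = boto3.client("rekognition")
    response = client.detect_moderation_labels(
        Image={"S3Object": {"Bucket": bucket, "Name": key}},
        MinConfidence=min_confidence,
    )
    returned = set()
    for label in response["ModerationLabels"]:
        returned.add(label["Name"])
        if label.get("ParentName"):
            returned.add(label["ParentName"])
    return bool(returned & NSFW_LABELS)
```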

Clarifai nudity moderation

This API is Clarifai's nudity model, which returns NSFW/SFW class confidence values. Clarifai suggests treating an NSFW value greater than or equal to 0.85 as NSFW, and an NSFW value less than or equal to 0.15 as SFW. That leaves a huge range of values between 0.15 and 0.85 where the state of the image is unknown (and hence the low recall). Here are some safe images that it wrongly classified as explicit:

[Images omitted]

There were no explicit images which were classified as safe.
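The thresholding Clarifai suggests can be summarised as below; `nsfw_score` is the confidence the nudity model returns for its "nsfw" concept (the SDK/REST call itself is not shown here), and the wide unknown band between the two cut-offs is what drives the low recall.

```python
def clarifai_verdict(nsfw_score, nsfw_cutoff=0.85, sfw_cutoff=0.15):
    """Map the nudity model's NSFW confidence to a verdict."""
    if nsfw_score >= nsfw_cutoff:
        return "explicit"
    if nsfw_score <= sfw_cutoff:
        return "safe"
    return "unknown"   # 0.15 < score < 0.85: send to a human moderator
```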

The black swan:

One image was classified as NSFW by every one of these APIs but was manually evaluated as SFW. Maybe it depends on how liberal a "workplace" we are talking about; we would love to hear how you would have classified it.


Latencies:

We also measured the API response times, which can be a factor in deciding which API to choose. Since many factors influence response times, the numbers below should be used as a ballpark rather than as exact values. The stats below are based on 180 calls per API from a Linux laptop running Ubuntu.

| Provider | Average response time (sec) | Median response time (sec) |
|---|---|---|
| Microsoft moderation | 2.20 | 1.99 |
| Google vision | 1.68 | 1.60 |
| AWS rekognition | 0.70 | 0.65 |
| Clarifai | 1.40 | 1.33 |

One thing to note is that all these APIs accessed images uploaded to Amazon S3, so the AWS API may have had an unfair advantage in fetching an S3 image, which possibly explains its lower response time.
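For what it's worth, the latency numbers above can be reproduced with a simple wall-clock timer around each call; `call_api` below stands in for any of the per-provider functions sketched earlier.

```python
import statistics
import time

def measure_latencies(call_api, image_urls):
    """Return (mean, median) response time in seconds over all images."""
    timings = []
    for url in image_urls:
        start = time.perf_counter()
        call_api(url)
        timings.append(time.perf_counter() - start)
    return statistics.mean(timings), statistics.median(timings)
```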

Ease of use:

API-wise, all of these services were pleasant to use and easy to integrate into apps. A major drawback of AWS Rekognition, however, is that it only accepts images as S3 objects (or as an upload of base64-encoded image data), unlike Google, Clarifai, and Microsoft moderation services, which gracefully work with images stored anywhere on the web when passed a URL to the image.

Pricing:

AWS Rekognition is the cheapest of the four, costing around $1 per 1,000 API calls (excluding charges to store images on S3), whereas Clarifai costs around $1.2 per 1,000 API calls. Microsoft moderation services and Google Vision are the costliest, at around $1.5 per 1,000 API calls.

Here is the open dataset we used.


If you liked this piece, here is our blog comparing the best face recognition APIs.

If you have any queries or suggestions I would love to hear about it. Please write to me at [email protected].

Dataset Reference: LOPES, Ana; AVILA, Sandra; PEIXOTO, Anderson; OLIVEIRA, Rodrigo; ARAÚJO, Arnaldo. A Bag-of-Features Approach based on Hue-SIFT Descriptor for Nude Detection. In: 17th European Signal Processing Conference (EUSIPCO), Glasgow, 2009.

