The Imagga Content Moderation Platform handles every aspect of content moderation, providing automatic filtering of unsafe content in images, videos, and live streams. It helps keep your users and brand reputation safe by filtering or flagging visuals containing nudity, weapons, drugs, offensive and hate symbols, infamous people and landmarks, and offensive language.
As an early pioneer in the image recognition space, the Imagga engineering team has deep expertise in developing AI solutions. They can address specific moderation needs across content types and help you customize the out-of-the-box content moderation solution to cover your particular requirements.
Imagga Content Moderation Explained
Free access to online content sharing creates tons of “digital garbage”.
Billions of users share text, image, video and audio content online. Some of it is inappropriate, insulting, obscene, or outright illegal. If left unsupervised, user-generated content may cause significant harm to brands and vulnerable groups.
Companies built around user-generated content need a shield against questionable content.
More than 100,000 people worldwide manually scan through the most violent, pornographic, exploitative, and illegal content to protect internet users from online bullies and criminals. However, the job of a human content moderator is difficult and takes an emotional and mental toll. We believe that AI-powered content moderation is the answer!
Automate your content screening with the Imagga Content Moderation Platform.
Create your API account and immediately feed your data to our REST API. The pre-trained model does the heavy lifting, processing large amounts of data in real time and filtering any content identified as inappropriate. You can customize the filter criteria. Like all of our solutions, the Imagga Content Moderation Platform is available as both a Cloud and an On-premise API.
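To illustrate the flow, here is a minimal client-side sketch in Python of filtering categorization results by confidence. The response shape, category names, and threshold below are illustrative assumptions for the example, not the exact API contract — consult the API documentation for your account for the real endpoints and fields.

```python
# Sketch of client-side filtering of moderation results by confidence.
# The response shape and category names are illustrative assumptions.

UNSAFE_CATEGORIES = {"nudity", "weapons", "drugs", "hate_symbols"}

def flag_unsafe(response, threshold=0.8):
    """Return unsafe categories whose confidence meets or exceeds the threshold."""
    flagged = []
    for result in response["result"]["categories"]:
        name = result["category"]
        confidence = result["confidence"]
        if name in UNSAFE_CATEGORIES and confidence >= threshold:
            flagged.append((name, confidence))
    return flagged

# Example of a categorization-style response for one uploaded image
# (hypothetical values, for demonstration only):
sample_response = {
    "result": {
        "categories": [
            {"category": "nudity", "confidence": 0.93},
            {"category": "weapons", "confidence": 0.12},
        ]
    }
}

print(flag_unsafe(sample_response))  # only the 'nudity' hit crosses 0.8
```

In a real integration, the `response` dict would come from an authenticated HTTP request to the REST API rather than a hardcoded sample, and the threshold would reflect the filter criteria you configure.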
Set your own custom moderation rules via a handy user interface.
Through the platform dashboard (available on both web and mobile) you can control the scope of moderation, confidence interval thresholds, statistics, and moderator roles. The rules defined here determine whether manual moderation of ambiguous images and borderline cases is required.
Hire a human moderation team or add an internal one.
AI-powered content moderation is vastly more productive, less expensive, and more scalable than human moderation, but it has its limitations when it comes to understanding very subtle context. If needed, we can provide human moderation for a complete solution, or you can plug in your own manual moderation team.
Deliver safe content to your users on desktop and mobile.
Our algorithm learns from the data it analyzes and from the moderators’ feedback, improving its accuracy as a result. Over time, less and less human moderation is required, if any at all. The end result is safe content that meets regulations and protects both vulnerable groups and your brand reputation.
Have a specific need? Need additional moderation criteria?
Let’s talk! We can help you find the best solution for your case.
Interested in the nitty-gritty? Read on!
Scope of the Content Moderation Platform
Visual Content Moderation
- Nudity, partial nudity
- Self harm, gore
- Alcohol, drugs, forbidden substances
- Abuse of alcohol, drugs, etc.
- Weapons, torture instruments
- Verbal abuse, harsh language, racism
- Obscene gestures
- Graffiti, demolished sights
- Physical abuse, slavery
- Brawls, mass fights
- Propaganda, terrorism
- Infamous symbols, vulgar symbols
- Infamous landmarks
- Infamous people
- Horror, monstrous images
- Culturally-defined inappropriateness
- Custom-defined based on customer needs
Textual Content Moderation
- Hate Speech
- Inappropriate Content
- Copyrighted materials
- Spam and Scam content
- Specific language moderation
- Fraudulent content
- Pornographic content
- Text in pictures
- Custom restrictions
*Available customization and language support on request
The dashboard allows users to configure the following moderation rules through a visual interface.
- Scope and rules (e.g. confidence intervals) - adjust the confidence intervals within which images are automatically moderated or sent for human review
- Team/user management scope and access (user roles) - assign access rights and roles to users
- Data team roles and scope - assign categories to human moderators
- Data retention policy - control whether the data is stored and used for retraining of the model or is deleted
- Enable mobile UI - optionally provide a mobile UI for moderation in addition to the web UI
- Analytics - visualize stats regarding types of inappropriate content
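Taken together, these rules act as a routing policy: content with confidence above an auto-reject threshold or below an auto-approve threshold is handled automatically, and only the ambiguous middle band is sent for human review. A minimal Python sketch of that logic, where the threshold values, category names, and field names are illustrative assumptions rather than the platform's actual configuration format:

```python
# Sketch of confidence-interval routing, as configured in the dashboard.
# Thresholds and category names are illustrative assumptions.

RULES = {
    "nudity": {"auto_approve_below": 0.2, "auto_reject_above": 0.9},
    "weapons": {"auto_approve_below": 0.3, "auto_reject_above": 0.85},
}

def route(category, confidence):
    """Decide whether an image is approved, rejected, or sent to a human."""
    rule = RULES.get(category)
    if rule is None:
        return "human_review"          # unknown category: be conservative
    if confidence < rule["auto_approve_below"]:
        return "approve"
    if confidence > rule["auto_reject_above"]:
        return "reject"
    return "human_review"              # ambiguous middle band

print(route("nudity", 0.95))  # reject
print(route("nudity", 0.50))  # human_review
```

Narrowing the middle band over time, as the model's accuracy improves, is what lets the platform gradually reduce the share of content needing manual review.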
The Content Moderation Platform offers multiple integration options to suit your needs.
Use Imagga Content Moderation Platform in the Cloud to reduce IT costs and to speed up deployment.
We’ll help you deploy the Content Moderation Platform on your private servers for full compliance with privacy regulations.
You can also export a snapshot of each model to be used directly on the edge.