User-generated content (UGC) is at the heart of the online world we live in, dramatically shaping how we exchange information. While it brings unprecedented freedom in terms of communication channels and forms of expression, as we well know, there is a dark side to every revolution. Scamming, plagiarism, cyberbullying, not-safe-for-work (NSFW) material and outright scary stuff – they’re all out there in the pictures, videos, articles and audio clips being freely posted online.
For years, brands have been trying to figure out effective ways to filter out disturbing and harmful user-generated and curated content. Content moderation is seen as the only way to ensure a safe online environment for digital users and to prevent abuse and harmful practices online.
For businesses, efficient moderation is the answer to the ever-growing risks they face if they don’t eliminate illegal and abusive content: harm to the brand’s reputation, legal and compliance issues, and financial consequences. Moderation is the only way to exclude content that contains violence, extremism, child abuse, hate speech, graphic visuals, nudity and sex, cruelty, and spam.
Yet the price to pay is high, typically involving endless hours of traumatic work for low-paid employees. In early November 2019, Cognizant, a top content moderation provider serving giants like Facebook and Google, announced it was leaving the business. The shocking news came amid scandals over its working conditions and the harmful psychological effects that content moderation jobs have on people.
While its capabilities are still developing, AI-powered moderation is of enormous help in solving many of the issues with illegal and abusive content. Computer vision alleviates a large part of the burden on human content moderators, while increasing productivity and optimizing the moderation process for businesses.
What’s the solution to the harsh reality of content moderation? Let’s dig in and explore the options.
The risks of human content moderation
Online content moderators are in high demand: more than 100,000 people around the world are involved in content review for online companies. Many of them are based in Silicon Valley, but a significant share of the work is outsourced to countries such as the Philippines and Mexico.
A recent New Yorker interview with Sarah T. Roberts, author of Behind the Screen, a book about content moderation workers, reveals the ‘underworld’ of this job. Employees have to go through thousands of visuals per day, typically reported by end users as inappropriate, illegal or disturbing. It is then up to the moderators to decide, in a matter of seconds, whether an image violates the company’s policies or the country’s laws. This means moderators have to make split-second decisions about the appropriateness of a piece of content while weighing business guidelines and ethical codes.
According to the interviews Roberts conducted with moderators around the world, the emotional toll is enormous. Burnout and desensitization are only the surface, followed by PTSD, social isolation and depression. On top of the negative psychological effects, Roberts’ research shows that content moderators typically receive entry-level salaries and have to put up with subpar working conditions. Tech companies, as well as content moderation service providers, treat this type of work as rather ‘mechanical,’ ignoring its heavy toll.
Besides the ethical and psychological considerations, content moderation done by humans is slow and cannot be easily scaled. With millions of visuals posted daily, online companies are facing waterfalls of content that needs to be reviewed for legal and ethical reasons. So what does Artificial Intelligence bring to the table for content moderation today?
What AI has to offer in content review
Image recognition is revolutionizing the online world in countless ways. Its most ‘humane’ and ethical benefit, though, may well be its contribution to automating image moderation. Computer vision allows the rapid analysis and identification of visual content, saving hours of work that would otherwise be performed manually by people.
How does it work? When an image is submitted for AI review, it is first analyzed by the API – the pre-moderation stage. The algorithm then concludes what the picture represents, typically with a high degree of accuracy. On the basis of this keyword categorization, the computer models check whether any objects in the image match a list of inappropriate content.
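To make the pre-moderation stage concrete, here is a minimal sketch of what such a check might look like: an image is sent to an image-recognition API and the returned labels are compared against a blocklist of unsafe categories. The endpoint, credential, label names and response shape below are hypothetical placeholders, not a specific provider’s actual API.

```python
# Sketch of the pre-moderation step: ask a vision API for labels,
# then flag any labels that appear on an unsafe-content blocklist.
import requests

API_URL = "https://api.example-vision.com/v1/tags"   # placeholder endpoint
API_KEY = "YOUR_API_KEY"                             # placeholder credential

UNSAFE_LABELS = {"nudity", "weapon", "violence", "gore", "drugs"}

def pre_moderate(image_url: str) -> dict:
    """Return all recognized labels plus the ones that match the unsafe list."""
    response = requests.get(
        API_URL,
        params={"image_url": image_url},
        headers={"Authorization": f"Bearer {API_KEY}"},
        timeout=10,
    )
    response.raise_for_status()
    # Assumed response shape: {"tags": [{"label": "...", "confidence": 0.97}, ...]}
    tags = response.json().get("tags", [])
    flagged = [t for t in tags if t["label"] in UNSAFE_LABELS]
    return {"tags": tags, "flagged": flagged}
```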
For images the AI deems safe, businesses can set up automatic posting, as well as automatic categorization of the content. If the algorithm detects potentially problematic content (the grey zone), the visuals are referred for human screening. While manual review is still necessary, the toughest part of the work – identifying objects in an enormous volume of visuals – is completed by the AI. Content moderation software also offers automatic filtering: if the algorithm judges a piece of content to be clearly offensive, it is removed outright rather than routed for review.
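The routing logic described above can be expressed as a simple rule of thumb: safe content is auto-published, a grey zone goes to human review, and clearly offensive content is removed automatically. This is an illustrative sketch built on the hypothetical `pre_moderate` output from the previous example; the threshold values are examples, not recommendations.

```python
# Route content based on the highest confidence among flagged (unsafe) labels.
APPROVE_BELOW = 0.20   # below this: safe enough to post automatically
REMOVE_ABOVE = 0.90    # above this: removed without human review

def route(flagged_tags: list) -> str:
    """Decide the fate of a piece of content from its unsafe-label confidences."""
    worst = max((t["confidence"] for t in flagged_tags), default=0.0)
    if worst < APPROVE_BELOW:
        return "auto_publish"   # safe: post and categorize automatically
    if worst > REMOVE_ABOVE:
        return "auto_remove"    # clearly offensive: filtered out directly
    return "human_review"       # grey zone: referred to a moderator
```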
In addition to reviewing potentially harmful and disturbing content, AI-powered content moderation can bring extra benefits to businesses. For example, it helps reduce clutter on websites where users post sales listings. Computer vision allows offers to be filtered so that buyers are not swamped with irrelevant, inappropriate or disturbing listings. This has immediate and important benefits for online marketplaces and e-commerce.
Image recognition can also be used to ensure quality by removing low-resolution visuals, as well as by identifying unwanted watermarks or symbols and preventing copyright infringement. This is especially useful for social media, stock websites and retailers. Last but not least, AI content moderation can contribute to fighting disinformation and the spread of fake news.
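One of these quality checks needs no machine learning at all. Below is a minimal sketch of rejecting visuals whose resolution falls below a minimum, using the Pillow library; the size limits are illustrative values, not a standard.

```python
# Quality gate: reject images below a minimum resolution before further review.
from PIL import Image

MIN_WIDTH, MIN_HEIGHT = 800, 600  # example minimum acceptable resolution

def is_high_enough_resolution(path: str) -> bool:
    """Return True if the image meets the minimum width and height."""
    with Image.open(path) as img:
        width, height = img.size
    return width >= MIN_WIDTH and height >= MIN_HEIGHT
```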
Blending AI and human power
Numerous cases in the last decade illustrate that relying solely on human moderation is expensive and difficult to scale, in addition to being a hazard to the people doing the job. At the same time, AI algorithms cannot fully take over content moderation – yet. In most cases today, the best approach is to use both, in the most practical and safest way.
Combining the power of human judgment and computer vision holds enormous potential for handling the flood of violent, pornographic, illegal and inappropriate visuals posted online every day. This approach significantly reduces the workload of the hundreds of thousands of people holding psychologically harmful content moderation positions. It is also cost-effective for companies, as a large part of the content is processed by AI.
At the same time, the participation of expert moderators who help improve the algorithms and set a business’s overall content moderation guidelines is crucial. AI is developing quickly and presents great options for content moderation, allowing for high levels of accuracy and scale. Human input remains decisive, however, as only people can fully understand context and cultural relevance, and process content emotionally.
Imagga’s content moderation platform is designed to work in sync with human moderators. They are notified in pre-defined scenarios that require human judgment on AI-flagged content in images, videos and live streams. Moderators can manually set threshold ranges that determine when human review is necessary for different types of moderation issues, such as NSFW, violence, weapons, and more. Companies can choose to include their own moderation teams in the process, or to hire a moderation team from Imagga.
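To illustrate the idea of per-category review thresholds, here is a small sketch of how such a configuration could be modeled in code. The category names, score bands and structure are examples only and do not represent Imagga’s actual configuration format or API.

```python
# Per-category "grey zone" bands: scores inside a band are sent to human review.
REVIEW_BANDS = {
    "nsfw":     (0.30, 0.85),
    "violence": (0.25, 0.80),
    "weapons":  (0.40, 0.90),
}

def needs_human_review(category: str, score: float) -> bool:
    """Return True if the category score falls inside its human-review band."""
    low, high = REVIEW_BANDS.get(category, (0.0, 1.0))
    return low <= score <= high
```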
With the use of powerful AI for image moderation, human labor is dramatically reduced. This helps companies optimize their content moderation processes. The results are higher efficiency and improved quality, as well as easier scaling of moderation efforts, which is key for UGC-based businesses today.
Get started with AI-powered content moderation
Companies operating with large-scale user-generated and curated content need moderation in order to protect their customers from harmful and abusive content, as well as to protect themselves from legal, financial and reputational damage. The right approach is each business’s choice: the right mix between the impressive capabilities of AI and the human input that is still required. Yet the benefits of image moderation through computer vision are indisputable.
Explore how Imagga’s content moderation platform can help your business handle content review safely and efficiently.