How to Use Imagga’s CM Platform for an Efficient Content Moderation Process

Content moderation (CM) is a priority for a wide variety of online platforms that want to ensure a safe environment for their users in line with their Trust and Safety programs. 

Imagga has the right solution for accomplishing just that. Our fully automatic content moderation (or semi-automatic, if you decide to involve humans in the loop) is a powerful, easy-to-use, and scalable way to monitor all the visual content that users generate on your platform.

With Imagga, you overcome several crucial issues at once: 

  • The harm that unmoderated user-generated content can bring to both your users and your brand; 
  • The limitations of manual moderation that can’t cope with the ever-growing content volume;
  • The ethical problems and psychological burden on human moderators caused by disturbing content. 

One of the superpowers of our platform is that it allows you to handle content in a faster and more efficient way than ever before. Quick moderation is essential for working with large amounts of content — and it’s important for boosting the productivity and growth trajectory of your online business. 

Let’s dig into the features and benefits that Imagga’s full-stack content moderation platform offers — and how to make the most of them for managing your visual and livestream content.

The Tools in Imagga’s CM Platform 

Our content moderation platform consists of three modules. It is designed to flexibly accommodate your needs, combining automatic AI-powered moderation with the necessary input from your in-house or outsourced human moderators. 

The API (Application Programming Interface) is where the AI rolls up its sleeves and gets the work done. It boasts self-improving, state-of-the-art deep learning algorithms that identify inappropriate visual content on the basis of visual recognition technology. You can use the platform in the way that best suits your business operations and legal framework — in the cloud or on-premise. You can stick with just the API component if you don’t need human moderation.

The Admin Dashboard is the web and mobile UI where you get all the functionalities and settings in one place. You can control the different aspects of the moderation process, so that you can skillfully combine the automatic features of the platform for filtering and flagging with human moderation whenever it’s necessary.  

The Moderation Interface is where your human moderators can easily interact with the CM platform. When they open Imagga Content Moderation UI, they’ll see the batches of items that have been assigned to them for moderation and will be able to act on them immediately.  

The API

Once you create an Imagga API account, you can securely analyze and moderate your visual data using our REST API. 

The AI-powered pre-trained model will start processing the information you provide for analysis, screening it for different categories of inappropriate content. You can set the categories for filtration flexibly, depending on your moderation goals. The best part is that the process is automated, but the system can also learn on the go from the decisions of human moderators, if any are involved in the process.
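
If you want to try the screening yourself, here is a minimal sketch that calls Imagga’s public REST API from Python. The `nsfw_beta` categorizer ID and the exact response layout are assumptions based on the public docs, so double-check them against your account before relying on this.

```python
# A minimal sketch of screening one image with Imagga's REST API.
# The categorizer ID ("nsfw_beta") and response layout are assumptions;
# check your account's documentation for the exact endpoints you have.
import requests

API_KEY = "your_api_key"        # from your Imagga API account
API_SECRET = "your_api_secret"

def screen_image(image_url: str) -> list:
    """Ask the API how confident it is about each safety category."""
    resp = requests.get(
        "https://api.imagga.com/v2/categories/nsfw_beta",
        params={"image_url": image_url},
        auth=(API_KEY, API_SECRET),
        timeout=30,
    )
    resp.raise_for_status()
    # Each category comes back with a confidence score (0-100).
    return resp.json()["result"]["categories"]

for cat in screen_image("https://example.com/upload.jpg"):
    print(cat["name"]["en"], cat["confidence"])
```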

While the powerful Admin Dashboard lets you control a broad range of settings for the content moderation process, you can also use the Admin API for that purpose — and not only for feeding items for moderation. You can:

  • Create projects, add moderators, and set categories of inappropriate content and rules for moderation 
  • Access moderation items’ status and logs 

You can also import data from different sources through the API endpoints specified for this purpose. 
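
To make that workflow concrete, here is a hypothetical sketch of what such calls could look like. None of the endpoint paths, field names, or credentials below are Imagga’s documented Admin API; they are placeholders illustrating the create-project, add-moderator, submit-and-poll cycle described above.

```python
# A hypothetical sketch of the Admin API workflow; endpoint paths,
# field names, and the auth scheme are placeholders, not Imagga's
# documented Admin API. Consult the platform docs for the real calls.
import requests

BASE = "https://cm.example.com/admin"          # placeholder base URL
session = requests.Session()
session.auth = ("admin_key", "admin_secret")   # placeholder credentials

# 1. Create a project with a category and a moderation threshold.
project = session.post(f"{BASE}/projects", json={
    "name": "Accommodation Photos",
    "categories": [{"name": "Inappropriate", "threshold": [30, 70]}],
    "sla_hours": 24,
}).json()

# 2. Assign a moderator to the project.
session.post(f"{BASE}/projects/{project['id']}/moderators",
             json={"email": "moderator@example.com"})

# 3. Submit an item, then poll its moderation status and logs.
item = session.post(f"{BASE}/projects/{project['id']}/items",
                    json={"image_url": "https://example.com/upload.jpg"}).json()
print(session.get(f"{BASE}/items/{item['id']}/status").json())
```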

The Admin Dashboard: Your Control Hub

Sorting content for quick moderation is an easy and intuitive process with Imagga’s Admin Dashboard. 

When you open your dashboard, you have access to three important sections: Projects, Moderators, and Rules. Let’s review what you can do in each of them in detail. 

Projects 

In your Admin Dashboard, you can keep tabs on a number of projects at once. You can create a new project based on the different types of content (or supported language) you want to moderate. 

Let’s say that you want to use content moderation for your travel platform. You can set up, for example, two separate projects for the different streams of content — Accommodation Reviews for monitoring user reviews of properties, and Accommodation Photos for monitoring the visuals uploaded for each property. 

For each project, there are a number of settings you can control.

SLAs

You can choose the number of hours within which items in this project have to be moderated. 

This is especially useful when you have a single moderation team that needs to handle different projects simultaneously.

Priority Level

You can further prioritize a project by setting its overall priority level in your dashboard.

This priority level overrides the SLA setting, so it pushes a project up a moderator’s list. 

Batch Size

You can set the number of items that a moderator should handle at once when working on a project. Only after they complete one batch will they be able to review the items from the next one.

With this setting, you can manage the workload of your moderators, ensuring that content is reviewed in the best possible order too. 
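
To see how SLA, priority level, and batch size could fit together, here is an illustrative sketch (not Imagga’s actual implementation) of building a moderator’s next batch: priority sorts first and overrides the SLA, the SLA deadline breaks ties, and batch size caps how many items are handed out at once.

```python
# Illustrative only: one plausible way the three project settings
# could interact when a moderator's work queue is assembled.
from dataclasses import dataclass
from datetime import datetime

@dataclass
class Item:
    id: str
    priority: int            # higher = more urgent; overrides the SLA
    sla_deadline: datetime   # submitted_at + the project's SLA hours

def next_batch(queue: list[Item], batch_size: int) -> list[Item]:
    """Return the next batch; a new one is given only after completion."""
    ordered = sorted(queue, key=lambda i: (-i.priority, i.sla_deadline))
    return ordered[:batch_size]
```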

Content Privacy 

You have two options for ensuring you meet GDPR and content privacy regulations — blurring people’s faces and blurring car license plates.

This setting is especially relevant if you’re working in a heavily regulated field. 

Retention Policy

You can choose how long an item stays in the system before it gets deleted.

This is necessary for the learning purposes of the AI algorithm, which improves over time based on moderators’ feedback on its previous work. 

Add Moderators

You can assign different moderators to different projects. Once you assign a moderator to a project, they’re allowed to flag items for all categories of inappropriate content in this project.

That’s how you make sure the right person is working on specific projects. It also helps you stay on top of managing moderators' workloads.  

Categories Management

You can set the different categories of inappropriate content that you’d like to moderate. You can create new categories and name them according to your platform’s needs. For example, you can set categories like ‘Inappropriate’, ‘Irrelevant’, and others.

For each category, you can choose different options for:

  • AI model to use for content moderation
  • Threshold range for forwarding an item to a human moderator - everything outside this range is considered reliably moderated in an automated fashion
  • Number of moderators who review a single item in this category, for ensuring better quality and less bias
  • Moderators you want to exclude from working on a specific category within a project

In addition, you can write down the guidelines for moderation of this specific category, so moderators can access them easily whenever they work on it. 

When you add a new category, it is added to the system in real-time, so it can be used immediately. 
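
As a rough illustration of the threshold setting described above, assuming the AI returns a confidence score from 0 to 100 per category, the routing logic could look like this sketch:

```python
# A sketch of the per-category threshold logic: scores inside the
# configured range go to a human; scores outside it are resolved
# automatically. The 0-100 scale is an assumption for illustration.
def route(confidence: float, low: float, high: float) -> str:
    if confidence < low:
        return "auto-approve"   # AI is confident the item is clean
    if confidence > high:
        return "auto-flag"      # AI is confident the item violates rules
    return "human-review"       # ambiguous: forward to a moderator

assert route(12.0, 30, 70) == "auto-approve"
assert route(55.0, 30, 70) == "human-review"
assert route(93.0, 30, 70) == "auto-flag"
```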

Moderators

You can create profiles for the different moderators on your team. Each profile includes the moderator’s name, photo, and the languages they work in.

You can set flexible rules individually for each moderator and assign priorities. You’re also able to review the number of items assigned to each person, as well as the rules that apply to them.

Rules

In the Rules section of the Admin Dashboard, you can create custom rules for your moderation process. 

For example, you can create rules for the different languages used and their priority levels. Then you can assign the rules to specific moderators - e.g., one person has to prioritize English-language content, while another handles Spanish.

The Moderation Interface

The moderators on your team have access to Imagga’s CM platform through a dedicated interface — the Moderation web and mobile UI. 

When a moderator logs in, they can immediately see their assigned projects and the respective batches of items within each project. On the left-hand side of the screen, they can review attributes of each item, like item ID and URL, and additional information like the date when the item was submitted for moderation. There is also an option to translate content that’s not in English, which is great for multi-language moderation.

On the right-hand side, the moderator can see the categories for moderation and click on the ones that the item belongs to, e.g., ‘Irrelevant’ or ‘Inappropriate’ — or alternatively, approve the item if it doesn’t breach your platform’s guidelines.

Moderators can use hotkeys to make the moderation process as quick as possible. The reasons for flagging a visual are numbered from 1 to 9, so moderators can press a number key, or use the Skip, Approve, and Disapprove hotkeys.

Ace Your Content Moderation with Imagga

With Imagga’s semi-automatic content moderation, you can combine the best of machine and human moderation in one. Our AI-powered system helps you optimize the moderation process, while also protecting moderators from vast amounts of harmful content. 

Don’t have an internal moderation team? Don’t worry — we can get you a highly-qualified external one too. 

Ready to give it a go? Get in touch with us to boost your content moderation with Imagga.

How to Use Imagga's CM Platform Video

https://youtu.be/FrZkIWkJ1lA


Automated Content Moderation

Due to the gigantic amounts of user-generated content that goes online continuously, it’s difficult to stay on top of content moderation. Plus, the risks of exposing human moderators to harmful content make manual moderation less and less desirable. This is where automated content moderation comes in. 

Content moderation is crucial for the functioning of digital platforms today. It provides a much-needed safety net that protects online users from harmful and illegal content. 

The reasons are numerous: online businesses have a moral obligation to protect their users from inappropriate content, uphold their brand reputation as a secure space, and comply with applicable regulations that require the removal of illegal and offensive materials. 

What Is Automated Content Moderation?

Automated content moderation entails the use of technology to speed up the removal of inappropriate and harmful content and to automate the tedious tasks of reviewing every single posting by hand. 

While named ‘automated’, in most cases it consists of a mixture of algorithms and a final human review. That’s why it’s sometimes referred to as semi-automated. Technology does the heavy lifting, while the input of human moderators is needed only after an automatic prescreening or in certain situations.

Automating the process of content moderation is done by using AI-powered algorithms. They identify inappropriate content on the basis of previously fed data. 

The moderation platform filters content by recognizing illegal, sexually explicit or harmful elements in visuals, text, videos and even live streams. Depending on the thresholds for moderation, it may prompt for human input for cases that cannot be discerned by the AI. 

Benefits of Automated Content Moderation

The advantages of using automated moderation are numerous. The technology has practically revolutionized how platforms built on user-generated content handle these processes.

1. Speed

Our digital world requires a fast pace of moderation. No one would like to wait days before their social media post goes live because it has to be reviewed by a human moderator.

One of the biggest promises of automation — which it truly delivers on — is speed. Moderating the tons of content that go online every second seems like a mission impossible without technology. But with its help, it can occur in real time. 

With algorithms, the moderation process gets faster and more efficient. Content that is undoubtedly harmful or illegal can be taken down immediately. Dubious content automatically gets flagged and forwarded for human review. This makes the whole cycle quicker, preserving for end users the immediacy of online media.

2. Moderation at Scale

For online businesses, speed relates to scale too. If you’re running a platform that’s bound to grow, you need a viable moderation solution for handling an exponentially growing amount of user-generated content. 

Manual moderation wouldn’t be a feasible way to manage the huge amounts of content — it would require a very large team of human moderators and a very tight work schedule. The logical solution is automation, which can take over a huge chunk of the moderation process. 

Using automated content moderation helps digital platforms grow, while allowing them to preserve their security standards and to uphold their Trust and Safety programs. 

3. Protection of Human Moderators

A significant benefit of automated content moderation is related to the work of human moderators. It prevents them from having to go through the most disturbing content, as it gets automatically prescreened. 

The negative psychological effects of moderation are no secret, and the job is deemed a truly challenging one. The more these risks can be curbed, the better for both moderators and the platforms they work for.


Limitations of Automated Content Moderation

While it’s mostly a winning move, there are some drawbacks to using automation in the content moderation process. Still, with good planning, they can also be easily overcome. 

For the time being, the moderation process is not fully automated. Instead, a semi-automated approach is the best solution for now. The input of human moderators is still necessary for certain decisions and sensitive cases because of their ability for critical reflection. 

The hard work is being done by the algorithms, and people are involved only at the last stages. The content has been pre-screened, but needs a final yes or no by a moderator. While not fully automatic, this saves tremendous amounts of work hours, plus reduces the exposure to harmful content. 

While the moderation technology is still learning and being improved, there may be some mistakes in the identification of harmful content. Technology still struggles with recognizing context in certain cases. Some inappropriate content may slip through, while compliant content may get flagged. The platform needs to be updated regularly to include the latest news and cultural symbols. 

While all of this is challenging, AI- and machine learning-powered systems are becoming better and better at recognizing illegal and harmful content. They are learning with every new input they process, which holds great promise for their future performance.

Another hurdle for automated content moderation is the limited ability of technology to grasp contextual variations in speech, images and in cultural norms as a whole. The use of certain words or slang phrases in one region may be perfectly fine, while they may be offensive in other places. Nuances and variations in language and behavior may also be difficult to catch for automated platforms. Contextual use of imagery can be a tough nut to crack too. 

How Does Automated Content Moderation Work?

Automated content moderation can be used in different ways, depending on the needs of your platform:

  • Pre-moderation: algorithms screen all content before it goes live
  • Post-moderation: content is screened shortly after it’s gone live; this is the most popular method
  • Reactive moderation: users report posts for inappropriateness after they have been published  

Whichever method you choose, your first step will be to set your moderation policy. You’ll need to define the rules and types of content that have to be removed, depending on the overall strategy of your platform. Thresholds also have to be set, so that the moderation tool has a clear demarcation of when content violates your standards.

In the most common case of post-moderation, all user-generated content is processed by the moderation platform. On the basis of the set rules and thresholds, clearly inappropriate content is immediately removed. Thanks to the automation, this can happen quite soon after publication. Items the algorithm considers trickier are forwarded for human review. Content moderators access the questionable items through the moderation interface and make the final decision to keep or remove the content.

Whenever content is forwarded for manual moderation, the training data from the human moderators’ actions feeds back into the automated moderation platform. In this way, the AI learns from the subtleties in human decisions to remove or keep certain content. With time, the new learnings enrich the algorithms and make the automatic process more and more accurate. 
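
Putting the post-moderation cycle together, here is a simplified, self-contained sketch. The score scale, thresholds, and the human-review callback are illustrative stand-ins; the point is the flow: auto-remove clear violations, auto-keep clear passes, and send the ambiguous middle to a human whose decision is logged as training feedback.

```python
# A simplified sketch of the post-moderation cycle described above.
# The 0-100 score, the thresholds, and the callback are placeholders.
from typing import Callable

def post_moderate(
    unsafe_score: float,                  # 0-100 confidence from the AI
    human_review: Callable[[], str],      # returns "kept" or "removed"
    feedback_log: list,
    low: float = 30.0,
    high: float = 70.0,
) -> str:
    if unsafe_score > high:
        return "removed"                  # clearly inappropriate
    if unsafe_score < low:
        return "kept"                     # clearly compliant
    decision = human_review()             # tricky case: human decides
    feedback_log.append((unsafe_score, decision))  # feeds future training
    return decision

log: list = []
print(post_moderate(92.0, lambda: "kept", log))   # removed, no human needed
print(post_moderate(50.0, lambda: "kept", log))   # kept, via human review
```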

What Type of Content Can You Moderate Automatically?

You can use automated content moderation with all types of content — visual, textual, and even with moving images. 

1. Visuals

With the help of computer vision, automated platforms can identify inappropriate content in images through object detection mechanisms. They use algorithms to recognize unwanted elements and their position for an understanding of the whole scene. Offensive text can also be spotted, even if it is contained in an image. 

The types of inappropriate visuals you can catch with fully automated content moderation include:

  • Nudity and pornography
  • Self-harm and gore
  • Alcohol, drugs, and forbidden substances
  • Weapons and torture instruments
  • Verbal abuse, harsh language, and racism
  • Obscene gestures
  • Graffiti and demolished sites
  • Physical abuse and slavery
  • Mass fights
  • Propaganda and terrorism
  • Infamous or vulgar symbols
  • Infamous landmarks
  • Infamous people
  • Horror and monstrous images
  • Culturally-defined inappropriateness

2. Text 

Natural language processing (NLP) algorithms can recognize the main meaning of a text and its emotional charge. Automated moderation can identify the tone of the text and then categorize it thanks to sentiment analysis. It can also search for certain keywords within textual content. Additionally, built-in knowledge databases can be used to predict the compliance of texts with moderation policies. 

Algorithms can screen for:

  • Bullying and harassment
  • Hate speech
  • Trolling
  • Copyrighted text
  • Spam and scam 
  • Fraudulent text
  • Pornographic text
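
As a deliberately toy illustration of the keyword-screening layer mentioned above (the banned terms are placeholders, not a real blocklist), a whole-word check could look like the sketch below; real systems layer NLP-based sentiment and context analysis on top, since word lists alone miss most abuse.

```python
# A toy whole-word keyword screen; the terms are placeholders only.
import re

BANNED = {"scam", "spam"}

def flag_text(text: str) -> bool:
    """True if any banned keyword appears as a whole word."""
    words = set(re.findall(r"[a-z']+", text.lower()))
    return bool(words & BANNED)

print(flag_text("Totally legit offer"))        # False
print(flag_text("This is a SPAM giveaway"))    # True
```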

3. Video

Video moderation requires the most complex process. The whole file has to be reviewed to ensure its compliance. Computer vision identifies inappropriate elements in the visual parts, while other algorithms are used to analyze the audio too. Automated content moderation is applicable even for live streaming where the screening process is in real time. 

Automated Content Moderation Solutions

Imagga’s content moderation platform provides you with all the tools you need to automate your moderation process. It’s a powerful and easy-to-use solution that you can integrate into your operations — and prepare your platform for scaling.

Imagga’s AI-powered pre-trained systems analyze all content on the basis of the moderation guidelines that you have set. Our API learns on the go too, so it improves with every project it processes. 

In the Admin Dashboard, you can create different categories of inappropriate content to look for and define the parameters for each. You can set priority levels for projects, as well as thresholds for flagging and forwarding content for human moderation. You can also control data retention length. 

The Moderation Interface is crafted to make your human moderators’ job easier. They get automatically prescreened content that they can review faster and with reduced risks because the most horrible content has already been removed. Moderators can use handy hotkeys and organize their work effectively in the interface.   

With Imagga’s content moderation platform, you can effectively ensure the protection of your users, your brand reputation, and your human moderators. You can use our tools in the cloud or on-premise — and you can easily plug them into your current processes, whether you have an in-house or an outsourced moderation team.

Automated Content Moderation Case Studies

1. Live Streaming Use Case

  • Live video streams need to be moderated simultaneously, in real time
  • This can’t be done via manual moderation due to privacy concerns
  • Automated moderation guarantees that privacy
  • Screening is done at short intervals; if a problematic stream is detected, it is escalated to the website admins, who follow NSFW policies - sending a warning and/or terminating the stream, etc.

2. Dating Website Use Case

  • Similar to the above, but covers images uploaded to profiles, videos, and live stream chat if supported
  • Different levels of moderation apply depending on the country of operation and the type of dating website
  • Automated CM removes the privacy concerns, which can be very sensitive when it comes to dating websites

3. Travel Website Use Case

  • Both images and text need moderation - travel sites thrive on the reviews visitors leave, so the text and the accompanying images/videos have to be screened
  • Automated CM makes real-time publishing of reviews possible when they pass the auto-CM filter

Frequently Asked Questions

What Is Automated Content Moderation?

Automated content moderation entails the processing of user-generated content by computer vision and AI-powered algorithms. This makes the moderation process faster and more efficient, and it protects human moderators from the most disturbing content online. 

How Do Automated Content Moderation Tools Work?

Automated moderation is powered by technology that screens content for inappropriate words and images — be it in text, visuals, or video. It prescreens user-generated content and forwards to human moderators only items that are questionable. 



What Is Content Moderation?

The digital world is in a constant state of flux, and one of its powerful propellers is user generated content. Today, people are more likely to trust opinions shared by other people online than information provided by businesses and institutions. Read on to find out what content moderation is.

Unimaginable quantities of text, images and video are being published daily — and brands need a way to keep tabs on the content that their platforms host. This is crucial for maintaining a safe and trustworthy environment for your clients, as well as for monitoring social influences on brand perception and complying with official regulations. 

Content moderation is the most effective method for achieving all of that. It helps online businesses provide a safe and healthy environment for their users. 


What Is Content Moderation?

Content moderation refers to the screening of inappropriate content that users post on a platform. The process entails the application of pre-set rules for monitoring content. If it doesn’t satisfy the guidelines, the content gets flagged and removed. The reasons can be different, including violence, offensiveness, extremism, nudity, hate speech, copyright infringements, and similar. 

The goal of content moderation is to ensure the platform is safe to use and upholds the brand’s Trust and Safety program. Content moderation is widely used by social media, dating websites and apps, marketplaces, forums, and similar platforms.

Why Is Content Moderation Important?

Because of the sheer amount of content that’s being created every second, platforms based on user generated content are struggling to stay on top of inappropriate and offensive text, images, and videos. 

Content moderation is the only way to keep your brand’s website in line with your standards — and to protect your clients and your reputation. With its help, you can ensure your platform serves the purpose that you’ve designed it for, rather than giving space for spam, violence and explicit content. 

Types of Content Moderation

Many factors come into play when deciding what’s the best way to handle content moderation for your platform — such as your business focus, the types of user generated content, and the specificities of your user base. 

Here are the main types of content moderation processes that you can choose from for your brand. 

1. Automated Moderation

Moderation today relies heavily on technology to make the process quicker, easier, and safer. AI-powered algorithms analyze text and visuals in a fraction of the time that people need to do it, and most of all — they don’t suffer psychological trauma from processing inappropriate content.

When it comes to text, automated moderation can screen for keywords that are deemed problematic. More advanced systems can spot conversational patterns and perform relationship analysis too.

As for visuals, image recognition powered by AI tools like Imagga offers a highly viable option for monitoring images, videos and live streams. Such solutions identify inappropriate imagery and have various options for controlling threshold levels and types of sensitive visuals. 

While tech-powered moderation is becoming more and more precise and effective, it cannot fully replace human review, especially in more complex situations. That’s why automated moderation still relies on a mixture of technology and human moderation.

2. Pre-Moderation

This is the most elaborate way to approach content moderation. It entails that every piece of content is reviewed before it gets published on your platform. When a user posts some text or a visual, the item is sent to the review queue. It goes live only after a content moderator has explicitly approved it.

While this is the safest way to block harmful content, the process is rather slow and not applicable to the fast-paced online world. However, platforms that require a high level of security still employ this moderation method. A common example is platforms for children, where the security of the users comes first.

3. Post-Moderation

Post-moderation is the most typical way to go about content screening. Users are allowed to post their content whenever they wish to, but all items are queued for moderation. If an item is flagged, it gets removed to protect the rest of the users. 

Platforms strive to shorten review times, so that inappropriate content doesn’t stay online for too long. While post-moderation is not as secure as pre-moderation, it is still the preferred method for many digital businesses today.

4. Reactive Moderation

Reactive moderation entails relying on users to mark content that they find inappropriate or that goes against your platform’s rules. It can be an effective solution in some cases. 

Reactive moderation can be used as a standalone method, or combined with post-moderation for optimal results. In the latter case, users can flag content even after it has passed your moderation processes, so you get a double safety net. 

If you opt to use reactive moderation only, there are some risks you’d want to consider. A self-regulating platform sounds great, but it may lead to inappropriate content remaining online for far too long. This may cause long-term reputational damage to your brand. 

5. Distributed Moderation

This type of moderation relies fully on the online community to review content and remove it as necessary. Users employ a rating system to mark whether a piece of content matches the platform’s guidelines.

This method is seldom used because it poses significant challenges for brands in terms of reputation and legal compliance. 


How Does Content Moderation Work?

To put content moderation to use for your platform, you’ll first need to set clear guidelines about what constitutes inappropriate content. This is how the people who will be doing the job — content moderators — will know what to mark for removal. 

Besides types of content that have to be reviewed, flagged and removed, you’ll also have to define the thresholds for moderation. This refers to the sensitivity level that content moderators should stick to when reviewing content. What thresholds you’ll set would depend on your users’ expectations and their demographics, as well as the type of business you’re running. 

Content moderation, as explained in the previous section, can take a few different forms. Pre-moderation, or reviewing content before it’s published, is usually considered too slow for today’s user generated content volume. That’s why most platforms choose to review content after it’s gone live, and it gets immediately placed on the moderation queue. 

Post-moderation is often paired with automated moderation to achieve the best and quickest results.  

What Types of Content Can You Moderate?

Moderation can be applied to all kinds of content, depending on your platform’s focus - text, images, video, and even live streaming. 

1. Text

Text posts are everywhere — and can accompany all types of visual content too. That’s why moderating text is one of the priorities for all types of platforms with user generated content.

Just think of the variety of texts that are published all the time, such as:

  • Articles
  • Social media discussions
  • Comments 
  • Job board postings 
  • Forum posts

In fact, moderating text can be quite a feat. Catching offensive keywords is often not enough because inappropriate text can be made up of a sequence of perfectly appropriate words. There are nuances and cultural specificities to take into account as well. 

2. Images

Moderating visual content is considered a bit more straightforward, yet having clear guidelines and thresholds is essential. Cultural sensitivities and differences may come into play as well, so it’s important to know in-depth the specificities of your user bases in different geographical locations. 

Reviewing large amounts of images can be quite a challenge, which is a hot topic for visual-based platforms like Pinterest, Instagram, and the like. Content moderators can get exposed to deeply disturbing visuals, which is a huge risk of the job. 

3. Video

Video has become one of the most ubiquitous types of content these days. Moderating it, however, is not an easy job. The whole video file has to be screened because it may contain only a single disturbing scene, but that would still be enough to warrant removing the whole video.

Another major challenge in moderating video content is that it often contains different types of text too, such as subtitles and titles. They also have to be reviewed before the video is approved. 

4. Live Streaming

Last but not least, there’s live streaming, which is a whole different beast. Not only does it involve moderating video and text, but the moderation has to occur simultaneously with the actual streaming of the content.


The Job of the Content Moderator

In essence, the content moderator is in charge of reviewing batches of content — whether textual or visual — and marking items that don’t meet a platform’s pre-set guidelines. This means that a person has to manually go through each item, assessing its appropriateness while reviewing it fully. This is often rather slow — and dangerous — if the moderator is not assisted by an automatic pre-screening. 

It’s no secret today that manual content moderation takes its toll on people. It holds numerous risks for moderators’ psychological state and well-being. They may get exposed to the most disturbing, violent, explicit and downright horrible content out there. 

That’s why various content moderation solutions have been created in recent years — that take over the most difficult part of the job. 

Content Moderation Solutions

While human review is still necessary in many situations, technology offers effective and safe ways to speed up content moderation and to make it safer for moderators. Hybrid models of work offer unprecedented scalability and efficiency for the moderation process.

Tools powered by Artificial Intelligence, such as Imagga’s content moderation solution, hold immense potential for businesses that rely on large volumes of user generated content. Our platform offers automatic filtering of unsafe content — whether it’s in images, videos, or live streaming. 

The platform allows you to define your moderation rules and to set thresholds on the go. You can tweak various aspects of the automated moderation to make the process as effective and precise as you need.

You can easily integrate Imagga in your workflow, empowering your human moderation team with the feature-packed automatic solution that improves their work. The AI-powered algorithms learn on the go, so the more you use the platform, the better it will get at spotting the most common types of problematic content you’re struggling with.

You can use Imagga’s platform in the way that best fits your work — either in the cloud, or as an on-premise solution. 

Frequently Asked Questions

Here are the most common questions people ask about content moderation.

What does content moderation mean?

Content moderation is the job of screening for inappropriate content that users post on a platform. The goal is to safeguard users from any content that might be unsafe or inappropriate and might, in turn, ruin the online reputation of the platform it’s been published on.

How does content moderation work?

Content moderation can be done manually by human moderators who have been instructed what content must be discarded as unsuitable, or automatically, using AI platforms for precise content moderation. In some cases, a combination of manual and automated content moderation is used for faster and better results.

What are examples of content moderation?

Content moderation is essential when content is delivered to minors. In this case, disturbing, violent, or explicit content needs to be carefully monitored and flagged as inappropriate. Content moderation can be applied to text, images, video, and live streams.

Do you have any questions about content moderation? Let us know in the comment section, or don’t hesitate to reach out to us.


Content Moderation: In-House vs Outsourced

Effective community management for your online platform relies on high-quality content moderation. With tons of user-generated content that needs to be reviewed continuously, you’re surely pondering how to go about handling your moderation needs — with an outlook for scaling your business.

The in-house vs. outsourcing dilemma is common for online companies across industries that have to ensure adequate review of user content. It’s an important decision that you need to make early on in your platform’s development. This is how you ensure a good application of your Trust and Safety program for your users’ sake, compliance with relevant regulations, and protection of your brand reputation.

Both in-house and outsourced content moderation have pros and cons — and it’s a matter of weighing which option is most suitable for your current needs. Let’s dig into the pros and cons of each option, so you get an overview of what would work for you — while knowing you can always count on Imagga’s AI-powered automatic moderation platform, whichever choice you make.

In-House Content Moderation

Many companies prefer to build a content moderation team in-house. This is a long-term investment that certainly brings a number of benefits — but does have its drawbacks too. 

Pros

Having an in-house review team allows you to have a high level of control over the moderation process. Being in the same office, you can easily tweak procedures and guidelines to adapt to new circumstances and trends. 

Tackling moderation in-house also provides you with an in-depth view of the functioning of your online platform. You can have a fully hands-on approach in its development, recognizing challenges as they arise.

With an internal team, you can ensure improved consistency in moderation policies, as it’s easier to teach new staff and even pass down intuitive knowledge on difficult cases. This means shorter times for decisions on controversial or politically charged cases since your team members are aware of your company values and better understand what’s the best move in terms of your Trust and Safety policy.

Add Imagga’s full or semi-automated AI-first Content Moderation to the mix and your internal team will be relieved from reviewing repetitive inappropriate content and left to deal only with controversial or sensitive cases. 

Cons 

Creating an in-house moderation team can be expensive. It can be difficult to hire the necessary number of employees at once, which means you may be slow to start with your content review. 

Another potential downside of in-house moderation is that your newly founded team will need training. Even if staff members are experienced in the field, they’ll still need to get up to speed with the right content moderation strategy for the project at hand. You’ll also need to provide moderators with adequate psychological support due to the job’s increased risks.

Outsourced Moderation

Hiring an external team to take care of content moderation is a popular option for many online businesses, including giant social media platforms like Facebook, Twitter, and YouTube. It can be a good choice for your company too — but let’s first look at its pros and cons.

Pros

Outsourcing content moderation can save you time and money. Typically, a third-party vendor would have a faster and better-trained moderation team. You’d be able to benefit from their expertise immediately. 

This is also important considering the growing online risks. An outsourced expert team that keeps tabs on security threats would provide you with the top strategies and methods for moderation.
Having an outsourced team bundled with Imagga’s Automated Content Moderation Platform will further shorten the moderation time and make it even more cost-efficient, as huge amounts of the content that needs to be reviewed can be sorted out automatically.

Cons 

Collaborating with your external moderation team may be challenging. It’s important to have a contact person at the third-party vendor, so you have a one-stop communication point. 

With moderation outsourcing, ensuring flexibility, adaptability, and continuous learning may also be more difficult. You’ll need to make sure that your moderation partner understands thoroughly the cultural and linguistic specificities of your business. Frequent exchanges are also key to ensuring continuity in the moderation process.

How Imagga Can Help You Ace Your Content Moderation

Whether you opt for in-house or outsourced moderation, Imagga is your trusted content moderation partner. We offer a complete AI-powered content moderation solution that employs the best of artificial and human intelligence.

Our automatic platform uses advanced Artificial Intelligence algorithms to provide precise filtering of unsafe content, saving tons of work hours that moderators would otherwise have to spend reviewing it.

The automated content screening is self-learning, so it gets better the more you use it. Plus, you can fully control the settings of the moderation process: you can select and set up different projects, batch sizes, priority levels, privacy settings, and assigned moderators, to mention a few.

With Imagga’s content moderation platform, you have full freedom to choose whether you work with your in-house moderation team — or an outsourced one. You can easily plug in your in-house moderators in our AI platform, or hire external help from us that’s going to seamlessly enhance the automatic platform. 

Ready to get started? Contact us for a demo of our AI-powered moderation solution or to learn more about Imagga’s content moderation services.


Why Bad Content Is Bad for Your Business

Unsafe Content Is Bad for Business. Here’s How to Get Rid of It

The ubiquitous creation of different types of content — from text and audio to images, video, and live streaming — is driving the development of online platforms across locations and industries.

This makes content moderation a top priority for a wide variety of online businesses, including marketplaces with user-generated content, dating websites, online communication platforms, and gaming websites, among many others. 

As Trust and Safety programs steadily become the basis of building a safe digital environment, you just can’t skimp on adequate content screening if you want to grow a successful online business.

Detecting problematic content is important for a variety of reasons: protecting the users, meeting national and international digital safety regulations, and building up your reputation as a safe online platform. 

Here’s why you should take care of content moderation skillfully — and how you can go about doing that with ease. 

Why Bad Content Is Bad for Business

The unimaginable amounts of content created online every day are both a blessing and a curse for online businesses.

Platforms want to give space to their users to express themselves — yet this comes at the price of having to monitor tons of user-generated content and removing the ‘digital garbage’. The content that has to be flagged and removed for safety reasons includes illegal, obscene, insulting, and inappropriate materials, as well as any other content that doesn’t meet the platform’s guidelines. 

If left unsupervised, problematic content can get out of control and jeopardize the very existence of a platform. 

It’s Harmful to Your Users

Unsafe content is a direct threat to the very people you want to have on your website, whether you’re running a travel platform, a marketplace, or a dating platform.

As the owner of the platform, you have a moral responsibility towards users to ensure a safe and secure environment. It’s especially important to protect vulnerable groups and to prevent discrimination, insults, and threats as much as possible. 

With content moderation, you can prevent bullies, trolls, and other people with harmful intentions from reaching the rest of your user base and taking advantage of them and of your brand.

It’s an Issue with Legal Compliance 

Beyond the ethical duties, your online business may be liable for the content you publish. There are various national and international regulations regarding safe content that you may need to comply with to stay in business. While previously social media platforms, for example, were exempt from liability for illegal content, this is changing. 

The UK is moving towards such regulations, having its communications regulator screen for illegal content and fine platforms that expose their users to this. Similar steps have been taken in France, Germany, U.S., Brazil, and many other countries. 

While there is pushback against content moderation legislation because of censorship considerations, such regulations are steadily gaining ground, including the EU’s Digital Services Act.

It’s a Challenge to Your Brand Reputation

Last but not least, leaving harmful content published by ill-willing users on your platform is a risk for your brand reputation. 

If your regular users get exposed to violence, propaganda, child nudity, weapons, drugs, hate symbols, and a long list of other unsafe content, they’re very likely to stop using your services. 

The word of mouth about the permissibility of a platform towards problematic content spreads around fast — especially in a world as digitally connected as ours. This makes it difficult to protect your reputation if you have allowed unsafe content to circulate freely. You may also face legal problems if the case is brought to the attention of state and international authorities. 

The Key to Successful Content Moderation: Imagga

Content moderation is undoubtedly crucial for online platforms — but it’s no easy feat. The last years have seen a gradual move from manual moderation done by people to automated moderation provided by technology.

Imagga offers a fully or semi-automatic content moderation solution, powered by our extensive experience and achievements in Artificial Intelligence. Our real-time content screening works at scale and ensures that any Trust and Safety program you run to protect your users and reputation is successful.

With the automatic filtering of unsafe images, video, and live streams, your moderation teams can breathe a sigh of relief — their work is significantly reduced, and they’re protected from the sheer amount of harmful content they would otherwise need to process. You can choose the scope of content moderation and deploy it in the way that works for you, whether that’s cloud, on-premise, or edge deployment. And the best part: the self-learning AI gets better over time!

Interested to give it a try? Get in touch today to learn how you can ace your content moderation with the help of Imagga.

https://youtu.be/-SHFncU4h18

Automatic Image Tagging for Lightroom Is Here: Meet Wordroom

Whether you’re using Lightroom to organize and edit your private photos or to process your customers’ images, keeping your visual collection under control can be a nightmare. All of us want to have well-organized photo albums, yet finding the time for tedious sorting is not at the top of our wish list.

One of the proven ways to put order in the piles of images is to add metadata to each of them, so that their content is identifiable. By assigning keywords that describe the most important elements - people, objects, places, colors, and themes, you can create a searchable visual database. Then finding the right picture is done in a matter of seconds, whether it’s the favorite family photo or a specific stock image that you need for a client.  

While keyword tagging in Lightroom is undoubtedly useful, handling it manually is a never-ending battle. It’s time-consuming and boring, yet there’s barely any other way to go about organizing your visuals. 

That’s where Imagga’s Wordroom comes in - a simple yet powerful Lightroom keyword plugin that revolutionizes the way you organize your image collection. Backed by machine learning algorithms, the tool ‘sees’ the image and automatically offers diverse keywords with high accuracy. You then add them to the image’s metadata with a single click. 


Let’s dig into the process of keyword tagging in Lightroom and explore how Wordroom can make your work with visuals easier.

The need for keyword tagging in Adobe Lightroom

Organizing a large photo collection is a tough nut to crack. For hobbyist and professional photographers alike, the need for a sorting system manifests almost immediately - from the first moment images start to pile up on the hard drive.

The most common way to solve this is to introduce keywords for each photo. They describe the major elements and themes in the image, providing it with a content description, as well as a contextual one. You have to enter the relevant words when organizing your collection, so later on, you can search for a specific image or for a certain topic. 

The manual approach to keyword tagging is infamously hard and has produced an endless stream of jokes and complaints among visual professionals. It entails spending long hours reviewing each image. You need to note the words that best describe the people, animals, and objects, as well as the colors, themes, and emotions that can be seen in every photo. It’s not only about mentioning the objects and activities in it, but also considering its deeper meanings and context.

Despite the efforts, the manual keyword system often proves inconsistent and difficult to scale. The process is doable for a few images, but when you have to sort hundreds of them, it becomes a serious burden. While copy-paste works for a while, most of us would agree that there are much better ways to spend one’s time. Here’s how AI image recognition helps. 

Wordroom: the power of automatic tagging 


Computer vision technology is a powerful asset in numerous industries today, but its application is ever more important in digital photography and videography. Ploughing through hundreds of thousands of images and videos is a central issue for photographers, videographers, and stock website contributors, as well as for photo hobbyists. For all of them, keyword tagging is the much-hated item on the to-do list that always gets postponed or done halfway.

With a simple plugin, the process of keyword tagging in Lightroom becomes a breeze. Wordroom uses image recognition based on artificial intelligence in order to identify the main elements and attributes of each visual. They include objects, colors, shapes, and even actions, emotions, timeframes, and abstract terms. Once the plugin ‘reads’ the information, it automatically provides you with a list of keywords that you can edit and add to. You can use its capabilities for both small and large collections alike. 

The additional benefit of using an AI-powered tool like Wordroom is that it gets better with time. The more people use it to auto-tag photos, the more data the plugin can use to improve its accuracy. That’s at the core of machine learning, which means the AI develops with every new completed task. 

Image auto tagging for Lightroom is just one of the many uses of Imagga’s Auto Tagging API. Its image recognition capabilities bring innovation and easier processes to diverse types of professionals and businesses - real estate, cloud and other technologies, media, and commerce and retail, among others. The API is accurate and efficient, while allowing you to handle massive databases of images with its customizable tagging. Computer vision also powers image and facial recognition, which are heavily used by social media nowadays.

How to use Wordroom

It’s easy to get started with image auto tagging for Lightroom. The installation of Wordroom is a seamless process. 

In Lightroom, select Plugin Manager from the File dropdown menu. Then add the Wordroom plugin from the folder on your computer and enable it. Click Done and you’re ready to go. Check out the video below for the full details.

The next step is to try out the tool. From the Library dropdown menu, select Plug-in Extras > Auto-keyword. You will then see the plugin’s window. It displays the automatic keyword suggestions for the currently selected photo. The plugin supports both RAW and non-RAW image formats.

Wordroom offers up to 30 keywords for each image. You can deselect the ones that you find irrelevant. You can also add an unlimited number of extra keywords manually, if needed. When you click the button “Add to keyword tags,” Wordroom will display the selected automatic and the manually added keywords in Lightroom’s Keyword tags panel. 

Once a photo has been tagged, you will be able to see the keywords in the metadata panel. You can copy and paste them to other, similar images. You can also edit the keywords manually in the panel. Each photo that already has keywords is displayed with a tag icon on its thumbnail.

For up to 100 photos, you can use the plugin without registration. To continue using Wordroom, you need to sign up for a free plan, which allows you to auto-tag up to 2,000 photos per month. Professional photographers would likely need more capacity. If that’s your case, you can use the next plan, which includes automatic keyword tagging of up to 12,000 photos per month for $14.

In order to analyze your photos, Wordroom transfers them to the cloud. This means that you need a working internet connection while using it. However, your photos’ safety is guaranteed, as they are not stored beyond the image recognition process.

If you want to contribute to the plugin’s improvement, you can click the option Agree to allow the tool to use your visual data. That’s how it can improve its image recognition technology. 

Want to try image auto tagging in Lightroom?

The simple premise of Wordroom is to help you handle your photo collections easily and with less effort. The AI-powered technology behind the plugin speeds up the keyword tagging process, freeing up your time, so you can focus on what’s truly important in your work. 

Keen on checking out the automatic keyword functionality of Wordroom for your Lightroom photo collection? You can get started for free today. Just enter your email, and you’ll receive the download link.  



How to Handle Content Moderation with the Human Factor in Mind

User-generated content (UGC) is at the heart of the online world we live in, dramatically shaping how we humans exchange information. While it brings unprecedented freedom in terms of communication channels and forms of expression, as we well know, there is a dark side to every revolution. Scamming, plagiarism, cyberbullying, not-safe-for-work (NSFW) content, and outright scary stuff - they’re all out there in the pictures, videos, articles, and audio clips being freely posted online.

For years, brands have been trying to figure out effective ways to sort out disturbing and harmful user-generated and curated content. Content moderation is seen as the only way to ensure a safe online environment for digital users and prevent abuse and harmful practices online. 

For businesses, efficient moderation is the answer to the ever-growing risks that they face if they don’t eliminate illegal and abusive content: harm to the brand’s reputation, legality and compliance issues, and financial consequences. Moderation is the only solution for excluding content that contains violence, extremism, child abuse, hate speech, graphic visuals, nudity and sex, cruelty, and spam. 

Yet the price to pay is high, typically involving endless hours of traumatic work for low-paid employees. At the beginning of November 2019, the top content moderation company Cognizant, serving giants like Facebook and Google, announced it was leaving the business. The shocking news came amid scandals revolving around its working conditions and the harmful psychological effects that content moderation jobs have on people.

While still developing its capabilities, AI-powered moderation is of enormous help in solving many of the issues with illegal and abusive content. Computer vision alleviates a large part of the burden on human moderators, while increasing productivity and optimizing the moderation process for businesses. 

What’s the solution to the harsh reality of content moderation? Let’s dig in and explore the options. 

The risks of human content moderation

The job of the online content moderator is in high demand: more than 100,000 people around the world are involved in content review for online companies. Many of them are based in Silicon Valley, but a significant proportion of the work is also outsourced, to countries such as the Philippines and Mexico.

A recent New Yorker interview with Sarah T. Roberts, author of Behind the Screen, a book about content moderation workers, reveals the ‘underworld’ of this job. Employees have to go through thousands of visuals per day, typically reported by end users as inappropriate, illegal, or disturbing. It is then up to the moderators to decide, in a matter of seconds, whether an image contradicts the company’s and country’s policies or not. This means moderators have to make super-quick decisions about the appropriateness of a piece of content, while considering business guidelines and ethical codes.

According to the interviews that Roberts conducted with moderators around the world, the emotional toll is enormous. Burnout and desensitization are only the surface, followed by PTSD, social isolation, and depression. On top of the negative psychological effects, Roberts’ research shows that content moderators typically receive entry-level salaries and have to agree to subpar working conditions. Tech companies, as well as content moderation service providers, treat this type of work as rather ‘mechanical,’ ignoring its heavy effects.

Besides the ethical and psychological considerations, content moderation done by humans is slow and cannot be easily scaled. With millions of visuals posted daily, online companies are facing waterfalls of content that needs to be reviewed for legal and ethical reasons. So what’s on the table today, coming from Artificial Intelligence applied to content moderation? 

What AI has to offer in content review

Image recognition is revolutionizing the online world in countless ways. Its most ‘humane’ and ethical benefit, though, may well be its contribution to automating image moderation. Computer vision allows the rapid analysis and identification of visual content, saving hours of work that would otherwise be performed manually by people.

How does it work? When an image is submitted for AI review, it is first analyzed by the API - the pre-moderation stage. The algorithm then draws a conclusion about what the picture represents, typically with a high degree of accuracy. On the basis of this keyword categorization, the computer models check whether any objects in the image match the list of inappropriate content. 

For images the AI deems safe, businesses can set up automatic posting, as well as automatic categorization of the content. If the algorithm detects potentially problematic content (the grey zone), the visuals are referred for human screening. While manual review is still necessary, the toughest part of the work - identifying objects across a monstrous volume of visuals - is completed by the AI. The content moderation software also offers automatic filtering: if the algorithm deems a piece of content too offensive, it is not redirected for moderation but removed directly. The sketch below illustrates this three-way routing. 
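To make the flow concrete, here is a minimal sketch of the routing in Python. The threshold values and the single `nsfw_confidence` score are illustrative assumptions, not Imagga's actual defaults - a real integration would use the confidence scores returned by the moderation API.

```python
# A minimal sketch of the safe / grey zone / remove routing described above.
# The thresholds and the single confidence score are illustrative
# assumptions, not Imagga's actual defaults.

SAFE_THRESHOLD = 0.15    # below this, content is published automatically
REMOVE_THRESHOLD = 0.90  # above this, content is removed outright


def route_content(nsfw_confidence: float) -> str:
    """Map a model confidence (0.0-1.0) to a moderation action."""
    if nsfw_confidence < SAFE_THRESHOLD:
        return "auto_publish"   # safe: post and categorize automatically
    if nsfw_confidence > REMOVE_THRESHOLD:
        return "auto_remove"    # clearly offensive: filtered out directly
    return "human_review"       # grey zone: queued for a human moderator


for score in (0.02, 0.55, 0.97):
    print(f"confidence {score:.2f} -> {route_content(score)}")
```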

In addition to reviewing potentially harmful and disturbing content, AI-powered content moderation can bring extra benefits to businesses. For example, it helps reduce clutter on websites where users post sales listings. Computer vision allows offers to be filtered so that buyers are not swamped with irrelevant, inappropriate, or disturbing listings. This has immediate and important benefits for online marketplaces and e-commerce. 

Image recognition can also be used to ensure quality by removing low-resolution visuals, as well as by identifying unwanted watermarks or symbols and preventing copyright infringement. This is especially useful for social media, stock websites, and retailers. Last but not least, AI content moderation can contribute to fighting disinformation and the spread of fake news. 

Blending AI and human power 

Numerous cases in the last decade illustrate that relying solely on human moderation is expensive and difficult to scale, in addition to being a hazard to the people doing the job. At the same time, AI algorithms cannot fully take over content moderation - yet. In most cases today, the best approach is using both, in the most practical and safe way. 

Combining the power of human judgment and computer vision holds enormous potential for handling the tons of violent, pornographic, illegal, and inappropriate visuals posted online daily. This approach significantly reduces the workload of hundreds of thousands of psychologically harmful content moderator positions. It is also cost-effective for companies, as a large share of the content is processed by AI. 

At the same time, the participation of expert moderators who help improve the algorithms and set a business's overall content moderation guidelines is crucial. AI is developing quickly and presents great options for content moderation, allowing high levels of accuracy and scale. However, human input remains decisive: only people can provide a full understanding of context and cultural relevance, as well as emotional processing. 

Imagga's content moderation platform is designed to work in sync with human moderators. They get notified in pre-defined scenarios that require human judgement on AI-flagged content in images, videos, and live streams. Moderators can manually set threshold ranges that determine when human review is necessary - for different types of moderation issues, such as NSFW, violence, weapons, and more. Companies can choose to include their own moderation teams in the process or to hire a moderation team from Imagga. 
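As a hedged sketch of what such configuration might look like in code: the base URL, endpoint path, project ID, and payload fields below are hypothetical illustrations of the concept, not Imagga's documented Admin API schema.

```python
# A hypothetical sketch of setting per-category review thresholds.
# The base URL, endpoint path, and payload fields are illustrative
# assumptions, not Imagga's documented Admin API schema.
import requests

ADMIN_API = "https://admin.example.com/v1"   # placeholder base URL
AUTH = ("api_key", "api_secret")             # your credentials

thresholds = {
    "nsfw":     {"review_min": 0.20, "review_max": 0.85},
    "violence": {"review_min": 0.10, "review_max": 0.80},
    "weapons":  {"review_min": 0.15, "review_max": 0.90},
}

# Scores inside [review_min, review_max] would be routed to a human
# moderator; anything outside is auto-approved or auto-removed.
resp = requests.put(f"{ADMIN_API}/projects/42/thresholds",
                    json=thresholds, auth=AUTH, timeout=10)
resp.raise_for_status()
```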

With the use of powerful AI for image moderation, human labor is minimized dramatically. This helps companies optimize their content moderation processes. The results are higher efficiency and improved quality, as well as easier scaling of moderation efforts, which is key for UGC-based businesses today. 

Get started with AI-powered content moderation

Companies operating with large-scale user-generated and curated content need moderation to protect their customers from harmful and abusive material, as well as to protect themselves from legal, economic, and reputational damage. The exact approach is each business's choice - the right mixture of AI's impressive capabilities and the still-required human input. Either way, the benefits of image moderation through computer vision are indisputable. 

Explore how Imagga’s content moderation platform can help your business handle content review safely and efficiently.  


7 Image Recognition Applications of the Future

Did you know that image recognition is one of the main technologies propelling the development of self-driving cars?

Image identification powered by innovative machine learning has already been embedded in a number of fields with impressive success. It is used for automated image organization of large databases and visual websites, as well as facial recognition on social networks such as Facebook. Image recognition makes image classification for stock websites easier, and even fuels marketers’ creativity by enabling them to craft interactive brand campaigns.  

Beyond the common uses of image recognition we have grown accustomed to, this revolutionary technology reaches far further than most of us imagine. Here are seven daring applications of computer vision that might as well belong in a science fiction novel - but are getting very close to reality today.

#1. Creating city guides

Can you imagine choosing your next travel destination on the basis of real-time location information from Instagram photos that other tourists have posted? Well, it’s already out there. Jetpac created its virtual “city guides” back in 2013 by using shared visuals from Instagram.

By employing image recognition, Jetpac caught visual cues in the photos and analyzed them to offer live data to its users. For example, on the basis of images, the app could tell you whether a cafe in Berlin is frequented by hipsters or is a wild country bar. This way, users receive customized local recommendations at a glance. 

In August 2014, Jetpac was acquired by Google, joining the company's Knowledge team. Its know-how is said to be helping Google's development of visual search and Google Glass, the tech giant's 'ubiquitous computing' experiment.

#2. Powering self-driving cars

In recent years, self-driving cars have been the buzz in the auto industry and tech alike. Autonomous vehicles are already being actively tested on U.S. roads. Forty-four companies are currently working on different versions of self-driving vehicles. Computer vision is one of the main technologies making these advancements possible, fueling their rapid development and enhanced safety features.

To enable autonomous driving, artificial intelligence is taught to recognize various objects on the road, including pathways, moving objects, vehicles, and people. Image recognition technology can also predict the speed, location, and behavior of other objects in motion. AI companies such as AImotive are also teaching their software to adapt to different driving styles and conditions. Researchers are close to creating AI for self-driving cars that can even see in the dark.

https://www.youtube.com/watch?v=sIlCR4eG8_o

#3. Boosting augmented reality applications and gaming

Augmented reality experiments have long tantalized people’s imagination. With image recognition, transposition of digital information on top of what we see in the world is no longer a futuristic dream. Unlike virtual reality, augmented reality does not replace our environment with a digital one. It simply adds some great perks to it.

You can see the most common applications of augmented reality in gaming. A number of new games use image recognition to give their products an extra flair that makes the gaming experience more immediate and 'real.' With neural network training, developers can also create more realistic game environments and characters.

Image recognition has also been used in powering other augmented reality applications, such as crowd behavior monitoring by CrowdOptic and augmented reality advertising by Blippar.

#4. Organizing one’s visual memory

Here's a very practical image recognition application - making mental notes through visuals. Who wouldn't like to have this extra skill?

The app Deja Vu, for example, helps users organize their visual memory. When you take a photo, its computer vision technology matches the visual with background information about the objects in it. This means you can instantly get data about books, DVDs, and wine bottles just by photographing their covers or labels. Once the photos are in your database, you can search through them by location and keyword.

#5. Teaching machines to see

Besides the impressive number of image recognition applications on the consumer-oriented market, the technology is already employed in important manufacturing and industrial processes. Teaching machines to recognize visuals, analyze them, and make decisions based on the visual input holds stunning potential for production across the globe.

Image recognition makes it possible to build machines that automatically detect defects in manufacturing pipelines. Besides already known faults, AI-powered systems can also recognize previously unknown defects thanks to their ability to learn.

There is a myriad of potential uses for teaching machines to perceive our visual world. For example, Xerox scientists are applying deep learning techniques to enable their AI software to mimic the attention patterns of the human brain when viewing a photo or a video.

#6. Empowering educators and students

Another inspiring application of image recognition that is already being put into practice is again tightly connected with education - but this time, with improving how people learn.

Image recognition is embedded in technologies that enable students with learning disabilities to receive the education they need - in a form they can perceive. Apps powered by computer vision offer text-to-speech options that allow students with impaired vision or dyslexia to 'read' the content.

Applications of image recognition in education are not limited to special student needs. The technology is used in a range of tools that push the boundaries of traditional teaching. For example, the app Anatomy3D allows discovery of the interconnectedness between organs and muscles in the human body by scanning a body part. It revolutionizes the way students explore anatomy and learn how our bodies function. Image recognition can also help educators find innovative ways to reach ever more distracted students who are less responsive to traditional teaching methods. 

#7. Improving iris recognition

Iris recognition is a widely used method of biometric identification. Its most common application is in border security checks, where a person's identity is verified by scanning their iris. The identification works by analyzing the unique patterns in the colored part of the eye.

Even though iris recognition has been around for a while, in some cases it is not as precise as expected. The advancement of image recognition, however, is bringing new possibilities for iris recognition across industries, with improved accuracy and new applications. Most notably, iris identification is already used in some consumer devices: the Samsung Galaxy Note7 and Galaxy S8 and the Microsoft Lumia 950 are among those equipped with the capability.

While recognition is becoming more precise, security concerns over biometric identification remain - hackers recently broke the iris recognition of the Samsung Galaxy S8. Together with the advancement of computer vision, security measures are also bound to improve to match the new technological opportunities. 

Have you encountered an AI technology in a movie, only to see it in real life years later? Share it with the rest of the group - if enough people like the idea, we can build it together.

The uses of image recognition of the future are practically limitless - they’re only bound by human imagination. What is the practical application of computer vision that you find the most exciting or useful? We’d love to read about it in the comments below.



The Top 5 Uses of Image Recognition

Not long ago, artificial intelligence sounded like a science fiction prophecy of a tech future. Today machine learning has become a driving force behind technological advancements used by people on a daily basis. Image recognition is one of the most accessible applications of it, and it’s fueling a visual revolution online.

Machine learning embedded in consumer websites and applications is changing the way visual data is organized and processed. Visual recognition offers exciting opportunities similar to the ones in science fiction movies that made our imagination run wild.

Image recognition has grown so effective because it uses deep learning - a machine learning method designed to resemble the way the human brain functions. That's how computers are taught to recognize visual elements within an image. By noticing emerging patterns and relying on large databases, machines can make sense of images and formulate relevant categories and tags.
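To make this concrete, here is a minimal sketch of deep-learning tagging using an off-the-shelf pretrained network. torchvision's ResNet-50 and its ImageNet labels stand in purely for illustration; this is not the model behind Imagga's API.

```python
# A minimal illustration of deep-learning tagging with a pretrained
# network. ResNet-50 and ImageNet labels stand in purely for illustration;
# this is not the model behind Imagga's API.
import torch
from torchvision import models
from torchvision.models import ResNet50_Weights
from PIL import Image

weights = ResNet50_Weights.DEFAULT
model = models.resnet50(weights=weights)
model.eval()

preprocess = weights.transforms()          # resizing + normalization
image = Image.open("photo.jpg").convert("RGB")
batch = preprocess(image).unsqueeze(0)     # shape: (1, 3, H, W)

with torch.no_grad():
    probs = model(batch).softmax(dim=1)[0]

# Print the five most confident "tags" (ImageNet class names).
top5 = probs.topk(5)
for p, idx in zip(top5.values, top5.indices):
    print(f"{weights.meta['categories'][int(idx)]:20s} {p.item():.2%}")
```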

From image organization and classification to facial recognition, here are six (updated since the initial publication of this blog post) of the top applications of image recognition in the current consumer landscape. 

#1. Automated Image Organization - from Cloud Apps to Telecoms


One of the most popular applications of image recognition that we encounter daily is personal photo organization. Who wouldn’t like to better handle a large library of photo memories according to visual topics, from specific objects to broad landscapes?

Image recognition is empowering the user experience of photo organization apps. Besides offering photo storage, apps want to go a step further by giving people better search and discovery functions. They can do that with the automated image organization capabilities provided by machine learning: the image recognition API integrated in the apps categorizes images on the basis of identified patterns and groups them thematically, as the sketch below shows.
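As a concrete sketch, tagging a photo through Imagga's REST API boils down to a single authenticated request. The response fields below follow the v2 tags endpoint as we understand it; check the API reference for the authoritative schema.

```python
# A minimal sketch of tagging one photo via Imagga's v2 tags endpoint.
# The response structure follows the public docs as we understand them;
# consult the API reference for the authoritative schema.
import requests

API_KEY, API_SECRET = "your_key", "your_secret"  # from your Imagga account

resp = requests.get(
    "https://api.imagga.com/v2/tags",
    params={"image_url": "https://example.com/photo.jpg"},
    auth=(API_KEY, API_SECRET),
    timeout=10,
)
resp.raise_for_status()

# Each tag comes with a confidence score; print the ten strongest.
for item in resp.json()["result"]["tags"][:10]:
    print(f'{item["tag"]["en"]:20s} {item["confidence"]:.1f}')
```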

Take Eden Photos, a Mac app for photo organization, as an example. It uses Imagga’s image recognition to offer its users image tags, automatic keywording of photos, and auto-categorization on the basis of visual topics. Users can sync their photos’ metadata on all devices and get keyword search in the native Photos app on their iPhones too.

Telecoms are another set of companies that integrate image recognition to improve their users' experience. They add value to their services by offering image organization and classification for photo libraries, which helps them attract and retain customers - who, in turn, get a better way to categorize and order their photo memories.

An illustration of this application is Imagga’s solution for Swisscom. The Swiss telecom needed an efficient and secure way to organize users’ photos for its myCloud online service. With Imagga’s image recognition API installed on premise, Swisscom now offers its customers a safe feature that organizes and categorizes their visual data.   

#2. Stock Photography and Video Websites


A powerful commercial use of image recognition can be seen in the field of stock photography and video. Stock websites provide platforms where photographers and videomakers can sell their content. Contributors need a way to tag large amounts of visual material, which is time-consuming and tedious. At the same time, without proper keyword attribution, their content cannot be indexed - and thus cannot be discovered by buyers. 

Image recognition is thus crucial for stock websites. It fuels billions of searches daily on stock websites and provides the tools to make visual content discoverable by users via search. At the same time, image recognition is a huge relief for stock contributors. They get automatic keyword suggestions, which save them a ton of time and effort. Image recognition can also give them creative ideas for tagging their content more successfully and comprehensively.

Keywording software tools like Qhero have integrated Imagga's image recognition AI to help stock contributors describe and tag their content with ease. Such tools analyze visual assets and propose relevant keywords, reducing the time photographers spend processing visual material. Manual keywording becomes a thing of the past, as the most appropriate words to describe an image are suggested automatically.
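In practice, keyword suggestion can be as simple as filtering tag output by confidence. A hedged sketch - the 30-point cutoff and the 25-keyword cap are arbitrary illustrations, not recommended production values:

```python
# A hedged sketch of turning raw tag output into keyword suggestions.
# The confidence cutoff and keyword cap are arbitrary illustrations,
# not recommended production values.

def suggest_keywords(tags: list[dict], min_confidence: float = 30.0,
                     max_keywords: int = 25) -> list[str]:
    """Keep the most confident tags as keyword suggestions."""
    confident = [t for t in tags if t["confidence"] >= min_confidence]
    confident.sort(key=lambda t: t["confidence"], reverse=True)
    return [t["tag"]["en"] for t in confident[:max_keywords]]

# Example, using the tag structure sketched earlier:
sample = [
    {"confidence": 92.4, "tag": {"en": "beach"}},
    {"confidence": 61.0, "tag": {"en": "sunset"}},
    {"confidence": 12.7, "tag": {"en": "vehicle"}},
]
print(suggest_keywords(sample))  # ['beach', 'sunset']
```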

#3. Visual Search for Improved Product Discoverability

Visual Search allows users to search for similar images or products using a reference image they took with their camera or downloaded from the internet.

The Imagga Visual Search API enables companies to implement image-based search in their software systems and applications, maximizing the searchable potential of their visual data. Fashion, home décor, and furniture online retailers are already integrating it into their digital shopping experiences to increase conversions and decrease shopping cart abandonment, while also offering users a rich media experience.

Meanwhile, consumers are increasingly adopting this new search habit, and Gartner predicts a 30% increase in digital commerce revenue by 2021 for companies that redesign their websites and apps to support visual and voice search. The benefits of Visual Search include enhanced product discovery, delivering results where text searches fail, and easy product recommendations based on actual similarity. Learn more about the use case of Visual Search in e-commerce and retail.
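Under the hood, visual search systems typically compare image embeddings. The toy below illustrates the concept with cosine similarity over random vectors; it is a sketch of the general technique, not Imagga's Visual Search implementation.

```python
# A generic sketch of similarity search over image embeddings - the
# general technique, not Imagga's Visual Search implementation.
import numpy as np

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

# In a real system, each vector would come from a neural network that
# embeds product photos; random vectors here just exercise the ranking.
rng = np.random.default_rng(0)
catalog = {f"product_{i}": rng.standard_normal(512) for i in range(1000)}
query = rng.standard_normal(512)  # embedding of the shopper's photo

ranked = sorted(catalog.items(),
                key=lambda kv: cosine_similarity(query, kv[1]),
                reverse=True)
print([name for name, _ in ranked[:5]])  # five closest-looking products
```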

#4. Image Classification for Websites with Large Visual Databases

A range of businesses possess huge databases of visuals that are difficult to manage and make use of. Without an effective method to make sense of all the visual data, it can end up uncategorized and useless.

If a visual database does not contain metadata about the images, categorizing it is a huge hassle. Classifying images through machine learning is a key solution: image recognition allows companies to organize and categorize their databases automatically, even in large quantities. This helps them monetize their visual content without investing countless hours in manual sorting and tagging.

The best part about automated image classification is that it allows custom training on top of the general image recognition API. Businesses can provide custom categories, which the AI is trained to recognize and use. Our case study on Tavisca is a good example of using custom classifiers in practice to automate the categorization of hotel photos.
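Conceptually, a custom categorizer is a lightweight classifier trained on image features against your own labels. A hedged sketch with scikit-learn - the random feature vectors and hotel-photo labels are stand-ins for real embeddings and real training data:

```python
# A conceptual sketch of custom classification on top of generic image
# features. Random features and hotel-photo labels are stand-ins for
# real embeddings and real training data.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(42)

# Pretend each photo was already embedded by a general-purpose network.
X_train = rng.standard_normal((300, 512))
y_train = rng.choice(["lobby", "guest_room", "pool", "restaurant"], size=300)

clf = LogisticRegression(max_iter=1000).fit(X_train, y_train)

new_photo = rng.standard_normal((1, 512))  # embedding of an unseen photo
print(clf.predict(new_photo)[0])           # e.g. "pool"
```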

#5. Image and Face Recognition on Social Networks


Visual recognition on social media is already a fact. Facebook released its facial recognition app Moments and has been using facial recognition to suggest tags for people in users' photos for a while.

While face recognition remains sensitive ground, Facebook hasn't shied away from integrating it into the user experience on the social network. Whenever users upload a photo, Facebook can recognize objects and scenes in it before people enter a description. Its computer vision can distinguish objects, facial expressions, food, natural landscapes, and sports, among others. Besides tagging people in photos, image recognition is used to describe visual content to blind users and to identify inappropriate or offensive images. 

Image recognition is applied in other ways on social networks too. For example, the SmartHash iOS app employs Imagga's API to offer its users an easy tool for automatically creating hashtags for their photos. This allows people to share their images online without the need to research and brainstorm hashtags.
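Hashtag generation is essentially a thin layer over tagging. A toy sketch - the cleanup rules are illustrative, not SmartHash's actual logic:

```python
# A toy sketch of turning image tags into hashtags. The cleanup rules
# are illustrative, not SmartHash's actual logic.

def tags_to_hashtags(tags: list[str], limit: int = 10) -> list[str]:
    hashtags = []
    for tag in tags[:limit]:
        cleaned = "".join(ch for ch in tag.lower() if ch.isalnum())
        if cleaned:
            hashtags.append("#" + cleaned)
    return hashtags

print(tags_to_hashtags(["beach", "sunset", "palm tree"]))
# ['#beach', '#sunset', '#palmtree']
```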

Photo recognition has also been embraced by other image-centric services online. Google Photos and Apple's Photos app cluster photos by event and place, and offer face detection. The application of image recognition significantly enhances the user experience, helping people organize their photos in meaningful series. They can easily exchange, say, travel photos with friends who were part of the same trip.

#6. Interactive Marketing and Creative Campaigns

The applications of image recognition are not limited to consumer services only. Advertising and marketing agencies are already exploring its potential for creative and interactive campaigns. It opens new opportunities for learning more about target audiences and serving them with impressive branded content.

Social intelligence today is largely based on social listening: following conversations on social media to learn more about prospects. But this knowledge can now be gathered from visuals shared online with much higher efficiency. In a sea of abundant and often irrelevant visual content, extracting useful information is possible only through machine learning - call it 'visual listening.' For example, image recognition can identify visual brand mentions and expressions of emotion towards a brand, as well as logos and other brand data that would otherwise be undiscoverable. On the basis of the information collected from analyzing images, marketers can better target their campaigns through customization and personalization.

Besides providing valuable information about potential customers, image recognition can be used to craft creative content that engages people and helps build their relationships with brands. To illustrate: Imagga's image recognition API was used in a KIA marketing project to create an interactive campaign. By profiling participants' image content online, each person was assigned to a lifestyle group and then matched to the car that best fit their style among the 36 different car styles offered by KIA.

Celebrating the Power of Image Recognition

Image recognition holds potential for a wide array of uses and industries, so these six examples are certainly not all-encompassing. They do illustrate, though, the diversity of applications that machine learning can offer to businesses that work with large libraries of visual content.

What is your business experience with image recognition? 


Editor’s Note: This blog was originally published on March 23, 2017 and updated on May 21, 2019 for accuracy and comprehensiveness.


4 Image Recognition Uses By Facebook That Can Improve Digital Asset Management

Using image recognition to identify friends and locations is a common feature on Facebook, but this post is not about those simple capabilities of computer vision. We are going in-depth on image recognition and what social networks can do with it.

Besides photo tagging and categorization, Facebook employs image recognition for a number of other important purposes. The most notable include scanning for inappropriate content on its network and providing visually impaired users with descriptions of shared images. Image recognition is also likely to fuel better visual search on the social network.

Facebook's adoption of image recognition is helping lead the way to a full-fledged revolution in how visual content is handled online. While it's not the only company developing image recognition technology, the top social network is used by billions of people around the world daily. This positions it to set trends and push innovations that other players might not be able to. Facebook has also moved to open source its image recognition AI, which may turn out to be a game changer in the field.

How Facebook has been using image recognition

Improving user experience with photo tagging and search

The most well-known application of image recognition is the photo tagging feature Facebook has offered since 2010. It's based on facial recognition that enables automatic suggestions for tagging people in photos. Image recognition can also identify the locations where photos were taken, and it powers the 'Memories' feature in users' newsfeeds. While a huge boost to user experience, the feature has also proven controversial in terms of privacy and has even led to lawsuits against Facebook.

In recent months, Facebook announced that it has ramped up its image recognition capabilities. Its image recognition engine, called Lumos, will power a new way for users to search through their uploaded photos. Instead of looking for an image based on date, location, and tags, they will be able to find a photo by entering a keyword describing what's in it. 

If you’re curious to know what Facebook can ‘see’ in your photos, you can check out the Chrome extension that displays the automated image tags it has generated.

Image reader for the visually impaired

Facebook uses image recognition to fuel another important feature in its toolset: 'automatic alternative text,' which describes images to visually impaired users. The image recognition AI generates descriptions of what's in a photo, providing an alternative way for blind users to perceive the content of images. Twitter introduced a similar feature in March 2016, and Facebook followed in April 2016.

With the improvements to its image recognition AI in February 2017, Facebook now provides visually impaired people with even better descriptions of photos. Initially, they could get basic information about the main figures in an image; now the engine can describe what action is taking place in the photo.

Content moderation in action

Image recognition is used by Facebook to scan for abusive, hateful, and obscene content on the social network. It is steadily replacing the need for manual monitoring.

Imagga has developed its own NSFW classifier and offers content moderation solutions that can be used by any company that needs to keep tabs on large amounts of visual content.
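As a hedged sketch, screening an image against Imagga's NSFW categorizer looks much like a tagging call. The "nsfw_beta" categorizer ID and the response fields follow the public docs as we recall them; verify against the current API reference.

```python
# A hedged sketch of calling Imagga's NSFW categorizer. The "nsfw_beta"
# categorizer ID and response fields follow the public docs as we recall
# them; verify against the current API reference before relying on this.
import requests

API_KEY, API_SECRET = "your_key", "your_secret"

resp = requests.get(
    "https://api.imagga.com/v2/categories/nsfw_beta",
    params={"image_url": "https://example.com/upload.jpg"},
    auth=(API_KEY, API_SECRET),
    timeout=10,
)
resp.raise_for_status()

# Each category (e.g. safe / nsfw) comes back with a confidence score.
for cat in resp.json()["result"]["categories"]:
    print(f'{cat["name"]["en"]:12s} {cat["confidence"]:.1f}')
```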

Why Facebook's image recognition breakthroughs matter

While Facebook's uses of image recognition are fascinating, they would not be so important if the company were not the major player on the tech market that it is today. The significance of its machine learning for visual recognition goes far beyond its immediate applications.

Facebook is irrevocably making image recognition mainstream. Other tools and applications are steadily following its example and embedding image recognition in their products and services. This trend has the potential to transform many business fields - from creative agencies and marketing companies to e-commerce businesses. Special applications like Aipoly are even pushing the boundaries by providing a way for blind people to 'see' the world around them.

In the marketing field, the popularity of image recognition can fuel enormous amounts of research about online users and overall trends. Advertisers have already realized the huge potential of Facebook's image recognition capability: data collected from visual analysis of shared content can inform better-targeted campaigns.

Facebook's technology is popularizing image recognition, but it's far from the only player in the field. Through its on-premise offering, Imagga's API powers the image recognition needs of businesses that do not want to share data with big players like Facebook and Google.

What uses of image recognition do you think will change the web, and how? Share in the comments section and let's have a talk.
