In this blog post, we explore the benefits and risks associated with using synthetic data to develop Image Recognition models and share our approach to using Generative AI (GenAI) to augment and de-bias datasets.

In today’s digital world, where harmful content can be created and shared quickly and widely, the need for effective content moderation is urgent and complex. There is a challenge, however: AI systems used for content moderation require large, diverse, and high-quality training datasets to work well, yet gathering such data is complicated by ethical, legal, and practical issues. Online content comes in many different styles, contexts, and cultural meanings, making it hard to capture every type of harmful or inappropriate content. At the same time, strict privacy laws and ethical guidelines limit how we can collect and use sensitive data, like images of violence or explicit material.

This creates a dilemma: for AI models to effectively moderate content, they need access to comprehensive datasets that cover the wide range of content found online. However, getting this data is nearly impossible without violating privacy, facing legal issues, or introducing bias. Because of this, the industry struggles to balance the need for comprehensive data with the responsibility to protect privacy and ensure fairness. This is where synthetic data comes into play, offering a promising but not entirely risk-free solution to these limitations. 

What is good data?

Training AI models requires data that meets several essential criteria to ensure effectiveness, reliability, and fairness. The data must be relevant to the specific task, high-quality, and representative of the scenarios the AI will encounter in real-world applications. For example, training an AI model to recognize objects in CCTV (closed-circuit television, i.e., video surveillance) footage requires datasets that mimic CCTV images rather than high-quality photos from professional cameras.

Additionally, good data must be diverse, covering a wide range of possible cases, including rare or unusual scenarios, to enable the AI to generalize well across different situations. 

The data must be balanced and free from bias to avoid skewed outcomes, well-labeled for clarity, legally and ethically sourced, and kept up to date to reflect current trends and environments. 

Meeting all these criteria is fundamental to developing AI models that are robust, fair, and effective.

The Paradox of Data Collection for Content Moderation AI

While it’s clear that high-quality, diverse, and unbiased data is crucial for training effective AI models, achieving this standard presents a significant paradox, especially when it comes to training AI for content moderation. The challenge lies in the availability, complexity, and ethical dilemmas involved in gathering such comprehensive datasets.

Internet content is highly varied in style, context, and cultural significance, making it nearly impossible to capture every variation of what might be considered harmful or inappropriate. Furthermore, ethical and legal restrictions make it difficult to collect data involving sensitive or potentially harmful content, such as images of violence or explicit material. Even when data is available, privacy concerns and regulations like GDPR often limit its use. 

Hence the paradox we mentioned: the very conditions needed to train a content moderation AI effectively are almost impossible to meet in practice. As a result, the industry must look for innovative approaches, such as synthetic data, to bridge the gap between what’s needed and what’s realistically achievable.

Synthetic Data to the Rescue

The rise of GenAI has revolutionized how synthetic images are created for training and evaluating AI models. With innovations like Stable Diffusion, it is now possible to generate highly realistic and diverse synthetic images that closely mimic real-world content. These technologies are continuously evolving: Stable Diffusion XL enhances the realism of generated images, while SDXL Turbo dramatically improves generation speed. Other improvements, like LoRA (Low-Rank Adaptation), reduce computational demands and offer more flexible, efficient ways to fine-tune models.
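As a rough illustration of how accessible this tooling has become, here is a minimal text-to-image sketch using the Hugging Face diffusers library with the publicly available SDXL Turbo checkpoint (the prompt and settings are illustrative):

```python
import torch
from diffusers import AutoPipelineForText2Image

# Load SDXL Turbo, a distilled model that trades some fidelity for speed.
pipe = AutoPipelineForText2Image.from_pretrained(
    "stabilityai/sdxl-turbo", torch_dtype=torch.float16, variant="fp16"
).to("cuda")

# SDXL Turbo is designed for very few denoising steps and no guidance.
image = pipe(
    prompt="RAW photo, medium close-up, portrait, soft lighting, film grain",
    num_inference_steps=1,
    guidance_scale=0.0,
).images[0]
image.save("synthetic_sample.png")
```

Because SDXL Turbo is distilled for few-step sampling, a single denoising step with guidance disabled is the usual starting point, which is what makes it attractive for generating large synthetic sets quickly.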

Additional tools have been developed to give users more control over the output of these models, such as ControlNet for guiding image composition and inpainting for editing specific parts of an image. Meanwhile, user-friendly software has grown around these technologies: A1111 (the AUTOMATIC1111 web UI) and ComfyUI for image generation, and kohya_ss for training.
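To make the inpainting idea concrete, here is a minimal sketch with diffusers that regenerates only a masked region of an image; the file names are placeholders and the checkpoint is one commonly used option:

```python
import torch
from PIL import Image
from diffusers import AutoPipelineForInpainting

pipe = AutoPipelineForInpainting.from_pretrained(
    "stabilityai/stable-diffusion-2-inpainting", torch_dtype=torch.float16
).to("cuda")

# Placeholder inputs: white pixels in the mask mark the region to regenerate.
init_image = Image.open("scene.png").convert("RGB").resize((512, 512))
mask = Image.open("mask.png").convert("L").resize((512, 512))

result = pipe(
    prompt="a framed poster on a brick wall",
    image=init_image,
    mask_image=mask,
).images[0]
result.save("scene_edited.png")
```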

A vibrant open-source community further accelerates the development and accessibility of these technologies. Developers frequently share their models on platforms like Hugging Face and Civitai, allowing others to leverage the latest breakthroughs in synthetic image generation.

This collaborative environment ensures that cutting-edge tools and techniques are available to a wide range of users, fostering rapid innovation and widespread adoption.

The Risks of Relying on Synthetic Data

While synthetic data presents an attractive solution, it is not without risks. One of the primary concerns is the potential for “model collapse,” a phenomenon where AI models trained extensively on synthetic data begin to produce unreliable or nonsensical outputs due to the absence of real-world variability.

Synthetic data might not accurately capture subtle contextual elements, leading to poor performance in practical scenarios. There is also a risk of overfitting, where models become too specialized in recognizing synthetic patterns that do not generalize well to real-world situations.

The quality of synthetic data depends on the generative models used, which, if flawed, can introduce errors and inaccuracies. 

One of the biggest challenges with visual synthetic data is ensuring that the generated images appear realistic and exhibit the same visual characteristics as real-world images. Often, humans can easily recognize that an image has been artificially generated because it differs in subtle but noticeable ways from real images.

This issue extends beyond the accurate generation of objects and contextual elements; it also includes finer details at the pixel level, such as color, noise, and texture. Furthermore, certain scenarios are particularly challenging to replicate. For instance, images captured with a wide-angle lens have specific distortions, and those taken in low-light conditions might show unique noise patterns or motion blur, all of which are difficult to accurately mimic in synthetic images.
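One common mitigation is to post-process synthetic images with simulated capture artifacts before training. The toy sketch below adds Gaussian sensor-like noise and a slight blur; the parameter values are illustrative, and production pipelines model camera characteristics far more carefully:

```python
import numpy as np
from PIL import Image, ImageFilter

img = np.asarray(Image.open("synthetic.png").convert("RGB"), dtype=np.float32)

# Approximate low-light sensor noise with additive Gaussian noise
# (a sigma of 8 is an illustrative, untuned value).
noisy = img + np.random.normal(0.0, 8.0, img.shape)
noisy = np.clip(noisy, 0, 255).astype(np.uint8)

# Approximate slight motion blur with a small box blur.
out = Image.fromarray(noisy).filter(ImageFilter.BoxBlur(1))
out.save("synthetic_degraded.png")
```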

To create synthetic data that accurately reflects the complex patterns and relationships found in real-world data, researchers also need evaluation metrics that quantify how closely generated images match real ones.
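Metrics such as Fréchet Inception Distance (FID), which compares feature statistics of real and generated image sets, are a common starting point. Here is a minimal sketch using torchmetrics, with random tensors standing in for the actual image batches:

```python
import torch
from torchmetrics.image.fid import FrechetInceptionDistance

fid = FrechetInceptionDistance(feature=2048)

# Both batches must be uint8 tensors of shape (N, 3, H, W); the random
# tensors below merely stand in for real and generated image batches.
real_images = torch.randint(0, 255, (16, 3, 299, 299), dtype=torch.uint8)
fake_images = torch.randint(0, 255, (16, 3, 299, 299), dtype=torch.uint8)

fid.update(real_images, real=True)
fid.update(fake_images, real=False)
print(f"FID: {fid.compute().item():.2f}")  # lower means closer distributions
```

FID alone does not capture contextual correctness, so in practice it is paired with human review and task-level evaluation.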

The Case for Synthetic Data in Content Moderation: De-biasing and Data Augmentation

Our engineering team has been utilizing these technologies within our tagging system, adapting their use to achieve specific outcomes. In some cases, it is beneficial to allow more flexibility in the underlying models to produce less controlled images. For example, by using text-to-image techniques, we can create a more balanced Image Recognition model by generating a diverse set of synthetic images that represent people of various races.

Figure: Result of the prompt "RAW photo, medium close-up, portrait, soldier, black afro-american, man, military helmet, 8k uhd, dslr, soft lighting, high quality, film grain", altering the words related to race.
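A simplified sketch of this prompt-variation idea is shown below: the prompt template stays fixed while the demographic terms are swapped, producing a more balanced synthetic set. The attribute list is illustrative only:

```python
import torch
from diffusers import AutoPipelineForText2Image

pipe = AutoPipelineForText2Image.from_pretrained(
    "stabilityai/sdxl-turbo", torch_dtype=torch.float16, variant="fp16"
).to("cuda")

# Fixed template; only the demographic terms vary. The list is illustrative.
template = ("RAW photo, medium close-up, portrait, soldier, {attribute}, "
            "military helmet, 8k uhd, dslr, soft lighting, high quality, "
            "film grain")
attributes = ["black afro-american man", "east asian woman",
              "south asian man", "white woman"]

for i, attribute in enumerate(attributes):
    image = pipe(prompt=template.format(attribute=attribute),
                 num_inference_steps=1, guidance_scale=0.0).images[0]
    image.save(f"balanced_{i}.png")
```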

Successfully generating synthetic data involves several steps, including extensive experimentation and hyperparameter tuning, beyond just the training and image creation stages.

There are multiple methods for training a custom Generative AI model, each with unique strengths and weaknesses, such as DreamBooth, Textual Inversion, and LoRA. Following a well-defined process is essential to achieving optimal results.
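As a small illustration of the inference side, once a custom concept has been trained as a LoRA (for example with kohya_ss or the diffusers DreamBooth LoRA example script), it can be attached to a standard pipeline. The weight path and trigger token below are hypothetical:

```python
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "stabilityai/stable-diffusion-2-1", torch_dtype=torch.float16
).to("cuda")

# Attach LoRA weights produced by a custom fine-tuning run
# (hypothetical path; the file would come from kohya_ss or similar tools).
pipe.load_lora_weights("path/to/custom_lora")

# "<concept>" stands for whatever trigger token the LoRA was trained on.
image = pipe(prompt="a photo of <concept> on a concrete wall").images[0]
image.save("custom_concept.png")
```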

Here is how our engineering team put this into practice:

At Imagga, we leverage the latest GenAI technologies to expand our training datasets, enabling us to develop more effective and robust models for identifying radical content. This includes identifying notorious symbols like the Swastika, Totenkopf, SS Bolts, and flags or symbols associated with terrorist organizations such as ISIS, Hezbollah, and the Nordic Resistance Movement. Since this type of content cannot be generated using standard open-source models or services, custom training is required to build these models effectively.

To develop our Swastika Recognition model, we followed the approach described above, starting with a set of around twenty images. Below is a subset of these images.

The images shown are for illustration purposes only.

The resulting model was capable of generating images containing the Swastika in various appearances while preserving its distinct geometric characteristics.

The new model has been used both independently and in combination with other models (e.g., to generate scenes like prison cells). Furthermore, it has proven effective not only in text-to-image generation but also in image-to-image tasks using techniques like inpainting.

The dataset has been further enriched with synthetic images generated using techniques like outpainting, with a LineArt ControlNet serving as the controlling element.
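For context, here is a sketch of how such ControlNet conditioning constrains composition during generation; outpainting itself extends the image canvas, while the line-art map acts as the controlling element. The checkpoint names follow commonly published models and should be treated as illustrative:

```python
import torch
from PIL import Image
from diffusers import ControlNetModel, StableDiffusionControlNetPipeline

# A line-art ControlNet paired with a Stable Diffusion 1.5 base model.
controlnet = ControlNetModel.from_pretrained(
    "lllyasviel/control_v11p_sd15_lineart", torch_dtype=torch.float16
)
pipe = StableDiffusionControlNetPipeline.from_pretrained(
    "stable-diffusion-v1-5/stable-diffusion-v1-5",
    controlnet=controlnet,
    torch_dtype=torch.float16,
).to("cuda")

# A precomputed line-art map of the source image (placeholder file name).
lineart = Image.open("lineart_condition.png")
image = pipe(prompt="a weathered concrete wall, photo", image=lineart).images[0]
image.save("controlled_output.png")
```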

While advancements in GenAI have enabled us to expand our training data and enhance the performance and generalization of our Image Recognition models, several challenges remain. One major challenge is that creating effective GenAI models and developing a reliable pipeline for image generation requires extensive experimentation. This process is not only complex but also resource-intensive, demanding considerable human effort and computational resources.

Another critical challenge is ensuring that the generated images do not degrade the performance of the Image Recognition model when applied to real-world data. This requires carefully balancing AI-generated images with real-world images in the training dataset. Without this balance, the model risks overfitting to the specific features of AI-generated images, resulting in poor generalization to actual images. Additionally, AI-generated images or their components may need domain adaptation to make them resemble real-world images more closely. This can be particularly challenging, as not all characteristics of an image are visible to the human eye, and some require more sophisticated adjustments.
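One simple way to enforce such a balance is to cap the share of synthetic samples drawn into each training batch. The sketch below does this with a weighted sampler in PyTorch; the 30% share and the stand-in datasets are illustrative choices, not values from our pipeline:

```python
import torch
from torch.utils.data import (ConcatDataset, DataLoader, TensorDataset,
                              WeightedRandomSampler)

# Stand-in datasets; in practice these would be the real and synthetic
# image datasets used for training.
real_ds = TensorDataset(torch.randn(1000, 3, 224, 224))
synth_ds = TensorDataset(torch.randn(400, 3, 224, 224))
dataset = ConcatDataset([real_ds, synth_ds])

synthetic_share = 0.3  # illustrative cap on synthetic samples per batch

# Weight samples so synthetic images are drawn ~30% of the time.
weights = torch.cat([
    torch.full((len(real_ds),), (1 - synthetic_share) / len(real_ds)),
    torch.full((len(synth_ds),), synthetic_share / len(synth_ds)),
])
sampler = WeightedRandomSampler(weights, num_samples=len(dataset))
loader = DataLoader(dataset, batch_size=64, sampler=sampler)
```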

Conclusion

The use of synthetic data in training AI models for content moderation offers a potential solution to the limitations of real-world data collection, enabling the creation of robust and versatile models. However, the integration of synthetic data is not without its risks and challenges, such as maintaining realism, preventing bias, and ensuring that models generalize well to actual scenarios. Achieving a balance between synthetic and real-world data, along with continuous monitoring and refinement, is essential for the effective use of AI in content moderation.