Two key pieces of legislation, the Digital Services Act (DSA) and the Artificial Intelligence Act (AI Act), are reshaping how businesses operate in the digital space. These laws introduce rigorous compliance requirements aimed at creating safer, more transparent online environments and at promoting ethical AI practices. If you run an online marketplace, a social platform, or an AI-driven service, you need to know how these laws affect you – before regulators come knocking.

However, legal texts are dense, full of jargon, and nearly impossible for a business owner to digest quickly. That’s why we’ve invited Maria Catarina Batista – a legal consultant, a certified data protection officer through the European Center of Privacy and Cybersecurity, and a certified privacy implementer. Together with Imagga co-founder Chris Georgiev, Maria will translate the implications of the Digital Services Act and the AI Act for online marketplaces, social platforms, and AI-driven services.

Watch the video with the full conversation

The Digital Services Act: Making Online Platforms Safer

The DSA is Europe’s rulebook for making the internet safer, fairer, and more transparent. It applies to online platforms that offer services to users in the EU – including social media sites, marketplaces, and search engines – regardless of whether the platform itself is established in the EU.

Key obligations:

  • Removal of Illegal Content: The DSA requires platforms to swiftly remove illegal content, such as hate speech, counterfeit goods, and terrorist content.
  • Algorithm Transparency: Platforms must provide more transparency regarding their algorithms, especially those used for ad targeting and content recommendation.
  • Very Large Online Platforms (VLOPs): Platforms with over 45 million monthly active users in the EU are classified as VLOPs and are subject to stricter obligations under the DSA.

Which businesses are affected by the DSA?

The DSA applies to any company offering intermediary services in the EU, regardless of whether they are established in the EU or not. These services are categorized into:

  • Mere Conduit Services: Internet service providers like Vodafone, which merely transmit data without altering it.
  • Caching Services: Services like Cloudflare, which temporarily store data to speed up access.
  • Hosting Services: Web hosts and cloud storage providers, which store user-generated content.
  • Online Platforms: Social media platforms, e-commerce sites, and search engines.
  • Very Large Online Platforms (VLOPs): Platforms with over 45 million active users in the EU, which face additional scrutiny and stricter obligations.

The AI Act: Regulating Artificial Intelligence for Safety and Transparency

The AI Act represents the world’s first comprehensive regulation of artificial intelligence, aiming to ensure that AI systems work for the benefit of people, not against them. This groundbreaking legislation seeks to establish a framework that promotes safety, transparency, and accountability in the development and deployment of AI technologies.

Key Takeaways

Classification of AI Systems by Risk Levels

Prohibited AI
The AI Act prohibits the use of AI systems that pose an unacceptable risk to individuals’ rights and freedoms. Examples include social scoring systems and certain forms of biometric surveillance that infringe on privacy rights.

High-risk AI
This category includes AI systems used in critical areas such as employment (e.g., hiring tools), law enforcement, and critical infrastructure management. These systems are subject to stringent regulations, including mandatory conformity assessments and registration in an EU database.

Limited-risk AI
This category covers AI systems subject to transparency obligations. Examples include chatbots, where users must be informed that they are interacting with AI, and deepfakes, which must be labeled as AI-generated.

Low-risk AI
Most AI applications fall into this category, which includes systems like AI-enabled video games and spam filters. These are largely exempt from specific regulations but must comply with existing laws.
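
To make the triage concrete, here is a minimal Python sketch of how a team might inventory its own systems against these four tiers. The example systems and their tier assignments are assumptions for illustration only – real classification requires legal review against the Act’s annexes.

```python
from enum import Enum

class RiskTier(Enum):
    """The four risk tiers described by the AI Act."""
    PROHIBITED = "prohibited"  # e.g., social scoring systems
    HIGH = "high"              # e.g., hiring tools, critical infrastructure
    LIMITED = "limited"        # transparency obligations, e.g., chatbots
    LOW = "low"                # e.g., spam filters, AI-enabled video games

# Illustrative internal inventory: hypothetical systems and assumed tiers.
AI_INVENTORY = {
    "cv_screening_model": RiskTier.HIGH,
    "customer_support_chatbot": RiskTier.LIMITED,
    "email_spam_filter": RiskTier.LOW,
}

def obligations(tier: RiskTier) -> str:
    """Map each tier to the headline obligation named above."""
    return {
        RiskTier.PROHIBITED: "must not be deployed",
        RiskTier.HIGH: "conformity assessment + EU database registration",
        RiskTier.LIMITED: "inform users they are interacting with AI",
        RiskTier.LOW: "comply with existing laws",
    }[tier]

for system, tier in AI_INVENTORY.items():
    print(f"{system}: {tier.value} -> {obligations(tier)}")
```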

Obligations for Developers and Deployers

The AI Act places significant responsibilities on developers and deployers of AI systems, particularly those classified as high-risk. They must ensure that their AI systems are fair, transparent, and accountable. This includes providing detailed documentation about the system’s development and operation.

Applicability

The AI Act applies to any AI system placed on the EU market or whose output is used within the EU, regardless of where it was developed. Companies outside the EU must therefore designate authorized representatives within the EU to ensure compliance with the regulation.

Overall, the AI Act sets a new standard for AI governance globally, emphasizing the need for responsible AI development and deployment that prioritizes human well-being and safety.

The AI Act applies to:

  • AI Providers: Companies that develop and sell AI solutions.
  • AI Deployers: Businesses that use AI to interact with users, make decisions, or analyze data.
  • Any AI System Impacting EU Citizens: The AI Act applies to any AI system that affects EU citizens, regardless of where it was developed.

In general, the DSA’s scope is broader, covering all intermediary services, while the AI Act focuses specifically on AI systems impacting the EU.

Key Legal Challenges and Risks for Businesses

These regulations create several challenges and potential risks for businesses.

Content Moderation: The Fine Line Between Compliance and Censorship

The DSA requires online platforms to remove illegal content swiftly, but this must be done without over-moderating, which could lead to accusations of censorship. This delicate balance is crucial for protecting both user rights and freedom of expression.

One effective approach is to use AI-powered moderation tools combined with human oversight. This ensures that content is reviewed accurately and that decisions are made with a nuanced understanding of context, helping to maintain user trust while complying with regulatory requirements.
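
To make this hybrid approach concrete, below is a minimal Python sketch of a confidence-based routing pipeline. The classifier is a dummy stand-in, and the threshold values are assumptions that a real deployment would tune against its own content policy.

```python
from dataclasses import dataclass

@dataclass
class ModerationResult:
    content_id: str
    action: str         # "remove", "human_review", or "approve"
    model_score: float  # model confidence that the content violates policy

def classify(text: str) -> float:
    """Stand-in for a real moderation model; returns a dummy violation score."""
    return 0.99 if "counterfeit" in text.lower() else 0.10

# Assumed routing thresholds: confident violations are removed automatically,
# ambiguous cases are escalated to a human reviewer, clear passes are approved.
REMOVE_THRESHOLD = 0.95
REVIEW_THRESHOLD = 0.60

def moderate(content_id: str, text: str) -> ModerationResult:
    score = classify(text)
    if score >= REMOVE_THRESHOLD:
        action = "remove"
    elif score >= REVIEW_THRESHOLD:
        action = "human_review"  # human oversight where the model is unsure
    else:
        action = "approve"
    return ModerationResult(content_id, action, score)

print(moderate("post_1", "Cheap counterfeit watches!"))  # -> remove
print(moderate("post_2", "Photos from my trip"))         # -> approve
```

The middle band is the important design choice: it is where human oversight prevents both under-moderation and the over-moderation that can look like censorship.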

Transparency & Algorithm Accountability

The DSA mandates that platforms provide clear explanations about how their algorithms work, particularly those used for content recommendation and ad targeting. This transparency is essential for building trust with users and complying with regulatory demands.

Platforms must offer detailed disclosures about their algorithms, explaining how recommendations and targeted ads are generated. This not only helps users understand why they see certain content but also demonstrates compliance with the DSA’s transparency requirements.
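
One way to operationalize such disclosures is to attach a machine-readable record of the main ranking parameters to each recommendation or ad. The sketch below is a minimal illustration; the field names are our assumptions, not a schema mandated by the DSA.

```python
from dataclasses import dataclass
import json

@dataclass
class RecommendationDisclosure:
    """Summary of the main parameters behind one recommendation or ad."""
    item_id: str
    surface: str                # e.g., "feed" or "sponsored_slot"
    main_parameters: list[str]  # factors that most influenced the ranking
    is_ad: bool = False
    advertiser: str = ""        # who paid for the placement, when is_ad is True

disclosure = RecommendationDisclosure(
    item_id="post_123",
    surface="feed",
    main_parameters=[
        "accounts you follow",
        "recent engagement with similar topics",
        "language preference",
    ],
)
print(json.dumps(disclosure.__dict__, indent=2))
```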

AI Bias and Explainability Issues

While the DSA does not directly address AI bias, ensuring that AI systems are fair and unbiased is vital for maintaining user trust. The broader regulatory environment, including the AI Act, emphasizes the importance of AI explainability.

Implementing AI explainability frameworks and conducting bias audits can help address these concerns. By providing insights into how AI systems make decisions, businesses can demonstrate fairness and accountability, even if the DSA does not explicitly require this for all AI systems.
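
As one concrete example of a bias audit, the sketch below computes per-group approval rates and a disparate-impact ratio, using the “four-fifths rule” as a common heuristic threshold. The sample data and group labels are made up for illustration.

```python
from collections import defaultdict

# Hypothetical audit sample: (group, model_decision) pairs,
# where True means the AI system approved the case.
decisions = [
    ("group_a", True), ("group_a", True), ("group_a", False), ("group_a", True),
    ("group_b", True), ("group_b", False), ("group_b", False), ("group_b", True),
]

totals, approvals = defaultdict(int), defaultdict(int)
for group, approved in decisions:
    totals[group] += 1
    approvals[group] += approved

rates = {g: approvals[g] / totals[g] for g in totals}
for group, rate in rates.items():
    print(f"{group}: approval rate {rate:.0%}")

# Disparate-impact ratio: lowest group rate over highest group rate.
# The "four-fifths rule" heuristic flags ratios below 0.8 for review.
ratio = min(rates.values()) / max(rates.values())
print(f"disparate-impact ratio: {ratio:.2f}"
      + (" (flag for review)" if ratio < 0.8 else ""))
```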

Heavy Fines for Non-Compliance

Both the DSA and the AI Act impose significant penalties for non-compliance, highlighting the importance of adhering to these regulations (a worked example follows the list below).

  • DSA Fines: Violations can result in fines of up to 6% of a company’s total worldwide annual turnover.
  • AI Act Fines: For prohibited AI use, fines can reach up to €35 million or 7% of global turnover (whichever is higher).
  • Daily Penalties: Persistent non-compliance under the DSA can lead to daily penalties of up to 5% of global daily turnover.
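
To put these percentages in perspective, here is a quick back-of-the-envelope calculation for a hypothetical company with EUR 1 billion in worldwide annual turnover (the figure is made up for illustration):

```python
annual_turnover = 1_000_000_000  # hypothetical: EUR 1B worldwide annual turnover

# DSA: fines of up to 6% of total worldwide annual turnover.
dsa_max_fine = 0.06 * annual_turnover                      # EUR 60M

# AI Act (prohibited AI): up to EUR 35M or 7% of turnover, whichever is higher.
ai_act_max_fine = max(35_000_000, 0.07 * annual_turnover)  # EUR 70M

# DSA: daily penalties of up to 5% of global daily turnover.
daily_penalty = 0.05 * annual_turnover / 365               # ~EUR 137K per day

print(f"DSA max fine:      EUR {dsa_max_fine:,.0f}")
print(f"AI Act max fine:   EUR {ai_act_max_fine:,.0f}")
print(f"Daily penalty cap: EUR {daily_penalty:,.0f} per day")
```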

Best Practices for Compliance with the DSA and AI Act

Ensuring compliance with the Digital Services Act (DSA) and the AI Act requires a proactive and structured approach. Here are some best practices to help businesses navigate these regulations effectively.

1. Set Up Strong Content Moderation Mechanisms

Implementing robust content moderation is crucial for compliance with the DSA. Here are some key strategies:

  • User-Friendly Reporting Systems: Establish easy-to-use mechanisms for users to report illegal content. This helps ensure that platforms can respond quickly and effectively to user concerns.
  • AI Moderation with Human Oversight: Utilize AI-powered moderation tools to streamline the process, but ensure that human reviewers are involved to provide context and oversight. This balance helps prevent over-moderation and ensures that decisions are fair and accurate.
  • Detailed Logs for Compliance Audits: Maintain comprehensive records of moderation actions (see the sketch after this list). These logs are essential for demonstrating compliance during audits and can help identify areas for improvement.
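
As a sketch of what such a log might look like in practice, the snippet below appends each moderation action as a structured JSON line. The field names are assumptions, chosen so that an auditor can reconstruct who decided what, when, and why:

```python
import json
from datetime import datetime, timezone

LOG_PATH = "moderation_audit.jsonl"  # append-only log, one JSON record per line

def log_moderation_action(content_id: str, action: str, reason: str,
                          reviewer: str, model_score: float | None = None) -> None:
    """Append one immutable audit record for a moderation decision."""
    record = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "content_id": content_id,
        "action": action,       # e.g., "remove", "restore", "approve"
        "reason": reason,       # policy or legal ground for the decision
        "reviewer": reviewer,   # "auto" for AI decisions, else a human ID
        "model_score": model_score,
    }
    with open(LOG_PATH, "a", encoding="utf-8") as f:
        f.write(json.dumps(record) + "\n")

# Example: an AI flag confirmed by a human reviewer.
log_moderation_action("post_123", "remove",
                      "counterfeit goods (DSA illegal content)",
                      reviewer="moderator_42", model_score=0.97)
```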

2. Improve AI Transparency & Governance

Enhancing transparency and governance in AI systems is vital for compliance with the AI Act and broader regulatory expectations:

  • Conduct Risk Assessments: Perform thorough risk assessments for AI systems to identify potential issues and implement mitigation strategies.
  • Explain AI Decisions: Provide clear explanations of how AI decisions are made. This transparency helps build trust with users and regulators.
  • Regular Bias and Fairness Audits: Regularly audit AI models for bias and fairness to ensure they operate equitably and comply with regulatory standards.

3. Be Proactive to Avoid Compliance Issues and Potential Fines

  • Determine Your Business Status: Identify whether your business qualifies as a Very Large Online Platform (VLOP) under the DSA or uses high-risk AI systems under the AI Act. This classification affects the level of regulatory scrutiny and obligations.
  • Maintain Detailed Compliance Records: Keep comprehensive records of compliance efforts, including moderation actions, AI audits, and risk assessments. These records are crucial for demonstrating compliance during regulatory audits.
  • Internal Audits: Conduct regular internal audits to identify compliance gaps before regulators do. This proactive approach helps address issues promptly and reduces the risk of fines.
  • Employee Training: Train employees on AI literacy and legal risks associated with AI and online platforms. Educated staff can help identify and mitigate compliance risks more effectively.

Why Compliance Is a Business Imperative

The DSA and AI Act are just the beginning. The EU is known for influencing global regulations (the “Brussels Effect”), meaning similar laws will likely emerge worldwide.

Key future trends to watch:

  • Expanding AI regulations: Expect new AI categories and more stringent compliance requirements.
  • Regular updates: The EU will update lists of prohibited AI systems and VLOP designations.
  • Stronger penalties: Expect even harsher fines for AI misuse and data privacy breaches.

Understanding and complying with the DSA and AI Act isn’t just about avoiding fines – it’s about building trust, protecting users, and ensuring long-term business viability. As regulations evolve, businesses must stay ahead by investing in transparency, compliance, and responsible AI practices.

For digital platform owners, the choice is clear: adapt now or risk severe consequences.

How can Imagga help?

At Imagga, we offer AI-powered content moderation solutions tailored to your company’s specific needs. Whether you need image and video moderation, real-time moderation, or AI-assisted workflows, our experts can help you stay ahead of regulations while keeping your platform safe.

Learn more about how we can support your business, or get in touch to discuss your needs.