How Are Websites Affected by UK Government Internet Safety Regulations?
With the exponential rise of user-generated content shared online, the need for content moderation is increasing. And while until now it has been up to businesses to decide whether to screen content or not, soon they will be required to do so by official content moderation regulations.
The UK government recently announced that Ofcom, the country’s communications regulator, will be put in charge of overseeing the internet in two specific areas: illegal content and harmful content. Ofcom will ensure that illegal content is taken down immediately, with a particular focus on terrorism and child abuse images, and that the posting of such content is prevented in the first place. The regulator will also be empowered to fine platforms built on user-generated content (UGC), such as social media networks, for serving harmful content to their users.
The proposed legislation has yet to be passed, but it is clear that it will soon be a reality. With the UK leading the pack, other governments will follow, and more companies globally will be affected.
There’s a convenient and affordable solution. In this short video, our CEO Georgi Kadrev explains how your company can benefit from an AI-powered content moderation solution.
Any website that publishes UGC is affected by content moderation regulations
Virtually any website that operates with user-submitted content needs to screen it in order to reduce online harm. And there are many valid reasons to do so, with or without official legislation in place. The content in question includes images, videos, and users’ comments, which means a large percentage of online platforms and websites are affected.
What are the options for content moderation?
Content moderation can be performed by humans, by AI, or by both. Companies still rely largely on human moderation, but this approach is expensive, difficult to scale, and takes an emotional toll on the people performing the job. And while AI-powered content moderation addresses both the ethical and the economic side of the problem, the algorithms are not yet sophisticated enough to take over fully. In most cases today, the best approach is to combine human and AI content moderation in the most practical and safe way. Joining the power of computer vision with human judgment holds huge potential for moderating massive amounts of violent, pornographic, exploitative, and illegal content, and for protecting the internet from online bullies and criminals.
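In practice, the hybrid approach described above often works as a routing rule: the AI model classifies each piece of content, high-confidence decisions are automated, and uncertain cases are escalated to a human moderator. The sketch below illustrates that idea in Python; the categories, thresholds, and function names are illustrative assumptions, not any specific vendor’s API.

```python
# A minimal sketch of hybrid (AI + human) moderation routing.
# The categories, confidence thresholds, and names below are
# illustrative assumptions, not a real moderation API.

from dataclasses import dataclass


@dataclass
class ModerationResult:
    category: str      # e.g. "violence", "nudity", "safe"
    confidence: float  # model confidence, 0.0 to 1.0


def route(result: ModerationResult,
          auto_remove_at: float = 0.95,
          auto_approve_at: float = 0.90) -> str:
    """Decide what happens to a piece of user-generated content."""
    if result.category == "safe":
        # High-confidence safe content is published without review.
        return "approve" if result.confidence >= auto_approve_at else "human_review"
    # High-confidence harmful content is removed automatically;
    # uncertain cases go to a human moderator.
    return "remove" if result.confidence >= auto_remove_at else "human_review"


print(route(ModerationResult("violence", 0.98)))  # remove
print(route(ModerationResult("nudity", 0.60)))    # human_review
print(route(ModerationResult("safe", 0.97)))      # approve
```

Tuning the two thresholds is how a platform trades off moderator workload against the risk of harmful content slipping through: lowering them sends more content to human review, raising them automates more decisions.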