Australia Expects Tech Giants to Combat AI-Generated Child Abuse Images

JJohn September 7, 2023 10:57 PM

A new Australian industry standard compels tech giants such as Google and Microsoft to actively eliminate AI-generated child abuse images and terrorist propaganda from search engine results. The regulation, considered a world first, also requires these companies to ensure their AI tools cannot be used to generate such material.

Potential dangers of AI

Artificial intelligence, for all its benefits, carries dangers that must be acknowledged and confronted. The eSafety Commissioner of Australia has voiced concerns that AI tools could be manipulated to generate illicit content such as child abuse images and terrorist propaganda. This alarming prospect highlights the dark side of technological advancement, one that regulators and tech firms must face head-on.

New industry code against AI abuse

Technological advancement can be a double-edged sword. Australia has therefore introduced a new industry code that requires major tech companies to actively filter child abuse material from their search results. The code also insists that these firms ensure their generative AI products cannot create deepfake versions of abusive content. This groundbreaking regulation aims to curtail the misuse of AI technologies.

The new industry code is not a one-time solution but an ongoing commitment. It requires search engines to carry out regular audits and enhancements of their AI tools to ensure that 'class 1A' material, which encompasses child sexual exploitation, pro-terror, and extreme-violence content, does not appear in search results. This directive underscores the importance of continual vigilance and proactive measures in the fight against digital exploitation.

In addition to purging explicit content from search results, tech companies are also tasked with researching technologies to detect and identify deepfake images. As deepfakes become increasingly prevalent and sophisticated, these companies have a significant role to play in countering this digital deception. The initiative signifies the tech industry's responsibility not just for preventing harmful content, but also for empowering users to discern truth from falsehood.

Incorporate regulation from the start

Julie Inman Grant, the eSafety Commissioner, emphasized the importance of incorporating regulatory measures from the outset. Rather than adopting a reactive approach, she advocated proactive regulation that anticipates and addresses potential issues before they emerge. This is a call for the tech industry to prioritize safety and ethical considerations from the design and deployment phase of AI tools onward.
