Google announced on Monday that it is employing new artificial intelligence technology to combat the spread of content involving child sexual abuse.
Google said its cutting-edge AI technology uses deep neural networks for image processing to help discover and detect child sexual abuse material (CSAM) online.
The new tool, based on deep neural networks, will be made available for free to non-governmental organizations (NGOs) and other “industry partners,” including other technology companies, via a new Content Safety API service offered upon request, Xinhua reported.
“Using the Internet as a means to spread content that sexually exploits children is one of the worst abuses imaginable,” Google Engineering Lead Nikola Todorovic and Product Manager Abhi Chaudhuri wrote in the company’s official blog post.
The new AI technology will significantly help service providers, NGOs and other tech firms improve the efficiency of CSAM detection and reduce human reviewers’ exposure to the content, Todorovic and Chaudhuri said.
“Quick identification of new images means that children who are being sexually abused today are much more likely to be identified and protected from further abuse,” they noted.
“We’ve seen firsthand that this system can help a reviewer find and take action on 700 percent more CSAM content over the same time period,” they added.
Many tech companies are now more willing to leverage AI to detect various kinds of abusive content, such as nudity and abusive comments, and Google’s announcement represents its fresh commitment to fighting online CSAM by sharing “the latest technological advancements.”
Google has been working with partners in the fight against online child sexual abuse, including the Britain-based charity the Internet Watch Foundation, the Technology Coalition and the WePROTECT Global Alliance, as well as other NGOs.