Intensifying its fight against child abuse images circulating on the web, Google is now deploying artificial intelligence systems to help organisations identify and report child sexual abuse material (CSAM).
In a blog post, Google announced it will start using cutting-edge AI that builds on its existing technologies to help service providers, NGOs, and other technology companies review disturbing content at scale.
The approach uses deep neural networks for image processing to identify new images quickly, cutting down on response time. It can also flag content that previously went undetected.
Google will provide the new AI tools for free to NGOs and industry partners through its Content Safety API, a toolkit that expands the capacity to review CSAM online while requiring less human inspection.
Recently, the tech giant also said it is working with multiple partners to use new-age technologies like artificial intelligence (AI) and machine learning (ML) to improve response times to natural disasters such as floods, as well as to address healthcare challenges.
Google Technical Project Manager (TensorFlow) Anitha Vijayakumar said the company has developed a system that uses AI for early and accurate flood warnings, including guidance on dangerous, flood-prone areas. Earlier this year, Google partnered with India's Ministry of Water Resources on an AI- and ML-based flood warning pilot. She said the pilot was undertaken in flood-prone areas and the "initial results are promising".
Vijayakumar said the pilot can be extended to other parts of the country and expressed hope that the system will eventually be "able to give longer lead times to people, helping them seek safety". Floods have wreaked havoc across states this year, with Kerala being the latest. Hundreds have lost their lives, while thousands have been rendered homeless. More than 7.8 lakh (780,000) people are estimated to be in relief camps.
With inputs from ANI