AI-generated images of child sexual abuse reach ‘tipping point’, says watchdog

Child sexual abuse images generated by artificial intelligence tools are becoming increasingly common on the open web, reaching a “tipping point”, according to a security watchdog.

The Internet Watch Foundation said the amount of AI-generated illegal content it has seen online in the past six months has already exceeded the previous year’s total.

The organization, which runs a UK hotline but also operates globally, said almost all of the content was found on publicly accessible areas of the internet rather than on the dark web, which must be accessed using special browsers.

The IWF’s interim executive director, Derek Ray-Hill, said the level of sophistication of the images showed that the AI tools used had been trained on images and videos of real victims. “The last few months show that this problem is not going away but is actually getting worse,” he said.

According to one IWF analyst, AI-generated content has reached a “tipping point” at which watchdogs and the authorities can no longer tell whether an image involves a real child in need of help.

The IWF acted on 74 reports of AI-generated child sexual abuse material (CSAM) realistic enough to breach UK law in the six months to September this year, compared with 70 in the 12 months to March. A single report can refer to a web page containing multiple images.

In addition to AI images depicting real victims of abuse, the material seen by the IWF included “deepfake” videos in which adult pornography had been manipulated to resemble CSAM. In previous reports, the IWF has said AI was being used to create images of celebrities who were “de-aged” and then depicted as children in sexual abuse scenarios. Other examples of CSAM included material where AI tools had been used to make clothed children found online appear naked.

More than half of the AI-generated content flagged by the IWF in the past six months was hosted on servers in Russia and the US, with Japan and the Netherlands also hosting significant amounts. The addresses of web pages containing the images are added to an IWF URL list, which is shared with the tech industry so the pages can be blocked and made inaccessible.

According to the IWF, eight out of 10 reports of illegal AI-generated images came from members of the public who found them on public websites such as forums or AI galleries.

Meanwhile, Instagram has announced new measures to combat sextortion, in which criminals, typically posing as young women, trick users into sending intimate images and then blackmail them.

The platform will introduce a feature that blurs any nude image sent to users in direct messages (DMs) and urges caution on anyone about to send one. Recipients of a blurred image can choose whether to view it, and will also see a message reminding them that they can block the sender and report the chat to Instagram.

The feature will be enabled by default for teen accounts worldwide from this week and will also work in encrypted messages, although images flagged by the on-device detection will not be automatically reported to the platform itself or to the authorities.

It will be an opt-in feature for adults. Instagram will also hide follower and following lists from potential sextortion scammers, who have been known to threaten to send intimate images to those accounts.
