According to the BBC, paedophiles are using artificial intelligence (AI) technology to create and sell life-like child sexual abuse material.
Some are accessing the images by paying subscriptions to accounts on mainstream content-sharing sites such as Patreon.
Patreon said it had a "zero tolerance" policy towards such imagery on its site.
The National Police Chiefs' Council said it was "outrageous" that some platforms were making "huge profits" but not taking "moral responsibility".
The abuse images were made using an AI program called Stable Diffusion, which was originally intended to generate images for use in art or graphic design.
AI makes it possible for computers to carry out tasks that would otherwise require human intelligence.
Users of Stable Diffusion can describe any image they want using word prompts, and the program then creates it.
But according to the BBC, it is being used to produce lifelike depictions of child sexual abuse, including the rape of infants and young children.
UK police online child abuse investigation teams say they have already encountered such content.
Journalist and independent researcher Octavia Sheepshanks has been investigating this issue for some time. She shared her findings with the BBC via the children's charity the NSPCC.
She said that since AI-generated images became feasible, paedophiles had been discussing not just very young girls but toddlers too.
In the UK, it is illegal to possess, publish or transfer a computer-generated "pseudo image" depicting child sexual abuse.
Ian Critchley, the National Police Chiefs' Council (NPCC) lead on child safeguarding, said it would be wrong to argue that no one is harmed by such "synthetic" images because no real children are depicted.
He foresaw that a paedophile might "move along that scale of offending from thought, to synthetic, to actually the abuse of a live child".
Abuse images are shared via a three-stage process:
- Paedophiles make images using AI software.
- They promote the pictures on platforms such as the Japanese picture-sharing site Pixiv.
- These accounts include links directing customers to their more explicit images, which people can pay to view on sites such as Patreon.
Some of the image-makers post their work on Pixiv, a popular Japanese social media site used mainly by artists sharing manga and anime.
Because sharing sexualised cartoons and drawings of children is not illegal in Japan, where the site is hosted, creators use it to promote their work in groups and via hashtags, which index topics using key words.
A Pixiv spokesman said it treated this issue as a top priority. On 31 May, it announced that all photo-realistic depictions of sexual content involving minors were banned.
The company said it had proactively strengthened its monitoring systems and was allocating substantial resources to deal with problems related to developments in AI.
According to Ms. Sheepshanks' research, users appear to be creating images of child abuse on a large scale.
"The volume is just enormous, so people [creators] will say 'we aim to do at least 1,000 images a month,'" she said.
Comments on individual Pixiv images have made clear that users have a sexual interest in children, with some even offering to supply abuse images and videos that were not AI-generated.
Ms. Sheepshanks has been monitoring some of the groups on the platform.
"Within those groups, which will have 100 members, people will share, 'Oh here's a link to real stuff,'" she explains.
"It was the most horrible stuff; I didn't even know words [such descriptions] existed."
Many Pixiv accounts carry links in their bios directing people to what they call their "uncensored content" on the US-based site Patreon.
Valued at an estimated $4 billion (£3.1bn), Patreon says it has more than 250,000 creators, most of them legitimate accounts belonging to well-known authors, journalists, and celebrities.
Fans can support creators for as little as $3.85 (£3) a month by taking out subscriptions giving them access to blogs, podcasts, videos, and images.
However, our investigation found Patreon accounts offering photo-realistic, AI-generated obscene images of children for sale, with different price levels depending on the type of material requested.
One user advertised "exclusive uncensored art" for $8.30 (£6.50) per month. Another wrote on his account, "I train my girls on my PC," adding "submission".
The BBC sent one example to Patreon, which the platform acknowledged was "semi realistic and against our policies". It said the account was removed immediately.
Patreon said it had a "zero-tolerance" policy under which creators cannot fund content with sexual themes involving minors.
The company said the rise of AI-generated harmful content on the internet was "real and distressing", adding that it had "identified and removed increasing amounts" of this material.
It said it was "very proactive", with dedicated teams, technology, and partnerships to "keep teens safe", adding: "We already ban AI-generated synthetic child exploitation material."
The AI image generator Stable Diffusion was developed as a global collaboration between academics and a number of companies, led by the UK firm Stability AI.
Several versions have been released with restrictions written into the code to control the kind of content that can be created.
However, an earlier "open source" version released publicly last year allowed users to train it to produce any image, including illegal ones.
In a statement to the BBC, Stability AI said it "prohibits any misuse for illegal or immoral purposes across our platforms, and our policies are clear that this includes CSAM (child sexual abuse material)".
"We firmly support law enforcement efforts against those who abuse our products for illegal or nefarious purposes," the company said.
As AI continues to advance quickly, concerns have been raised about the potential threats it may one day pose to people's safety, privacy, or human rights.
Ian Critchley of the NPCC said he was also concerned that a flood of realistic AI or "synthetic" images could make it harder to identify real abuse victims.
"It creates additional demand, in terms of policing and law enforcement, to identify where an actual child, wherever it is in the world, is being abused as opposed to a synthetic or artificial child," he explains.
Mr. Critchley said he saw it as a pivotal moment for society.
As he put it, "We can make sure that the internet and tech allows the fantastic opportunities it creates for young people, or it can become a much more harmful place."