A disturbing new porn trend is spreading through Australian schools at an alarming rate.
Parents are being warned about new artificial intelligence technology that allows users to seamlessly place one person's face on another's body, known as ‘deepfakes’.
While it may sound like a bit of Snapchat or TikTok fun, the technology is being used maliciously and illegally – and it's awfully easy to do so.
Adolescents in US schools
Earlier this month, it was reported that a deepfake app was being used to create pornographic images of classmates.
Although technically fake, the photos can be all but indistinguishable from the real thing, and they are usually designed to embarrass, humiliate and intimidate.
They can be used as a tool to manipulate people and commit “sextortion”, the practice of extorting money or sexual favors from a person under threat of releasing their intimate content.
Now the conversation has turned to Australia, where experts have warned of the alarming trend in schools across the country.
Not only have there been cases of images of school children being used in this way, but there have also been reports of children creating deepfake pornographic images of their teachers.
A cyber security expert told news.com.au that the process of creating deepfake material is surprisingly simple.
“The first deepfakes were created in the film industry, where new technologies helped with special effects,” Netskope Asia Pacific Vice President Tony Burnside told news.com.au.
Think of Forrest Gump in the scenes where he meets JFK or John Lennon, for example.
He explained that for a long time, the enormous cost of such technology meant that it was limited to creative professionals.
“However, advances in artificial intelligence in recent years have made this task easier, and malicious actors have taken advantage of this opportunity,” Mr Burnside explained.
“In the late 2010s, they started creating deep fakes for large-scale, mostly political, disinformation campaigns, where a single fake image or video can affect millions.
“Today, you don't need to be a cyber criminal or have extensive skills to create deep fakes.”
Australian children are at risk
Artificial intelligence expert Anuska Bandara, founder of Melbourne-based Elegant Media, added that children and young adults are particularly vulnerable to deepfake technology.
“Following the AI frenzy in November 2022, marked by the release of OpenAI's flagship product, ChatGPT, the conversation has taken a worrying turn with the rise of deepfake technology,” Mr Bandara told news.com.au.
“This issue could have far-reaching consequences for Australians, particularly children and young adults who are increasingly vulnerable.
“Younger demographics have become avid followers of their favorite influencers, whether animated characters or sports personalities, and often accept their messages on social media without question.
“The danger is that the real people have no control over what the deepfakes, created using advanced AI techniques, communicate. Fraudsters use this technology to influence unsuspecting individuals, lure them into dangerous situations or even distribute explicit content.
“The consequences of this abuse pose a significant threat to the well-being and safety of younger generations as they navigate the online landscape.”
Mr Bandara said children's photos could easily be used to create salacious content without their parents' knowledge.
“It certainly can happen, especially with content that is publicly available on the internet,” he said.
“It's critical to understand the privacy policies and settings associated with sharing online content involving your children.”
He explained that because photos can be easily manipulated, even with more basic tools like Photoshop, parents need to be aware of where their children's images appear and who can access them.
“Many tools are available to create deepfake videos effortlessly. It is important to teach your children to recognize such content,” Mr Bandara explained.
“Be wary of content from unverified sources and always trust material from reputable publishers, including mainstream media.”
Lifelong psychological impact
Psychologist Katrina Lines, who is also the CEO of Act for Kids, told news.com.au that with issues like sextortion on the rise, deepfake technology is emerging at a particularly dangerous time.
She added that it is vital to educate parents and children about the potential dangers of posting content online, no matter how well-intentioned it may seem.
“The issue of sextortion is growing and it's directly related to sharing content,” Ms Lines said.
“Some teenagers are easily tricked into sending explicit pictures of themselves to someone they believe is their own age.
“But now there is the issue of deepfakes, and that makes things more complicated. You have no control over it; people think you've sent explicit material when you haven't.
“This is sexual abuse and it has lifelong psychological effects.
“I know that child sexual exploitation material is digitally altered and recirculated in many parts of the dark web.
“It's just constant sexual abuse of children and it's just horrible.”
Ms Lines urged parents to be careful about what they share online.
“We all like to post happy pictures of our family and stuff online, but it's really important to understand that once a photo is posted, you usually can't get it back,” she warned.
“There is no real way to know if your child's pictures are being used online. Most of the time, this material doesn't exist on the normal web but on the dark web, where it's harder for normal, everyday people to find.”
It's easier to cause harm
Australia's eSafety Commissioner, Julie Inman Grant, confirmed they had received an increasing number of complaints about pornographic deepfakes since the start of the year.
She also said that the ease of creating deepfakes made it easier to harm others.
“The rapid adoption, increasing sophistication and popular uptake of generative AI means it no longer takes massive computing power or masses of content to create convincing deepfakes,” Ms Grant told news.com.au.
“This means it's getting harder and harder to tell what's real and what's fake online. And it's much easier to do a lot of damage.
“eSafety has seen a small but growing number of complaints about deepfakes reported through our image-based abuse scheme since the start of the year.
“We expect this number to grow as generative AI technology becomes more advanced and widely available – and as people find more creative ways to abuse it.
“We have also received a small number of cyberbullying reports involving deepfakes, where children were using the technology to bully other children online.
“All of this should give us pause, and it should compel the industry to take action to stem the tide of further misuse and abuse.”
Ms Grant said it could be “devastating” for someone to discover their image had been used in an explicit deepfake, and urged anyone in that predicament to report it.
“Deepfakes, especially deepfake pornography, can be devastating to those whose images are hijacked and maliciously altered without their knowledge or consent,” she said.
“Availability of and investment in deepfake detection tools lag far behind, denying victims potential validation or recourse.
“We encourage Australians who experience any form of image-based abuse, including deepfakes, to report it at eSafety.gov.au.
“Our investigators are committed to supporting Australians dealing with this disturbing abuse and have an 87 per cent success rate in removing this material.”