REGINA — With the rise in public access to artificial intelligence (AI) technologies, both police and criminals are taking advantage of their capabilities.
"Like with anything, it can be used for good and for bad," says Cpl. Phillipe Gravel, who is an investigator with the RCMP's National Child Exploitation Coordination Centre (NCECC).
According to Gravel, police officers are, for the first time, seeing AI being used to create child sexual abuse material.
"Their world works on a barter system - they need to have material to trade to get more," says Gravel. "And, they use AI to create it for them."
In the last year, the RCMP has conducted only a small number of investigations in which AI-generated child sexual abuse material was found on a suspect's hard drive. But Gravel predicts more will come.
"The wave of AI-generated child sexual abuse material is coming," says Gravel.
"There are new ways of doing things coming out all the time and, as criminals learn them, we can expect more."
Tech talk
AI image-generation technology works by learning from vast numbers of existing images pulled from the internet, then creating a new image based on the parameters given to it by a user.
Pornographic drawings and digital images depicting children are illegal in Canada. Under the Criminal Code, it is a criminal offence to possess or create them.
"The material used for AI-generated pornography comes from somewhere; they are real people," he says, insisting the crime is not victimless.
Hidden dangers
In a digital culture where posting pictures and videos of every aspect of life has become the norm, Gravel offers a word of advice to help prevent more people from becoming victims.
"If you post something, consider that it's there forever even if you delete it, because you don't know who's shared, saved, or screenshot that image," he says.
Having your children's images misused is only one thing parents should think about. Objects in the background of pictures and videos can easily give criminals the information they need to find you or your child. "The best way to not expose yourself, or your children, is to just not post," adds Gravel.
Alert and aware
The Canadian Anti-Fraud Centre (CAFC) is also aware the technology is out there and could be used in scams and frauds, according to Jeff Horncastle, the acting client and communications outreach officer at the centre.
He says victims have reported suspected use of AI in phone scams and fake investment opportunities.
There's evidence that scammers could be using voice-cloning technology to mimic a real person speaking, according to Horncastle. "The technology is not like it was even two years ago, where the voice was robotic," he says. "It's so believable, it's almost like you're interacting with a real person."
He says it's very likely the technology is also being used in romance scams and sextortion. Fraudsters can now use software that deploys deep-fake personas and images to appear in video chats and virtual meetings looking and sounding like a real human being. Armed with a picture of a victim, they can use that same technology to make it appear as though the victim is present and speaking.
Deep-fake video technology has also been seen faking celebrity endorsements of fraudulent products and cryptocurrency, says Horncastle. AI is also suspected in phishing and spear-phishing emails, which are used to trick people into giving out personal and banking information.
"They can intercept a business email request for money, such as accounts payable or invoices," says Horncastle. "The victim thinks they're emailing with the president of the company, co-worker, or supplier and gives them the necessary banking information and releases money directly to the scammer."
But, he says, the way to prevent being scammed hasn't changed. "Scammers thrive on creating a sense of urgency, panic, or fear," says Horncastle. "So, take your time, don't react, and verify the authenticity of suspicious communications."
Eyeing the evidence
Despite its potential for criminal use, the technology can be, and is being, used for good. For example, the National Child Exploitation Coordination Centre is using it to speed up investigations of suspected online child sexual exploitation, helping to identify children and remove them from abusive environments or other harm.
Police scanning a suspect's hard drive can use AI-enabled technology to identify images that match the criteria for child sexual exploitation material, a task normally done manually.
"Some of the investigators and specialists on the unit are parents — for some, that's why they do this job — so, the less time they can spend looking at this stuff, the better for their mental health," says Gravel. It also means less exposure for the employees and faster review of the material, which increases in volume every day.
Culture of transparency
Operational technologies like AI play a critical role in modern policing. An operational technology is any technology-based tool, technique, device, software, application, or data set that will be used to support an investigation or to gather criminal intelligence. They include such things as search tools, automated licence plate recognition, voice dictation, data extraction filtering, translation, facial recognition, and text-to-speech functionality.
These tools are used to combat crime, investigate suspects, protect children and vulnerable groups, collect evidence, improve data analytics, strengthen police accountability, and advance law enforcement and public safety objectives.
RCMP units like the NCECC consult with the National Technology Onboarding Program (NTOP) when considering the use of new technologies to ensure that they are effective and compliant with both the Canadian Charter of Rights and Freedoms and the Privacy Act.
"Our team is motivated to help all RCMP program areas bring together the information they need to make sure they're using the tools appropriately and effectively," says Michael Billinger, who's in charge of transparency and outreach at NTOP.
The Program was created in 2021 in response to privacy concerns over the RCMP's use of Clearview AI facial-recognition technology and the subsequent investigation, which led the Privacy Commissioner to recommend that the organization take a more structured approach to adopting and integrating new technologies. NTOP was established to ensure the responsible use of operational technologies by the RCMP and to encourage greater public transparency. To this end, NTOP recently published the Transparency Blueprint: Snapshot of Operational Technologies.
Billinger says he's seen a culture change in how the RCMP is looking at privacy and its commitment to increase transparency.
"RCMP employees are thinking privacy first and informing themselves on what the privacy concerns are, and not just relying on us to educate them," says Billinger. "It's a change in thinking about information as classified and confidential, to increasing transparency to foster trust and confidence with the public. I think that's a big win."