
Prosecutors in all 50 states urge Congress to strengthen tools to fight AI child sexual abuse images

The U.S. Capitol is seen, Wednesday, Aug. 30, 2023, in Washington. (AP Photo/Mariam Zuhaib)


 



COLUMBIA, S.C. (AP) — The top prosecutors in all 50 states are urging Congress to study how artificial intelligence can be used to exploit children through pornography, and come up with legislation to further guard against it.

In a letter sent Tuesday to Republican and Democratic leaders of the House and Senate, the attorneys general from across the country call on federal lawmakers to “establish an expert commission to study the means and methods of AI that can be used to exploit children specifically” and to expand existing restrictions on child sexual abuse materials to cover AI-generated images.

“We are engaged in a race against time to protect the children of our country from the dangers of AI,” the prosecutors wrote in the letter, shared ahead of time with The Associated Press. “Indeed, the proverbial walls of the city have already been breached. Now is the time to act.”

South Carolina Attorney General Alan Wilson led the effort to add signatories from all 50 states and four U.S. territories to the letter. The Republican, elected last year to his fourth term, told AP last week that he hoped federal lawmakers would translate the group’s bipartisan support for legislation on the issue into action.

“Everyone’s focused on everything that divides us,” said Wilson, who marshaled the coalition with his counterparts in Mississippi, North Carolina and Oregon. “My hope would be that, no matter how extreme or polar opposites the parties and the people on the spectrum can be, you would think protecting kids from new, innovative and exploitative technologies would be something that even the most diametrically opposite individuals can agree on — and it appears that they have.”

The Senate this year has held hearings on the possible threats posed by AI-related technologies. In May, OpenAI CEO Sam Altman, whose company makes the free chatbot tool ChatGPT, said that government intervention will be critical to mitigating the risks of increasingly powerful AI systems. Altman proposed the formation of a U.S. or global agency that would license the most powerful AI systems and have the authority to “take that license away and ensure compliance with safety standards.”

While there’s no immediate sign Congress will craft sweeping new AI rules, as European lawmakers are doing, the societal concerns have led U.S. agencies to promise to crack down on harmful AI products that break existing civil rights and consumer protection laws.

In addition to federal action, Wilson said he’s encouraging his fellow attorneys general to scour their own state statutes for possible areas of concern.

“We started thinking, do the child exploitation laws on the books — have the laws kept up with the novelty of this new technology?”

According to Wilson, the dangers AI poses include the creation of “deepfake” scenarios — videos and images that have been digitally created or altered with artificial intelligence or machine learning — of a child who has already been abused, or the alteration of the likeness of a real child from something like a photograph taken from social media so that it depicts abuse.

“Your child was never assaulted, your child was never exploited, but their likeness is being used as if they were,” he said. “We have a concern that our laws may not address the virtual nature of that, though, because your child wasn’t actually exploited — although they’re being defamed and certainly their image is being exploited.”

A third possibility, he pointed out, is the altogether digital creation of a fictitious child’s image for the purpose of creating pornography.

“The argument would be, ‘well I’m not harming anyone — in fact, it’s not even a real person,’ but you’re creating demand for the industry that exploits children,” Wilson said.

There have been some moves within the tech industry to combat the issue. In February, Meta, as well as adult sites like OnlyFans and Pornhub, began participating in an online tool, called Take It Down, that allows teens to report explicit images and videos of themselves from the internet. The reporting site works for regular images and AI-generated content.

“AI is a great technology, but it’s an industry disrupter,” Wilson said. “You have new industries, new technologies that are disrupting everything, and the same is true for the law enforcement community and for protecting kids. The bad guys are always evolving on how they can slip off the hook of justice, and we have to evolve with that.”

___

Meg Kinnard can be reached at http://twitter.com/MegKinnardAP


Copyright 2024 The Associated Press. All rights reserved. This material may not be published, broadcast, rewritten or redistributed.
