Artificial intelligence (AI) has demonstrated remarkable capabilities, such as passing the bar exam. It has also shown its limitations, including generating fictitious citations in legal documents.
To some degree, the jury is still out on AI’s broader implications. However, a coalition of attorneys general fears the most serious challenges lie ahead.
Exploitation of children through AI
The National Association of Attorneys General (NAAG), through a joint effort of 54 attorneys general, recently issued a letter urging Congress to study how AI can be, and already is being, used to exploit children.
According to the NAAG, AI can:
- Pinpoint a child’s location.
- Mimic a child’s voice with terrifying precision.
- Produce “deepfakes” by studying real images of abused children.
- Generate child sexual abuse material.
The NAAG letter further highlights the grave reality that the accessibility of “open-source” AI tools allows individuals to generate and disseminate harmful content without any significant oversight.
On June 5, 2023, the FBI published a public service announcement explaining that malicious actors have been creating sexually explicit images of children by manipulating benign photographs and videos, often culled from social media accounts, the internet, or requested from the victim.
Call to action
The attorneys general, in their joint letter, have implored Congress to undertake two primary measures to counteract this emerging threat:
- Formulate an expert commission dedicated to examining the ways in which AI can be employed to exploit children. This commission should also propose actionable strategies to prevent and combat such exploitation.
- Following a review of the commission's recommendations, Congress should take decisive action against child exploitation, potentially by extending current bans on child sexual abuse material to unequivocally include AI-generated content.
The letter states: “While we know Congress is aware of concerns surrounding AI, and legislation has recently been proposed at both the state and federal level to regulate AI generally, much of the focus has been on national security and education concerns. And while those interests are worthy of consideration, the safety of children should not fall through the cracks when evaluating the risks of AI.”
Personal injury claims based on AI abuses
The rise of AI has not only raised concerns related to the exploitation of children but also opened the door to potential personal injury claims arising from AI's misuse.
Personal injury law has traditionally been based on the negligence of human actors. However, with AI systems operating autonomously, determining liability becomes more complex. Who should be held responsible: the AI developers, the users, or the manufacturers?
While some states have begun to formulate laws addressing these concerns, the majority are still grappling with the intricacies of AI and the legal implications surrounding it. The attorneys general’s push for a focused approach to AI’s potential to exploit children underscores the need for clearer laws regarding AI and its potential impact on personal safety and rights.
Who signed the letter?
Attorneys general from the following states and territories signed the letter:
Alabama, Alaska, Arizona, Arkansas, California, Colorado, Connecticut, Delaware, District of Columbia, Florida, Georgia, Hawaii, Idaho, Illinois, Indiana, Iowa, Kansas, Kentucky, Louisiana, Maine, Maryland, Massachusetts, Michigan, Minnesota, Mississippi, Missouri, Montana, Nebraska, Nevada, New Hampshire, New Jersey, New Mexico, New York, North Carolina, North Dakota, Northern Mariana Islands, Ohio, Oklahoma, Oregon, Pennsylvania, Puerto Rico, Rhode Island, South Carolina, South Dakota, Tennessee, Texas, Utah, Vermont, Virgin Islands, Virginia, Washington, West Virginia, Wisconsin, and Wyoming.