In a motion to dismiss, chatbot platform Character AI claims it's protected by the First Amendment
Character AI, a platform that lets users engage in roleplay with AI chatbots, has filed a motion to dismiss a case brought against it by the father of a teen who died by suicide, allegedly after becoming hooked on the company's technology.
In October, Megan Garcia filed a lawsuit against Character AI in the U.S. District Court for the Middle District of Florida, Orlando Division, over the death of her son, Sewell Setzer III. According to Garcia, her 14-year-old son developed an emotional attachment to a chatbot on Character AI, "Dany," which he texted constantly, to the point where he began to pull away from the real world.
Following Setzer's death, Character AI said it would roll out a number of new safety features, including improved detection, response, and intervention related to chats that violate its terms of service. But Garcia is fighting for additional guardrails, including changes that could result in chatbots on Character AI losing their ability to tell stories and personal anecdotes.
In the motion to dismiss, counsel for Character AI asserts that the platform is protected against liability by the First Amendment, just as computer code is. The argument may not persuade a judge, and Character AI's legal justifications may shift as the case proceeds. But the motion hints at early elements of the company's defense.
"The First Amendment prohibits tort liability against media and technology companies arising from allegedly harmful speech, including speech allegedly resulting in suicide," the filing reads. "The only difference between this case and those that have come before is that some of the speech here involves AI. But the context of the expressive speech, whether a conversation with an AI chatbot or an interaction with a video game character, does not change the First Amendment analysis."
To be clear, Character AI's counsel isn't asserting the company's own First Amendment rights. Rather, the motion argues that Character AI's users would have their First Amendment rights violated should the lawsuit against the platform succeed.
The motion doesn't address whether Character AI might be held harmless under Section 230 of the Communications Decency Act, the federal safe-harbor law that protects social media and other online platforms from liability for third-party content. The law's authors have implied that Section 230 doesn't protect output from AI like Character AI's chatbots, but it's far from a settled legal question.
Character AI's counsel also claims that Garcia's real intention is to "shut down" Character AI and prompt legislation regulating technologies like it. Should the plaintiffs succeed, it would have a "chilling effect" on both Character AI and the entire nascent generative AI industry, counsel for the platform says.
"Apart from counsel's stated intention to 'shut down' Character AI, [their complaint] seeks drastic changes that would materially limit the nature and volume of speech on the platform," the filing reads. "These changes would radically restrict the ability of Character AI's millions of users to generate and participate in conversations with characters."
The lawsuit, which also names Character AI's corporate benefactor Alphabet as a defendant, is just one of several suits Character AI faces over how minors interact with AI-generated content on its platform. Other suits allege that Character AI exposed a 9-year-old to "hypersexualized content" and promoted self-harm to a 17-year-old user.
In December, Texas Attorney General Ken Paxton announced he was launching an investigation into Character AI and 14 other tech companies over alleged violations of the state's online privacy and safety laws for children. "These investigations are a critical step toward ensuring that social media and AI companies comply with our laws designed to protect children from exploitation and harm," Paxton said in a press release.
Character AI is part of a booming industry of AI companionship apps, the mental health effects of which are largely unstudied. Some experts have expressed concerns that these apps could exacerbate feelings of loneliness and anxiety.
Character AI, which was founded in 2021 by Google AI researcher Noam Shazeer, and which Google reportedly paid $2.7 billion to "reverse acquihire," has claimed that it continues to take steps to improve safety and moderation. In December, the company rolled out new safety tools, a separate AI model for teens, blocks on sensitive content, and more prominent disclaimers notifying users that its AI characters are not real people.
Character AI has gone through a number of personnel changes since Shazeer and the company's other co-founder, Daniel De Freitas, left for Google. The platform hired a former YouTube exec, Erin Teague, as chief product officer, and named Dominic Perella, who was Character AI's general counsel, interim CEO.
Character AI recently began testing games on the web in an effort to boost user engagement and retention.