
Ripple CTO David Schwartz Criticizes Legal Case Against Character.AI, Citing First Amendment Protections

  • Schwartz Defends Character.AI Speech as Protected by First Amendment Amid Lawsuit  
  • Character.AI Faces Allegations Over Safety, Negligence; Adds New User Protection Measures
  • Lawsuit Cites Chatbot Interactions as “Unlicensed Psychotherapy” in Teen’s Tragic Death

In a recent social media post, Ripple’s Chief Technology Officer, David Schwartz, dismissed the merits of the lawsuit against Character.AI, a chatbot platform accused of negligence and deceptive practices.

Schwartz argued that the case lacks legal foundation, stressing that the platform’s generated content is protected under the First Amendment. 

Schwartz’s comments sparked conversation in the tech community. He emphasized that while he is not defending Character.AI on moral grounds, he believes the lawsuit fails to align with U.S. free speech protections.

Schwartz contends that Character.AI’s chatbot interactions are a form of expressive speech safeguarded by the First Amendment. He explained that while certain types of speech are not protected, the platform’s content does not fall into these categories. 

According to Schwartz, the legal complaint revolves around the notion that Character.AI’s design and algorithms recklessly produce speech, a theory he deemed incompatible with constitutional speech protections.

The Ripple CTO compared the current controversy to historical moral panics over new media forms, including video games and comic books, which also faced public scrutiny over alleged negative impacts.

Schwartz argued that these criticisms often overlook the boundaries of freedom of expression, noting that attempts to regulate expressive speech challenge constitutional rights.

The lawsuit was filed by the mother of Sewell Setzer III, a 14-year-old boy who had reportedly interacted extensively with Character.AI’s chatbots before his death. The complaint alleges that the platform was “unreasonably dangerous” for minors and asserts claims of negligence, wrongful death, deceptive trade practices, and product liability.

The plaintiff’s legal team has argued that Character.AI’s chatbots simulate popular characters from shows like *Game of Thrones*, offering an anthropomorphic interaction style that blurred the line between AI and real people. 

The lawsuit claims that these interactions created an environment where users could engage in “unlicensed psychotherapy,” allegedly contributing to Setzer’s mental distress and subsequent death.

Google, which hired Character.AI’s founders Noam Shazeer and Daniel De Freitas as part of a licensing deal, is also named in the lawsuit.

In response to the allegations, Character.AI has implemented several safety measures aimed at reducing potential risks for users.

Among these, the platform has introduced age-based content filters, reminders that the chatbots are not real people, and notifications for users who engage in prolonged interactions.
