Federal court rejects First Amendment defense in chatbot wrongful death case
As generative AI tools grow more sophisticated—and more personal—the legal system is being forced to confront their potential harms. Guest contributor Justin Ward explores a chilling case against Character.AI, in which the mother of a teenage user is suing the company after her son took his own life. The boy had become fixated on an AI-generated version of a Game of Thrones character. In a significant ruling, a federal judge refused to dismiss the case on First Amendment grounds, challenging assumptions about whether AI output qualifies as protected speech—and raising urgent questions about AI accountability, user vulnerability, and the limits of tech company liability.