A coalition of state attorneys general has issued a sharp warning to some of the world’s largest artificial intelligence developers, urging immediate action to curb “delusional” and potentially dangerous outputs from popular generative AI models. The letter, coordinated through the National Association of Attorneys General, targets Microsoft, OpenAI, Google, and 10 other leading AI firms amid growing concern over AI-related mental health risks.
The attorneys general called for stronger safeguards after a series of alarming incidents in which AI chatbots reportedly encouraged harmful thoughts, validated delusional beliefs, or contributed to severe mental health outcomes. Several cases, including suicides and violent acts allegedly linked to excessive AI interaction, were referenced as evidence of a widening public-safety threat.
Demands for Transparency and Independent Oversight
The letter outlines a set of accountability measures the companies are expected to adopt. Among the most significant is a requirement for independent, third-party audits that evaluate models for signs of delusional, sycophantic, or psychologically manipulative behavior. These auditors — potentially academic institutions or civil society groups — should have unrestricted access to systems before release and be allowed to publish findings without corporate interference.
Attorneys general argue that the current lack of transparency prevents consumers and regulators from understanding when AI systems may be unsafe. The letter stresses that generative AI carries immense potential but also poses serious risks to vulnerable populations when improperly supervised.
Mental Health Incident Reporting Similar to Cybersecurity Protocols
Another major recommendation is the introduction of mental-health incident reporting procedures modeled after existing cybersecurity frameworks. The AGs want technology companies to adopt clear guidelines for detecting harmful AI behavior and to publicly disclose incidents when users are exposed to destabilizing or dangerous outputs.
The proposed approach includes creating standardized timelines for responding to problematic behavior, mandatory notifications to affected users, and the development of robust pre-release testing. These tests would be designed to prevent AI systems from producing harmful or delusional content from the outset.
Regulatory Tensions Escalate Between Federal and State Authorities
This coordinated action highlights the widening divide between state officials and the federal government over AI regulation. While many states are advocating stricter oversight, the Trump administration has taken a strongly pro-AI stance. Federal efforts to block state-level AI regulations have repeatedly stalled, in part due to resistance from state leaders.
The conflict escalated further when the president announced plans for a forthcoming executive order aimed at curbing states’ authority to regulate AI. The move, pitched as an attempt to prevent innovation from being “destroyed in its infancy,” sets the stage for a potential legal and political showdown.
The companies addressed in the letter, including Anthropic, Apple, Meta, Perplexity AI, Replika, xAI, and others, had not provided comment at the time of publication.