Navigating New York’s AI Companion Law

How Puppeteer AI Aligns With New York’s New AI Companion Safeguard Law
New York’s AI Companion Safeguard Law, effective November 5, 2025, is the first statewide regulation in the U.S. specifically targeting AI systems that simulate sustained, relationship-like, emotionally responsive conversation.
At Puppeteer AI, we have built our platform with trust and safety in mind. The requirements of the law are not new for us; they reflect practices already embedded in our architecture.
Understanding the New Law
The law applies to entities offering “AI companions,” broadly defined as systems that simulate a sustained human relationship by retaining information from prior interactions, asking unsolicited emotion-based questions, or sustaining ongoing dialogue about personal issues.
Key obligations these systems must meet under the new law include:
- AI agents must clearly disclose to users that they are interacting with an AI system, not a human.
- During extended sessions, the AI agent must interrupt the engagement with a reminder that the user is interacting with artificial intelligence, not a human.
- The system must implement a protocol to detect and respond to user expressions of suicidal ideation, self-harm, or other crisis indicators, including referral to appropriate crisis resources.
In short: any “companion-style” AI system available to New York users must incorporate transparency, crisis-handling, and escalation features.
How Puppeteer AI Meets These New Legal Requirements
1. Clear AI Disclosures Across All Patient Interactions
The law requires explicit and recurring notice that the user is interacting with an AI system.
Puppeteer AI already supports standardized disclosure messages across every conversational flow. These disclosures can be configured both to appear at the start of every patient interaction and to reappear during longer conversations, as the law requires.
This ensures transparency for users and regulatory compliance for providers.
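To illustrate the mechanics, here is a minimal sketch of how a recurring-disclosure rule could work. The names (`DisclosureConfig`, `DisclosureTracker`) and the 30-minute interval are hypothetical, not Puppeteer AI’s actual API:

```python
import time
from dataclasses import dataclass
from typing import Optional

@dataclass
class DisclosureConfig:
    # Hypothetical settings; values are illustrative only.
    message: str = "Reminder: you are chatting with an AI assistant, not a human."
    remind_interval_s: int = 30 * 60  # re-disclose during long sessions

class DisclosureTracker:
    """Decides when the AI-disclosure notice must be (re)shown."""

    def __init__(self, config: DisclosureConfig) -> None:
        self.config = config
        self.last_shown: Optional[float] = None

    def should_disclose(self, now: Optional[float] = None) -> bool:
        now = time.time() if now is None else now
        if self.last_shown is None:
            return True  # always disclose at the start of a session
        return now - self.last_shown >= self.config.remind_interval_s

    def mark_shown(self, now: Optional[float] = None) -> None:
        self.last_shown = time.time() if now is None else now
```

In this sketch, the agent checks `should_disclose()` before each reply and prepends the notice whenever it returns `True`, covering both the session-start and extended-session requirements.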
2. Self-Harm & Suicidality Detection Built Into the Framework
One of the law’s most important requirements is the ability to detect and appropriately respond to crisis signals — including suicidal ideation, intent, or emotional distress.
Puppeteer AI includes:
- Crisis-keyword detection models calibrated for healthcare use cases.
- Automatic safety triggers when high-risk language is detected.
- Configurable escalation flows, such as handing the conversation to a human or directing the user immediately to the appropriate crisis resource (e.g., the 988 Suicide & Crisis Lifeline).
- Guardrails that ensure the agent does not attempt therapy, diagnosis, or counseling, and instead redirects the user to human or crisis professionals.
This makes Puppeteer AI inherently aligned with the law’s crisis-management requirements.
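As a simplified illustration of such a flow (the keyword patterns, response text, and function names below are invented for this post; production detection relies on calibrated models, not a short keyword list):

```python
import re

# Illustrative high-risk patterns only; a real deployment would use a
# calibrated detection model rather than keyword matching.
CRISIS_PATTERNS = [
    re.compile(r"\b(kill myself|suicide|end my life)\b", re.IGNORECASE),
    re.compile(r"\b(hurt|harm)\s+myself\b", re.IGNORECASE),
]

CRISIS_RESPONSE = (
    "It sounds like you may be going through a very difficult time. "
    "I'm an AI and can't provide counseling, but help is available now: "
    "call or text 988 (Suicide & Crisis Lifeline). "
    "I'm also connecting you with a member of the care team."
)

def is_high_risk(message: str) -> bool:
    """Flag messages containing high-risk language."""
    return any(p.search(message) for p in CRISIS_PATTERNS)

def escalate_to_human(message: str) -> None:
    """Placeholder handoff hook; a real system would page on-call staff."""
    print("ALERT: routing conversation to a human reviewer.")

def handle_message(message: str) -> str:
    """Route crisis messages to the safety flow instead of the normal agent."""
    if is_high_risk(message):
        escalate_to_human(message)
        return CRISIS_RESPONSE
    return "..."  # placeholder for the regular conversational flow
```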
3. Logged Conversations
Every conversation is logged for auditability, allowing providers to demonstrate that required notices and safeguards were triggered as intended.
Puppeteer AI maintains:
- Full logs of all AI disclosures served to the user
- Timestamped safety-trigger records, including what the agent detected and which response flow was initiated
- Conversation transcripts and summaries available for internal auditing or external review
- Secure, HIPAA-compliant storage for all safety-relevant events
This provides clinics with clear evidence of compliance if regulators request documentation.
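As a rough sketch of what one such audit record could look like (the schema and the `safety_audit.jsonl` path are invented for illustration; actual storage sits behind encrypted, HIPAA-compliant infrastructure):

```python
import json
import time
from dataclasses import dataclass, asdict

@dataclass
class SafetyEvent:
    # Hypothetical schema, not Puppeteer AI's actual record format.
    conversation_id: str
    event_type: str       # e.g. "ai_disclosure_shown", "crisis_flow_triggered"
    detected_signal: str  # what the agent detected (redacted as needed)
    response_flow: str    # which safeguard flow was initiated
    timestamp: float

def log_safety_event(event: SafetyEvent, path: str = "safety_audit.jsonl") -> None:
    """Append one timestamped record to an append-only JSONL audit log."""
    with open(path, "a", encoding="utf-8") as f:
        f.write(json.dumps(asdict(event)) + "\n")

log_safety_event(SafetyEvent(
    conversation_id="conv-123",
    event_type="crisis_flow_triggered",
    detected_signal="[redacted high-risk phrase]",
    response_flow="988_referral_and_human_handoff",
    timestamp=time.time(),
))
```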
4. Regular Auditing of Outputs and Safety Logic
To comply with ongoing requirements and maintain reliability, Puppeteer AI implements continuous quality assurance:
- Routine reviews of conversational agents’ outputs.
- Regular validation of the safety-trigger logic.
- Continuous testing of escalation workflows.
This aligns directly with the law’s intent: ensuring that safeguards do not merely exist — but function consistently over time.
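For instance, escalation logic like the sketch in section 2 lends itself to automated regression tests. The test below is hypothetical and reuses the `is_high_risk` and `handle_message` names from that sketch:

```python
import unittest

# Assumes is_high_risk / handle_message from the earlier sketch are in scope.

class EscalationWorkflowTests(unittest.TestCase):
    """Illustrative regression tests for the safety-trigger logic."""

    def test_high_risk_message_triggers_crisis_referral(self):
        reply = handle_message("I want to end my life")
        self.assertIn("988", reply)  # crisis referral must appear

    def test_benign_message_is_not_flagged(self):
        self.assertFalse(is_high_risk("When is my next appointment?"))

if __name__ == "__main__":
    unittest.main()
```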
Looking Ahead
The New York AI Companion Safeguard Law is likely the first of many regulations shaping how patient-facing AI evolves. At Puppeteer AI, we believe safety is not a feature; it is infrastructure. By building safety into our healthcare agent technology from the start, we help clinics stay ahead of emerging regulation while delivering real value to patients.
References
Yoo, J., & Mitrani, A. (2025, November 11). New York’s AI Companion Safeguard Law takes effect. Fenwick.
Day, F. (2025, November 10). New York enacts first-in-the-nation AI safety law to protect users from digital harm. CBS6 Albany.
Shah, A. B., Green, F. M., Boiani, J. A., & Chung, E. T. (2025, October 28). Novel AI laws target companion AI and mental health. Health Law Advisor.
Let’s build your next care agent together
Get a 20-minute call with our team to explore how Puppeteer AI can support your clinical workflows with custom AI agents.
