
SPRINGFIELD – As “companion” chatbots — AI-powered software applications designed to simulate human conversation through text or voice — grow in popularity, concerns are rising about their impact on mental health. Studies show young people who rely heavily on AI for emotional support are more likely to report social isolation and dependency behaviors, leading State Senator Mary Edly-Allen to advance new legislation to address those risks.

“It took Facebook 10 years to reach 100 million users. It took ChatGPT just two months to reach the same,” said Edly-Allen (D-Grayslake). “The risks are not just about what children see on a screen. The risks are about systems that can interact with them, influence them, and, in some cases, replace human connection with something that feels real, but is not accountable, not fully understood, and not always safe.”

Senate Bill 3262 would prohibit manipulative or deceptive features in AI companion products and require clear disclosures that users are interacting with artificial intelligence.

The proposed legislation also would mandate crisis intervention protocols, regular independent audits, and public reporting on safety measures.

“AI companion products are uniquely engineered to be emotionally resonant, making them a powerful tool for connection but also a dangerous vehicle for manipulation,” said Steve Wimmer, Senior Technical and Policy Advisor for the Transparency Coalition. “SB 3262 establishes essential guardrails by prohibiting deceptive design and requiring clear disclosures so that children and families know exactly when they are interacting with an algorithm. This isn't about stopping innovation; it’s about ensuring that as these systems become more lifelike, they also become more accountable. Transparency is the only way to ensure 'companionship' doesn't lead to exploitation."

“We are starting from a failure to protect the most vulnerable in the last wave of technology, and from clear warnings that this next wave could amplify those harms,” said Edly-Allen. “Companies are moving quickly to deploy increasingly powerful AI systems, driven by incentives for speed, scale, and market dominance. However, the incentives to ensure safety, especially for our children, are not keeping pace.”

Senate Bill 3262 was heard in a subject matter hearing in the Senate Social Media and AI Subcommittee last week and awaits further consideration.