Opinion: Don’t let AI chatbots pretend to be doctors and lawyers

If a person falsely claims to be a licensed professional, they can be held liable for providing dangerous advice. The same standard should apply to AI chatbots.

Despite what it might claim, the AI chatbot in your phone is not a real doctor. Olga Budrina via Getty Images

Half of U.S. adults are more concerned than excited about the increased use of AI in daily life, according to a Pew Research Center poll released last month. That’s with good reason. Almost every day, news stories document real harm caused by generative AI-powered chatbots. From chatbot-assisted teen suicides to mass shootings, these stories show that whatever benefits chatbots can bring society, the risks can be a matter of life or death.

Today, no regulation under New York state law explicitly holds these chatbots liable. In the case of teenager Adam Raine, ChatGPT coached him on how to make and hide a noose before he took his own life. I met his mother recently. While mourning the loss of her son, she has spent the last year advocating for consumer protections on these chatbots so that no family ever experiences the same tragedy. It is clear that advice from chatbots like ChatGPT can be dangerous, but equally concerning are the documented cases in which these chatbots have presented advice as a licensed professional.

I believe that we can build a future that protects families like the Raines while allowing innovation to continue. That is why I’ve introduced a package of bills to regulate chatbots. One of the bills in this package is S7263, which recently passed out of the Internet & Technology committee. The legislation would create liability for companies whose chatbots impersonate a licensed professional. It is already illegal for humans to practice high-risk professions without a license, and it is a crime to pretend to have one. If someone impersonated a doctor and gave advice that made someone sick, they could be held criminally liable. The same standard should apply to AI chatbots.

A chatbot shouldn’t claim to be a doctor, lawyer or any other licensed professional, and if it does, you should have the right to seek damages if it gives you bad advice. This legislation would not prohibit a user from asking a chatbot questions or receiving general information and advice, as long as the chatbot does not present that information as a licensed professional. A chatbot could still provide advice in any scenario – legal, medical, therapeutic or otherwise. Just as we give each other advice in real life without breaking the law, chatbots can continue to do the same.

Unfortunately, disclaimers are proving insufficient to prevent misinformation from chatbots impersonating licensed professionals. In November 2025, NBC4 in Washington, D.C., asked Character.AI for medical advice. Despite showing a disclaimer that it was not a real person or a licensed professional, the chatbot went on to provide medical advice and claim it was a real doctor licensed by the American Board of Psychiatry and Neurology in California – even going so far as to give a fake name attached to a real California doctor’s license number.

As generative artificial intelligence continues to develop, common-sense guardrails are necessary to protect users from unintended negative consequences, misinformation and the potential for fraud and scams. Regulation is required so that this technology can benefit everyone without causing undue harm. 

Not all advice from a chatbot is bad or wrong. In many cases, it can be very helpful for conducting research on general topics, including in the legal and medical fields. Users frequently turn to chatbots for advice when they don’t have access to licensed professionals, often because those services are too expensive or in too short supply. Like many others, I believe in the promise of artificial intelligence to democratize access to information. But the best way to achieve that goal isn’t to leave these tools unregulated. Truly helping the most marginalized requires deep investment in closing the digital divide, increased digital literacy and strong consumer protections for every New Yorker.

Kristen Gonzalez is a state senator representing Senate District 59 in Queens, Brooklyn and Manhattan. She is the chair of the Senate Committee on Internet & Technology.
