Policy
Here’s how NY plans to regulate kids’ use of social media
The state attorney general’s office released draft rules around age verification and prohibitions on showing algorithmic feeds to minors without parental permission.

State Attorney General Letitia James speaks about the SAFE for Kids Act on Oct. 11, 2023. Lev Radin/Pacific Press/LightRocket via Getty Images
New York is inching closer to fully enacting a 2024 law meant to protect kids online by restricting their access to addictive algorithmic feeds and nighttime social media notifications. On Monday, over a year after her office first began accepting comments, state Attorney General Letitia James released a draft set of regulations on how the state will enforce the law, including rules for confirming a user’s age.
Gov. Kathy Hochul signed the SAFE for Kids Act last June after strong advocacy from both her and James, who helped draft the law. It requires children under 18 to obtain parental or guardian consent before accessing algorithmic feeds and overnight notifications from social media. Absent that consent, social media companies will need to prevent minors from using those features.
The law left many of the details around enforcement up to the attorney general’s office, like coming up with effective ways for companies to age-gate users. Those rules in particular raised concerns from privacy advocates who warned about risks involved with providing personal information required to determine a person’s age.
The draft regulations from James’ office would give social media companies fairly significant leeway in choosing the methods to verify someone’s age, so long as they choose an existing one proven to be effective (i.e., not simply asking “Are you over 18?”). That can include uploading an image or video, using an email or phone number to cross-check data that would indicate age, or using a government ID. To help assuage privacy concerns, the draft regulations state that companies must offer at least one alternative method for users to pick if the company opts to ask for an ID. Social media companies would also be required to immediately delete information gathered for age verification or parental consent after its intended use.
Under the proposed rules, social media companies cannot proactively request parental consent for access to algorithmic feeds or nighttime notifications. They must first receive approval from the child before making any requests of their guardians.
The regulations would apply to social media companies that host user-generated content and on which at least 20 percent of a user's time on the app or website is spent on the feed. TikTok, for example, would easily fit this description, but a site like Goodreads probably would not.
Research has shown that social media in general, and algorithmic feeds in particular, which surface content deemed relevant to users rather than only showing posts from accounts they follow in chronological order, can be addictive and contribute to negative mental health outcomes in children. “Children and teenagers are struggling with high rates of anxiety and depression because of addictive features on social media platforms,” James said in a statement. “The proposed rules released by my office today will help us tackle the youth mental health crisis and make social media safer for kids and families.”
The SAFE for Kids Act is the first of its kind in the country, though other states have enacted other forms of online age-verification laws. That means New York will serve as a testing ground both for the effectiveness of the statute and for the attorney general's ability to ensure compliance. The rulemaking process is lengthy, with comments on the proposed regulations open until Dec. 1. Once the public comment period closes, James' office will have another year to release the final rules, and the law will formally take effect 180 days after that — meaning New York may need to wait until mid-2027 to see the impacts of the new law.