New legislative measures in New York are putting artificial intelligence (AI) chatbots, which increasingly serve as conversational companions for many people, under closer scrutiny. The new laws require technology companies operating in the state to clearly disclose to users that these AI companions are artificial, not human. The laws also address users who express self-harm or suicidal thoughts, requiring that such expressions be redirected to mental health crisis lines.
The legislation also criminalizes using AI to manipulate minors' images or likenesses into sexually explicit content. The state's leadership worked with lawmakers to fold these measures into this year's state budget. The aim is to strengthen the existing rules governing AI-powered platforms and to counter the growing spread of AI-generated content that can harm or exploit minors.
The newly enacted measures were included in the state budget that lawmakers passed on Thursday, after its April 1 due date. According to a member of the chamber's Committee on Science and Technology, the fast-changing technology landscape demands stricter rules. They noted that while some firms have put protective measures in place, others have shown little focus or diligence in identifying and managing the risks of these rapidly developing technologies.
Under the new rules, AI chatbot applications must display a disclaimer at the start of any interaction with an AI bot and repeat it every three hours of conversation, stating that the AI companion is a programmed entity and not a human. These applications typically let users create their own characters for interaction or role-play with already established characters.
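The statute does not prescribe how the three-hour cadence must be tracked. Purely as an illustration, and with every name below hypothetical, the following Python sketch shows one way an application might time the required disclaimer:

```python
import time

DISCLAIMER = (
    "Reminder: you are chatting with an AI companion, not a human being."
)
REPEAT_INTERVAL_SECONDS = 3 * 60 * 60  # re-show every three hours of interaction


class CompanionSession:
    """Tracks when the AI-companion disclaimer was last shown to a user."""

    def __init__(self) -> None:
        # No disclaimer shown yet: it must appear at the start of the session.
        self._last_shown: float | None = None

    def messages_to_send(self, bot_reply: str) -> list[str]:
        """Prepend the disclaimer at session start and every three hours."""
        now = time.monotonic()
        out: list[str] = []
        if self._last_shown is None or now - self._last_shown >= REPEAT_INTERVAL_SECONDS:
            out.append(DISCLAIMER)
            self._last_shown = now
        out.append(bot_reply)
        return out
```

A real product would persist the timestamp across sessions and devices; this sketch only illustrates the required behavior, not a compliant implementation.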
The mandatory disclaimer stating that the AI entity is not human takes effect in 180 days. After that, oversight and enforcement fall to the New York Attorney General's Office, and companies that fail to comply with the disclaimer rules may face fines.
Proceeds from those fines are earmarked for a newly established statewide suicide prevention network, one of several initiatives proposed in this year's budget. The arrangement underscores how lightly AI companion software has been regulated until now, a gap that has led to disturbing situations in which minors become emotionally attached to an AI bot.
Nevertheless, several tech industry leaders have championed AI companion technology, arguing that it could help address what some describe as a 'loneliness epidemic.' Despite the growing reliance on these AI companions, it is worth remembering that these platforms are still in their infancy.
That growing reliance has also led users to confide their mental health struggles to a chatbot, in conversations ranging from loneliness to suicidal thoughts or self-harm. The new legislation recognizes this risk and obliges tech companies to build a mechanism that triggers an automatic response from the chatbot when signs of suicidal ideation are detected.
In such instances, the chatbot must automatically direct the user to 988, the national Suicide & Crisis Lifeline, or to other available crisis support networks. Failure to implement this mandatory referral mechanism can also lead to penalties, and those fines, too, would flow into the state's new suicide prevention fund.
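The law does not specify how such expressions must be detected. As a rough, hypothetical illustration only (a real system would rely on a trained classifier rather than a keyword list, and every name here is an assumption), a referral trigger might look like this:

```python
import re

# Illustrative only: crude keyword patterns standing in for a real classifier.
CRISIS_PATTERNS = [
    re.compile(p, re.IGNORECASE)
    for p in (r"\bkill myself\b", r"\bsuicid\w*\b", r"\bself[- ]harm\w*\b")
]

CRISIS_REFERRAL = (
    "If you are thinking about harming yourself, please call or text 988, "
    "the Suicide & Crisis Lifeline, to reach a trained counselor."
)


def respond(user_message: str, bot_reply: str) -> str:
    """Return the crisis referral instead of the normal reply when the
    user's message matches a crisis pattern."""
    if any(p.search(user_message) for p in CRISIS_PATTERNS):
        return CRISIS_REFERRAL
    return bot_reply
```

In practice the detection step is the hard part; the referral text and routing shown here merely sketch the behavior the statute appears to require.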
Lawmakers are also tackling the rise of 'deepfakes' featuring minors. The updated legislation criminalizes creating digital forgeries that depict minors in abusive, sexually explicit contexts. Existing law already prohibits sexually explicit content involving minors, but it left a loophole around content generated with AI.
Under the new regulations, AI-generated explicit deepfake content depicting children is now expressly illegal. The measure followed a spike in cases nationwide in which images of minors were digitally altered and inserted into disturbing explicit content.
With these legislative updates, New York has taken an active stand against unethical uses of AI and moved to safeguard its residents, particularly vulnerable groups such as minors. Just as importantly, the measures make AI companies accountable for the services they provide, pushing toward a more ethical and better-regulated technology landscape.
These laws reflect a deepening understanding of AI's impact on society and the principle that regulation must evolve alongside rapidly advancing technology. The unpredictability of these systems and their effects on users, especially minors, calls for stringent rules that act as a safeguard while carving out a safer digital environment.