Users can trick ChatGPT into writing code for malware by entering a prompt that makes the artificial intelligence chatbot respond as if it were in developer mode, Japanese cybersecurity experts said Thursday.

The discovery highlights how easily the safeguards that developers put in place to prevent criminal and unethical use of the tool can be circumvented.

Amid growing concern that AI chatbots will fuel crime and social fragmentation, calls are mounting for discussions on appropriate regulation at the Group of Seven summit in Hiroshima next month and in other international forums.