Chatbots that can hold a conversation in natural language, grasp a user’s basic intent and offer responses based on preset rules and data are nothing new.

But the capabilities of such chatbots have expanded dramatically in recent months, prompting hand-wringing and panic in many circles.

Much has been said about chatbots auguring the end of the traditional student essay. But an issue that warrants closer attention is how chatbots should respond when human interlocutors use aggressive, sexist or racist remarks to prompt the bot into voicing its own foul-mouthed fantasies in return. Should AIs be programmed to answer in the same register as the questions posed to them?