Safer Chatbots

6 steps to include safeguarding measures in chatbots for children and young people

About

More and more children and young people use automated messaging-based services like chatbots to get information and advice. They may also use them to seek urgent help or to disclose personal, sometimes life-threatening situations, even when these chatbots are not designed to provide such support. Most chatbots have not been set up to detect and respond to users in distress, which increases the risk of further harm.

This Safer Chatbots summary guide outlines 6 steps for providers of chatbots that are not powered by AI: programming the detection of individual keywords indicative of a high-risk situation, deploying a global ‘safe word’ that users can type at any time, and preparing a series of compassionate response messages along with contact details for reliable referral services.
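To make that flow concrete, the sketch below shows how a simple rule-based chatbot might combine these three measures: checking first for the global safe word, then scanning for high-risk keywords, and replying with a compassionate message and a referral contact. This is a minimal illustration in Python, not part of the guide itself; the keyword list, safe word, and message wording are hypothetical placeholders, and any real deployment should use localized keyword lists and referral contacts developed with child-safeguarding experts.

```python
# Minimal rule-based safeguarding check: safe word first, then keyword scan.
# All values below are illustrative placeholders, not recommendations.

HIGH_RISK_KEYWORDS = {"suicide", "abuse", "hurt myself"}  # hypothetical examples
SAFE_WORD = "help"                                        # hypothetical safe word
REFERRAL_MESSAGE = (
    "You are not alone. You can talk to a trained counsellor any time "
    "at [local child helpline number]."                   # placeholder referral
)


def safeguarding_response(user_message: str) -> str | None:
    """Return a safeguarding reply if the message warrants one, else None."""
    text = user_message.lower().strip()

    # The global safe word interrupts any flow, wherever the user is.
    if text == SAFE_WORD:
        return "It sounds like you may need support right now. " + REFERRAL_MESSAGE

    # Detect individual high-risk keywords with simple substring matching.
    if any(keyword in text for keyword in HIGH_RISK_KEYWORDS):
        return (
            "Thank you for telling me. What you are going through matters. "
            + REFERRAL_MESSAGE
        )

    return None  # no risk detected; continue the normal chatbot flow
```

Checking the safe word before any other logic means a user can always reach support, regardless of where they are in a conversation flow.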

For the full technical guidelines, which also include tried-and-tested blueprints for chatbots with higher levels of sophistication, including AI, refer to the Safer Chatbots Implementation Guide.

6 steps to make your chatbot safer for children & young people
Author(s)
UNICEF
