
Introducing 10 Responsible Chatbot Usage Principles  

By Guest Writer on January 3, 2024


When a new technology is introduced in healthcare, it invites meticulous scrutiny. Conversational AI is no exception.

Exchanging sensitive health information with conversational AI chatbots is one of many governance challenges that require careful consideration in order to promote the responsible use of chatbots in healthcare. Other challenges include performance assurance, patient considerations, legality, and privacy and security, as well as classic artificial intelligence challenges such as fairness and explainability.

The World Economic Forum assembled a multistakeholder community to address these governance challenges, co-creating the Chatbots RESET framework for governing the responsible use of Conversational AI in healthcare.

The Chatbots RESET framework consists of two parts:

  1. A set of 10 principles carefully selected from AI ethics and healthcare ethics principles and interpreted within the context of the use of chatbots in healthcare; and
  2. Operationalization actions for each principle, in the form of recommendations to implement at various stages of chatbot deployment in healthcare.

The framework is an actionable guide for three groups of stakeholders to promote the responsible use of chatbots in healthcare applications: technology developers, healthcare providers and government regulators.

10 Responsible Chatbot Usage Principles

The 10 Chatbots RESET principles were drawn from both AI ethics and healthcare ethics principles, then interpreted and curated by the project’s multistakeholder community for the use of chatbots in healthcare. An illustrative sketch of how a technology developer might put some of these principles into practice follows the list.

1. Safety/Non-maleficence

  • The actions of chatbots shall not result in avoidable harm to humans or other unintended consequences, including deception, addiction and lack of respect for diversity

2. Efficacy

  • Chatbots shall be fully verified for the efficacy of their purported service, in compliance with accepted international standards
  • Chatbot outputs shall be tailored to their intended users, while keeping in mind the medical nature of the information that is being communicated

3. Data protection

  • All data and history of interactions, including intended and unintended revelations of private data and those collected with consent, shall be safeguarded and disposed of properly, respecting applicable privacy and data protection regulations/laws
  • If any data is recorded during a session and/or used across sessions, the chatbot user’s consent and/or any applicable ethics body approvals for research and data collection purposes shall be required
  • Chatbot users shall have the right and the access to take ownership of their personally identifiable information
  • Data collected by chatbots shall not be used for surveillance or punitive purposes, or to unfairly and opaquely deny healthcare coverage to users

4. Human agency

  • Chatbots shall support the user’s agency, foster fundamental rights and allow for human oversight
  • Chatbots shall respect the ability of patients to make their own decisions about healthcare interventions
  • Chatbots whose operating model includes real-time human oversight shall yield to the desire of the user to interact with a human agent at any time the user wishes to do so

5. Accountability

  • An entity (person or group) in the organization shall be accountable for the governance of chatbots
  • Conclusions and recommendations of chatbots shall be auditable

6. Transparency

  • Chatbot users shall at all times be made aware of whether they are interacting with an AI or a human or a combination of the two
  • Chatbots shall clearly inform users about the limits of performance of the system, except in situations where not informing is required for the intended purpose of the chatbot
  • Chatbot users shall be immediately informed if the chatbot is unable to understand the user or is unable to respond with certainty, except in situations where such communication interferes with the intended purpose of the chatbot

7. Fairness

  • Chatbots shall not act in a systematically prejudiced manner with respect to ethnicity, geography, language, age, gender, religion, etc.
  • If a chatbot “learns” from data, the training dataset should be representative of the target population

8. Explainability

  • Decisions and recommendations made by chatbots shall be explainable in a way that can be understood by their intended users

9. Integrity

  • Chatbots shall limit their reasoning and responses to those that are based on reliable, high-quality evidence/data, ethically sourced data and data collected for a clearly defined purpose

10. Inclusiveness

  • Every effort shall be undertaken to make chatbots accessible to all intended users, with special consideration given to identifying and enabling access for potentially excluded or vulnerable groups
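
For technology developers, several of these principles translate directly into design decisions. Below is a minimal, hypothetical sketch, not part of the Chatbots RESET framework itself, of how a chatbot turn handler might operationalize the transparency and human agency principles: disclosing up front that the user is talking to an AI, flagging responses the system cannot give with certainty, and yielding to a request for a human agent. All names, phrases and thresholds are illustrative assumptions.

    # Hypothetical sketch: operationalizing principles 4 (human agency) and
    # 6 (transparency) in a chatbot turn handler. Names and thresholds are illustrative.
    from dataclasses import dataclass

    HANDOFF_PHRASES = {"talk to a human", "speak to a person", "human agent"}
    CONFIDENCE_THRESHOLD = 0.6  # assumed cut-off for "unable to respond with certainty"

    @dataclass
    class BotReply:
        text: str
        escalate_to_human: bool = False

    def start_session() -> str:
        # Principle 6: users are told up front that they are interacting with an AI.
        return ("Hi, I'm an automated health assistant, not a human clinician. "
                "You can say 'talk to a human' at any time.")

    def handle_turn(user_message: str, draft_answer: str, confidence: float) -> BotReply:
        # Principle 4: yield to the user's request for a human agent at any time.
        if any(phrase in user_message.lower() for phrase in HANDOFF_PHRASES):
            return BotReply("Connecting you with a human agent now.", escalate_to_human=True)
        # Principle 6: be explicit when the system cannot respond with certainty.
        if confidence < CONFIDENCE_THRESHOLD:
            return BotReply("I'm not sure I understood that. Could you rephrase, "
                            "or would you like to speak with a human agent?")
        return BotReply(draft_answer)

In a real deployment, the handoff keywords, the confidence signal and the escalation path would come from the chatbot platform in use; the point of the sketch is simply that the principles can be expressed as concrete, testable behaviours.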

Filed Under: Healthcare, Reports

Written by
This Guest Post is an ICTworks community knowledge-sharing effort. We actively solicit original content and search for and re-publish quality ICT-related posts we find online. Please suggest a post (even your own) to add to our collective insight.
