Professional Standards

Artificial Intelligence

Interim guidance from the College clarifies physician obligations on a particular matter where an official standard may not yet exist. With the emergence of artificial intelligence (AI), the College has identified that a professional standard is needed to provide guidance on its use in a physician’s practice. The following interim guidance can be used until the College develops an AI standard.

The impact of AI on quality medical care and services is rapidly evolving, and because of this speed of change, there is limited research-based evidence to guide regulatory policy. The College will continue to monitor developments in this field, communicate them to registrants, and update this guidance as more information becomes available.

 

Preamble

Health Canada defines AI as a broad term for a category of algorithms and models that perform tasks and exhibit behaviours such as learning, decision-making, and prediction. Machine learning is a form of AI in which training algorithms establish models from data, rather than the models being explicitly programmed. Within this document, any reference to AI encompasses machine learning.

Health Canada currently acts as the oversight body for medical devices and systems that use AI. The federal government is currently working on legislation to offer a balanced approach to AI regulation. Because it is unclear when this legislation will be adopted or come into effect, a Voluntary Code of Conduct on the Responsible Development and Management of Advanced Generative AI Systems was introduced in September 2023. This code temporarily provides Canadian companies with common standards and enables them to demonstrate, voluntarily, that they are developing and using generative AI systems responsibly until formal regulation is in effect.

In 2023, Health Canada published Draft guidance: Pre-market guidance for machine learning-enabled medical devices. Its purpose is to outline supporting information for manufacturers to consider when demonstrating the safety and effectiveness of a machine learning-enabled medical device, whether applying for or amending a Class II, III, or IV medical device licence or at any other point in the device lifecycle. Physicians in New Brunswick must use only Health Canada-approved machine learning-enabled medical devices and must understand the device classification, the level of evidence supporting the device’s use in clinical practice, and its limitations.

Physicians should be aware that publicly available AI-enabled tools, such as ChatGPT and Bard AI, are not recommended by Health Canada for use in medical practice.

This document seeks to fill current gaps and provide guidance for safe and appropriate use of AI in medicine. 

 

College’s Position

AI has become more prominent, prompting healthcare professionals to explore ways of incorporating it into everyday practice. AI has the potential to assist healthcare providers with elements of care, such as diagnosis, creating treatment plans, and writing patient communications. However, physicians must use AI with the same caution as any other technology to ensure patient safety and well-being. Registrants are expected to always act in the best interest of the patient and to ensure that the use of AI in medical care meets the professional standards of practice, the requirements of the Code of Ethics, and privacy obligations.

When using AI in their medical practice, physicians are expected to adhere to the following broad principles, which are currently set out in Regulation #9 – Professional Misconduct and the Code of Ethics:

  1. Transparency: Physicians using AI in their practice must be transparent about the extent to which they intend to use such tools to make decisions. They must explain to the patient how the tools work and what their limitations are.
  2. Privacy, confidentiality and consent: Physicians must ensure that patient privacy and confidentiality are maintained when using AI. Personal patient data must be securely stored, accessed, and transmitted. The AI tool must comply with applicable privacy and security laws and regulations, as well as any relevant College standards. Personal patient data must not be transferred from the clinical environment at which care is provided without patient consent, unless required or permitted by law. When seeking patient consent to transfer personal patient data, the physician must explain the nature of the AI being used and the potential benefits, limitations, and risks associated with its use. Many AI tools, such as ChatGPT, do not currently comply with privacy and security regulations. Therefore, patient data, identifying data, or Personal Health Information should never be input into these tools.

  3. Accuracy and reliability: Physicians are responsible for ensuring that responses generated through AI are accurate and reliable. AI may appear to generate accurate and reliable responses; however, they can be partially or completely inaccurate, which can lead to poor decision-making in patient care if relied upon without critical thinking. Physicians must always review AI-generated responses to ensure the information is accurate and reliable and to ensure consistently good patient care.

  4. Interpretability: AI tools can generate results that are difficult to interpret and replicate. When using AI tools, physicians must be capable of interpreting the clinical appropriateness of a result and of exercising clinical judgement.

  5. Bias: Physicians must be mindful of the bias inherent in AI tools and critically analyze AI-driven results or recommendations through an equity, diversity and inclusion (EDI) lens. An EDI lens considers an individual’s unique needs, circumstances, and lived experiences, and may require alternative approaches for interpreting or delivering information.

  6. Monitoring and oversight: If employees in a physician’s practice are using AI, the physician is responsible for ensuring that the above principles are adhered to, and that the AI tool is suited for its intended use.

 

Conclusion

AI has the potential to assist physicians in patient care; however, as with any advancement in technology, discretion must be exercised in how these tools are used. Physicians are expected to use AI in a safe and responsible manner that meets the College’s standards and Code of Ethics. Physicians are expected to provide medical care based on objective evidence and sound medical judgement, using AI to complement, not replace, their own expertise.

 

Acknowledgements

The College acknowledges the assistance of the College of Physicians and Surgeons of Alberta, the College of Physicians and Surgeons of British Columbia, and the College of Physicians and Surgeons of Newfoundland and Labrador in preparing this document.