Artificial intelligence (AI) is defined as the “capability of a computer to mimic human cognitive functions such as learning and problem-solving.”1 There are many examples of the potential uses of AI technology in medicine, both “behind the scenes” and at the bedside.2,3
While it is unlikely that AI will replace clinicians completely, it will become increasingly important for clinicians to keep up with standards of care that emerge as a result of this new technology. It is simply impractical for a clinician with a busy practice to stay on top of the explosive quantity of new data and research that is constantly being published.4 AI systems can be designed precisely to perform this function. Moreover, there is a seemingly endless (and growing) list of administrative tasks for which a clinician is responsible. AI technologies can potentially alleviate this burden and empower the clinician to spend more time where it matters: with their patients.2
Although the potential benefits of AI in health care have been widely theorized, the practical and ethical concerns have been less well-characterized. Discussed below are important considerations involving patient privacy (ie, HIPAA concerns) as well as the ethical use of AI in daily clinical practice.
There is a fundamental tension inherent in the use of AI: machine learning (ML) algorithms are only as robust as the datasets that power them. AI systems can learn from5:
• Electronic health record (EHR) data
• Genomic databases
• Google search inquiries for specific symptoms
• Digitized pharmaceutical records
• Smartphone applications such as menstrual cycle trackers
• Real-time health data available from the internet of things (IoT)
• Devices such as wearable activity, step, or health trackers
But who owns this data? How does one balance the potential benefits of innovation with the human right to privacy? How does one know when a privacy violation has occurred?
The major US federal law that has governed the protection of health data since 1996 is the Health Insurance Portability and Accountability Act (HIPAA). The law prohibits ‘covered entities’ (namely, health care providers and insurance companies) from engaging in unauthorized use or disclosure of protected health information (PHI). While PHI may be used for purposes such as direct patient care, quality improvement, and billing, the use of PHI for AI research and development is not authorized under HIPAA without institutional review board (IRB) approval or waiver, or explicit patient authorization. However, there are many instances where patient datasets collected by a health system have been used for AI development after undergoing a ‘deidentification’ process, during which each patient record is stripped of the 18 patient identifiers specified by HIPAA (names, birthdates, email addresses, etc).6,7
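The deidentification step described above can be sketched in a few lines of code. This is a minimal, hypothetical illustration only: the field names below are invented for the example and cover just a handful of the 18 HIPAA identifier categories, not the full list.

```python
# Hypothetical sketch of HIPAA "Safe Harbor"-style deidentification:
# strip a patient record of direct identifiers before the data are used
# for AI development. The field names are illustrative, not a complete
# enumeration of the 18 HIPAA identifier categories.
IDENTIFIER_FIELDS = {
    "name", "birthdate", "email", "phone",
    "ssn", "address", "medical_record_number",
}

def deidentify(record: dict) -> dict:
    """Return a copy of the record with identifier fields removed."""
    return {k: v for k, v in record.items() if k not in IDENTIFIER_FIELDS}

record = {
    "name": "Jane Doe",
    "birthdate": "1980-04-12",
    "email": "jane@example.com",
    "diagnosis": "stage 2 pressure injury",
    "wound_area_cm2": 4.5,
}
clean = deidentify(record)
# clean now retains only the clinical fields (diagnosis, wound_area_cm2)
```

In practice, real deidentification pipelines must also handle identifiers embedded in free-text notes and dates, which is considerably harder than dropping named fields.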
What lawmakers in 1996 did not anticipate, however, was that health data would eventually be derived from multiple sources outside of health care systems themselves. Unfortunately, in the modern era, it is possible to triangulate deidentified data with outside third-party databases, effectively “reidentifying” the dataset by connecting it with a unique individual’s identity.7 Given this reality, updated legislation and policies are urgently needed.7
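The triangulation risk can be made concrete with a toy example. All names, locations, and records below are fabricated for illustration; the point is only that quasi-identifiers (ZIP code, birth year, sex) shared between a “deidentified” clinical dataset and an outside named database, such as a public registry, can uniquely link a diagnosis back to a person.

```python
# Toy illustration (entirely hypothetical data) of a linkage attack:
# a "deidentified" clinical dataset still carries quasi-identifiers
# that can be matched against an outside database containing names.
deidentified = [
    {"zip": "15213", "birth_year": 1962, "sex": "F", "diagnosis": "diabetes"},
    {"zip": "15213", "birth_year": 1985, "sex": "M", "diagnosis": "asthma"},
]

public_registry = [  # e.g., a registry that lists names alongside demographics
    {"name": "Alice Smith", "zip": "15213", "birth_year": 1962, "sex": "F"},
    {"name": "Bob Jones", "zip": "15213", "birth_year": 1985, "sex": "M"},
]

def reidentify(records, registry):
    """Link clinical records to named individuals via quasi-identifiers."""
    matches = []
    for r in records:
        hits = [p for p in registry
                if (p["zip"], p["birth_year"], p["sex"]) ==
                   (r["zip"], r["birth_year"], r["sex"])]
        if len(hits) == 1:  # a unique demographic match exposes the patient
            matches.append((hits[0]["name"], r["diagnosis"]))
    return matches

matches = reidentify(deidentified, public_registry)
# each match pairs a name with a diagnosis the "deidentified" data was
# supposed to protect
```

The smaller the population sharing a given combination of quasi-identifiers, the more likely a unique (and therefore identifying) match becomes.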
Towards this end, in January 2021 the Food and Drug Administration (FDA) created an Artificial Intelligence/Machine Learning (AI/ML)-Based Software as a Medical Device (SaMD) Action Plan.8 Incorporating feedback from community workshops, peer-reviewed publications, and marketing submissions, the FDA identified several near-term goals for regulating the development of AI in health care.8
Inevitably, there will be ethical concerns that clinicians must be mindful of while utilizing AI technology in their practice. Here are 3 considerations that will be nearly universal2:
The “black box” dilemma
While current AI systems (such as deep neural networks) can teach themselves to recognize patterns, at times it can be difficult for health care providers to discern why a particular recommendation is made. As clinicians are ultimately responsible for patient care decisions, clinicians must demand transparent, stepwise illustrations of the clinical reasoning process of various AI applications.
An example involves UK researchers who investigated the use of an AI algorithm to predict which pneumonia patients were less likely to die and thus could safely be treated in an outpatient setting. The algorithm learned that a history of asthma was associated with a lower risk of mortality, and thus the AI system (incorrectly) recommended outpatient treatment for asthma patients. However, the reason for the lower mortality is that asthma patients tend to be treated more aggressively for pneumonia, and are often admitted to the ICU, where they receive a higher level of care. This essential context is why the reasoning process of any AI system must be completely transparent to clinicians.2,9
Dataset bias
As mentioned previously, AI algorithms are only as powerful as the datasets used to train them. As such, datasets must be representative of the human populations they are intended to serve. For example, many dermatology datasets contain images of skin lesions from predominantly Asian or Caucasian patients. Such datasets could introduce bias and inaccuracies into the algorithm when attempting to apply AI technology to diagnose patients of other ethnicities.2,3,7,10
Automation bias
Automation bias occurs when clinicians place more trust in the diagnostic capacity of technology than in their own clinical judgment. Clinicians must remain wary of “rubber stamping” a recommendation made by an algorithm, as the clinician is ultimately responsible for the individual patient under their care. While AI can be leveraged to reduce medical error and maximize treatment effectiveness, clinicians must safeguard against cognitive dependency and atrophy of their own clinical skills.2
With the rise of AI technology in medicine, clinicians and patients alike should be informed of modern-day privacy concerns and demand updated policy and legislation, particularly involving data ownership and access. AI will largely augment clinicians’ abilities to provide quality patient care. However, there are several ethical concerns that every clinician should carefully consider before implementing AI technology in their practice.
The views and opinions expressed in this blog are solely those of the author, and do not represent the views of WoundSource, HMP Global, its affiliates, or subsidiary companies.