Considerations related to the use of AI in healthcare

By Keith A. Hovan

“Artificial Intelligence (AI) has been increasingly utilized in healthcare delivery, from assisting in medical diagnosis to improving patient outcomes. While AI technology brings many benefits, it also raises ethical concerns, particularly in the areas of privacy, bias, and accountability. The potential misuse of AI in healthcare delivery can have serious consequences, including patient harm and violations of individual rights. In this blog post, we will explore the ethical considerations of using AI in healthcare delivery and discuss potential solutions to mitigate these risks. It is important that we engage in a thoughtful and informed conversation on this topic, as the decisions made now will have a profound impact on the future of healthcare delivery.”

The opening paragraph above was not written by me. It is OpenAI's response to my request that it write an introduction to a blog post on the ethical considerations of using AI in healthcare delivery. It's not too bad, is it?

Artificial intelligence (AI) seems to be the topic on everyone's mind, and healthcare is no exception. As my AI-written introduction states, I will be discussing several considerations healthcare leaders and providers need to ponder as we navigate the introduction of this new and rapidly evolving technology into healthcare delivery.

Often, technological terms come onto the scene and enter our day-to-day parlance so quickly that we skip the important step of defining them. So first, I will pause to explain what I mean by AI in this context. AI here refers to computer programs, most often built through machine learning, that are given massive amounts of data (e.g., text, images, numbers, video, anything that can be encoded) and then analyze and draw inferences from the patterns within that data. To use an example from healthcare, oncologists could load millions of pictures of normal and abnormal skin onto a computer. The program would then learn the image patterns that distinguish cancerous skin lesions from benign ones and use them to identify suspicious lesions in new images. In seconds, AI could do what would take a human specialist countless hours, freeing specialists to care for their patients. Examples like this are only the tip of the iceberg.
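To make this concrete, here is a minimal sketch of the kind of supervised pattern learning described above. It uses scikit-learn's built-in breast-cancer dataset (tabular tumor measurements standing in for the skin images in my example, but the same learn-from-labeled-examples principle), and it is an illustration, not a clinical tool.

```python
# A minimal sketch of supervised pattern learning. Tabular tumor
# measurements stand in for the skin-lesion images in the example above;
# the principle is the same: learn from labeled examples, then classify
# new, unseen cases.
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

# Load labeled examples: each row describes one tumor, labeled benign or malignant.
data = load_breast_cancer()
X_train, X_test, y_train, y_test = train_test_split(
    data.data, data.target, test_size=0.2, random_state=42
)

# "Feed it large amounts of data and let it look for patterns":
# the model learns which measurement patterns distinguish the two classes.
model = RandomForestClassifier(n_estimators=100, random_state=42)
model.fit(X_train, y_train)

# Classify previously unseen cases in a fraction of a second.
predictions = model.predict(X_test)
print(f"Accuracy on held-out cases: {accuracy_score(y_test, predictions):.2%}")
```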

However, while AI can find patterns and accrue and arrange coherent data, it is limited by its inability to synthesize and evaluate the quality of that data, form an argument, and make decisions. Machine learning is not going to be a panacea for all problems, and it has great potential to create new ones. There are many considerations that healthcare leaders must weigh as they evaluate AI in their particular contexts.

The need for interpretation

AI technologies currently fall into two categories: generative and general purpose. Generative AI creates text or images based on a series of previous examples; systems trained on large amounts of data produce an output by predicting what should come next. General-purpose AI, by contrast, is not built for a specific application and does not perform at the level of a purpose-built system. ChatGPT, for example, is optimized for dialogue but is not well suited to solving complex computations.
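As a toy illustration of that predict-from-previous-examples idea, the sketch below builds a bigram model: it learns which word tends to follow which in a tiny training corpus, then generates new text by repeated prediction. Real generative systems use neural networks trained on vastly more data, but the core pattern is the same.

```python
# A toy illustration of generative AI's core pattern: learn which word
# tends to follow which in the training examples, then generate output
# one predicted word at a time. Real systems operate at vastly larger scale.
import random
from collections import defaultdict

corpus = ("the patient was stable the patient was improving "
          "the provider reviewed the chart").split()

# Learn from previous examples: record which words follow which.
followers = defaultdict(list)
for current_word, next_word in zip(corpus, corpus[1:]):
    followers[current_word].append(next_word)

# Generate new text by repeated prediction.
random.seed(0)
word = "the"
output = [word]
for _ in range(6):
    options = followers[word]
    if not options:  # reached a word with no observed continuation
        break
    word = random.choice(options)
    output.append(word)
print(" ".join(output))
```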

At this point, AI still needs a significant amount of human oversight and intervention to be of benefit in a healthcare environment. AI will certainly have a role in care delivery, but initially as an adjunct to the human healthcare provider. The place for AI is in situations where it can do what it does best: be fed large amounts of data and look for patterns.

This function alone opens up many dynamic uses of AI. The technology can promote patient safety by flagging issues, potential complications, or risk factors for a given patient, alerting providers to the need for early intervention that could lead to better outcomes by preventing or minimizing complications. Machine learning could also be crucial in determining the course of cancer treatment: armed with large amounts of data on malignancy type, location, duration, and so on, as well as information from patients with a similar disease burden, it could recommend the treatment options most likely to promote optimal outcomes.
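A deliberately simplified sketch of that flagging pattern appears below. The vital-sign fields and thresholds are invented for illustration only; a real system would rely on validated, model-derived risk scores rather than hard-coded rules.

```python
# A deliberately simplified sketch of risk flagging. The fields and
# thresholds are illustrative assumptions, not clinical guidance; real
# systems use validated, model-derived risk scores.
from dataclasses import dataclass

@dataclass
class PatientSnapshot:
    patient_id: str
    heart_rate: int        # beats per minute
    systolic_bp: int       # mmHg
    temperature_c: float   # degrees Celsius

def flag_risks(p: PatientSnapshot) -> list[str]:
    """Return human-readable alerts for a provider to review in context."""
    alerts = []
    if p.heart_rate > 100 and p.temperature_c > 38.0:
        alerts.append("Tachycardia with fever: possible early sepsis pattern")
    if p.systolic_bp < 90:
        alerts.append("Hypotension: consider early intervention")
    return alerts

# The flag is a starting point, not a decision; a clinician still has to
# interpret it, as the next paragraph argues.
snapshot = PatientSnapshot("demo-001", heart_rate=112, systolic_bp=86,
                           temperature_c=38.4)
for alert in flag_risks(snapshot):
    print(alert)
```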

The above examples still require a human to interpret the alert or flagged data, weigh the context, and accept or reject the finding. I view AI as a promising collaborator, but not a technology to be relied upon without question, as there are risks related to accuracy and potential bias. AI provides a starting point that requires human refinement.

Data relevance

AI on its own is not intelligent; it mimics human intelligence. As such, it must be properly “refereed.” If we’re using AI, we are responsible for ensuring that its output isn’t biased by what has been put into it. For example, if the health data available for a certain issue or diagnosis is based largely on American Caucasian males, the observations AI offers won’t apply as helpfully or widely, and acting on them could even be dangerous for patients who don’t fit that profile.
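One practical form that refereeing can take is auditing the training data itself. The sketch below, with illustrative column names and a threshold of my own choosing, checks whether any demographic group is badly underrepresented before the model's output is trusted.

```python
# A minimal sketch of "refereeing" training data: before trusting a model,
# check whether the population it learned from resembles the population it
# will serve. Column names and the threshold are illustrative assumptions.
import pandas as pd

def audit_representation(training_data: pd.DataFrame, column: str,
                         min_share: float = 0.10) -> None:
    """Warn when any demographic group falls below a minimum share."""
    shares = training_data[column].value_counts(normalize=True)
    print(f"Training-data composition by {column}:")
    print(shares.to_string())
    for group, share in shares.items():
        if share < min_share:
            print(f"WARNING: '{group}' is only {share:.1%} of the training "
                  f"data; model output may not generalize to this group.")

# Toy example: a dataset skewed heavily toward one demographic profile.
df = pd.DataFrame({"sex": ["M"] * 95 + ["F"] * 5})
audit_representation(df, "sex")
```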
