How Are Medical School Execs Thinking About Incorporating AI?


As healthcare executives across the industry try to figure out how AI will reshape their work, those involved in medical education are grappling with the same question. During a Nov. 18 webinar conversation hosted by the University of Pennsylvania Leonard Davis Institute of Health Economics, three medical school executives outlined the risks and opportunities AI presents in training the next generation of clinicians.

As the introduction to the conversation states, “AI tools promise precision learning, adaptive feedback, and new ways to support clinical reasoning, but they also raise concerns about over-reliance, bias, and erosion of core skills.”

Addressing that point, Verity Schaye, M.D., assistant dean for education in the clinical sciences in the Office of Medical Education at the NYU Grossman School of Medicine, said the concern centers on de-skilling and never-skilling versus the promise of upskilling — the idea that human plus AI will be better than human alone or AI alone.
“I think high-order critical thinking is still a human plus AI task for now and for the foreseeable future. Ten years from now, is that still the case? I don’t know,” said Schaye, who also is assistant director of curricular innovation in the Institute for Innovations in Medical Education at the NYU medical school. “You need enough of a skill set to critically appraise: is this the right thing to integrate into my diagnosis or my management plan? You need to have developed that human-alone skill enough to then have the combined forces.”

Brian Garibaldi, M.D., director of the Center for Bedside Medicine and Charles Horace Mayo Professor of Medicine (Pulmonary and Critical Care) at Northwestern Feinberg School of Medicine, said that it is important to remember that the core of clinical reasoning starts with data gathering and data acquisition. “I think it’s great if we’re using clinical reasoning tools to help us to ask the right questions, to help turn our attention to what might matter in that particular moment for that particular patient,” he explained. 

“For example, Courtney Reamer, M.D., at our Center for Bedside Medicine, created an app — it is a prototype, but hopefully we’ll be ready to release it soon — that will actually take information from the history that’s in the electronic health record, or you can add supplemental things to it yourself, and it will help you understand what are the diagnostic possibilities in that moment, in that room with that patient. What are some high-yield things that you can do? What questions can you ask? What maneuvers can you do on physical exam? What signs can you look for? What ultrasound technique might be called into play that will help you determine the likelihood of specific diagnostic possibilities? I think if we remind ourselves that clinical reasoning starts with data acquisition, then we can use these tools to help focus our attention on what matters most to that patient in that moment.”

Holly Caretta-Weyer, M.D., clinical associate professor of emergency medicine and associate dean of admissions & assessment at Stanford University School of Medicine, gave a pragmatic example. In their clinical competency committee meeting last week, faculty members suggested telling residents to use AI to generate their differential diagnoses. "And several members of the competency committee said, hold on, hold on. Do we actually want them to be using it to generate their differential diagnosis? Basically, we took a time-out, and I said, 'Everyone, take five minutes, get all of your angst out about it. The use of AI in generation of differential diagnosis — they're going to be using it. What do we do to help them use it appropriately? What is the workflow, the thought process? Because, similar to evidence-based medicine, we want them to augment their clinical reasoning, their diagnostic reasoning. We don't want them to lose those skills or never develop those skills. But we do want them to learn how to use it responsibly, because they're going to use it. So how do we teach them to use it responsibly and then put guardrails on?'"

Caretta-Weyer described a resident who entered a prompt into an AI tool for a differential diagnosis and got back something wildly inaccurate. The faculty member has to guide them at that point, she said. "What was the prompt you put in? How can I help you edit that prompt so you actually get what it is that you're after? This is the kind of stuff that we're going to have to be doing to teach our residents to appropriately use this in the clinical space," she said.

Caretta-Weyer said the reality is that AI is here and the medical students are going to use it. "Now it is about training the faculty and the residents to actually have that co-productive moment. How do we put this together such that we're using it responsibly and we're getting out of it what we want and need without having it harm either the patient or the resident's education?"

“We need to be gathering experience in our own clinical practice and our own educational practice,” Garibaldi stressed, “so that we begin to understand, No. 1, how you can use these tools, but probably most importantly, where things can go off the rails a little bit, and where some of the potential problems might be.”

Caretta-Weyer said the story she told of a resident putting in a wrong prompt, getting a completely wrong output for the patient in front of them, and maybe not realizing it until the faculty member stepped in was a good example that everyone can learn from. 

Yes, you should use AI, Caretta-Weyer concluded, but the message needs to include: "Here are the responsible guardrails. Here are the pitfalls. Here are the ethical considerations. Showing that balanced view is important. I don't know that any policy or uniform response is going to be wholesale applicable across every program, every school, every context, because context matters so much to this. Having those critical conversations and being transparent about AI use within your context, I think, is going to be very important."

As the use of AI in clinical settings is researched, Schaye said, "it's ever more important that, as educators, we're at the table. We have to bring in all that we know from educational theory, from clinical reasoning theory about how we develop skills. Those who are purely clinical researchers are not thinking about it from that lens. We have to be at the table for this research."
