Integrating AI into Medicine
How Ron Li, MD is Using Machine Learning to Support Patient Care at Stanford
Like many forward-thinking doctors, Ron Li, MD, clinical assistant professor of hospital medicine and biomedical informatics, believes that AI and machine learning have an important place in the future of medicine. In his work as a hospitalist and as the Medical Informatics Director for Artificial Intelligence Clinical Integration at Stanford Health Care, Li is leveraging AI and machine learning to solve some of the most complex issues that plague health care systems, from improving the workflow and communication among large care teams to effectively aligning resources for critically ill patients.
A New Approach
Li takes an unusual approach to his work, however. He sees AI not as an end in itself, but as a means, and begins projects with a question-first philosophy. He prefers that his colleagues or medical professionals approach his group without a “specific ask” to use AI, but rather with a more general, "I have a problem that I think could be improved by a data-driven process."
This fits into his larger view about AI and machine learning: you can’t simply plug AI into an existing issue and hope that it fixes it. Integration is key.
“What we tend to underappreciate in AI is the idea of complexity,” Li says. “A health system is a highly complex environment, and in order to understand it you have to fully understand all the stakeholders and processes that go into that particular problem you're trying to solve. Most of that has nothing to do with machine learning.”
Li centers this way of thinking around the following principle: you don’t start with the model and plug in problems it’ll solve. You first think about the problems and see if you might be able to design a model that will help fix them. As Li puts it, “You think: What prediction tasks can the machine learning model do to make the system possible? That's when the machine learning comes in and you design.”
The Clinical Deterioration Project
How can we use data to help decrease unexpected mortality in the hospital?
One of Li’s projects arose from a question from the Quality Department at Stanford Health Care: how can we use data to help decrease unexpected mortality in the hospital?
The question was first posed by Lisa Shieh, MD, PhD, clinical professor and Medical Director for Quality for the Department of Medicine, and her team. Li partnered with Margaret Smith, MBA, Director of Operations for the Healthcare Applied AI Research Team in the Department of Medicine and tested various options before landing on a machine learning model already created and integrated with Epic, the electronic health record (EHR) software used by Stanford doctors. They called it the “clinical deterioration project.”
The model’s aim is relatively simple: to predict whether a patient is at high enough risk to need ICU care or to experience a code or rapid response team (RRT) event in the next six to eighteen hours. To determine this, the model “basically spits out a bunch of different probabilities for each patient that translate into a score of 0 to 100,” Li states. Physicians using the model (along with the quality improvement teams) then determine the threshold score that indicates the patient is at high enough risk to alert the care team.
The threshold determined at Stanford, for example, means, by Li and his team’s calculations, that a flagged patient has a one in five chance of needing to go to the ICU or experiencing a code or RRT event in the next six to eighteen hours.
If a patient crosses this threshold, he or she is flagged. An alert goes to the physician’s mobile device, and the patient’s care team enacts a series of procedures, including a contingency plan, to make sure everyone agrees on next steps if the patient continues to get sicker.
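The scoring-and-threshold logic described above can be sketched in a few lines. This is purely an illustration: the score mapping, the threshold value of 20, and the function names are assumptions for the sake of the example, not Stanford’s actual implementation.

```python
# Hypothetical sketch of the risk-score threshold logic described in the
# article. The mapping, threshold, and names are illustrative assumptions.

def risk_score(probability: float) -> int:
    """Map the model's predicted deterioration probability (0.0-1.0)
    to the 0-100 score the article describes."""
    return round(probability * 100)

def should_alert(probability: float, threshold: int = 20) -> bool:
    """Flag the patient when the score crosses the care team's chosen
    threshold. A threshold of 20 would correspond to the roughly
    one-in-five chance of an ICU transfer, code, or RRT event."""
    return risk_score(probability) >= threshold

# A patient with a predicted probability of 0.23 crosses the threshold;
# one at 0.05 does not.
print(should_alert(0.23))  # True: the care team would be alerted
print(should_alert(0.05))  # False: below threshold, no alert
```

The key design point the article highlights is that the threshold is not baked into the model: clinicians and quality improvement teams choose it, trading off alert fatigue against missed deteriorations.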
The goal, Li says, was to bring in the entire care team.
“People aren’t always quite on the same page regarding how sick a patient is and when to dedicate additional resources,” Li says.
They hope the model will help change that, facilitating “better team dynamics and communication structures.” Its predictions are available not just to a patient’s doctors but also to nurses and anyone else important on the care team. Li calls it “the decentralization of the responsibility in figuring out who actually needs additional care.”
Integrating the Model
The model is currently “running in the background” of the EHR, with several user testers and a target launch date in September. In the meantime, the team is working through all the practicalities that come with a launch of this type: what should the alert look like? How should it be delivered? What language should be included?
Li himself is a user tester, and he says that while the model (like all programs) does make mistakes, he’s been “surprised” by some of its predictions. “You have these patients who've been sick for a while,” he explains. “But I think you kind of get used to the patient being sick and they're not actively getting sicker so you may not necessarily think that you need to devote additional attention, especially when there are a lot of other patients who may need more immediate attention.”
The model, though, “takes a more global view of all the different clinical information that goes into that patient.” Li adds, “I think it helps to counter some of the cognitive biases that clinicians experience when caring for patients. I've experienced that and I've found it very helpful, actually.”
The clinical workflows built around this model may also allow it to adapt to new challenges. In March, while Li and his team were working on validating the model for Stanford patients, COVID-19 hit, and the model was investigated for possible use on COVID-19 patients to “help with some of the COVID-specific workflows that are very ICU resource intensive.” Luck prevailed and the predicted hospital surge never materialized, but the investigation taught Li and his team that their approach to problem solving could prove useful when adapting machine learning models to other pressing problems in the future.
AI and Palliative Care
How do doctors and care teams facilitate advance care planning discussions with seriously ill patients?
Another of Li’s projects began with an issue arising from palliative care: how do doctors and care teams facilitate advance care planning discussions with seriously ill patients?
It’s a fundamentally human-driven question, but AI is also able to play a role in the solution.
Li and his team used AI to help make sure that these crucial conversations actually take place. The team also took this project through the Stanford Clinical Effectiveness Leadership Training (CELT) program in order to integrate principles of quality improvement into their approach.
They are using a machine learning model designed and developed at Stanford by Nigam Shah, MBBS, PhD, associate professor of biomedical informatics and Associate Director of the Stanford Center for Biomedical Informatics Research, who worked closely with Stephanie Harman, MD, clinical associate professor of primary care and population health. It predicts twelve-month mortality and flags patients who “may benefit from advance care planning.”
This project also focused not just on the advance care planning conversation itself, but also the human systems in which these conversations occur. “We needed to have that improved team dynamic between the physician and other members of the care team,” Li says, “who are quite capable of contributing to this conversation but in the current state don't feel as empowered to do so, because usually the task of identifying a poor prognosis rests solely on the physician.”
They made the predictions of the machine learning model available to a larger group of people for this very reason: not just physicians but also social workers, occupational therapists, nurses, and others who “told us they really would like to be part of this process.” The whole thing, Li explains, is “kind of a democratization of the advance care planning process.”
Once the model alerts the care team, the conversation itself takes precedence. For this intervention, Li and his team also worked with Winnie Teuteberg, MD, clinical associate professor of primary care and population health. She developed guidelines for advance care planning conversations through her Serious Illness Care Program, which came out of Ariadne Labs, a program founded by Atul Gawande at Harvard.
This project is also currently live at Stanford, with a few adjustments still pending to “ramp up workflow.”
An Expansive Future
Li’s plans don’t end with these two projects, however. For him, the future is all about expansion. He is working with partners such as Nigam Shah and Christopher (Topher) Sharp, MD, clinical professor of medicine and the Chief Medical Information Officer of Stanford Health Care to build a pipeline for physicians and medical professionals all across Stanford to solve problems using AI.
He hopes that in the future people from all areas of medicine (especially those without machine learning backgrounds) will contribute to these efforts. “We want the Stanford community to know that this process is inclusive,” he says. “If you have a problem or if you're embedded in some kind of clinical operations or clinical process, there may be an opportunity for you to work on an AI project.”
After all, it funnels into a larger desire: to build a better system of care. And that, Li says, is exciting. “It's highly multi-disciplinary work,” he states. “I work with all sorts of people across the school and the health system and I think that's what makes these projects so rewarding. What we're developing is not just a machine learning model. It's a system made possible by machine learning.”
For more information on Ron Li's views about implementing AI, please see this article.