Leveraging Artificial Intelligence to Support Medical Decision-Making

Prediction models can process factors that modify patient risk and transform them into a single probabilistic prediction that can be used to help patients and doctors make good choices and fairly allocate care.

What is the role of artificial intelligence in the context of acute kidney injury (AKI)? In what ways does bias enter data? How can we reduce bias in AKI algorithms?  

During a recent webinar offered by ASN’s AKI!Now Artificial Intelligence Workgroup, Karandeep Singh, MD, an assistant professor at the University of Michigan, addressed these kinds of questions. “Artificial intelligence, specifically prediction models, appears capable of predicting the onset of acute kidney injury, in some cases as many as 48 to 72 hours in advance,” he said.

The workgroup aims to help clinicians, patients, and researchers use artificial intelligence to improve the quality, accessibility, affordability, and equity of care. Dr. Singh, a workgroup member and director of the Machine Learning for Learning Health Systems laboratory, discussed how artificial intelligence can distill a patient’s medical record, at any given moment in time, down to a probability of experiencing AKI.

“Here is an open question in the field: Do we need to target our interventions to patients with a high risk of AKI before its onset?” he asked. “Any time we're trying to make a complex medical decision, we have to take into account a lot of different things.”

Patient factors such as age, findings from a physical exam, current labs, social and family history, and hereditary conditions are important, as well as considerations regarding costs and resource availability. Prediction models can process factors like these that modify patient risk and transform them into a single probabilistic prediction that can be used to help patients and doctors make good choices and fairly allocate care.

“We can use that probability to guide clinical actions while still considering other things, like patient preferences, costs, and resource availability,” said Singh.
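To make the idea concrete, here is a minimal sketch of how such a model distills chart variables into a single probability. It uses logistic regression with invented features and coefficients; it illustrates the general technique, not the specific model Singh described.

```python
import math

# Hypothetical coefficients for illustration only -- a real AKI model
# would learn these from clinical data, not have them hand-specified.
COEFFICIENTS = {
    "intercept": -4.0,
    "age_years": 0.03,           # risk rises modestly with age
    "baseline_creatinine": 0.8,  # mg/dL
    "on_nephrotoxic_drug": 1.1,  # 1 if exposed, 0 otherwise
}

def predict_aki_probability(age_years: float,
                            baseline_creatinine: float,
                            on_nephrotoxic_drug: int) -> float:
    """Distill a few chart variables into one probability of AKI."""
    logit = (COEFFICIENTS["intercept"]
             + COEFFICIENTS["age_years"] * age_years
             + COEFFICIENTS["baseline_creatinine"] * baseline_creatinine
             + COEFFICIENTS["on_nephrotoxic_drug"] * on_nephrotoxic_drug)
    # Logistic link converts log-odds into a probability between 0 and 1.
    return 1.0 / (1.0 + math.exp(-logit))

# Example: a 70-year-old with baseline creatinine 1.4 mg/dL on a nephrotoxic drug.
print(f"Predicted AKI risk: {predict_aki_probability(70, 1.4, 1):.1%}")
```

The single number this returns is what clinicians would then weigh against preferences, costs, and resources, as Singh describes.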

A prediction model is supposed to encode a representation of the data, but if that data is biased, the model can encode that bias as well. In the context of artificial intelligence, bias means that a prediction model does not work equally well for all the people subject to its use.

“We can also think of this in terms of fairness, particularly when our data isn't a fair representation of our society,” explained Singh. “Because of the way the data was represented, we may not even realize that we're capturing more false positives or false negatives for a given group that we're applying a model to because that group wasn't fairly or equally represented in our training data.”
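One concrete way to surface the problem Singh describes is to audit a model's error rates group by group. The sketch below, using made-up toy data, computes false positive and false negative rates per group and shows how they can diverge for an underrepresented group even when overall performance looks acceptable.

```python
def group_error_rates(y_true, y_pred, groups):
    """Compute false positive and false negative rates per group.

    y_true, y_pred: 0/1 outcomes and thresholded model predictions.
    groups: a group label for each patient (e.g., demographic strata).
    """
    rates = {}
    for g in set(groups):
        idx = [i for i, grp in enumerate(groups) if grp == g]
        fp = sum(1 for i in idx if y_pred[i] == 1 and y_true[i] == 0)
        fn = sum(1 for i in idx if y_pred[i] == 0 and y_true[i] == 1)
        negatives = sum(1 for i in idx if y_true[i] == 0)
        positives = sum(1 for i in idx if y_true[i] == 1)
        rates[g] = {
            "FPR": fp / negatives if negatives else float("nan"),
            "FNR": fn / positives if positives else float("nan"),
        }
    return rates

# Toy data: group B is underrepresented, and its errors skew differently.
y_true = [0, 1, 0, 1, 0, 1, 0, 1, 0, 1]
y_pred = [0, 1, 0, 1, 1, 0, 0, 1, 1, 0]
groups = ["A", "A", "A", "A", "A", "A", "A", "A", "B", "B"]
print(group_error_rates(y_true, y_pred, groups))
```

In this toy example, group A's error rates are 25% while group B's are 100%; a single overall accuracy number would hide that gap entirely.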

But even if the model population is representative of the US population, we still may not have an adequate measure of how well these models perform in minority populations. Oversampling minority groups to obtain valid estimates of model performance in those populations is one way to determine whether a model is fair and whether its performance will generalize when implemented.
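The evaluation half of that argument can be sketched directly: bootstrapping a model's accuracy within each subgroup shows how little a small sample can tell us. The data below are simulated, and the 80% accuracy rate and group sizes are arbitrary choices for illustration.

```python
import random

random.seed(0)

def bootstrap_accuracy_ci(correct, n_boot=2000, alpha=0.05):
    """Bootstrap a confidence interval for accuracy within one subgroup.

    correct: list of 0/1 flags, 1 where the model got a patient right.
    """
    n = len(correct)
    accs = sorted(
        sum(correct[random.randrange(n)] for _ in range(n)) / n
        for _ in range(n_boot)
    )
    return accs[int(n_boot * alpha / 2)], accs[int(n_boot * (1 - alpha / 2)) - 1]

# Simulated test sets: the model is right ~80% of the time in both groups,
# but the minority group has only 25 patients vs. 500 in the majority --
# as in a study population that was not oversampled.
majority = [1 if random.random() < 0.8 else 0 for _ in range(500)]
minority = [1 if random.random() < 0.8 else 0 for _ in range(25)]

print("majority 95% CI:", bootstrap_accuracy_ci(majority))
print("minority 95% CI:", bootstrap_accuracy_ci(minority))
# The minority interval is far wider: with so few patients, we cannot
# say whether the model is fair there -- the motivation for oversampling.
```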

“On the other extreme, there is this recognition that if you change the eGFR equation, you change people's lives,” Singh pointed out. “It's not just one decision that's guided by someone's eGFR, it's many decisions: whether to start a medication, stop a medication, dose a medication. And many decisions in nephrology care are linked to how well we think kidneys are doing in terms of their function. So changes [in the equation] can affect kidney transplant eligibility and donor eligibility.”
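For readers unfamiliar with the equation in question, the sketch below implements the 2021 race-free CKD-EPI creatinine equation using its published coefficients as best I know them; it is for illustration only and should be checked against the primary source before any real use.

```python
def egfr_ckd_epi_2021(serum_creatinine_mg_dl: float, age_years: float,
                      is_female: bool) -> float:
    """Estimate GFR (mL/min/1.73 m^2) with the 2021 race-free CKD-EPI
    creatinine equation. Coefficients as published; verify before use."""
    kappa = 0.7 if is_female else 0.9
    alpha = -0.241 if is_female else -0.302
    ratio = serum_creatinine_mg_dl / kappa
    egfr = (142
            * min(ratio, 1.0) ** alpha
            * max(ratio, 1.0) ** -1.200
            * 0.9938 ** age_years)
    return egfr * 1.012 if is_female else egfr

# Example: a 60-year-old man with serum creatinine 1.2 mg/dL.
# Every downstream decision Singh lists -- dosing, transplant and donor
# eligibility -- shifts with this one number.
print(f"{egfr_ckd_epi_2021(1.2, 60, is_female=False):.0f} mL/min/1.73 m^2")
```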

So what can we do to prevent bias from entering algorithms? If we are using routinely collected data from our health systems and not everyone has access to those systems, one way to get more complete data is to expand access to care, or to find ways to encourage people to use it.

“Same as if we're conducting trials, or even observational studies: if we're going out there and recruiting, we need to make sure that we're spending time and effort to recruit a diverse cohort so that anything we then go ahead and do with that data is really reflective of our community at large,” Singh said.
 
