Intelligible Machine Learning for Critical Applications Such As Health Care
In the mid-1990s we trained a neural net to predict the probability of death (POD) of pneumonia patients. Every patient in the data set had already been diagnosed with pneumonia. The goal of the model was to predict which patients were high risk and which were low risk: low-risk patients could be treated as outpatients, while high-risk patients needed to be hospitalized. We trained many different machine learning models on the data, and the neural net was the most accurate.
After training the neural net we considered using it to make predictions for real patients, but decided it was too dangerous to use clinically. The neural net is a black-box model: although it is very accurate, we do not understand what it learned or how it makes predictions. One might think high accuracy on the test set would be enough to make us confident in the model. It was not. Here’s why:
One of the other models trained on the same data was a rule-based model. It was not as accurate as the neural net on the test set, but the rules it learned were easy to understand. One night it learned a surprising rule:
A History of Asthma Lowers a Patient’s Chance of Dying From Pneumonia
You don’t need a background in medicine to question this rule. We asked the doctors about it at the next project meeting. They thought carefully and said that it probably was a true pattern in the data. They considered asthma a serious risk factor for pneumonia: most patients presenting with pneumonia who had a history of asthma probably were admitted to the hospital immediately, possibly even to the ICU (Intensive Care Unit). Moreover, patients with a history of asthma probably pay more attention to how well they are breathing and seek care sooner when they have difficulty breathing. This combination of getting to care faster, being treated as high risk on arrival, and then being treated aggressively is so effective that it lowers the chance of dying for asthmatic pneumonia patients below that of the general population, which may not get to care as quickly or receive such aggressive treatment.
It is great news for healthcare that the treatment asthmatics with pneumonia receive is so effective at reducing their chance of dying. But it is potentially bad news for machine learning, which learns from the data that asthmatics have reduced risk. It is true that asthmatics have less risk, but only if we treat them quickly and aggressively. Unfortunately, our plan was to use machine learning to predict which patients were high or low risk, so that low-risk patients could be treated as outpatients. If the model predicted that pneumonia patients with a history of asthma have lower risk, it might suggest that asthmatics did not need to be hospitalized. This, of course, could be bad for the asthmatics. Fortunately, we knew it was risky to use a black-box model like a neural net to make these kinds of predictions for patients, so we didn’t field the neural net despite the fact that it was accurate on test data.
There is also research on developing machine learning methods that are very accurate but white-box instead of black-box. Researchers at Microsoft and Cornell recently developed an improvement to a method originally invented by statisticians in the 1980s (GAMs: Generalized Additive Models) that allows GAMs to be as accurate as random forests and neural nets on problems such as healthcare, while remaining extremely intelligible. We applied this new white-box GAM method to the pneumonia data set from the 1990s. The resulting model is as accurate as the best neural nets we trained on this data years ago, but unlike the neural nets, it is very transparent. We could easily see that the new model had learned that asthmatics have lower risk of dying from pneumonia. This is to be expected, because it is a true pattern in the data.
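For readers who want to see what this looks like in practice: a GAM models the (log-odds of the) outcome as a sum of per-feature functions, roughly f1(x1) + f2(x2) + … + fn(xn) plus an intercept, so each feature’s learned effect can be plotted and inspected on its own. Below is a minimal sketch using the open-source interpret package, which implements Explainable Boosting Machines, a tree-based GAM in this spirit; the package choice is our assumption about tooling, and since the pneumonia data is not public, the sketch uses synthetic stand-in features and labels.

```python
# Minimal sketch: train a GAM-style model whose per-feature terms can be inspected.
# The data below is synthetic; it only stands in for features like those in the
# pneumonia study (age, history of asthma, history of heart disease).
import numpy as np
from interpret.glassbox import ExplainableBoostingClassifier

rng = np.random.default_rng(0)
n = 1000
X = np.column_stack([
    rng.normal(65, 15, n),      # age (years)
    rng.integers(0, 2, n),      # history of asthma (0/1)
    rng.integers(0, 2, n),      # history of heart disease (0/1)
])
y = rng.integers(0, 2, n)       # died (0/1) -- synthetic labels

ebm = ExplainableBoostingClassifier(
    feature_names=["age", "asthma", "heart_disease"], random_state=0
)
ebm.fit(X, y)

# Each feature gets its own shape function, so the learned effect of asthma
# can be read (and questioned) directly, e.g. in an interactive dashboard:
# from interpret import show
# show(ebm.explain_global())
```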
But the new transparent model also learned that patients with a history of chest pain, and patients with heart disease, had less risk of dying from pneumonia. We believe these, too, are true patterns in the data: patients with a history of chest pain or heart disease presumably pay more attention to how well they are breathing, are already plugged into healthcare, probably get to care faster, and probably receive more aggressive treatment. These are all very good things for patients who have now come down with pneumonia.
We believe the neural net trained back in the 1990s also learned these things about chest pain and heart disease, but we didn’t know it at the time because the rule-based system did not learn rules about them. It was only when we saw the modern, intelligible GAM model that we recognized the data had more complexity than just the asthma problem.
The good news is that the new GAM models appear to be as accurate as neural nets on healthcare problems, but are very transparent and editable. This makes it easy to detect when the model has learned bad things (e.g., that pneumonia patients with a history of asthma, chest pain, or heart disease have less risk), and to fix the model when these kinds of problems are detected.
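To make the “editable” part concrete, here is a toy sketch (not the actual pneumonia model) of why an additive model is easy to edit: because every feature contributes through its own visible term, a problematic term such as the learned asthma effect can be zeroed out or replaced before the model is used for triage. The feature names and coefficients below are made up purely for illustration.

```python
# Toy illustration only: a hand-written additive risk model with made-up terms,
# to show why a GAM is easy to inspect and edit. A real GAM learns these
# per-feature functions from data.
intercept = -2.0
terms = {
    "age":           lambda v: 0.04 * (v - 65),   # risk rises with age
    "asthma":        lambda v: -0.30 * v,          # learned "protective" effect (the problem!)
    "heart_disease": lambda v: -0.20 * v,          # same issue as asthma
}

def risk_score(patient):
    """Log-odds of death: intercept plus one readable term per feature."""
    return intercept + sum(f(patient[name]) for name, f in terms.items())

patient = {"age": 70, "asthma": 1, "heart_disease": 0}
print("before edit:", risk_score(patient))

# Because each feature's effect is a separate term, a clinician can edit out
# (or replace) the terms that are unsafe for the triage use case:
terms["asthma"] = lambda v: 0.0
terms["heart_disease"] = lambda v: 0.0
print("after edit: ", risk_score(patient))
```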
Learning that pneumonia patients with a history of asthma, chest pain, or heart disease have less risk of dying from pneumonia is not necessarily right or wrong. If an insurance company wants to use predictions of patient mortality to decide what to charge for insurance and how much money to set aside for the healthcare of patients who will survive, then a model that takes factors like asthma into account when predicting mortality will be more accurate and will help the insurance provider make better financial decisions. If, however, the plan is to use the risk predictions to help decide which patients should be treated as in- or outpatients, then the model should not predict that patients with a history of asthma, chest pain, or heart disease have less risk: they actually have increased risk if they are not treated properly. Thus the correctness of the learned model depends on exactly what the model will be used for. The data, and models trained on the data, are not in and of themselves inherently right or wrong. This underscores the importance of being able to understand what a model has learned, so that informed decisions can be made about the suitability of the model for the purpose for which it will be used.
These intelligible models are useful in domains other than healthcare. For example, in domains where we might be concerned about the model learning to be biased based on factors such as race, gender, and socioeconomic status, GAM models can be used to make what is learned from these variables transparent, and then to allow the bias to be removed from the models before they are deployed. This capability is critical if we are going to use machine learning in important applications that affect people’s health, welfare, and social opportunity. The models we are developing are not perfect, but they represent an important step forward in the accuracy vs. intelligibility vs. editability landscape.
Summary:
We trained a neural net to predict the risk of dying from pneumonia
The neural net predictions were very accurate on test data
Because the neural net is a black box we don’t really know what it learned
That’s not good!
A rule-based model trained on the same data learned that asthma lowers the risk of death from pneumonia
This is true in the data because the asthmatics received faster and more intensive care
But our goal was to use the model to predict who needs hospitalization
A model that learns asthmatics are lower risk might predict they don’t need hospitalization
That could be bad for asthmatics who have pneumonia
If the rule-based system learned this, we’re pretty sure the neural net learned it, too
We decided not to use the neural net because we couldn’t understand it
Later we discovered chest pain and heart disease also were “good” for pneumonia patients
Good thing we didn’t deploy the neural net!
It can be dangerous to use a machine learning model you don’t understand
The problem is not in the machine learning; the problem is in the data
It can’t be solved by removing the variables for asthma, chest pain, and heart disease from the data (correlated variables would let the model learn the same effects, and we could no longer see or correct them)
A good way to deal with “bias” variables such as asthma, heart disease, race, and gender is to keep them in the data, but make sure the learned model is intelligible and can be edited